[ { "msg_contents": "Hi,\n\nWe found that if we provide *--enable-tap-tests * switch at the time of PG\nsources configuration, getting this below error\n\"\nchecking for Perl modules required for TAP tests... Can't locate IPC/Run.pm\nin @INC (you may need to install the IPC::Run module) (@INC contains:\n/usr/lib/perl5/site_perl/5.18.2/x86_64-linux-thread-multi\n/usr/lib/perl5/site_perl/5.18.2\n/usr/lib/perl5/vendor_perl/5.18.2/x86_64-linux-thread-multi\n/usr/lib/perl5/vendor_perl/5.18.2\n/usr/lib/perl5/5.18.2/x86_64-linux-thread-multi /usr/lib/perl5/5.18.2\n/usr/lib/perl5/site_perl .) at ./config/check_modules.pl line 11.\n\nBEGIN failed--compilation aborted at ./config/check_modules.pl line 11.\n\nconfigure: error: Additional Perl modules are required to run TAP tests\n\"\n\nlook like this is happening because the Perl-IPC-Run package is not\navailable on SLES 12 where Perl-IPC-Run3 is available.\n\nSrinu (my teammate) found that IPC::Run is hard coded in config/\ncheck_modules.pl and if we replace Run to Run3 it works (patch is attached,\ncreated by Srinu)\n\nDo we have any better option to work without this workaround?\n\nregards,", "msg_date": "Wed, 4 Jan 2023 17:27:55 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Getting an error if we provide --enable-tap-tests switch on SLES 12" }, { "msg_contents": "Hi,\n\nOn 2023-01-04 17:27:55 +0530, tushar wrote:\n> We found that if we provide *--enable-tap-tests * switch at the time of PG\n> sources configuration, getting this below error\n> \"\n> checking for Perl modules required for TAP tests... 
Can't locate IPC/Run.pm\n> in @INC (you may need to install the IPC::Run module) (@INC contains:\n> /usr/lib/perl5/site_perl/5.18.2/x86_64-linux-thread-multi\n> /usr/lib/perl5/site_perl/5.18.2\n> /usr/lib/perl5/vendor_perl/5.18.2/x86_64-linux-thread-multi\n> /usr/lib/perl5/vendor_perl/5.18.2\n> /usr/lib/perl5/5.18.2/x86_64-linux-thread-multi /usr/lib/perl5/5.18.2\n> /usr/lib/perl5/site_perl .) at ./config/check_modules.pl line 11.\n> \n> BEGIN failed--compilation aborted at ./config/check_modules.pl line 11.\n> \n> configure: error: Additional Perl modules are required to run TAP tests\n> \"\n> \n> look like this is happening because the Perl-IPC-Run package is not\n> available on SLES 12 where Perl-IPC-Run3 is available.\n\nHm. It's available in newer suse versions:\nhttps://scc.suse.com/packages/22892843\n\n\n> Srinu (my teammate) found that IPC::Run is hard coded in config/\n> check_modules.pl and if we replace Run to Run3 it works (patch is attached,\n> created by Srinu)\n\nI don't think that can work. The patch changes what configure tests, but none\nof the many uses of IPC::Run in the tests. And I don't think IPC::Run3\nactually provides all the features of IPC::Run we use.\n\nHave you actually tested running the tests with the patch applied?\n\n\n> Do we have any better option to work without this workaround?\n\nYou could install the module via cpan :/.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 4 Jan 2023 13:10:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Getting an error if we provide --enable-tap-tests switch on SLES\n 12" }, { "msg_contents": "On 1/5/23 2:40 AM, Andres Freund wrote:\n>\n> Have you actually tested running the tests with the patch applied?\nYes but getting an errors like\nt/006_edb_current_audit_logfile.pl .. 
Can't locate IPC/Run.pm in @INC \n(you may need to install the IPC::Run module) (@INC contains: \n/home/runner/edbas/src/bin/pg_ctl/../../../src/test/perl \n/home/runner/edbas/src/bin/pg_ctl\n>\n>> Do we have any better option to work without this workaround?\n> You could install the module via cpan :/.\n>\n>\nYes, will try to install.\n\nThanks Andres.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Fri, 6 Jan 2023 16:11:34 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Getting an error if we provide --enable-tap-tests switch on SLES\n 12" } ]
[ { "msg_contents": "Hi -hackers,\n\nI've spent some time fighting against \"out of memory\" errors coming\nout of psql when trying to use the cursor via FETCH_COUNT. It might be\na not so well known fact (?) that CTEs are not executed with cursor\nwhen asked to do so, but instead silently executed with potential huge\nmemory allocation going on. Patch is attached. My one doubt is that\nnot every statement starting with \"WITH\" is WITH(..) SELECT of course.\n\nDemo (one might also get the \"out of memory for query result\"):\n\npostgres@hive:~$ psql -Ant --variable='FETCH_COUNT=100' -c \"WITH data\nAS (SELECT generate_series(1, 20000000) as Total) select repeat('a',\n100) || data.Total || repeat('b', 800) as total_pat from data;\"\nKilled\npostgres@hive:~$ tail -4 /var/log/postgresql/postgresql-14-main.log\n[..]\n2023-01-04 12:46:20.193 CET [32936] postgres@postgres LOG: could not\nsend data to client: Broken pipe\n[..]\n2023-01-04 12:46:20.195 CET [32936] postgres@postgres FATAL:\nconnection to client lost\n\nWith the patch:\npostgres@hive:~$ /tmp/psql16-with-patch -Ant\n--variable='FETCH_COUNT=100' -c \"WITH data AS (SELECT\ngenerate_series(1, 20000000) as Total) select repeat('a', 100) ||\ndata.Total || repeat('b', 800) as total_pat from data;\" | wc -l\n20000000\npostgres@hive:~$\n\nRegards,\n-Jakub Wartak.", "msg_date": "Wed, 4 Jan 2023 13:10:20 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\tJakub Wartak wrote:\n\n> It might be a not so well known fact (?) that CTEs are not executed\n> with cursor when asked to do so, but instead silently executed with\n> potential huge memory allocation going on. Patch is attached. 
My one\n> doubt is that not every statement starting with \"WITH\" is WITH(..)\n> SELECT of course.\n\nYes, that's why WITH queries are currently filtered out by the\nFETCH_COUNT feature.\n\nCase in point:\n\ntest=> begin;\nBEGIN\n\ntest=> create table tbl(i int);\nCREATE TABLE\n\ntest=> declare psql_cursor cursor for\n with r(i) as (values (1))\n insert into tbl(i) select i from r;\nERROR:\tsyntax error at or near \"insert\"\nLINE 3: insert into tbl(i) select i from r;\n\n\nSo the fix you're proposing would fail on that kind of queries.\n\nA solution would be for psql to use PQsetSingleRowMode() to retrieve\nresults row-by-row, as opposed to using a cursor, and then allocate\nmemory for only FETCH_COUNT rows at a time. Incidentally it solves\nother problems like queries containing multiple statements, that also\nfail to work properly with cursors, or UPDATE/INSERT... RETURNING.. on\nlarge number of rows that could also benefit from pagination in\nmemory.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Wed, 04 Jan 2023 16:22:03 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "On Wed, Jan 4, 2023 at 10:22 AM Daniel Verite <daniel@manitou-mail.org> wrote:\n> A solution would be for psql to use PQsetSingleRowMode() to retrieve\n> results row-by-row, as opposed to using a cursor, and then allocate\n> memory for only FETCH_COUNT rows at a time. Incidentally it solves\n> other problems like queries containing multiple statements, that also\n> fail to work properly with cursors, or UPDATE/INSERT... RETURNING.. on\n> large number of rows that could also benefit from pagination in\n> memory.\n\nIs there any reason that someone hasn't, like, already done this?\n\nBecause if there isn't, we should really do this. 
And if there is,\nlike say that it would hurt performance or something, then we should\ncome up with a fix for that problem and then do something like this.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 10:57:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jan 4, 2023 at 10:22 AM Daniel Verite <daniel@manitou-mail.org> wrote:\n>> A solution would be for psql to use PQsetSingleRowMode() to retrieve\n>> results row-by-row, as opposed to using a cursor, and then allocate\n>> memory for only FETCH_COUNT rows at a time.\n\n> Is there any reason that someone hasn't, like, already done this?\n\nAs you well know, psql's FETCH_COUNT mechanism is far older than\nsingle-row mode. I don't think anyone's tried to transpose it\nonto that. I agree that it seems like a good idea to try.\nThere will be more per-row overhead, but the increase in flexibility\nis likely to justify that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 04 Jan 2023 11:36:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "On Wed, Jan 4, 2023 at 11:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> As you well know, psql's FETCH_COUNT mechanism is far older than\n> single-row mode. I don't think anyone's tried to transpose it\n> onto that. I agree that it seems like a good idea to try.\n> There will be more per-row overhead, but the increase in flexibility\n> is likely to justify that.\n\nYeah, I was vaguely worried that there might be more per-row overhead,\nnot that I know a lot about this topic. I wonder if there's a way to\nmitigate that. 
I'm a bit suspicious that what we want here is really\nmore of an incremental mode than a single-row mode i.e. yeah, you want\nto fetch rows without materializing the whole result, but maybe not in\nbatches of exactly size one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 4 Jan 2023 12:38:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "On Wed, Jan 4, 2023 at 6:38 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Jan 4, 2023 at 11:36 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > As you well know, psql's FETCH_COUNT mechanism is far older than\n> > single-row mode. I don't think anyone's tried to transpose it\n> > onto that. I agree that it seems like a good idea to try.\n> > There will be more per-row overhead, but the increase in flexibility\n> > is likely to justify that.\n>\n> Yeah, I was vaguely worried that there might be more per-row overhead,\n> not that I know a lot about this topic. I wonder if there's a way to\n> mitigate that. I'm a bit suspicious that what we want here is really\n> more of an incremental mode than a single-row mode i.e. yeah, you want\n> to fetch rows without materializing the whole result, but maybe not in\n> batches of exactly size one.\n\nGiven the low importance and very low priority of this, how about\nadding it as a TODO wiki item then and maybe adding just some warning\ninstead? I've intentionally avoided parsing grammar and regexp so it's\nnot perfect (not that I do care about this too much either, as web\ncrawlers already have indexed this $thread). 
BTW I've found two\nthreads if know what are you looking for [1][2]\n\n-Jakub Wartak.\n\n[1] - https://www.postgresql.org/message-id/flat/a0a854b6-563c-4a11-bf1c-d6c6f924004d%40manitou-mail.org\n[2] - https://www.postgresql.org/message-id/flat/1274761885.4261.233.camel%40minidragon", "msg_date": "Tue, 10 Jan 2023 13:23:19 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Tom Lane wrote:\n\n> I agree that it seems like a good idea to try.\n> There will be more per-row overhead, but the increase in flexibility\n> is likely to justify that.\n\nHere's a POC patch implementing row-by-row fetching.\n\nIf it wasn't for the per-row overhead, we could probably get rid of\nExecQueryUsingCursor() and use row-by-row fetches whenever\nFETCH_COUNT is set, independently of the form of the query.\n\nHowever the difference in processing time seems to be substantial: on\nsome quick tests with FETCH_COUNT=10000, I'm seeing almost a 1.5x\nincrease on large datasets. 
I assume it's the cost of more allocations.\nI would have hoped that avoiding the FETCH queries and associated\nround-trips with the cursor method would compensate for that, but it\ndoesn't appear to be the case, at least with a fast local connection.\n\nSo in this patch, psql still uses the cursor method if the\nquery starts with \"select\", and falls back to the row-by-row in\nthe main code (ExecQueryAndProcessResults) otherwise.\nAnyway it solves the main issue of the over-consumption of memory\nfor CTE and update/insert queries returning large resultsets.\n\n\nBest regards,\n\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Thu, 12 Jan 2023 13:27:32 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "I wrote:\n\n> Here's a POC patch implementing row-by-row fetching.\n\nPFA an updated patch.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Wed, 01 Mar 2023 11:41:13 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> PFA an updated patch.\n\nThis gives me several \"-Wincompatible-pointer-types\" warnings\n(as are also reported by the cfbot):\n\ncommon.c: In function 'ExecQueryAndProcessResults':\ncommon.c:1686:24: warning: passing argument 1 of 'PrintQueryTuples' from incompatible pointer type [-Wincompatible-pointer-types]\n PrintQueryTuples(result_array, ntuples, &my_popt, tuples_fout);\n ^~~~~~~~~~~~\ncommon.c:679:35: note: expected 'const PGresult **' {aka 'const struct pg_result **'} but argument is of type 'PGresult **' {aka 'struct pg_result **'}\n PrintQueryTuples(const PGresult **result, int nresults, const printQueryOpt 
*opt,\n ~~~~~~~~~~~~~~~~~^~~~~~\ncommon.c:1720:24: warning: passing argument 1 of 'PrintQueryTuples' from incompatible pointer type [-Wincompatible-pointer-types]\n PrintQueryTuples(result_array, ntuples, &my_popt, tuples_fout);\n ^~~~~~~~~~~~\ncommon.c:679:35: note: expected 'const PGresult **' {aka 'const struct pg_result **'} but argument is of type 'PGresult **' {aka 'struct pg_result **'}\n PrintQueryTuples(const PGresult **result, int nresults, const printQueryOpt *opt,\n ~~~~~~~~~~~~~~~~~^~~~~~\n\nI think the cause is the inconsistency about whether PGresult pointers\nare pointer-to-const or not. Even without compiler warnings, I find\ncode like this very ugly:\n\n-\t\t\t\tsuccess = PrintQueryTuples(result, opt, printQueryFout);\n+\t\t\t\tsuccess = PrintQueryTuples((const PGresult**)&result, 1, opt, printQueryFout);\n\nI think what you probably ought to do to avoid all that is to change\nthe arguments of PrintQueryResult and nearby routines to be \"const\nPGresult *result\" not just \"PGresult *result\".\n\nI find it sad that we can't get rid of ExecQueryUsingCursor().\nMaybe a little effort towards reducing overhead in the single-row\nmode would help?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 24 Mar 2023 16:12:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Tom Lane wrote:\n\n> This gives me several \"-Wincompatible-pointer-types\" warnings\n> [...]\n> I think what you probably ought to do to avoid all that is to change\n> the arguments of PrintQueryResult and nearby routines to be \"const\n> PGresult *result\" not just \"PGresult *result\".\n\nThe const-ness issue that I ignored in the previous patch is that\nwhile C is fine with passing T* to a function expecting const T*, it's\nnot okay with passing T** to a function expecting const T**,\nor more generally converting T** to const T**.\n\nWhen callers need to pass 
arrays of PGresult* instead of const\nPGresult*, I've opted to remove the const qualifiers for the functions\nthat are concerned by this change.\n\n\nPFA an updated patch.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Fri, 07 Jul 2023 19:42:57 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Hi,\n\nHere's a new version to improve the performance of FETCH_COUNT\nand extend the cases when it can be used.\n\nPatch 0001 adds a new mode in libpq to allow the app to retrieve\nlarger chunks of results than the single row of the row-by-row mode.\nThe maximum number of rows per PGresult is set by the user.\n\nPatch 0002 uses that mode in psql and gets rid of the cursor\nimplementation as suggested upthread.\n\nThe performance numbers look good.\nFor a query retrieving 50M rows of about 200 bytes:\n select repeat('abc', 200) from generate_series(1,5000000)\n/usr/bin/time -v psql -At -c $query reports these metrics\n(medians of 5 runs):\n\n version | fetch_count | clock_time | user_time | sys_time | max_rss_size\n(kB) \n-----------+-------------+------------+-----------+----------+-------------------\n 16-stable |\t 0 |\t 6.58 | 3.98 |\t2.09 |\t\t\n3446276\n 16-stable |\t 100 |\t 9.25 | 4.10 |\t1.90 |\t\t \n8768\n 16-stable |\t 1000 |\t11.13 | 5.17 |\t1.66 |\t\t \n8904\n 17-patch |\t 0 |\t 6.5 | 3.94 |\t2.09 |\t\t\n3442696\n 17-patch |\t 100 |\t 5 | 3.56 |\t0.93 |\t\t \n4096\n 17-patch |\t 1000 |\t 6.48 | 4.00 |\t1.55 |\t\t \n4344\n\nInterestingly, retrieving by chunks of 100 rows appears to be a bit faster\nthan the default one big chunk. 
It means that independently\nof using less memory, FETCH_COUNT implemented that way\nwould be a performance enhancement compared to both\nnot using it and using it in v16 with the cursor implementation.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Mon, 20 Nov 2023 20:13:35 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Hi,\n\nPFA a rebased version.\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Tue, 02 Jan 2024 15:58:14 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "On Tue, 2 Jan 2024 at 20:28, Daniel Verite <daniel@manitou-mail.org> wrote:\n>\n> Hi,\n>\n> PFA a rebased version.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\na3a836fb5e51183eae624d43225279306c2285b8 ===\n=== applying patch\n./v5-0001-Implement-retrieval-of-results-in-chunks-with-lib.patch\npatching file doc/src/sgml/libpq.sgml\n...\npatching file src/backend/replication/libpqwalreceiver/libpqwalreceiver.c\n...\npatching file src/interfaces/libpq/exports.txt\nHunk #1 FAILED at 191.\n1 out of 1 hunk FAILED -- saving rejects to file\nsrc/interfaces/libpq/exports.txt.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4233.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Jan 2024 09:01:23 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "vignesh C wrote:\n\n> patching file src/interfaces/libpq/exports.txt\n> Hunk #1 FAILED at 191.\n> 1 out of 1 hunk 
FAILED -- saving rejects to file\n> src/interfaces/libpq/exports.txt.rej\n> \n> Please post an updated version for the same.\n\nPFA a rebased version.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Tue, 30 Jan 2024 15:29:37 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Hi Daniel,\n\n\nOn Tue, Jan 30, 2024 at 3:29 PM Daniel Verite <daniel@manitou-mail.org> wrote:\n\n> PFA a rebased version.\n\nThanks for the patch! I've tested it using my original reproducer and\nit works great now against the original problem description. I've\ntaken a quick look at the patch, it looks good for me. I've tested\nusing -Werror for both gcc 10.2 and clang 11.0 and it was clean. I\nhave one slight doubt:\n\nwhen I run with default pager (more or less):\n\\set FETCH_COUNT 1000\nWITH data AS (SELECT generate_series(1, 20000000) as Total) select\nrepeat('a',100) || data.Total || repeat('b', 800) as total_pat from\ndata;\n-- it enters pager, a skip couple of pages and then \"q\"\n\n.. then - both backend and psql - go into 100% CPU as it were still\nreceiving (that doesn't happen e.g. with export PAGER=cat). 
So I'm\nnot sure, maybe ExecQueryAndProcessResults() should somewhat faster\nabort when the $PAGER is exiting normally(?).\n\nAnd oh , btw, in v6-0001 (so if you would be sending v7 for any other\nreason -- other reviewers -- maybe worth realigning it as detail):\n\n+ int PQsetChunkedRowsMode(PGconn *conn,\n+ int maxRows);\n\nbut the code has (so \"maxRows\" != \"chunkSize\"):\n\n+PQsetChunkedRowsMode(PGconn *conn, int chunkSize)\n\n-J.\n\n\n", "msg_date": "Thu, 8 Feb 2024 12:06:33 +0100", "msg_from": "Jakub Wartak <jakub.wartak@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\tJakub Wartak wrote:\n\n> when I run with default pager (more or less):\n> \\set FETCH_COUNT 1000\n> WITH data AS (SELECT generate_series(1, 20000000) as Total) select\n> repeat('a',100) || data.Total || repeat('b', 800) as total_pat from\n> data;\n> -- it enters pager, a skip couple of pages and then \"q\"\n> \n> .. then - both backend and psql - go into 100% CPU as it were still\n> receiving\n\nThanks for looking into this patch!\n\nWhat's happening after the pager has quit is that psql continues\nto pump results from the server until there are no more results.\n\nIf the user wants to interrupt that, they should hit Ctrl+C to\ncancel the query. I think psql should not cancel it implicitly\non their behalf, as it also cancels the transaction.\n\nThe behavior differs from the cursor implementation, because in\nthe cursor case, when the pager is displaying results, no query is\nrunning. The previous FETCH results have been entirely\nread, and the next FETCH has not been sent to the server yet.\nThis is why quitting the pager in the middle of this can\nbe dealt with instantly.\n\n> (that doesn't happen e.g. with export PAGER=cat). 
So I'm\n> not sure, maybe ExecQueryAndProcessResults() should somewhat\n> faster abort when the $PAGER is exiting normally(?).\n\nI assume that when using PAGER=cat, you cancel the display\nwith Ctrl+C, which propagates to psql and have the effect\nto also cancel the query. In that case it displays\n\"Cancel request sent\",\nand then shortly after it gets back from the server:\n\"ERROR: canceling statement due to user request\".\nThat case corresponds to the generic query canceling flow.\n\nOTOH if killing the \"cat\" process with kill -TERM I see the same\nbehavior than with \"more\" or \"less\", that is postgres running\nthe query to completion and psql pumping the results.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 12 Feb 2024 19:30:30 +0100", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "On Tue, 2024-01-30 at 15:29 +0100, Daniel Verite wrote:\n> PFA a rebased version.\n\nI had a look at patch 0001 (0002 will follow).\n\n> - <sect1 id=\"libpq-single-row-mode\">\n> - <title>Retrieving Query Results Row-by-Row</title>\n> + <sect1 id=\"libpq-chunked-results-modes\">\n> + <title>Retrieving Query Results by chunks</title>\n\nThat should be \"in chunks\".\n\n> + <para>\n> + <variablelist>\n> + <varlistentry id=\"libpq-PQsetChunkedRowsMode\">\n> + <term><function>PQsetChunkedRowsMode</function>\n> + <indexterm><primary>PQsetChunkedRowsMode</primary></indexterm></term>\n> + <listitem>\n> + <para>\n> + Select the mode retrieving results in chunks for the currently-executing query.\n\nThat is questionable English. 
How about\n\n Select to receive the results for the currently-executing query in chunks.\n\n> + This function is similar to <xref linkend=\"libpq-PQsetSingleRowMode\"/>,\n> + except that it can retrieve a user-specified number of rows\n> + per call to <xref linkend=\"libpq-PQgetResult\"/>, instead of a single row.\n\nThe \"user-specified number\" is \"maxRows\". So a better wording would be:\n\n ... except that it can retrieve <replaceable>maxRows</replaceable> rows\n per call to <xref linkend=\"libpq-PQgetResult\"/> instead of a single row.\n\n> - error. But in single-row mode, those rows will have already been\n> + error. But in single-row or chunked modes, those rows will have already been\n\nI'd say it should be \"in *the* single-row or chunk modes\".\n\n> --- a/src/interfaces/libpq/fe-exec.c\n> +++ b/src/interfaces/libpq/fe-exec.c\n> @@ -41,7 +41,8 @@ char *const pgresStatus[] = {\n> \"PGRES_COPY_BOTH\",\n> \"PGRES_SINGLE_TUPLE\",\n> \"PGRES_PIPELINE_SYNC\",\n> - \"PGRES_PIPELINE_ABORTED\"\n> + \"PGRES_PIPELINE_ABORTED\",\n> + \"PGRES_TUPLES_CHUNK\"\n> };\n\nI think that PGRES_SINGLE_TUPLE and PGRES_TUPLES_CHUNK should be next to each\nother, but that's no big thing.\nThe same applies to the change in src/interfaces/libpq/libpq-fe.h\n\nI understand that we need to keep the single-row mode for compatibility\nreasons. 
But I think that under the hood, \"single-row mode\" should be the\nsame as \"chunk mode with chunk size one\".\nThat should save some code repetition.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 29 Mar 2024 14:07:06 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "On Fri, 2024-03-29 at 14:07 +0100, Laurenz Albe wrote:\n> I had a look at patch 0001 (0002 will follow).\n\nHere is the code review for patch number 2:\n\n\n> diff --git a/src/bin/psql/common.c b/src/bin/psql/common.c\n[...]\n+static bool\n+SetupGOutput(PGresult *result, FILE **gfile_fout, bool *is_pipe)\n[...]\n+static void\n+CloseGOutput(FILE *gfile_fout, bool is_pipe)\n\nIt makes sense to factor out this code.\nBut shouldn't these functions have a prototype at the beginning of the file?\n\n> + /*\n> + * If FETCH_COUNT is set and the context allows it, use the single row\n> + * mode to fetch results and have no more than FETCH_COUNT rows in\n> + * memory.\n> + */\n\nThat comment talks about single-row mode, whey you are using chunked mode.\nYou probably forgot to modify the comment from a previous version of the patch.\n\n> + if (fetch_count > 0 && !pset.crosstab_flag && !pset.gexec_flag && !is_watch\n> + && !pset.gset_prefix && pset.show_all_results)\n> + {\n> + /*\n> + * The row-by-chunks fetch is not enabled when SHOW_ALL_RESULTS is false,\n> + * since we would need to accumulate all rows before knowing\n> + * whether they need to be discarded or displayed, which contradicts\n> + * FETCH_COUNT.\n> + */\n> + if (!PQsetChunkedRowsMode(pset.db, fetch_count))\n> + {\n\nI think that comment should be before the \"if\" statement, not inside it.\n\nHere is a suggestion for a consolidated comment:\n\n Fetch the result in chunks if FETCH_COUNT is set. 
We don't enable chunking\n if SHOW_ALL_RESULTS is false, since that requires us to accumulate all rows\n before we can tell what should be displayed, which would counter the idea\n of FETCH_COUNT. Chunk fetching is also disabled if \\gset, \\crosstab,\n \\gexec and \\watch are used.\n\n> + if (fetch_count > 0 && result_status == PGRES_TUPLES_CHUNK)\n\nCould it be that result_status == PGRES_TUPLES_CHUNK, but fetch_count is 0?\nif not, perhaps there should be an Assert that verifies that, and the \"if\"\nstatement should only check for the latter condition.\n\n> --- a/src/bin/psql/t/001_basic.pl\n> +++ b/src/bin/psql/t/001_basic.pl\n> @@ -184,10 +184,10 @@ like(\n> \"\\\\set FETCH_COUNT 1\\nSELECT error;\\n\\\\errverbose\",\n> on_error_stop => 0))[2],\n> qr/\\A^psql:<stdin>:2: ERROR: .*$\n> -^LINE 2: SELECT error;$\n> +^LINE 1: SELECT error;$\n> ^ *^.*$\n> ^psql:<stdin>:3: error: ERROR: [0-9A-Z]{5}: .*$\n> -^LINE 2: SELECT error;$\n> +^LINE 1: SELECT error;$\n\nWhy does the output change? Perhaps there is a good and harmless\nexplanation, but the naïve expectation would be that it doesn't.\n\n\nThe patch does not apply any more because of a conflict with the\nnon-blocking PQcancel patch.\n\nAfter fixing the problem manually, it builds without warning.\nThe regression tests pass, and the feature works as expected.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 29 Mar 2024 18:25:29 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\tLaurenz Albe wrote:\n\n> I had a look at patch 0001 (0002 will follow).\n\nThanks for reviewing this!\n\nI've implemented the suggested doc changes. 
A patch update\nwill follow with the next part of the review.\n\n> > --- a/src/interfaces/libpq/fe-exec.c\n> > +++ b/src/interfaces/libpq/fe-exec.c\n> > @@ -41,7 +41,8 @@ char *const pgresStatus[] = {\n> > \"PGRES_COPY_BOTH\",\n> > \"PGRES_SINGLE_TUPLE\",\n> > \"PGRES_PIPELINE_SYNC\",\n> > - \"PGRES_PIPELINE_ABORTED\"\n> > + \"PGRES_PIPELINE_ABORTED\",\n> > + \"PGRES_TUPLES_CHUNK\"\n> > };\n> \n> I think that PGRES_SINGLE_TUPLE and PGRES_TUPLES_CHUNK should be next to\n> each other, but that's no big thing.\n> The same applies to the change in src/interfaces/libpq/libpq-fe.h\n\nI assume we can't renumber/reorder existing values, otherwise it would be\nan ABI break. We can only add new values.\n\n> I understand that we need to keep the single-row mode for compatibility\n> reasons. But I think that under the hood, \"single-row mode\" should be the\n> same as \"chunk mode with chunk size one\".\n\nI've implemented it like that at first, and wasn't thrilled with the result.\nlibpq still has to return PGRES_SINGLE_TUPLE in single-row\nmode and PGRES_TUPLES_CHUNK with chunks of size 1, so\nthe mutualization did not work that well in practice.\nI also contemplated not creating PGRES_TUPLES_CHUNK\nand instead using PGRES_SINGLE_TUPLE for N rows, but I found\nit too ugly.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 01 Apr 2024 18:09:55 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Laurenz Albe wrote:\n\n> Here is the code review for patch number 2:\n\n> +static void\n> +CloseGOutput(FILE *gfile_fout, bool is_pipe)\n> \n> It makes sense to factor out this code.\n> But shouldn't these functions have a prototype at the beginning of the file?\n\nLooking at the other static functions in psql/common.c, there\nare 22 of them but only 3 have prototypes at the top 
of the file.\nThese 3 functions are called before being defined, so these prototypes\nare mandatory.\nThe other static functions that are defined before being called happen\nnot to have forward declarations, so SetupGOutput() and CloseGOutput()\nfollow that model.\n\n> Here is a suggestion for a consolidated comment:\n> \n> Fetch the result in chunks if FETCH_COUNT is set. We don't enable chunking\n> if SHOW_ALL_RESULTS is false, since that requires us to accumulate all rows\n> before we can tell what should be displayed, which would counter the idea\n> of FETCH_COUNT. Chunk fetching is also disabled if \\gset, \\crosstab,\n> \\gexec and \\watch are used.\n\nOK, done like that.\n\n> > + if (fetch_count > 0 && result_status == PGRES_TUPLES_CHUNK)\n> \n> Could it be that result_status == PGRES_TUPLES_CHUNK, but fetch_count is 0?\n> if not, perhaps there should be an Assert that verifies that, and the \"if\"\n> statement should only check for the latter condition.\n\nGood point. In fact it can be simplified to\n if (result_status == PGRES_TUPLES_CHUNK),\nand fetch_count as a variable can be removed from the function.\nDone that way.\n\n\n> > --- a/src/bin/psql/t/001_basic.pl\n> > +++ b/src/bin/psql/t/001_basic.pl\n> > @@ -184,10 +184,10 @@ like(\n> > \"\\\\set FETCH_COUNT 1\\nSELECT error;\\n\\\\errverbose\",\n> > on_error_stop => 0))[2],\n> > qr/\\A^psql:<stdin>:2: ERROR: .*$\n> > -^LINE 2: SELECT error;$\n> > +^LINE 1: SELECT error;$\n\n> > ^ *^.*$\n> > ^psql:<stdin>:3: error: ERROR: [0-9A-Z]{5}: .*$\n> > -^LINE 2: SELECT error;$\n> > +^LINE 1: SELECT error;$\n> \n> Why does the output change? 
Perhaps there is a good and harmless\n> explanation, but the naïve expectation would be that it doesn't.\n\nUnpatched, psql builds this query:\n DECLARE _psql_cursor NO SCROLL CURSOR FOR \\n\n\t\t\t\t\t <user-query>\ntherefore the user query starts at line 2.\n\nWith the patch, the user query is sent as-is, starting at line 1,\nhence the different error location.\n\n\n> After fixing the problem manually, it builds without warning.\n> The regression tests pass, and the feature works as expected.\n\nThanks for testing.\nUpdated patches are attached.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite", "msg_date": "Mon, 01 Apr 2024 19:52:42 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> Updated patches are attached.\n\nI started to look through this, and almost immediately noted\n\n- <sect1 id=\"libpq-single-row-mode\">\n- <title>Retrieving Query Results Row-by-Row</title>\n+ <sect1 id=\"libpq-chunked-results-modes\">\n+ <title>Retrieving Query Results in chunks</title>\n\nThis is a bit problematic, because changing the sect1 ID will\nchange the page's URL, eg\n\nhttps://www.postgresql.org/docs/current/libpq-single-row-mode.html\n\nAside from possibly breaking people's bookmarks, I'm pretty sure this\nwill cause the web docs framework to not recognize any cross-version\ncommonality of the page. How ugly would it be if we left the ID\nalone? Another idea could be to leave the whole page alone and add\na new <sect1> for chunked mode.\n\nBut ... 
TBH I'm not convinced that we need the chunked mode at all.\nWe explicitly rejected that idea back when single-row mode was\ndesigned, see around here:\n\nhttps://www.postgresql.org/message-id/flat/50173BF7.1070801%40Yahoo.com#7f92ebad0143fb5f575ecb3913c5ce88\n\nand I'm still very skeptical that there's much win to be had.\nI do not buy that psql's FETCH_COUNT mode is a sufficient reason\nto add it. FETCH_COUNT mode is not something you'd use\nnon-interactively, and there is enough overhead elsewhere in psql\n(notably in result-set formatting) that it doesn't seem worth\nmicro-optimizing the part about fetching from libpq.\n\n(I see that there was some discussion in that old thread about\nmicro-optimizing single-row mode internally to libpq by making\nPGresult creation cheaper, which I don't think anyone ever got\nback to doing. Maybe we should resurrect that.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Apr 2024 12:50:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\tTom Lane wrote:\n\n> I do not buy that psql's FETCH_COUNT mode is a sufficient reason\n> to add it. 
FETCH_COUNT mode is not something you'd use\n> non-interactively\n\nI should say that I've noticed significant latency improvements with\nFETCH_COUNT retrieving large resultsets, such that it would benefit\nnon-interactive use cases.\n\nFor instance, with the current v7 patch, a query like the OP's initial\ncase and batches of 1000 rows:\n\n$ cat fetchcount-test.sql\n\nselect repeat('a', 100) || '-' ||\ni || '-' || repeat('b', 500) as total_pat\nfrom generate_series(1, 5000000) as i\n\\g /dev/null\n\n$ export TIMEFORMAT=%R\n\n$ for s in $(seq 1 10); do time /usr/local/pgsql/bin/psql -At \\\n -v FETCH_COUNT=1000 -f fetchcount-test.sql;\tdone\n\n3.597\n3.413\n3.362\n3.612\n3.377\n3.416\n3.346\n3.368\n3.504\n3.413\n\n=> Average elapsed time = 3.44s\n\nNow without FETCH_COUNT, fetching the 5 million rows in one resultset:\n\n$ for s in $(seq 1 10); do time /usr/local/pgsql/bin/psql -At \\\n -f fetchcount-test.sql; done\n\n4.200\n4.178\n4.200\n4.169\n4.195\n4.217\n4.197\n4.234\n4.225\n4.242\n\n=> Average elapsed time = 4.20s\n\nBy comparison the unpatched version (cursor-based method)\ngives these execution times with FETCH_COUNT=1000:\n\n4.458\n4.448\n4.476\n4.455\n4.450\n4.466\n4.395\n4.429\n4.387\n4.473\n\n=> Average elapsed time = 4.43s\n\nNow that's just one test, but don't these numbers look good?\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 02 Apr 2024 22:09:04 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\"Daniel Verite\" <daniel@manitou-mail.org> writes:\n> \tTom Lane wrote:\n>> I do not buy that psql's FETCH_COUNT mode is a sufficient reason\n>> to add it. 
FETCH_COUNT mode is not something you'd use\n>> non-interactively\n\n> I should say that I've noticed significant latency improvements with\n> FETCH_COUNT retrieving large resultsets, such that it would benefit\n> non-interactive use cases.\n\nDo you have a theory for why that is? It's pretty counterintuitive\nthat it would help at all.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 02 Apr 2024 16:13:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\tTom Lane wrote:\n\n> > I should say that I've noticed significant latency improvements with\n> > FETCH_COUNT retrieving large resultsets, such that it would benefit\n> > non-interactive use cases.\n> \n> Do you have a theory for why that is? It's pretty counterintuitive\n> that it would help at all.\n\nI've been thinking that it's a kind of pipeline/parallelism effect.\nWhen libpq accumulates all rows in one resultset, if the network\nor the server are not fast enough, it spends a certain amount of\ntime waiting for the data to come in.\nBut when it accumulates fewer rows and gives back control\nto the app to display intermediate results, during that time the\nnetwork buffers can fill in, resulting, I assume, in less time waiting\noverall.\n\nI think the benefit is similar to what we get with \\copy. 
In fact\nwith the above-mentioned test, the execution times with\nFETCH_COUNT=1000 look very close to \\copy of the same query.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Tue, 02 Apr 2024 22:54:47 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "So what was really bothering me about this patchset was that I\ndidn't think marginal performance gains were a sufficient reason\nto put a whole different operating mode into libpq. However,\nI've reconsidered after realizing that implementing FETCH_COUNT\natop traditional single-row mode would require either merging\nsingle-row results into a bigger PGresult or persuading psql's\nresults-printing code to accept an array of PGresults not just\none. Either of those would be expensive and ugly, not to mention\nneeding chunks of code we don't have today.\n\nAlso, it doesn't really need to be a whole different operating mode.\nThere's no reason that single-row mode shouldn't be exactly equivalent\nto chunk mode with chunk size 1, except for the result status code.\n(We've got to keep PGRES_SINGLE_TUPLE for the old behavior, but\nusing that for a chunked result would be too confusing.)\n\nSo I whacked the patch around till I liked it better, and pushed it.\nI hope my haste will not come back to bite me, but we are getting\npretty hard up against the feature-freeze deadline.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 06 Apr 2024 20:53:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\tTom Lane wrote:\n\n> I've reconsidered after realizing that implementing FETCH_COUNT\n> atop traditional single-row mode would require either merging\n> single-row results into a bigger PGresult or 
persuading psql's\n> results-printing code to accept an array of PGresults not just\n> one. Either of those would be expensive and ugly, not to mention\n> needing chunks of code we don't have today.\n\nYes, we must accumulate results because the aligned format needs to\nknow the column widths for an entire \"page\", and the row-by-row logic\ndoes not fit that well in that case.\nOne of the posted patches implemented this with an array of PGresult\nin single-row mode [1] but I'm confident that the newer version you\npushed with the libpq changes is a better approach.\n\n> So I whacked the patch around till I liked it better, and pushed it.\n\nThanks for taking care of this!\n\n\n[1]\nhttps://www.postgresql.org/message-id/092583fb-97c5-428f-8d99-fd31be4a5290@manitou-mail.org\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 08 Apr 2024 16:25:36 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Hello Daniel and Tom,\n\n08.04.2024 17:25, Daniel Verite wrote:\n>\n>> So I whacked the patch around till I liked it better, and pushed it.\n> Thanks for taking care of this!\n\nNow that ExecQueryUsingCursor() is gone, it's not clear, what does\nthe following comment mean:?\n    * We must turn off gexec_flag to avoid infinite recursion.  
Note that\n    * this allows ExecQueryUsingCursor to be applied to the individual query\n    * results.\n\nShouldn't it be removed?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 8 Apr 2024 18:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> Now that ExecQueryUsingCursor() is gone, it's not clear, what does\n> the following comment mean:?\n>    * We must turn off gexec_flag to avoid infinite recursion.  Note that\n>    * this allows ExecQueryUsingCursor to be applied to the individual query\n>    * results.\n\nHmm, the point about recursion is still valid isn't it? I agree the\nreference to ExecQueryUsingCursor is obsolete, but I think we need to\nreconstruct what this comment is actually talking about. It's\ncertainly pretty obscure ...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 11:08:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "08.04.2024 18:08, Tom Lane wrote:\n> Alexander Lakhin <exclusion@gmail.com> writes:\n>> Now that ExecQueryUsingCursor() is gone, it's not clear, what does\n>> the following comment mean:?\n>>    * We must turn off gexec_flag to avoid infinite recursion.  Note that\n>>    * this allows ExecQueryUsingCursor to be applied to the individual query\n>>    * results.\n> Hmm, the point about recursion is still valid isn't it? I agree the\n> reference to ExecQueryUsingCursor is obsolete, but I think we need to\n> reconstruct what this comment is actually talking about. 
It's\n> certainly pretty obscure ...\n\nSorry, I wasn't clear enough, I meant to remove only that reference, not\nthe quoted comment altogether.\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Mon, 8 Apr 2024 18:15:15 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "\tAlexander Lakhin wrote:\n\n> >> Now that ExecQueryUsingCursor() is gone, it's not clear, what does\n> >> the following comment mean:?\n> >> * We must turn off gexec_flag to avoid infinite recursion. Note that\n> >> * this allows ExecQueryUsingCursor to be applied to the individual query\n> >> * results.\n> > Hmm, the point about recursion is still valid isn't it? I agree the\n> > reference to ExecQueryUsingCursor is obsolete, but I think we need to\n> > reconstruct what this comment is actually talking about. It's\n> > certainly pretty obscure ...\n> \n> Sorry, I wasn't clear enough, I meant to remove only that reference, not\n> the quoted comment altogether.\n\nThe comment might want to stress the fact that psql honors\nFETCH_COUNT \"on top of\" \\gset, so if the user issues for instance:\n\n select 'select ' || i from generate_series(1,<N>) as i \\gexec\n\nwhat's going to be sent to the server is a series of:\n\n BEGIN\n DECLARE _psql_cursor NO SCROLL CURSOR FOR\n\tselect <i>\n FETCH FORWARD <FETCH_COUNT> FROM _psql_cursor (possibly repeated)\n CLOSE _psql_cursor\n COMMIT\n\nAnother choice would be to ignore FETCH_COUNT and send exactly the\nqueries that \\gset produces, with the assumption that it better\nmatches the user's expectation. 
Maybe that alternative was considered\nand the comment reflects the decision.\n\nSince the new implementation doesn't rewrite the user-supplied queries,\nthe point is moot, and this part should be removed:\n \"Note that this allows ExecQueryUsingCursor to be applied to the\n individual query results\"\nI'll wait a bit for other comments and submit a patch.\n\n\nBest regards,\n-- \nDaniel Vérité\nhttps://postgresql.verite.pro/\nTwitter: @DanielVerite\n\n\n", "msg_date": "Mon, 08 Apr 2024 22:03:21 +0200", "msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" }, { "msg_contents": "Alexander Lakhin <exclusion@gmail.com> writes:\n> 08.04.2024 18:08, Tom Lane wrote:\n>> Hmm, the point about recursion is still valid isn't it? I agree the\n>> reference to ExecQueryUsingCursor is obsolete, but I think we need to\n>> reconstruct what this comment is actually talking about. It's\n>> certainly pretty obscure ...\n\n> Sorry, I wasn't clear enough, I meant to remove only that reference, not\n> the quoted comment altogether.\n\nAfter looking at it, I realized that the comment's last sentence was\nalso out of date, since SendQuery() isn't where the check of\ngexec_flag happens any more. 
I concluded that documenting the\nbehavior of other functions here isn't such a hot idea, and removed\nboth sentences in favor of expanding the relevant comments in\nExecQueryAndProcessResults.\n\nWhile doing that, I compared the normal and chunked-fetch code paths\nin ExecQueryAndProcessResults more carefully, and realized that the\npatch was a few other bricks shy of a load:\n\n* it didn't honor pset.queryFout;\n\n* it ignored the passed-in \"printQueryOpt *opt\" (maybe that's always\nNULL, but doesn't seem like a great assumption);\n\n* it failed to call PrintQueryStatus, so that INSERT RETURNING\nand the like would print a status line only in non-FETCH_COUNT\nmode.\n\nI cleaned all that up at c21d4c416.\n\nBTW, I had to reverse-engineer the exact reasoning for the cases\nwhere we don't honor FETCH_COUNT. Most of them are clear enough,\nbut I'm not totally sure about \\watch. I wrote:\n\n+ * * We're doing \\watch: users probably don't want us to force use of the\n+ * pager for that, plus chunking could break the min_rows check.\n\nIt would not be terribly hard to make the chunked-fetch code path\nhandle min_rows correctly, and AFAICS the only other thing that\nis_watch does differently is to not do SetResultVariables, which\nwe could match easily enough. So this is really down to whether\nforcing pager mode is okay for a \\watch'd query. I wonder if\nthat was actually Daniel's reasoning for excluding \\watch, and\nhow strong that argument really is.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 08 Apr 2024 16:04:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql's FETCH_COUNT (cursor) is not being respected for CTEs" } ]
[ { "msg_contents": "Hello Hackers,\n\nThe attached patch adds pl/pgsql versions of \"tpcb-like\" and\n\"simple-update\" internal test scripts\n\nThe tests perform functionally exactly the same, but are generally\nfaster as they avoid most client-server latency.\n\nThe reason I'd like to have them as part of pgbench are two\n\n1. so I don't have to create the script and function manually each\ntime I want to test mainly the database (instead of the\nclient-database system)\n\n2. so that new users of PostgreSQL can easily see how much better OLTP\nworkloads perform when packaged up as a server-side function\n\nThe new user-visible functionalities are two new built-in scripts -b list :\n\n$ pgbench -b list\nAvailable builtin scripts:\n tpcb-like: <builtin: TPC-B (sort of)>\n plpgsql-tpcb-like: <builtin: TPC-B (sort of) as a pl/pgsql function>\n simple-update: <builtin: simple update>\n plpgsql-simple-update: <builtin: simple update as a pl/pgsql function>\n select-only: <builtin: select only>\n\nwhich one can run using the -b / --builtin= option\n\npgbench -b plpgsql-tpcb-like ...\nor\npgbench -b plpgsql-simple-update ...\n\nAnd a flag --no-functions which lets you not to create the functions at init\n\nthere are also character flags to -I / --init ,\n-- Y to drop the functions and\n-- y to create the functions. 
Creating is default behaviour, but can\nbe disabled via long flag --no-functions )\n\nI selected Yy as they were unused and can be thought of as \"inverted\nlambda symbol\" :)\n\nIf there are no strong objections, I'll add it to the commitfest as well\n\n-----\nHannu Krosing\nGoogle Cloud - We have a long list of planned contributions and we are hiring.\nContact me if interested.", "msg_date": "Wed, 4 Jan 2023 19:06:44 +0100", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "\nHello,\n\n> The attached patch adds pl/pgsql versions of \"tpcb-like\" and\n> \"simple-update\" internal test scripts\n\nWhy not, it makes sense because it is relevant to some usage patterns.\n\nWhy not having the select version as a version as well?\n\nIf we are going to follow this road, we could also consider\n\"combined\" queries with \\; as well?\n\n> $ pgbench -b list\n> Available builtin scripts:\n> tpcb-like: <builtin: TPC-B (sort of)>\n> plpgsql-tpcb-like: <builtin: TPC-B (sort of) as a pl/pgsql function>\n> simple-update: <builtin: simple update>\n> plpgsql-simple-update: <builtin: simple update as a pl/pgsql function>\n> select-only: <builtin: select only>\n>\n> which one can run using the -b / --builtin= option\n\nISTM that the -b had a fast selection so that only a prefix was enough to \nselect a script (-b se = -b select-only). Maybe such convenient shortcut \nshould be preserved, it seems that the long name will be needed for the pl \nversions.\n\n> And a flag --no-functions which lets you not to create the functions at init\n\nHmmm. Not so sure.\n\n> there are also character flags to -I / --init ,\n> -- Y to drop the functions and\n> -- y to create the functions. 
Creating is default behaviour, but can\n> be disabled via long flag --no-functions )\n\nOk.\n\n> I selected Yy as they were unused and can be thought of as \"inverted\n> lambda symbol\" :)\n\n:-)\n\n> If there are no strong objections, I'll add it to the commitfest as well\n\nPlease do that.\n\n-- \nFabien Coelho.\n\n\n", "msg_date": "Tue, 10 Jan 2023 14:43:58 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: not tested\nImplements feature: tested, passed\nSpec compliant: not tested\nDocumentation: not tested\n\nHi\r\n\r\nthank you for the patch. 
It can be applied to current master branch and compiled fine. \r\n\r\nThe feature works as described, I am able to run plpgsql-tpcb-like and plpgsql-simple-update scripts as you have added them.\r\n\r\nBut I am not sure the purpose of --no-function to prevent the creation of pl/pgsql functions when the new plpgsql test scripts need them. \r\n\r\nI initialized pgbench database with --no-function, and plpgsql-tpcb-like and plpgsql-simple-update scripts will fail to run\r\n\r\nthanks\r\n\r\nCary Huang\r\n===============\r\nHighgo Software Canada\r\nwww.highgo.ca", "msg_date": "Fri, 24 Mar 2023 22:17:33 +0000", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "On Fri, 24 Mar 2023 22:17:33 +0000\nCary Huang <cary.huang@highgo.ca> wrote:\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: not tested\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n\nThe patch would need documentation describing the new options.\n\n> \n> Hi\n> \n> thank you for the patch. It can be applied to current master branch and compiled fine. \n\nI also confirmed that it can be applied and compiled, although it raised warnings\nabout whitespace errors.\n\n/tmp/pgbench-plpgsql-001.patch:68: trailing whitespace.\n\texecuteStatement(con, \n/tmp/pgbench-plpgsql-001.patch:102: trailing whitespace.\n\texecuteStatement(con, \nwarning: 2 lines add whitespace errors.\n\n> The feature works as described, I am able to run plpgsql-tpcb-like and plpgsql-simple-update scripts as you have added them.\n> \n> But I am not sure the purpose of --no-function to prevent the creation of pl/pgsql functions when the new plpgsql test scripts need them. \n> \n> I initialized pgbench database with --no-function, and plpgsql-tpcb-like and plpgsql-simple-update scripts will fail to run\n\nI am not sure either whether --no-function option is necessary.\nAlthough there is --no-vacuum, I guess this would be intended to\nreduce the initialization time. I don't think omitting creating\nfunctions has such effect. So, I wonder if --no-function is unnecessary,\nsimilar to how there is no option to omit creating tables.\n\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Mon, 5 Jun 2023 17:05:26 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "On Wed, Jan 04, 2023 at 07:06:44PM +0100, Hannu Krosing wrote:\n> 1. so I don't have to create the script and function manually each\n> time I want to test mainly the database (instead of the\n> client-database system)\n> \n> 2. so that new users of PostgreSQL can easily see how much better OLTP\n> workloads perform when packaged up as a server-side function\n\nI'm not sure we should add micro-optimized versions of the existing scripts\nto pgbench. 
Your point about demonstrating the benefits of server-side\nfunctions seems reasonable, but it also feels a bit like artificially\nimproving pgbench numbers. I think I'd rather see some more variety in the\nbuilt-in scripts so that folks can more easily test a wider range of common\nworkloads. Perhaps this could include a test that is focused on\nserver-side functions.\n\nIn any case, it looks like there is unaddressed feedback for this patch, so\nI'm marking it as \"waiting on author.\"\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 14 Aug 2023 08:07:37 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "\nHello Nathan,\n\n>> 1. so I don't have to create the script and function manually each\n>> time I want to test mainly the database (instead of the\n>> client-database system)\n>>\n>> 2. so that new users of PostgreSQL can easily see how much better OLTP\n>> workloads perform when packaged up as a server-side function\n>\n> I'm not sure we should add micro-optimized versions of the existing scripts\n> to pgbench. Your point about demonstrating the benefits of server-side\n> functions seems reasonable, but it also feels a bit like artificially\n> improving pgbench numbers. I think I'd rather see some more variety in the\n> built-in scripts so that folks can more easily test a wider range of common\n> workloads. Perhaps this could include a test that is focused on\n> server-side functions.\n\nISTM that your argument suggests to keep the tpcb-like PL/pgSQL version.\nIt is the more beneficial anyway as it merges 4/5 commands in one call, so \nit demonstrates the effect of investing in this kind of approach.\n\nI'm unclear about what variety of scripts that could be provided given the \ntables made available with pgbench. 
ISTM that other scenarios would involve \nboth an initialization and associated scripts, and any proposal would be \nbarred because it would open the door to anything.\n\n> In any case, it looks like there is unaddressed feedback for this patch, so\n> I'm marking it as \"waiting on author.\"\n\nIndeed.\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 15 Aug 2023 09:46:59 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "On Tue, Aug 15, 2023 at 09:46:59AM +0200, Fabien COELHO wrote:\n> I'm unclear about what variety of scripts that could be provided given the\n> tables made available with pgbench. ISTM that other scenarios would involve\n> both an initialization and associated scripts, and any proposal would be\n> barred because it would open the door to anything.\n\nWhy's that? I'm not aware of any project policy that prohibits such\nenhancements to pgbench. It might take some effort to gather consensus on\na proposal like this, but IMHO that doesn't mean we shouldn't try. If the\nprevailing wisdom is that we shouldn't add more built-in scripts because\nthere is an existing way to provide custom ones, then it's not clear that\nwe should proceed with $SUBJECT, anyway.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 15 Aug 2023 08:34:23 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "\nHello Nathan,\n\n>> I'm unclear about what variety of scripts that could be provided given the\n>> tables made available with pgbench. 
ISTM that other scenarios would involve\n>> both an initialization and associated scripts, and any proposal would be\n>> barred because it would open the door to anything.\n>\n> Why's that?\n\nJust a wild guess based on 19 years of occasional contributions to pg and\npgbench in particular:-)\n\n> I'm not aware of any project policy that prohibits such enhancements to\n> pgbench.\n\nAttempts in extending pgbench often fall under \"you can do it outside (eg\nwith a custom script) so there is no need to put that in pgbench as it\nwould add to the maintenance burden with a weak benefit proven by the fact\nthat it is not there already\".\n\n> It might take some effort to gather consensus on a proposal like this,\n> but IMHO that doesn't mean we shouldn't try.\n\nDone it in the past. Probably will do it again in the future:-)\n\n> If the prevailing wisdom is that we shouldn't add more built-in scripts\n> because there is an existing way to provide custom ones, then it's not\n> clear that we should proceed with $SUBJECT, anyway.\n\nI'm afraid there is that argument. I do not think that this policy is good\nwrt $SUBJECT, ISTM that having an easy way to test something with a\nPL/pgSQL function would help promote the language by advertising/showing\nthe potential performance benefit (or not, depending). Just one function\nwould be enough for that.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 16 Aug 2023 10:06:09 +0200 (CEST)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "I will address the comments here over this coming weekend.\n\n\nI think that in addition to current \"tpc-b like\" test we could also\nhave more modern \"tpc-c like\" and \"tpc-h like\" tests\n\nAnd why not any other \"* -like\" from the rest of TPC-*, YCSB, sysbench, ... 
:)\n\nthough maybe not as part of pg_bench but as extensions ?\n\n---\nHannu\n\nOn Wed, Aug 16, 2023 at 10:06 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Nathan,\n>\n> >> I'm unclear about what variety of scripts that could be provided given the\n> >> tables made available with pgbench. ISTM that other scenarios would involve\n> >> both an initialization and associated scripts, and any proposal would be\n> >> barred because it would open the door to anything.\n> >\n> > Why's that?\n>\n> Just a wild guess based on 19 years of occasional contributions to pg and\n> pgbench in particular:-)\n>\n> > I'm not aware of any project policy that prohibits such enhancements to\n> > pgbench.\n>\n> Attempts in extending pgbench often fall under \"you can do it outside (eg\n> with a custom script) so there is no need to put that in pgbench as it\n> would add to the maintenance burden with a weak benefit proven by the fact\n> that it is not there already\".\n>\n> > It might take some effort to gather consensus on a proposal like this,\n> > but IMHO that doesn't mean we shouldn't try.\n>\n> Done it in the past. Probably will do it again in the future:-)\n>\n> > If the prevailing wisdom is that we shouldn't add more built-in scripts\n> > because there is an existing way to provide custom ones, then it's not\n> > clear that we should proceed with $SUBJECT, anyway.\n>\n> I'm afraid there is that argument. I do not think that this policy is good\n> wrt $SUBJECT, ISTM that having an easy way to test something with a\n> PL/pgSQL function would help promote the language by advertising/showing\n> the potential performance benefit (or not, depending). 
Just one function\n> would be enough for that.\n>\n> --\n> Fabien.\n\n\n", "msg_date": "Fri, 18 Aug 2023 19:34:03 +0200", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "On Fri, 18 Aug 2023 at 23:04, Hannu Krosing <hannuk@google.com> wrote:\n>\n> I will address the comments here over this coming weekend.\n\nThe patch which you submitted has been awaiting your attention for\nquite some time now. As such, we have moved it to \"Returned with\nFeedback\" and removed it from the reviewing queue. Depending on\ntiming, this may be reversible. Kindly address the feedback you have\nreceived, and resubmit the patch to the next CommitFest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Fri, 2 Feb 2024 00:02:59 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "Thanks for the update.\n\nI will give it another go over the weekend\n\nCheers,\nHannu\n\nOn Thu, Feb 1, 2024 at 7:33 PM vignesh C <vignesh21@gmail.com> wrote:\n\n> On Fri, 18 Aug 2023 at 23:04, Hannu Krosing <hannuk@google.com> wrote:\n> >\n> > I will address the comments here over this coming weekend.\n>\n> The patch which you submitted has been awaiting your attention for\n> quite some time now. As such, we have moved it to \"Returned with\n> Feedback\" and removed it from the reviewing queue. Depending on\n> timing, this may be reversible. Kindly address the feedback you have\n> received, and resubmit the patch to the next CommitFest.\n>\n> Regards,\n> Vignesh\n>
", "msg_date": "Fri, 2 Feb 2024 18:13:37 +0100", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "On Tue, Aug 15, 2023 at 11:41 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> Why's that? I'm not aware of any project policy that prohibits such\n> enhancements to pgbench. It might take some effort to gather consensus on\n> a proposal like this, but IMHO that doesn't mean we shouldn't try. If the\n> prevailing wisdom is that we shouldn't add more built-in scripts because\n> there is an existing way to provide custom ones, then it's not clear that\n> we should proceed with $SUBJECT, anyway.\n\nI don't think there's a policy against adding more built-in scripts to\npgbench, but I'm skeptical of such efforts because I don't see how to\ndecide which ones are worthy of inclusion and which are not. Adding\neveryone's favorite thing will be too cluttered, and adding nothing\nforecloses nothing because people can always provide their own. If we\ncould establish that certain custom scripts are widely used across\nmany people, then those might be worth adding.\n\nI have a vague recollection of someone proposing something similar to\nthis in the past, possibly Jeff Davis. 
If there is in fact a paper\ntrail showing that the same thing has been proposed more than once by\nunrelated people, that would be a point in favor of adding that\nparticular thing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 2 Feb 2024 15:44:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" }, { "msg_contents": "My justification for adding pl/pgsql tests as part of the immediately\navailable tests is that pl/pgsql itself is always enabled, so having a\nno-effort way to test its performance benefits would be really helpful.\nWe also should have \"tps-b-like as SQL function\" to round up the \"test\nwhat's available in server\" set.\n\n---\nHannu\n\nOn Fri, Feb 2, 2024 at 9:44 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Aug 15, 2023 at 11:41 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> > Why's that? I'm not aware of any project policy that prohibits such\n> > enhancements to pgbench. It might take some effort to gather consensus\n> on\n> > a proposal like this, but IMHO that doesn't mean we shouldn't try. If\n> the\n> > prevailing wisdom is that we shouldn't add more built-in scripts because\n> > there is an existing way to provide custom ones, then it's not clear that\n> > we should proceed with $SUBJECT, anyway.\n>\n> I don't think there's a policy against adding more built-in scripts to\n> pgbench, but I'm skeptical of such efforts because I don't see how to\n> decide which ones are worthy of inclusion and which are not. Adding\n> everyone's favorite thing will be too cluttered, and adding nothing\n> forecloses nothing because people can always provide their own. If we\n> could establish that certain custom scripts are widely used across\n> many people, then those might be worth adding.\n>\n> I have a vague recollection of someone proposing something similar to\n> this in the past, possibly Jeff Davis. 
If there is in fact a paper\n> trail showing that the same thing has been proposed more than once by\n> unrelated people, that would be a point in favor of adding that\n> particular thing.\n>\n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n>\n", "msg_date": "Sat, 3 Feb 2024 08:54:16 +0100", "msg_from": "Hannu Krosing <hannuk@google.com>", "msg_from_op": true, "msg_subject": "Re: pgbench - adding pl/pgsql versions of tests" } ]
[ { "msg_contents": "Hi !\nI discovered an interesting behavior in PostgreSQL bulk update query using\n`from (values %s)` syntax.\n\nLet's see an example;\n```\nupdate persons p\nset age = t.age\nfrom (\n values\n ('uuid1', null),\n ('uuid2', null)\n) as t(id, age)\nwhere p.id = t.id;\n```\nThe `age` column is of type integer. The above query will give this\nerror: *\"age\"\nis of type integer but expression is of type text.* (PostgreSQL resolves\nthe type as a text).\n\nBut if we change the values to these;\n```\nvalues\n ('uuid1', 21),\n ('uuid2', null)\n```\nWe won't get any error because PostgreSQL will detect that at least one\ninteger value exists in the 2nd position, so let's resolve this guy to\n`integer`.\n\nThe issue here is that it's very unexpected behavior which might succeed in\nmost of the cases and fail in one case. This behavior can be seen in the\n`parser/parse_coerce.c` file.\n```\n /*\n * If all the inputs were UNKNOWN type --- ie, unknown-type literals\n---\n * then resolve as type TEXT. This situation comes up with constructs\n * like SELECT (CASE WHEN foo THEN 'bar' ELSE 'baz' END); SELECT 'foo'\n * UNION SELECT 'bar'; It might seem desirable to leave the construct's\n * output type as UNKNOWN, but that really doesn't work, because we'd\n * probably end up needing a runtime coercion from UNKNOWN to something\n * else, and we usually won't have it. 
We need to coerce the unknown\n * literals while they are still literals, so a decision has to be made\n * now.\n */\n if (ptype == UNKNOWNOID)\n ptype = TEXTOID;\n```\n\nSo here are the 2 options I suggest:\n*Option 1:* Cast to the relevant column type in that position (to `integer`\nin this case), whenever we have an unknown type.\n*Option 2:* Always give error if unknown type is not casted to desired type\n(`null::integer` will be necessary).\n", "msg_date": "Thu, 5 Jan 2023 11:10:50 +0500", "msg_from": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>", "msg_from_op": true, "msg_subject": "Resolve UNKNOWN type to relevant type instead of text type while bulk\n update using values" }, { "msg_contents": "On Wednesday, January 4, 2023, Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>\nwrote:\n>\n>\n> *Option 1:* Cast to the relevant column type in that position (to\n> `integer` in this case), whenever we have an unknown type.\n>\n\nThis happens when possible so any remaining cases are not possible. Or, at\nleast apparently not worth the effort it would take to make work.\n\n\n> *Option 2:* Always give error if unknown type is not casted to desired\n> type (`null::integer` will be necessary).\n>\n\nBreaking working queries for this is not acceptable.\n\nDavid J.\n\n", "msg_date": "Wed, 4 Jan 2023 22:23:24 -0800", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolve UNKNOWN type to relevant type instead of text type while\n bulk update using values" }, { "msg_contents": ">\n> Breaking working queries for this is not acceptable.\n\n\nGood point, let's exclude Option 2.\n\n\n> This happens when possible so any remaining cases are not possible. Or,\n> at least apparently not worth the effort it would take to make work.\n\n\nActually this doesn't happen when all of the values in that position are\nnull. Or maybe I don't understand what you mean.\nIf we don't consider the effort it would take to make it work, do you think\nOption 1 would be good to have? Because when I\nhave an integer column in that position, I wouldn't want the unknown (null)\nvalues I supply to be resolved to `text` type.\n\n\nOn Thu, Jan 5, 2023 at 11:23 AM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> On Wednesday, January 4, 2023, Sayyid Ali Sajjad Rizavi <\n> sasrizavi@gmail.com> wrote:\n>>\n>>\n>> *Option 1:* Cast to the relevant column type in that position (to\n>> `integer` in this case), whenever we have an unknown type.\n>>\n>\n> This happens when possible so any remaining cases are not possible. Or,\n> at least apparently not worth the effort it would take to make work.\n>\n>\n>> *Option 2:* Always give error if unknown type is not casted to desired\n>> type (`null::integer` will be necessary).\n>>\n>\n> Breaking working queries for this is not acceptable.\n>\n> David J.\n>\n>\n\nBreaking working queries for this is not acceptable.Good point, let's exclude Option 2. This happens when possible so any remaining cases are not possible.  Or, at least apparently not worth the effort it would take to make work. Actually this doesn't happen when all of the values in that position are null. Or maybe I don't understand what you mean.If we don't consider the effort it would take to make it work, do you think Option 1 would be good to have? 
Because when Ihave an integer column in that position, I wouldn't want the unknown (null) values I supply to be resolved to `text` type.On Thu, Jan 5, 2023 at 11:23 AM David G. Johnston <david.g.johnston@gmail.com> wrote:On Wednesday, January 4, 2023, Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com> wrote:Option 1: Cast to the relevant column type in that position (to `integer` in this case), whenever we have an unknown type.This happens when possible so any remaining cases are not possible.  Or, at least apparently not worth the effort it would take to make work. Option 2: Always give error if unknown type is not casted to desired type (`null::integer` will be necessary).Breaking working queries for this is not acceptable.David J.", "msg_date": "Thu, 5 Jan 2023 11:30:39 +0500", "msg_from": "Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Resolve UNKNOWN type to relevant type instead of text type while\n bulk update using values" }, { "msg_contents": "Please don’t top-post\n\nOn Wednesday, January 4, 2023, Sayyid Ali Sajjad Rizavi <sasrizavi@gmail.com>\nwrote:\n\n> Breaking working queries for this is not acceptable.\n>\n>\n> Good point, let's exclude Option 2.\n>\n>\n>> This happens when possible so any remaining cases are not possible. Or,\n>> at least apparently not worth the effort it would take to make work.\n>\n>\n> Actually this doesn't happen when all of the values in that position are\n> null. Or maybe I don't understand what you mean.\n> If we don't consider the effort it would take to make it work, do you\n> think Option 1 would be good to have? Because when I\n> have an integer column in that position, I wouldn't want the unknown\n> (null) values I supply to be resolved to `text` type.\n>\n>>\n>>\nThe VALUES subquery has to produce its tabular output without being aware\nof how the outer query is going to use it. 
The second column of your\nvalues subquery lacks type information so the system chooses a default -\ntext.\n\nDealing with types is one of the harder medium-hard problems in computer\nscience…encountering this problem in real life has never seen me motivated\nenough to gripe about it rather than just add an explicit cast and move\non. And I’ve been around long enough to know that the project is, and long\nhas been, aware of the dull pain points in this area.\n\nDavid J.\n\n", "msg_date": "Wed, 4 Jan 2023 23:12:34 -0800", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolve UNKNOWN type to relevant type instead of text type while\n bulk update using values" }, { "msg_contents": "On Thu, Jan 5, 2023 at 12:42 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n\n>\n> The VALUES subquery has to produce its tabular output without being aware of how the outer query is going to use it. The second column of your values subquery lacks type information so the system chooses a default - text.\n>\n> Dealing with types is one of the harder medium-hard problems in computer science…encountering this problem in real life has never seen me motivated enough to gripe about it rather than just add an explicit cast and move on. And I’ve been around long enough to know that the project is, and long has been, aware of the dull pain points in this area.\n>\n\nbeing here for quite a few years now I agree. It's tempting to trying\nto fix a problem in this area since it seems the fix is simple but it\nis hard to realize the wider impact that simple fix has. Still let me\ntry to propose something :)\n\nwe cast a quoted value to UNKNOWN type, but this is a special value\nnull which can be casted to any SQL data type. Probably we could add a\nANYNULLTYPE or some such generic null type which can be casted to any\ndata type. Then a null value without any type is labeled as\nANYNULLTYPE if specific type information is not available. This\nproblem wouldn't arise then. 
Of course that's a lot of code to fix\nseemingly rare problem so may not be worth it still.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 6 Jan 2023 18:51:44 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolve UNKNOWN type to relevant type instead of text type while\n bulk update using values" }, { "msg_contents": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> we cast a quoted value to UNKNOWN type, but this is a special value\n> null which can be casted to any SQL data type. Probably we could add a\n> ANYNULLTYPE or some such generic null type which can be casted to any\n> data type. Then a null value without any type is labeled as\n> ANYNULLTYPE if specific type information is not available.\n\nAnd ... how does that differ from the existing behavior of UNKNOWN?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Jan 2023 09:58:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Resolve UNKNOWN type to relevant type instead of text type while\n bulk update using values" }, { "msg_contents": "On Fri, Jan 6, 2023 at 8:28 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Bapat <ashutosh.bapat.oss@gmail.com> writes:\n> > we cast a quoted value to UNKNOWN type, but this is a special value\n> > null which can be casted to any SQL data type. Probably we could add a\n> > ANYNULLTYPE or some such generic null type which can be casted to any\n> > data type. Then a null value without any type is labeled as\n> > ANYNULLTYPE if specific type information is not available.\n>\n> And ... how does that differ from the existing behavior of UNKNOWN?\n>\n\n From the below comment\n /*\n * If all the inputs were UNKNOWN type --- ie, unknown-type literals ---\n * then resolve as type TEXT. 
This situation comes up with constructs\n * like SELECT (CASE WHEN foo THEN 'bar' ELSE 'baz' END); SELECT 'foo'\n * UNION SELECT 'bar'; It might seem desirable to leave the construct's\n * output type as UNKNOWN, but that really doesn't work, because we'd\n * probably end up needing a runtime coercion from UNKNOWN to something\n * else, and we usually won't have it. We need to coerce the unknown\n * literals while they are still literals, so a decision has to be made\n * now.\n */\n\nA constant null can be coerced to be null of any data type. So it\ndoesn't need to be coerced to text or anything for the reason\nmentioned in the comment. Using UNKNOWN type, we have problem of not\nbeing able to coerce it to another type. But ANYNULLVALUE can be\ncoerced to anything and thus can continue to be used till a point\nwhere we know the data type it needs to be coerced to.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 10 Jan 2023 18:13:51 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Resolve UNKNOWN type to relevant type instead of text type while\n bulk update using values" } ]
[ { "msg_contents": "Hi,\n\nWhile I was running some isolation tests for MERGE, I noticed one issue\nwhen MERGE tries to UPDATE rows that are concurrently updated by another\nsession.\n\nBelow is the test case for the same.\n\n\n==================== TEST CASE START =============================\n\n\n DROP TABLE target;\n\n DROP TABLE source;\n\n\n CREATE TABLE source (id int primary key, balance int);\n\n INSERT INTO source VALUES (1, 100);\n\n INSERT INTO source VALUES (2, 200);\n\n\n CREATE TABLE target (id int primary key, balance int);\n\n INSERT INTO target VALUES (1, 10);\n\n INSERT INTO target VALUES (2, 20);\n\n\nSession 1:\n\n\nbegin;\n\nUPDATE target SET balance = balance + 1;\n\nselect * from target;\n\n\nSession 2:\n\n\nbegin;\n\nMERGE INTO target t\n\n USING (SELECT * from source) s\n\n ON (s.id = t.id)\n\n WHEN MATCHED THEN\n\n UPDATE SET balance = t.balance + s.balance\n\n WHEN NOT MATCHED THEN\n\n INSERT (id, balance) VALUES (s.id, s.balance);\n\n\n< MERGE will wait because the rows are locked by Session 1 >\n\n\n\nSession 1:\n\n\ncommit;\n\n\nSession 2:\n\n\n SELECT * FROM target;\n\n commit;\n\n\n================================ TEST CASE END\n=================================\n\n\n\nThe MERGE fails with the error :\n\nERROR: duplicate key value violates unique constraint \"target_pkey\"\nDETAIL: Key (id)=(2) already exists.\n\n\n\nHowever, the above test case works fine when the target table has only one\nmatching row with the source table. When there are multiple matching rows\nand those rows are concurrently updated, only the first record gets updated\nin MERGE. 
The subsequent records fail to update and return from\nExecMergeMatched( ) from the below place and enter into the WHEN NOT\nMATCHED INSERT flow.\n\n\n(void) ExecGetJunkAttribute(epqslot,\n\n resultRelInfo->ri_RowIdAttNo,\n\n &isNull);\n\n if (isNull)\n\n return false;\n\n\n\n\nRegards,\nShruthi KC\nEnterpriseDB: http://www.enterprisedb.com", "msg_date": "Thu, 5 Jan 2023 16:06:19 +0530", "msg_from": "Shruthi Gowda <gowdashru@gmail.com>", "msg_from_op": true, "msg_subject": "Issue in MERGE with concurrent UPDATE and MERGE" } ]
[ { "msg_contents": "Hi.\n\nI changed the src/test/regress/sql/interval.sql, How can I generate the new\nsrc/test/regress/expected/interval.out file.\n\nHi.I changed the src/test/regress/sql/interval.sql, How can I generate the new src/test/regress/expected/interval.out file.", "msg_date": "Thu, 5 Jan 2023 16:12:12 +0530", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": true, "msg_subject": "How to generate the new expected out file." }, { "msg_contents": "On Thu, Jan 5, 2023 at 4:12 PM jian he <jian.universality@gmail.com> wrote:\n> Hi.\n>\n> I changed the src/test/regress/sql/interval.sql, How can I generate the new src/test/regress/expected/interval.out file.\n>\n\nYou can run the tests and copy the required changes from\nsrc/test/regress/output/interval.out to\nsrc/test/regress/expected/interval.out\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 5 Jan 2023 16:22:01 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to generate the new expected out file." }, { "msg_contents": "Hi,\n\nOn 2023-01-05 16:22:01 +0530, Amit Kapila wrote:\n> On Thu, Jan 5, 2023 at 4:12 PM jian he <jian.universality@gmail.com> wrote:\n> > Hi.\n> >\n> > I changed the src/test/regress/sql/interval.sql, How can I generate the new src/test/regress/expected/interval.out file.\n> >\n> \n> You can run the tests and copy the required changes from\n> src/test/regress/output/interval.out to\n> src/test/regress/expected/interval.out\n\nWonder if we should have a bit of content about that in doc/src/sgml/regress.sgml?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 15:09:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: How to generate the new expected out file." 
}, { "msg_contents": "On Thu, Jan 12, 2023 at 4:39 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-01-05 16:22:01 +0530, Amit Kapila wrote:\n> > On Thu, Jan 5, 2023 at 4:12 PM jian he <jian.universality@gmail.com> wrote:\n> > > Hi.\n> > >\n> > > I changed the src/test/regress/sql/interval.sql, How can I generate the new src/test/regress/expected/interval.out file.\n> > >\n> >\n> > You can run the tests and copy the required changes from\n> > src/test/regress/output/interval.out to\n> > src/test/regress/expected/interval.out\n>\n> Wonder if we should have a bit of content about that in doc/src/sgml/regress.sgml?\n>\n\nYeah, I think it could be useful, especially for new people. The other\noption could be to add some information in src/test/regress/README/\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 12 Jan 2023 08:41:58 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to generate the new expected out file." }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Thu, Jan 12, 2023 at 4:39 AM Andres Freund <andres@anarazel.de> wrote:\n>> On 2023-01-05 16:22:01 +0530, Amit Kapila wrote:\n>>> You can run the tests and copy the required changes from\n>>> src/test/regress/output/interval.out to\n>>> src/test/regress/expected/interval.out\n\n>> Wonder if we should have a bit of content about that in doc/src/sgml/regress.sgml?\n\n> Yeah, I think it could be useful, especially for new people. The other\n> option could be to add some information in src/test/regress/README/\n\nYeah, regress.sgml is more aimed at consumers of the regression tests\nthan developers. I could see expanding the README to include some\ndeveloper tips.\n\nIn this case I can think of another important tip, which is to be\nsure to update all of the variant expected-files when a test has\nmore than one of them. 
If you are not in a position to reproduce\nall of the variants directly (say, there's a Windows-specific\nexpected-file and you're not on Windows) it often works to take\nthe diff you have for one variant and apply it to the other(s).\nIf that's not quite right, well, the cfbot or buildfarm will\nhelp you out eventually.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 22:32:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to generate the new expected out file." } ]
[ { "msg_contents": "Hello, hackers.\n\nIt seems like PG 14 works incorrectly with vacuum_defer_cleanup_age\n(or just not cleared rows, not sure) and SELECT FOR UPDATE + UPDATE.\nI am not certain, but hot_standby_feedback probably able to cause the\nsame issues.\n\nSteps to reproduce:\n\n1) Start Postgres like this:\n\n docker run -it -p 5432:5432 --name pg -e\nPOSTGRES_PASSWORD=postgres -e LANG=C.UTF-8 -d postgres:14.6 -c\nvacuum_defer_cleanup_age=1000000\n\n2) Prepare scheme:\n\n CREATE TABLE something_is_wrong_here (id bigserial PRIMARY KEY,\nvalue numeric(15,4) DEFAULT 0 NOT NULL);\n INSERT INTO something_is_wrong_here (value) (SELECT 10000 from\ngenerate_series(0, 100));\n\n3) Prepare file for pgbench:\n\n BEGIN;\n\n SELECT omg.*\n FROM something_is_wrong_here AS omg\n ORDER BY random()\n LIMIT 1\n FOR UPDATE\n \\gset\n\n UPDATE something_is_wrong_here SET value = :value + 1 WHERE id = :id;\n\n END;\n\n4) Run pgbench:\n\n pgbench -c 50 -j 2 -n -f reproduce.bench 'port= 5432\nhost=localhost user=postgres dbname=postgres password=postgres' -T 100\n-P 1\n\nI was able to see such a set of errors (looks scary):\n\nERROR: MultiXactId 30818104 has not been created yet -- apparent wraparound\nERROR: could not open file \"base/13757/16385.1\" (target block\n39591744): previous segment is only 24 blocks\nERROR: attempted to lock invisible tuple\nERROR: could not access status of transaction 38195704\nDETAIL: Could not open file \"pg_subtrans/0246\": No such file or directory.\n\n\nBest regards,\nMichail.\n\n\n", "msg_date": "Thu, 5 Jan 2023 16:12:32 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hello, Andres.\n\nI apologize for the direct ping, but I think your snapshot scalability\nwork in PG14 could be related to the issue.\n\nThe TransactionIdRetreatedBy implementation looks correct... 
But with\ntxid_current=212195 I see errors like \"could not access status of\ntransaction 58643736\"...\nSo, maybe vacuum_defer_cleanup_age just highlights some special case\n(something with \"previous\" xids on the left side of zero?)....\n\nThanks,\nMichail.\n\n\n", "msg_date": "Fri, 6 Jan 2023 00:39:44 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Thu, 5 Jan 2023 at 14:12, Michail Nikolaev <michail.nikolaev@gmail.com>\nwrote:\n>\n> Hello, hackers.\n>\n> It seems like PG 14 works incorrectly with vacuum_defer_cleanup_age\n> (or just not cleared rows, not sure) and SELECT FOR UPDATE + UPDATE.\n> I am not certain, but hot_standby_feedback probably able to cause the\n> same issues.\n>\n> Steps to reproduce:\n>\n> [steps]\n>\n> I was able to see such a set of errors (looks scary):\n>\n> ERROR: MultiXactId 30818104 has not been created yet -- apparent\nwraparound\n> ERROR: could not open file \"base/13757/16385.1\" (target block\n> 39591744): previous segment is only 24 blocks\n\nThis looks quite suspicious too - it wants to access a block at 296GB of\ndata, where only 196kB exist.\n\n> ERROR: attempted to lock invisible tuple\n> ERROR: could not access status of transaction 38195704\n> DETAIL: Could not open file \"pg_subtrans/0246\": No such file or\ndirectory.\n\nI just saw two instances of this \"attempted to lock invisible tuple\" error\nfor the 15.1 image (run on Docker in Ubuntu in WSL) with your reproducer\nscript, so this does not seem to be specific to PG14 (.6).\n\nAnd, after some vacuum and restarting the process, I got the following:\n\nclient 29 script 0 aborted in command 2 query 0: ERROR: heap tid from\nindex tuple (111,1) points past end of heap page line pointer array at\noffset 262 of block 1 in index \"something_is_wrong_here_pkey\"\n\nThere is indeed something wrong there; the page can't be 
read by\npageinspect:\n\n$ select get_raw_page('public.something_is_wrong_here', 111)::bytea;\nERROR: invalid page in block 111 of relation base/5/16385\n\nI don't have access to the v14 data anymore (I tried a restart, which\ndropped the data :-( ), but will retain my v15 instance for some time to\nhelp any debugging.\n\nKind regards,\n\nMatthias van de Meent\n\n\n\nOn Thu, 5 Jan 2023 at 14:12, Michail Nikolaev <michail.nikolaev@gmail.com> wrote:\n>\n> Hello, hackers.\n>\n> It seems like PG 14 works incorrectly with vacuum_defer_cleanup_age\n> (or just not cleared rows, not sure) and SELECT FOR UPDATE + UPDATE.\n> I am not certain, but hot_standby_feedback probably able to cause the\n> same issues.\n>\n> Steps to reproduce:\n>\n> [steps]\n>\n> I was able to see such a set of errors (looks scary):\n>\n> ERROR:  MultiXactId 30818104 has not been created yet -- apparent wraparound\n> ERROR:  could not open file \"base/13757/16385.1\" (target block\n> 39591744): previous segment is only 24 blocks\n\nThis looks quite suspicious too - it wants to access a block at 296GB of data, where only 196kB exist.\n\n> ERROR:  attempted to lock invisible tuple\n> ERROR:  could not access status of transaction 38195704\n> DETAIL:  Could not open file \"pg_subtrans/0246\": No such file or directory.\n\nI just saw two instances of this \"attempted to lock invisible tuple\" error for the 15.1 image (run on Docker in Ubuntu in WSL) with your reproducer script, so this does not seem to be specific to PG14 (.6).\n\nAnd, after some vacuum and restarting the process, I got the following:\n\nclient 29 script 0 aborted in command 2 query 0: ERROR:  heap tid from index tuple (111,1) points past end of heap page line pointer array at offset 262 of block 1 in index \"something_is_wrong_here_pkey\"\n\nThere is indeed something wrong there; the page can't be read by pageinspect:\n\n$ select get_raw_page('public.something_is_wrong_here', 111)::bytea;\nERROR:  invalid page in block 111 of relation 
base/5/16385\nI don't have access to the v14 data anymore (I tried a restart, which dropped the data :-( ), but will retain my v15 instance for some time to help any debugging.\n\nKind regards,\n\nMatthias van de Meent", "msg_date": "Thu, 5 Jan 2023 22:49:23 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Thu, Jan 5, 2023 at 1:49 PM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> client 29 script 0 aborted in command 2 query 0: ERROR: heap tid from index tuple (111,1) points past end of heap page line pointer array at offset 262 of block 1 in index \"something_is_wrong_here_pkey\"\n\nThis particular error message is from the hardening added to Postgres\n15 in commit e7428a99. So it's not surprising that Michail didn't see\nthe same error on 14.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 5 Jan 2023 15:27:50 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Thu, Jan 5, 2023 at 3:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> This particular error message is from the hardening added to Postgres\n> 15 in commit e7428a99. 
So it's not surprising that Michail didn't see\n> the same error on 14.\n\nReproduced this on HEAD locally (no docker), without any difficulty.\n\nFWIW, I find that the \"Assert(ItemIdIsNormal(lp));\" at the top of\nheap_lock_tuple() is the first thing that fails on my assert-enabled\nbuild.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 5 Jan 2023 16:50:24 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hello!\n\nThanks for checking the issue!\n\n> FWIW, I find that the \"Assert(ItemIdIsNormal(lp));\" at the top of\n> heap_lock_tuple() is the first thing that fails on my assert-enabled\n> build.\n\nYes, the same for me:\n\n TRAP: failed Assert(\"ItemIdIsNormal(lp)\"), File: \"heapam.c\",\nLine: 4292, PID: 33416\n\n\n> Reproduced this on HEAD locally (no docker), without any difficulty.\n\nIt is a little bit harder without docker in my case, need to adjust\nconnections and number of threads:\n\n pgbench -c 90 -j 8 -n -f reproduce.bench 'port= 5432\nhost=localhost user=postgres dbname=postgres password=postgres' -T\n2000 -P 1\n\n\nBest regards,\nMichail.\n\n\n", "msg_date": "Fri, 6 Jan 2023 14:45:40 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hello.\n\nThe few things I have got so far:\n\n1) It is not required to order by random() to reproduce the issue - it\ncould be done using queries like:\n\n BEGIN;\n SELECT omg.*\n FROM something_is_wrong_here AS omg\n ORDER BY value -- change is here\n LIMIT 1\n FOR UPDATE\n \\gset\n\n\n UPDATE something_is_wrong_here SET value = :value + 1 WHERE id = :id;\n COMMIT;\n\nBut for some reason it is harder to reproduce without random in my\ncase (typically need to wait for about a minute with 100 connections).\n\n2) It is not an issue at table 
creation time. Issue is reproducible if\nvacuum_defer_cleanup_age set after table preparation.\n\n3) To reproduce the issue, vacuum_defer_cleanup_age should flip xid\nover zero (be >= txid_current()).\nAnd it is stable.... So, for example - unable to reproduce with 733\nvalue, but 734 gives error each time.\nJust a single additional txid_current() (after data is filled) fixes a\ncrash... It looks like the first SELECT FOR UPDATE + UPDATE silently\npoisons everything somehow.\nYou could use such PSQL script:\n\n DROP TABLE IF EXISTS something_is_wrong_here;\n\n CREATE TABLE something_is_wrong_here (id bigserial PRIMARY KEY,\nvalue numeric(15,4) DEFAULT 0 NOT NULL);\n\n INSERT INTO something_is_wrong_here (value) (SELECT 10000 from\ngenerate_series(0, 100));\n\n SELECT txid_current() \\gset\n\n SELECT :txid_current + 1 as txid \\gset\n\n ALTER SYSTEM SET vacuum_defer_cleanup_age to :txid;SELECT\npg_reload_conf();\n\nI have attached some scripts if someone goes to reproduce.\n\nBest regards,\nMichail.", "msg_date": "Sat, 7 Jan 2023 21:06:06 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nThomas, CCing you because of the 64bit xid representation aspect.\n\n\nOn 2023-01-06 00:39:44 +0300, Michail Nikolaev wrote:\n> I apologize for the direct ping, but I think your snapshot scalability\n> work in PG14 could be related to the issue.\n\nGood call!\n\n\n> The TransactionIdRetreatedBy implementation looks correct... But with\n> txid_current=212195 I see errors like \"could not access status of\n> transaction 58643736\"...\n> So, maybe vacuum_defer_cleanup_age just highlights some special case\n> (something with \"previous\" xids on the left side of zero?)....\n\nI think the bug is close to TransactionIdRetreatedBy(). Arguably in\nFullXidRelativeTo(). 
Or a more fundamental data representation issue with\n64bit xids.\n\nTo explain, here's a trace to the bottom of GetSnapshotData() leading to the\nproblem:\n\nIn the case I'm looking at here we end up with 720:\n\t\toldestfxid = FullXidRelativeTo(latest_completed, oldestxid);\nand xmin is 255271, both correct.\n\nThen in TransactionIdRetreatedBy:\n\t\t/* apply vacuum_defer_cleanup_age */\n\t\tdef_vis_xid_data =\n\t\t\tTransactionIdRetreatedBy(xmin, vacuum_defer_cleanup_age);\n\nthings start to be iffy. Because we retreat by vacuum_defer_cleanup_age, which\nwas set to txid_current() in scheme.sql, and the xmin above is that xid,\nTransactionIdRetreatedBy() first ends up with 0. It then backtracks further to\nthe highest 32bit xid (i.e. 4294967295). So far so good.\n\nWe could obviously end up with values further in the past as well, if\nvacuum_defer_cleanup_age were larger.\n\n\nThings start to seriously go off the rails when we convert that 32bit xid to\n64 bit with:\n\t\tdef_vis_fxid = FullXidRelativeTo(latest_completed, def_vis_xid);\nwhich returns {value = 18446744073709551615}, which is 0-1 in 64bit.\n\n\nHowever, as 64bit xids are not supposed to wrap around, we're in trouble -\nit's an xid *very* far into the future. Allowing things to be pruned that\nshouldn't, because everything is below that.\n\n\nI don't quite know how to best fix this. 
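To make the arithmetic concrete, here is a small Python sketch of the two steps traced above (a simplified, hypothetical model of TransactionIdRetreatedBy() and FullXidRelativeTo(), not the real C code; the constants are taken from the trace):

```python
MASK32 = 0xFFFFFFFF
FIRST_NORMAL_XID = 3  # FirstNormalTransactionId

def retreated_by(xid, amount):
    # TransactionIdRetreatedBy: wrap-around 32-bit subtraction, then step
    # back over the special xids 0..2 (so a result of 0 becomes 4294967295)
    xid = (xid - amount) & MASK32
    while xid < FIRST_NORMAL_XID:
        xid = (xid - 1) & MASK32
    return xid

def full_xid_relative_to(rel_fxid, xid):
    # FullXidRelativeTo: widen a 32-bit xid by adding the *signed* 32-bit
    # distance from the reference fxid, modulo 2^64
    diff = (xid - (rel_fxid & MASK32)) & MASK32
    if diff >= 0x80000000:            # reinterpret as signed int32
        diff -= 1 << 32
    return (rel_fxid + diff) & ((1 << 64) - 1)

latest_completed = 255271             # epoch 0, xid 255271 (from the trace)
xmin = 255271
vacuum_defer_cleanup_age = 255271     # the GUC was set to txid_current()

def_vis_xid = retreated_by(xmin, vacuum_defer_cleanup_age)
print(def_vis_xid)                    # 4294967295 - fine as a 32-bit xid

def_vis_fxid = full_xid_relative_to(latest_completed, def_vis_xid)
print(def_vis_fxid)                   # 18446744073709551615 - "0-1" as a
                                      # 64-bit value, i.e. far in the future
```

The widened value compares as newer than every possible horizon, which is what lets tuples be pruned that should have been kept.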
4294967295 is the correct result for\nTransactionIdRetreatedBy() in this case, and it'd not cause problems for\nFullXidRelativeTo() if we actually had wrapped around the xid counter before\n(since then we'd just represent it as a fxid \"of the proper epoch\").\n\nBasically, in contrast to 32bit xids, 64bit xids cannot represent xids from\nbefore the start of the universe, whereas with the modulo arithmetic of 32bit\nthat's not a real problem.\n\n\nIt's probably not too hard to fix specifically in this one place - we could\njust clamp vacuum_defer_cleanup_age to be smaller than latest_completed, but\nit strikes me as a somewhat larger issue for the 64bit xid infrastructure. I\nsuspect this might not be the only place running into problems with such\n\"before the universe\" xids.\n\n\nFor a bit I was thinking that we should introduce the notion that a\nFullTransactionId can be from the past. Specifically that when the upper 32bit\nare all set, we treat the lower 32bit as a value from before xid 0 using the\nnormal 32bit xid arithmetic. But it sucks to add overhead for that\neverywhere.\n\nIt might be a bit more palatable to designate one individual value,\ne.g. 2^32-1<<32, as a \"before xid 0\" indicator - it doesn't matter how far\nbefore the start of the universe an xid points to...\n\n\nFullXidRelativeTo() did assert that the input 32bit xid is in a reasonable\nrange, but unfortunately didn't do a similar check for the 64bit case.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 7 Jan 2023 16:29:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-01-07 21:06:06 +0300, Michail Nikolaev wrote:\n> 2) It is not an issue at table creation time. 
Issue is reproducible if\n> vacuum_defer_cleanup_age set after table preparation.\n> \n> 3) To reproduce the issue, vacuum_defer_cleanup_age should flip xid\n> over zero (be >= txid_current()).\n> And it is stable.... So, for example - unable to reproduce with 733\n> value, but 734 gives error each time.\n> Just a single additional txid_current() (after data is filled) fixes a\n> crash... It looks like the first SELECT FOR UPDATE + UPDATE silently\n> poisons everything somehow.\n> You could use such PSQL script:\n\nFWIW, the concrete value for vacuum_defer_cleanup_age is not crucial to\nencounter the problem. It needs to be a value that, when compared to the xid\nthat did the \"INSERT INTO something_is_wrong_here\", results in value <= 0.\n\nSetting vacuum_defer_cleanup_age to a value much larger than the xid allows\nthe crash to be encountered repeatedly.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 7 Jan 2023 16:34:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-01-07 16:29:23 -0800, Andres Freund wrote:\n> It's probably not too hard to fix specifically in this one place - we could\n> just clamp vacuum_defer_cleanup_age to be smaller than latest_completed, but\n> it strikes me as as a somewhat larger issue for the 64it xid infrastructure. I\n> suspect this might not be the only place running into problems with such\n> \"before the universe\" xids.\n\nI haven't found other problematic places in HEAD, but did end up finding a less\nserious version of this bug in < 14: GetFullRecentGlobalXmin(). I did verify\nthat with vacuum_defer_cleanup_age set GetFullRecentGlobalXmin() returns\nvalues that look likely to cause problems. It's \"just\" used in gist, luckily.\n\nIt's hard to find places that do this kind of arithmetic, we traditionally\nhaven't had a helper for it. 
So it's open-coded in various ways.\n\n\txidStopLimit = xidWrapLimit - 3000000;\n\tif (xidStopLimit < FirstNormalTransactionId)\n\t\txidStopLimit -= FirstNormalTransactionId;\n\nand oddly:\n\txidVacLimit = oldest_datfrozenxid + autovacuum_freeze_max_age;\n\tif (xidVacLimit < FirstNormalTransactionId)\n\t\txidVacLimit += FirstNormalTransactionId;\n\nor (in < 14):\n\n\tRecentGlobalXmin = globalxmin - vacuum_defer_cleanup_age;\n\tif (!TransactionIdIsNormal(RecentGlobalXmin))\n\t\tRecentGlobalXmin = FirstNormalTransactionId;\n\n\nThe currently existing places I found, other than the ones in procarray.c,\nluckily don't seem to convert the xids to 64bit xids.\n\n\n> For a bit I was thinking that we should introduce the notion that a\n> FullTransactionId can be from the past. Specifically that when the upper 32bit\n> are all set, we treat the lower 32bit as a value from before xid 0 using the\n> normal 32bit xid arithmetic. But it sucks to add overhead for that\n> everywhere.\n> \n> It might be a bit more palatable to designate one individual value,\n> e.g. 2^32-1<<32, as a \"before xid 0\" indicator - it doesn't matter how far\n> before the start of the universe an xid point to...\n\nOn IM Thomas suggested we could reserve the 2^32-1 epoch for invalid values. I\nhacked up a patch that converts various fxid functions to inline functions\nwith such assertions, and it indeed quickly catches the problem this thread\nreported, close to the source of the use.\n\nOne issue with that is that it'd reduce what can be input for the xid8\ntype. 
But it's hard to believe that'd be a real issue?\n\n\nIt's quite unfortunate that we don't have a test for vacuum_defer_cleanup_age\nyet :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 7 Jan 2023 19:09:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Sun, 8 Jan 2023 at 04:09, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-01-07 16:29:23 -0800, Andres Freund wrote:\n> > It's probably not too hard to fix specifically in this one place - we could\n> > just clamp vacuum_defer_cleanup_age to be smaller than latest_completed, but\n> > it strikes me as as a somewhat larger issue for the 64it xid infrastructure. I\n> > suspect this might not be the only place running into problems with such\n> > \"before the universe\" xids.\n>\n> I haven't found other problematic places in HEAD, but did end up find a less\n> serious version of this bug in < 14: GetFullRecentGlobalXmin(). I did verify\n> that with vacuum_defer_cleanup_age set GetFullRecentGlobalXmin() returns\n> values that look likely to cause problems. Its \"just\" used in gist luckily.\n>\n> It's hard to find places that do this kind of arithmetic, we traditionally\n> haven't had a helper for it. So it's open-coded in various ways.\n> [...]\n>\n> The currently existing places I found, other than the ones in procarray.c,\n> luckily don't seem to convert the xids to 64bit xids.\n\nThat's good to know.\n\n> > For a bit I was thinking that we should introduce the notion that a\n> > FullTransactionId can be from the past. Specifically that when the upper 32bit\n> > are all set, we treat the lower 32bit as a value from before xid 0 using the\n> > normal 32bit xid arithmetic. But it sucks to add overhead for that\n> > everywhere.\n> >\n> > It might be a bit more palatable to designate one individual value,\n> > e.g. 
2^32-1<<32, as a \"before xid 0\" indicator - it doesn't matter how far\n> > before the start of the universe an xid point to...\n>\n> On IM Thomas suggested we could reserve the 2^32-1 epoch for invalid values. I\n> hacked up a patch that converts various fxid functions to inline functions\n> with such assertions, and it indeed quickly catches the problem this thread\n> reported, close to the source of the use.\n\nWouldn't it be enough to only fix the constructions in\nFullXidRelativeTo() and widen_snapshot_xid() (as attached, $topic does\nnot occur with the patch), and (optionally) bump the first XID\navailable for any cluster to (FirstNormalXid + 1) to retain the 'older\nthan any running transaction' property?\n\nThe change only fixes the issue for FullTransactionId, which IMO is OK\n- I don't think we need to keep xid->xid8->xid symmetric in cases of\nxid8 wraparound.\n\n> One issue with that is is that it'd reduce what can be input for the xid8\n> type. But it's hard to believe that'd be a real issue?\n\nYes, it's unlikely anyone would ever hit that with our current WAL\nformat - we use 24 bytes /xid just to log it's use, so we'd use at\nmost epoch 0x1000_0000 in unrealistic scenarios. 
In addition;\ntechnically, we already have (3*2^32 - 3) \"invalid\" xid8 values that\ncan never be produced in FullXidRelativeTo - those few extra invalid\nvalues don't matter much to me except \"even more special casing\".\n\nKind regards,\n\nMatthias van de Meent.", "msg_date": "Mon, 9 Jan 2023 17:50:10 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nRobert, Mark, CCing you because of the amcheck aspect (see below).\n\nOn 2023-01-09 17:50:10 +0100, Matthias van de Meent wrote:\n> On Sun, 8 Jan 2023 at 04:09, Andres Freund <andres@anarazel.de> wrote:\n> > > For a bit I was thinking that we should introduce the notion that a\n> > > FullTransactionId can be from the past. Specifically that when the upper 32bit\n> > > are all set, we treat the lower 32bit as a value from before xid 0 using the\n> > > normal 32bit xid arithmetic. But it sucks to add overhead for that\n> > > everywhere.\n> > >\n> > > It might be a bit more palatable to designate one individual value,\n> > > e.g. 2^32-1<<32, as a \"before xid 0\" indicator - it doesn't matter how far\n> > > before the start of the universe an xid point to...\n> >\n> > On IM Thomas suggested we could reserve the 2^32-1 epoch for invalid values. 
I\n> > hacked up a patch that converts various fxid functions to inline functions\n> > with such assertions, and it indeed quickly catches the problem this thread\n> > reported, close to the source of the use.\n> \n> Wouldn't it be enough to only fix the constructions in\n> FullXidRelativeTo() and widen_snapshot_xid() (as attached, $topic does\n> not occur with the patch), and (optionally) bump the first XID\n> available for any cluster to (FirstNormalXid + 1) to retain the 'older\n> than any running transaction' property?\n\nIt's not too hard to fix in individual places, but I suspect that we'll\nintroduce the bug in future places without some more fundamental protection.\n\nLocally I fixed it by clamping vacuum_defer_cleanup_age to a reasonable value\nin ComputeXidHorizons() and GetSnapshotData().\n\nFixing it in FullXidRelativeTo() doesn't seem quite right, while it's ok to\njust return FirstNormalTransactionId in the case of vacuum_defer_cleanup_age,\nit doesn't see necessarily correct for other cases.\n\n\n> The change only fixes the issue for FullTransactionId, which IMO is OK\n> - I don't think we need to keep xid->xid8->xid symmetric in cases of\n> xid8 wraparound.\n\nI think we should keep that symmetric, it just gets too confusing / easy to\nmiss bugs otherwise.\n\n\n> > One issue with that is is that it'd reduce what can be input for the xid8\n> > type. But it's hard to believe that'd be a real issue?\n> \n> Yes, it's unlikely anyone would ever hit that with our current WAL\n> format - we use 24 bytes /xid just to log it's use, so we'd use at\n> most epoch 0x1000_0000 in unrealistic scenarios. In addition;\n> technically, we already have (3*2^32 - 3) \"invalid\" xid8 values that\n> can never be produced in FullXidRelativeTo - those few extra invalid\n> values don't matter much to me except \"even more special casing\".\n\nYep. 
The attached 0002 is a first implementation of this.\n\nThe new assertions found at least one bug in amcheck, and one further example\nof the problem of representing past 32 xids in 64bit:\n\n1) Because ctx->next_xid is set after the XidFromFullTransactionId() call in\nupdate_cached_xid_range(), we end up using the xid 0 (or an outdated value in\nsubsequent calls) to determine whether epoch needs to be reduced.\n\n2) One test generates includes an xid from the future (4026531839). Which\ncauses epoch to wrap around (via the epoch--) in\nFullTransactionIdFromXidAndCtx(). I've hackily fixed that by just representing\nit as an xid from the future instead. But not sure that's a good answer.\n\n\nA different approach would be to represent fxids as *signed* 64bit\nintegers. That'd of course loose more range, but could represent everything\naccurately, and would have a compatible on-disk representation on two's\ncomplement platforms (all our platforms). I think the only place that'd need\nspecial treatment is U64FromFullTransactionId() / its callers. I think this\nmight be the most robust approach.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 9 Jan 2023 11:34:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Tue, Jan 10, 2023 at 8:34 AM Andres Freund <andres@anarazel.de> wrote:\n> A different approach would be to represent fxids as *signed* 64bit\n> integers. That'd of course loose more range, but could represent everything\n> accurately, and would have a compatible on-disk representation on two's\n> complement platforms (all our platforms). I think the only place that'd need\n> special treatment is U64FromFullTransactionId() / its callers. 
I think this\n> might be the most robust approach.\n\nIt does sound like an interesting approach; it means you are free to\nretreat arbitrarily without ever thinking about it, and by the\narguments given (LSN space required to consume fxids) it's still\n'enough'. Essentially all these bugs are places where the author\nalready believed it worked that way.\n\n(Two's complement is required in the C23 draft.)\n\n\n", "msg_date": "Tue, 10 Jan 2023 09:27:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "\n\n> On Jan 9, 2023, at 11:34 AM, Andres Freund <andres@anarazel.de> wrote:\n> \n> 1) Because ctx->next_xid is set after the XidFromFullTransactionId() call in\n> update_cached_xid_range(), we end up using the xid 0 (or an outdated value in\n> subsequent calls) to determine whether epoch needs to be reduced.\n\nCan you say a bit more about your analysis here, preferably with pointers to the lines of code you are analyzing? Does the problem exist in amcheck as currently committed, or are you thinking about a problem that arises only after applying your patch? 
I'm a bit fuzzy on where xid 0 gets used.\n\nThanks\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 9 Jan 2023 13:55:02 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-01-09 13:55:02 -0800, Mark Dilger wrote:\n> > On Jan 9, 2023, at 11:34 AM, Andres Freund <andres@anarazel.de> wrote:\n> > \n> > 1) Because ctx->next_xid is set after the XidFromFullTransactionId() call in\n> > update_cached_xid_range(), we end up using the xid 0 (or an outdated value in\n> > subsequent calls) to determine whether epoch needs to be reduced.\n> \n> Can you say a bit more about your analysis here, preferably with pointers to\n> the lines of code you are analyzing? Does the problem exist in amcheck as\n> currently committed, or are you thinking about a problem that arises only\n> after applying your patch? I'm a bit fuzzy on where xid 0 gets used.\n\nThe problems exist in the code as currently committed. 
I'm not sure what\nexactly the consequences are, the result is that oldest_fxid will be, at least\ntemporarily, bogus.\n\nConsider the first call to update_cached_xid_range():\n\n/*\n * Update our cached range of valid transaction IDs.\n */\nstatic void\nupdate_cached_xid_range(HeapCheckContext *ctx)\n{\n\t/* Make cached copies */\n\tLWLockAcquire(XidGenLock, LW_SHARED);\n\tctx->next_fxid = ShmemVariableCache->nextXid;\n\tctx->oldest_xid = ShmemVariableCache->oldestXid;\n\tLWLockRelease(XidGenLock);\n\n\t/* And compute alternate versions of the same */\n\tctx->oldest_fxid = FullTransactionIdFromXidAndCtx(ctx->oldest_xid, ctx);\n\tctx->next_xid = XidFromFullTransactionId(ctx->next_fxid);\n}\n\nThe problem is that the call to FullTransactionIdFromXidAndCtx() happens\nbefore ctx->next_xid is assigned, even though FullTransactionIdFromXidAndCtx()\nuses ctx->next_xid.\n\nstatic FullTransactionId\nFullTransactionIdFromXidAndCtx(TransactionId xid, const HeapCheckContext *ctx)\n{\n\tuint32\t\tepoch;\n\n\tif (!TransactionIdIsNormal(xid))\n\t\treturn FullTransactionIdFromEpochAndXid(0, xid);\n\tepoch = EpochFromFullTransactionId(ctx->next_fxid);\n\tif (xid > ctx->next_xid)\n\t\tepoch--;\n\treturn FullTransactionIdFromEpochAndXid(epoch, xid);\n}\n\nBecause ctx->next_xid is 0, due to not having been set yet, \"xid > ctx->next_xid\"\nwill always be true, leading to epoch being reduced by one.\n\nIn the common case of there never having been an xid wraparound, we'll thus\nunderflow epoch, generating an xid far into the future.\n\n\nThe tests encounter the issue today. 
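To see the effect of the uninitialized field, here is a toy Python model of the epoch computation above (a hypothetical sketch, not the real amcheck code; epoch is a uint32 in C, so the decrement wraps instead of going negative):

```python
FIRST_NORMAL_XID = 3  # FirstNormalTransactionId

def fxid_from_xid_and_ctx(xid, next_fxid, next_xid):
    # Mirrors FullTransactionIdFromXidAndCtx(): take the epoch from
    # next_fxid, and assume xids above next_xid belong to the prior epoch
    if xid < FIRST_NORMAL_XID:
        return xid                          # special xids keep epoch 0
    epoch = next_fxid >> 32
    if xid > next_xid:
        epoch = (epoch - 1) & 0xFFFFFFFF    # wraps to 0xFFFFFFFF if epoch == 0
    return (epoch << 32) | xid

next_fxid = (0 << 32) | 255271   # ShmemVariableCache->nextXid: epoch 0

# With ctx->next_xid still 0 (it is assigned only after the first conversion),
# every normal xid compares as "in the future" and the epoch underflows:
print(hex(fxid_from_xid_and_ctx(1000, next_fxid, 0)))

# With next_xid assigned before the conversion, the same xid stays in epoch 0:
print(fxid_from_xid_and_ctx(1000, next_fxid, 255271))      # 1000
```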
If you add\n\tAssert(TransactionIdIsValid(ctx->next_xid));\n\tAssert(FullTransactionIdIsValid(ctx->next_fxid));\nearly in FullTransactionIdFromXidAndCtx() it'll be hit in the\namcheck/pg_amcheck tests.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 9 Jan 2023 14:07:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "\n\n> On Jan 9, 2023, at 2:07 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> The tests encounter the issue today. If you add\n> Assert(TransactionIdIsValid(ctx->next_xid));\n> Assert(FullTransactionIdIsValid(ctx->next_fxid));\n> early in FullTransactionIdFromXidAndCtx() it'll be hit in the\n> amcheck/pg_amcheck tests.\n\nOk, I can confirm that. I find the assertion\n\n Assert(epoch != (uint32)-1);\n\na bit simpler to reason about, but either way, I agree it is a bug. Thanks for finding this.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Mon, 9 Jan 2023 19:24:33 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Mon, 9 Jan 2023 at 20:34, Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-09 17:50:10 +0100, Matthias van de Meent wrote:\n> > Wouldn't it be enough to only fix the constructions in\n> > FullXidRelativeTo() and widen_snapshot_xid() (as attached, $topic does\n> > not occur with the patch), and (optionally) bump the first XID\n> > available for any cluster to (FirstNormalXid + 1) to retain the 'older\n> > than any running transaction' property?\n>\n> It's not too hard to fix in individual places, but I suspect that we'll\n> introduce the bug in future places without some more fundamental protection.\n>\n> Locally I fixed it by clamping vacuum_defer_cleanup_age to a 
reasonable value\n> in ComputeXidHorizons() and GetSnapshotData().\n\nI don't think that clamping the value with oldestXid (as seen in patch\n0001, in GetSnapshotData) is right.\nIt would clamp the value relative to the oldest frozen xid of all\ndatabases, which can be millions of transactions behind oldestXmin,\nand thus severely skew the amount of transaction's changes you keep on\ndisk (that is, until oldestXid moves past 1000_000).\nA similar case can be made for the changes in ComputeXidHorizons - for\nthe described behaviour of vacuum_defer_cleanup_age we would need to\nclamp the used offset separately for each of the fields in the horizon\nresult to retain all transaction data for the first 1 million\ntransactions, and the ones that may still see these transactions.\n\n> Fixing it in FullXidRelativeTo() doesn't seem quite right, while it's ok to\n> just return FirstNormalTransactionId in the case of vacuum_defer_cleanup_age,\n> it doesn't see necessarily correct for other cases.\n\nUnderstood.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 10 Jan 2023 15:03:42 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-01-10 15:03:42 +0100, Matthias van de Meent wrote:\n> On Mon, 9 Jan 2023 at 20:34, Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-01-09 17:50:10 +0100, Matthias van de Meent wrote:\n> > > Wouldn't it be enough to only fix the constructions in\n> > > FullXidRelativeTo() and widen_snapshot_xid() (as attached, $topic does\n> > > not occur with the patch), and (optionally) bump the first XID\n> > > available for any cluster to (FirstNormalXid + 1) to retain the 'older\n> > > than any running transaction' property?\n> >\n> > It's not too hard to fix in individual places, but I suspect that we'll\n> > introduce the bug in future places without some more 
fundamental protection.\n> >\n> > Locally I fixed it by clamping vacuum_defer_cleanup_age to a reasonable value\n> > in ComputeXidHorizons() and GetSnapshotData().\n> \n> I don't think that clamping the value with oldestXid (as seen in patch\n> 0001, in GetSnapshotData) is right.\n\nI agree that using oldestXid to clamp is problematic.\n\n\n> It would clamp the value relative to the oldest frozen xid of all\n> databases, which can be millions of transactions behind oldestXmin,\n> and thus severely skew the amount of transaction's changes you keep on\n> disk (that is, until oldestXid moves past 1000_000).\n\nWhat precisely do you mean with \"skew\" here? Do you just mean that it'd take a\nlong time until vacuum_defer_cleanup_age takes effect? Somehow it sounds like\nyou might mean more than that?\n\n\nI'm tempted to go with reinterpreting 64bit xids as signed. Except that it\nseems like a mighty invasive change to backpatch.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Jan 2023 11:14:49 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Tue, 10 Jan 2023 at 20:14, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-01-10 15:03:42 +0100, Matthias van de Meent wrote:\n> > On Mon, 9 Jan 2023 at 20:34, Andres Freund <andres@anarazel.de> wrote:\n> > > It's not too hard to fix in individual places, but I suspect that we'll\n> > > introduce the bug in future places without some more fundamental protection.\n> > >\n> > > Locally I fixed it by clamping vacuum_defer_cleanup_age to a reasonable value\n> > > in ComputeXidHorizons() and GetSnapshotData().\n> >\n> > I don't think that clamping the value with oldestXid (as seen in patch\n> > 0001, in GetSnapshotData) is right.\n>\n> I agree that using oldestXid to clamp is problematic.\n>\n>\n> > It would clamp the value relative to the oldest frozen xid of 
all\n> > databases, which can be millions of transactions behind oldestXmin,\n> > and thus severely skew the amount of transaction's changes you keep on\n> > disk (that is, until oldestXid moves past 1000_000).\n>\n> What precisely do you mean with \"skew\" here? Do you just mean that it'd take a\n> long time until vacuum_defer_cleanup_age takes effect? Somehow it sounds like\n> you might mean more than that?\n\nh->oldest_considered_running can be extremely old due to the global\nnature of the value and the potential existence of a snapshot in\nanother database that started in parallel to a very old running\ntransaction.\n\nExample: With vacuum_defer_cleanup_age set to 1000000, it is possible\nthat a snapshot in another database (thus another backend) would\nresult in a local intermediate status result of h->o_c_r = 20,\nh->s_o_n = 20, h->d_o_n = 10030. The clamped offset would then be 20\n(clamped using h->o_c_r), which updates h->data_oldest_nonremovable to\n10010. The obvious result is that all but the last 20 transactions\nfrom this database's data files are available for cleanup, which\ncontradicts with the intention of the vacuum_defer_cleanup_age GUC.\n\n> I'm tempted to go with reinterpreting 64bit xids as signed. Except that it\n> seems like a mighty invasive change to backpatch.\n\nI'm not sure either. Protecting against underflow by halving the\neffective valid value space is quite the intervention, but if it is\nnecessary to make this work in a performant manner, it would be worth\nit. 
Maybe someone else with more experience can provide their opinion\nhere.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 10 Jan 2023 21:32:54 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hello.\n\nI have registered it as patch in the commit fest:\nhttps://commitfest.postgresql.org/42/4138/\n\nBest regards,\nMichail.\n\n\n", "msg_date": "Sun, 22 Jan 2023 13:49:43 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-01-10 21:32:54 +0100, Matthias van de Meent wrote:\n> On Tue, 10 Jan 2023 at 20:14, Andres Freund <andres@anarazel.de> wrote:\n> > On 2023-01-10 15:03:42 +0100, Matthias van de Meent wrote:\n> > What precisely do you mean with \"skew\" here? Do you just mean that it'd take a\n> > long time until vacuum_defer_cleanup_age takes effect? Somehow it sounds like\n> > you might mean more than that?\n> \n> h->oldest_considered_running can be extremely old due to the global\n> nature of the value and the potential existence of a snapshot in\n> another database that started in parallel to a very old running\n> transaction.\n\nHere's a version that, I think, does not have that issue.\n\nIn an earlier, not posted, version I had an vacuum_defer_cleanup_age specific\nhelper function for this, but it seems likely we'll need it in other places\ntoo. So I named it TransactionIdRetreatSafely(). I made it accept the xid by\npointer, as the line lengths / repetition otherwise end up making it hard to\nread the code. For now I have TransactionIdRetreatSafely() be private to\nprocarray.c, but I expect we'll have to change that eventually.\n\nNot sure I like TransactionIdRetreatSafely() as a name. 
Maybe\nTransactionIdRetreatClamped() is better?\n\n\nI've been working on a test for vacuum_defer_cleanup_age. It does catch the\ncorruption at hand, but not much more. It's quite painful to write, right\nnow. Some of the reasons:\nhttps://postgr.es/m/20230130194350.zj5v467x4jgqt3d6%40awork3.anarazel.de\n\n\n\n> > I'm tempted to go with reinterpreting 64bit xids as signed. Except that it\n> > seems like a mighty invasive change to backpatch.\n> \n> I'm not sure either. Protecting against underflow by halving the\n> effective valid value space is quite the intervention, but if it is\n> necessary to make this work in a performant manner, it would be worth\n> it. Maybe someone else with more experience can provide their opinion\n> here.\n\nThe attached assertions just removes 1/2**32'ths of the space, by reserving\nthe xid range with the upper 32bit set as something that shouldn't be\nreachable.\n\nStill requires us to change the input routines to reject that range, but I\nthink that's a worthy tradeoff. I didn't find the existing limits for the\ntype to be documented anywhere.\n\nObviously something like that could only go into HEAD.\n\nGreetings,\n\nAndres Freund", "msg_date": "Mon, 30 Jan 2023 12:19:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Mon, 30 Jan 2023 at 21:19, Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-10 21:32:54 +0100, Matthias van de Meent wrote:\n> > On Tue, 10 Jan 2023 at 20:14, Andres Freund <andres@anarazel.de> wrote:\n> > > On 2023-01-10 15:03:42 +0100, Matthias van de Meent wrote:\n> > > What precisely do you mean with \"skew\" here? Do you just mean that it'd take a\n> > > long time until vacuum_defer_cleanup_age takes effect? 
Somehow it sounds like\n> > > you might mean more than that?\n> >\n> > h->oldest_considered_running can be extremely old due to the global\n> > nature of the value and the potential existence of a snapshot in\n> > another database that started in parallel to a very old running\n> > transaction.\n>\n> Here's a version that, I think, does not have that issue.\n\nThanks!\n\n> In an earlier, not posted, version I had an vacuum_defer_cleanup_age specific\n> helper function for this, but it seems likely we'll need it in other places\n> too. So I named it TransactionIdRetreatSafely(). I made it accept the xid by\n> pointer, as the line lengths / repetition otherwise end up making it hard to\n> read the code. For now I have TransactionIdRetreatSafely() be private to\n> procarray.c, but I expect we'll have to change that eventually.\n\nIf TransactionIdRetreatSafely will be exposed outside procarray.c,\nthen I think the xid pointer should be replaced with normal\narguments/returns; both for parity with TransactionIdRetreatedBy and\nto remove this memory store dependency in this hot code path.\n\n> Not sure I like TransactionIdRetreatSafely() as a name. Maybe\n> TransactionIdRetreatClamped() is better?\n\nI think the 'safely' version is fine.\n\n> I've been working on a test for vacuum_defer_cleanup_age. It does catch the\n> corruption at hand, but not much more. It's quite painful to write, right\n> now. Some of the reasons:\n> https://postgr.es/m/20230130194350.zj5v467x4jgqt3d6%40awork3.anarazel.de\n>\n>\n>\n> > > I'm tempted to go with reinterpreting 64bit xids as signed. Except that it\n> > > seems like a mighty invasive change to backpatch.\n> >\n> > I'm not sure either. Protecting against underflow by halving the\n> > effective valid value space is quite the intervention, but if it is\n> > necessary to make this work in a performant manner, it would be worth\n> > it. 
Maybe someone else with more experience can provide their opinion\n> > here.\n>\n> The attached assertions just removes 1/2**32'ths of the space, by reserving\n> the xid range with the upper 32bit set as something that shouldn't be\n> reachable.\n\nI think that is acceptible.\n\n> Still requires us to change the input routines to reject that range, but I\n> think that's a worthy tradeoff.\n\nAgreed.\n\n> I didn't find the existing limits for the\n> type to be documented anywhere.\n>\n> Obviously something like that could only go into HEAD.\n\nYeah.\n\nComments on 0003:\n\n> + /*\n> + * FIXME, doubtful this is the best fix.\n> + *\n> + * Can't represent the 32bit xid as a 64bit xid, as it's before fxid\n> + * 0. Represent it as an xid from the future instead.\n> + */\n> + if (epoch == 0)\n> + return FullTransactionIdFromEpochAndXid(0, xid);\n\nShouldn't this be an error condition instead, as this XID should not\nbe able to appear?\n\non 0004:\n\n> - '0xffffffffffffffff'::xid8,\n> - '-1'::xid8;\n> + '0xefffffffffffffff'::xid8,\n> + '0'::xid8;\n\nThe 0xFF... usages were replaced with \"0xEFFF...\". Shouldn't we also\ntest on 0xffff_fffE_ffff_ffff to test for input of our actual max\nvalue?\n\n> @@ -326,7 +329,11 @@ parse_snapshot(const char *str, Node *escontext)\n> while (*str != '\\0')\n> {\n> /* read next value */\n> - val = FullTransactionIdFromU64(strtou64(str, &endp, 10));\n> + raw_fxid = strtou64(str, &endp, 10);\n> +\n> + val = FullTransactionIdFromU64(raw_fxid);\n> + if (!InFullTransactionIdRange(raw_fxid))\n> + goto bad_format;\n\nWith assertions enabled FullTransactionIdFromU64 will assert the\nInFullTransactionIdRange condition, meaning we wouldn't hit the branch\ninto bad_format.\nI think these operations should be swapped, as parsing a snapshot\nshouldn't run into assertion failures like this if it can error\nnormally. 
Maybe this can be added to tests as well?\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Tue, 31 Jan 2023 15:05:17 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-01-31 15:05:17 +0100, Matthias van de Meent wrote:\n> On Mon, 30 Jan 2023 at 21:19, Andres Freund <andres@anarazel.de> wrote:\n> > In an earlier, not posted, version I had an vacuum_defer_cleanup_age specific\n> > helper function for this, but it seems likely we'll need it in other places\n> > too. So I named it TransactionIdRetreatSafely(). I made it accept the xid by\n> > pointer, as the line lengths / repetition otherwise end up making it hard to\n> > read the code. For now I have TransactionIdRetreatSafely() be private to\n> > procarray.c, but I expect we'll have to change that eventually.\n> \n> If TransactionIdRetreatSafely will be exposed outside procarray.c,\n> then I think the xid pointer should be replaced with normal\n> arguments/returns; both for parity with TransactionIdRetreatedBy\n\nThat's why I named one version *Retreat the other Retreated :)\n\nI think it'll make the code easier to read in the other places too, the\nvariable names / function names in this space are uncomfortably long to\nfit into 78chars..., particularly when there's two references to the\nsame variable in the same line.\n\n\n> and to remove this memory store dependency in this hot code path.\n\nI doubt that matters much here and the places it's going to be used\nin. And presumably the compiler will inline it anyway. 
I'd probably make\nit a static inline in the header too.\n\nWhat's making me hesitate about exposing it is that it's quite easy to\nget things wrong by using a wrong fxid or such.\n\n\n> > + /*\n> > + * FIXME, doubtful this is the best fix.\n> > + *\n> > + * Can't represent the 32bit xid as a 64bit xid, as it's before fxid\n> > + * 0. Represent it as an xid from the future instead.\n> > + */\n> > + if (epoch == 0)\n> > + return FullTransactionIdFromEpochAndXid(0, xid);\n> \n> Shouldn't this be an error condition instead, as this XID should not\n> be able to appear?\n\nIf you mean error in the sense of ERROR, no, I don't think so. That code\ntries hard to be able to check many tuples in a row. And if we were to\nerror out here, we'd not able to do that. We should still report those\ntuples as corrupt, fwiw.\n\nThe reason this path is hit is that a test intentionally corrupts some\nxids. So the path is reachable and we need to cope somehow.\n\nI'm not really satisfied with this fix either - I mostly wanted to\ninclude something sufficient to prevent assertion failures.\n\nI had hoped that Mark would look at the amcheck bits and come up with\nmore complete fixes.\n\n\n> on 0004:\n> \n> > - '0xffffffffffffffff'::xid8,\n> > - '-1'::xid8;\n> > + '0xefffffffffffffff'::xid8,\n> > + '0'::xid8;\n> \n> The 0xFF... usages were replaced with \"0xEFFF...\". 
Shouldn't we also\n> test on 0xffff_fffE_ffff_ffff to test for input of our actual max\n> value?\n\nProbably a good idea.\n\n\n> > @@ -326,7 +329,11 @@ parse_snapshot(const char *str, Node *escontext)\n> > while (*str != '\\0')\n> > {\n> > /* read next value */\n> > - val = FullTransactionIdFromU64(strtou64(str, &endp, 10));\n> > + raw_fxid = strtou64(str, &endp, 10);\n> > +\n> > + val = FullTransactionIdFromU64(raw_fxid);\n> > + if (!InFullTransactionIdRange(raw_fxid))\n> > + goto bad_format;\n> \n> With assertions enabled FullTransactionIdFromU64 will assert the\n> InFullTransactionIdRange condition, meaning we wouldn't hit the branch\n> into bad_format.\n> I think these operations should be swapped, as parsing a snapshot\n> shouldn't run into assertion failures like this if it can error\n> normally.\n\nYep.\n\n\n> Maybe this can be added to tests as well?\n\nI'll check. I thought for a bit it'd not work because we'd perform range\nchecks on the xids, but we don't...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 31 Jan 2023 14:38:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Tue, 31 Jan 2023 at 23:48, Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-01-31 15:05:17 +0100, Matthias van de Meent wrote:\n> > If TransactionIdRetreatSafely will be exposed outside procarray.c,\n> > then I think the xid pointer should be replaced with normal\n> > arguments/returns; both for parity with TransactionIdRetreatedBy\n>\n> That's why I named one version *Retreat the other Retreated :)\n\nThat part I noticed too :) I don't mind either way, I was just\nconcerned with exposing the function as a prototype, not as an inline\nstatic.\n\n> I think it'll make the code easier to read in the other places too, the\n> variable names / function names in this space are uncomfortably long to\n> fit into 78chars..., 
particularly when there's two references to the\n> same variable in the same line.\n\nI guess that's true, and once inlined there should indeed be no extra\nruntime overhead.\n\n> 78 chars\nDidn't we use 80 columns/chars? How did you get to 78? Not that I\ncan't think of any ways, but none of them stand out to me as obviously\ncorrect.\n\n> > and to remove this memory store dependency in this hot code path.\n>\n> I doubt that matters much here and the places it's going to be used\n> in.\n\nI thought that this was executed while still in ProcArrayLock, but\ninstead we've released that lock already by the time we're trying to\nadjust the horizons, so the 'hot code path' concern is mostly\nrelieved.\n\n> And presumably the compiler will inline it anyway. I'd probably make\n> it a static inline in the header too.\n\nYes, my concern was based on an extern prototype with private\nimplementation, as that does prohibit inlining and thus would have a\nrequirement to push the data to memory (probably only L1, but still\nmemory).\n\n> What's making me hesitate about exposing it is that it's quite easy to\n> get things wrong by using a wrong fxid or such.\n\nI'm less concerned about that when the function is well-documented.\n\n> > > + /*\n> > > + * FIXME, doubtful this is the best fix.\n> > > + *\n> > > + * Can't represent the 32bit xid as a 64bit xid, as it's before fxid\n> > > + * 0. Represent it as an xid from the future instead.\n> > > + */\n> > > + if (epoch == 0)\n> > > + return FullTransactionIdFromEpochAndXid(0, xid);\n> >\n> > Shouldn't this be an error condition instead, as this XID should not\n> > be able to appear?\n>\n> If you mean error in the sense of ERROR, no, I don't think so. That code\n> tries hard to be able to check many tuples in a row. And if we were to\n> error out here, we'd not able to do that. We should still report those\n> tuples as corrupt, fwiw.\n>\n> The reason this path is hit is that a test intentionally corrupts some\n> xids. 
So the path is reachable and we need to cope somehow.\n\nI see.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Wed, 1 Feb 2023 01:23:09 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nHeikki, Andrey, CCing you because you wrote\n\ncommit 6655a7299d835dea9e8e0ba69cc5284611b96f29\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: 2019-07-24 20:24:07 +0300\n\n Use full 64-bit XID for checking if a deleted GiST page is old enough.\n\n\nOn 2023-01-07 19:09:56 -0800, Andres Freund wrote:\n> I haven't found other problematic places in HEAD, but did end up find a less\n> serious version of this bug in < 14: GetFullRecentGlobalXmin(). I did verify\n> that with vacuum_defer_cleanup_age set GetFullRecentGlobalXmin() returns\n> values that look likely to cause problems. Its \"just\" used in gist luckily.\n\nIs there a good way to make breakage in the page recycling mechanism\nvisible with gist? I guess to see corruption, I'd have to halt a scan\nbefore a page is visited with gdb, then cause the page to be recycled\nprematurely in another session, then unblock the first? Which'd then\nvisit that page, thinking it to be in a different part of the tree than\nit actually is?\n\nI'm pretty sure it's broken though.\n\nOn 13, with vacuum_defer_cleanup_age=0, the attached script has two\nconsecutive VACUUM VERBOSEs output\n\n106 index pages have been deleted, 0 are currently reusable.\n106 index pages have been deleted, 0 are currently reusable.\n\nin the presence of a prepared transaction. 
Which makes sense.\n\nBut with vacuum_defer_cleanup_age=10000\n\n106 index pages have been deleted, 0 are currently reusable.\n106 index pages have been deleted, 106 are currently reusable.\n\n\nwhich clearly doesn't seem right.\n\nI just can't quite judge how bad that is.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 4 Feb 2023 02:57:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hello, Andres.\n\n> Not sure I like TransactionIdRetreatSafely() as a name. Maybe\n> TransactionIdRetreatClamped() is better?\n\nI think it is better to just replace TransactionIdRetreatedBy.\nIt is not used anymore after\n`v3-0001-WIP-Fix-corruption-due-to-vacuum_defer_cleanup_ag.patch` -\nso, it is better to replace the dangerous version in order to avoid\nsimilar issues in the future.\nBut we need also to move FullXidRelativeTo in that case (not sure it is safe).\n\n> I think it'll make the code easier to read in the other places too, the\n> variable names / function names in this space are uncomfortably long to\n> fit into 78chars..., particularly when there's two references to the\n> same variable in the same line.\n\nLooks fine for my taste, but it is pretty far from perfect :)\n\nBest regards,\nMichail.", "msg_date": "Sat, 4 Feb 2023 15:21:15 +0300", "msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>", "msg_from_op": true, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Sat, Feb 4, 2023 at 2:57 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Is there a good way to make breakage in the page recycling mechanism\n> visible with gist? I guess to see corruption, I'd have to halt a scan\n> before a page is visited with gdb, then cause the page to be recycled\n> prematurely in another session, then unblock the first? 
Which'd then\n> visit that page, thinking it to be in a different part of the tree than\n> it actually is?\n>\n\nIn most cases landing on one extra page will not affect the scan.\nWorst case that I can imagine - scan is landing on a page that is the\nnew parent of the deleted page. Even then we cannot end up with\ninfinite index scan - we will just make one extra loop. Although,\nIndexScan will yield duplicate tids.\n\nIn case of interference with concurrent insertion we will get a tree\nstructure departed from optimal, but that is not a problem.\n\nBest regards, Andrey Borodin.\n\n\n", "msg_date": "Sat, 4 Feb 2023 09:43:35 -0800", "msg_from": "Andrey Borodin <amborodin86@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On Sat, Feb 4, 2023 at 2:57 AM Andres Freund <andres@anarazel.de> wrote:\n> Is there a good way to make breakage in the page recycling mechanism\n> visible with gist? I guess to see corruption, I'd have to halt a scan\n> before a page is visited with gdb, then cause the page to be recycled\n> prematurely in another session, then unblock the first? Which'd then\n> visit that page, thinking it to be in a different part of the tree than\n> it actually is?\n\nYes. This bug is similar to an ancient nbtree bug fixed back in 2012,\nby commit d3abbbeb.\n\n> which clearly doesn't seem right.\n>\n> I just can't quite judge how bad that is.\n\nIt's really hard to judge, even if you're an expert. We're talking\nabout a fairly chaotic scenario. My guess is that there is a very\nsmall chance of a very unpleasant scenario if you have a GiST index\nthat has regular page deletions, and if you use\nvacuum_defer_cleanup_age. 
It's likely that most GiST indexes never\nhave any page deletions due to the workload characteristics.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sat, 4 Feb 2023 11:10:55 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-02-04 11:10:55 -0800, Peter Geoghegan wrote:\n> On Sat, Feb 4, 2023 at 2:57 AM Andres Freund <andres@anarazel.de> wrote:\n> > Is there a good way to make breakage in the page recycling mechanism\n> > visible with gist? I guess to see corruption, I'd have to halt a scan\n> > before a page is visited with gdb, then cause the page to be recycled\n> > prematurely in another session, then unblock the first? Which'd then\n> > visit that page, thinking it to be in a different part of the tree than\n> > it actually is?\n> \n> Yes. This bug is similar to an ancient nbtree bug fixed back in 2012,\n> by commit d3abbbeb.\n> \n> > which clearly doesn't seem right.\n> >\n> > I just can't quite judge how bad that is.\n> \n> It's really hard to judge, even if you're an expert. We're talking\n> about a fairly chaotic scenario. My guess is that there is a very\n> small chance of a very unpleasant scenario if you have a GiST index\n> that has regular page deletions, and if you use\n> vacuum_defer_cleanup_age. It's likely that most GiST indexes never\n> have any page deletions due to the workload characteristics.\n\nThanks.\n\n\nSounds like a problem here is too hard to repro. I mostly wanted to know how\nto be more confident about a fix working correctly. There's no tests for the\nwhole page recycling behaviour, afaics, so it's a bit scary to change things\naround.\n\nI didn't quite feel confident pushing a fix for this just before a minor\nrelease, so I'll push once the minor releases are tagged. 
A quite minimal fix\nto GetFullRecentGlobalXmin() in 12-13 (returning FirstNormalTransactionId if\nepoch == 0 and RecentGlobalXmin > nextxid_xid), and the slightly larger fix in\n14+.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 6 Feb 2023 13:02:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-02-06 13:02:05 -0800, Andres Freund wrote:\n> I didn't quite feel confident pushing a fix for this just before a minor\n> release, so I'll push once the minor releases are tagged. A quite minimal fix\n> to GetFullRecentGlobalXmin() in 12-13 (returning FirstNormalTransactionId if\n> epoch == 0 and RecentGlobalXmin > nextxid_xid), and the slightly larger fix in\n> 14+.\n\nPushed that.\n\n\nMark:\n\nI worked some more on the fixes for amcheck, and fixes for amcheck.\n\nThe second amcheck fix ends up correcting some inaccurate output in the tests\n- previously xids from before xid 0 were reported to be in the future.\n\nPreviously there was no test case exercising exceeding nextxid, without\nwrapping around into the past. I added that at the end of\n004_verify_heapam.pl, because renumbering seemed too annoying.\n\nWhat do you think?\n\n\nSomewhat random note:\n\nIs it intentional that we VACUUM FREEZE test ROWCOUNT times? That's\neffectively O(ROWCOUNT^2), albeit with small enough constants to not really\nmatter. I don't think we need to insert the rows one-by-one either. Changing\nthat to a single INSERT and FREEZE shaves 10-12% off the tests. I didn't\nchange that, but we also fire off a psql for each tuple for heap_page_items(),\nwith offset $N no less. 
That seems to be another 500ms.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 8 Mar 2023 16:15:58 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "\n\n> On Mar 8, 2023, at 4:15 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> I worked some more on the fixes for amcheck, and fixes for amcheck.\n> \n> The second amcheck fix ends up correcting some inaccurate output in the tests\n> - previously xids from before xid 0 were reported to be in the future.\n> \n> Previously there was no test case exercising exceeding nextxid, without\n> wrapping around into the past. I added that at the end of\n> 004_verify_heapam.pl, because renumbering seemed too annoying.\n> \n> What do you think?\n\nThe changes look reasonable to me.\n\n> Somewhat random note:\n> \n> Is it intentional that we VACUUM FREEZE test ROWCOUNT times? That's\n> effectively O(ROWCOUNT^2), albeit with small enough constants to not really\n> matter. I don't think we need to insert the rows one-by-one either. Changing\n> that to a single INSERT and FREEZE shaves 10-12% off the tests. I didn't\n> change that, but we also fire off a psql for each tuple for heap_page_items(),\n> with offset $N no less. That seems to be another 500ms.\n\nI don't recall the reasoning. Feel free to optimize the tests.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Thu, 9 Mar 2023 12:15:16 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-03-09 12:15:16 -0800, Mark Dilger wrote:\n> > Somewhat random note:\n> > \n> > Is it intentional that we VACUUM FREEZE test ROWCOUNT times? 
That's\n> > effectively O(ROWCOUNT^2), albeit with small enough constants to not really\n> > matter. I don't think we need to insert the rows one-by-one either. Changing\n> > that to a single INSERT and FREEZE shaves 10-12% off the tests. I didn't\n> > change that, but we also fire off a psql for each tuple for heap_page_items(),\n> > with offset $N no less. That seems to be another 500ms.\n> \n> I don't recall the reasoning. Feel free to optimize the tests.\n\nSomething like the attached.\n\nI don't know enough perl to know how to interpolate something like\nuse constant ROWCOUNT => 17;\nso I just made it a variable.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sat, 11 Mar 2023 15:22:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "\n\n> On Mar 11, 2023, at 3:22 PM, Andres Freund <andres@anarazel.de> wrote:\n> \n> Something like the attached.\n\nI like that your patch doesn't make the test longer. I assume you've already run the tests and that it works.\n\n> I don't know enough perl to know how to interpolate something like\n> use constant ROWCOUNT => 17;\n> so I just made it a variable.\n\nSeems fair. I certainly don't mind.\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n\n\n\n", "msg_date": "Sat, 11 Mar 2023 15:34:55 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-03-11 15:34:55 -0800, Mark Dilger wrote:\n> > On Mar 11, 2023, at 3:22 PM, Andres Freund <andres@anarazel.de> wrote:\n> > \n> > Something like the attached.\n> \n> I like that your patch doesn't make the test longer. I assume you've already run the tests and that it works.\n\nI did check that, yes :). 
My process of writing perl is certainly, uh,\niterative. No way I would get anything close to working without testing it.\n\nCI now finished the tests as well:\nhttps://cirrus-ci.com/build/6675457702100992\n\nSo I'll go ahead and push that.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 11 Mar 2023 15:41:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On 12.03.23 00:41, Andres Freund wrote:\n> Hi,\n> \n> On 2023-03-11 15:34:55 -0800, Mark Dilger wrote:\n>>> On Mar 11, 2023, at 3:22 PM, Andres Freund <andres@anarazel.de> wrote:\n>>>\n>>> Something like the attached.\n>>\n>> I like that your patch doesn't make the test longer. I assume you've already run the tests and that it works.\n> \n> I did check that, yes :). My process of writing perl is certainly, uh,\n> iterative. No way I would get anything close to working without testing it.\n> \n> CI now finished the tests as well:\n> https://cirrus-ci.com/build/6675457702100992\n> \n> So I'll go ahead and push that.\n\nThere is a small issue with this commit (a4f23f9b3c).\n\nIn src/bin/pg_amcheck/t/004_verify_heapam.pl, there is code to detect \nwhether the page layout matches expectations and if not it calls plan \nskip_all.\n\nThis commit adds a test\n\nis(scalar @lp_off, $ROWCOUNT, \"acquired row offsets\");\n\n*before* that skip_all call. This appears to be invalid. If the \nskip_all happens, you get a complaint like\n\nt/004_verify_heapam.pl (Wstat: 0 Tests: 1 Failed: 0)\n Parse errors: Bad plan. You planned 0 tests but ran 1.\n\nWe could move the is() test after all the skip_all's. 
Any thoughts?\n\n\n\n", "msg_date": "Wed, 10 May 2023 17:44:07 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hi,\n\nOn 2023-05-10 17:44:07 +0200, Peter Eisentraut wrote:\n> On 12.03.23 00:41, Andres Freund wrote:\n> > Hi,\n> > \n> > On 2023-03-11 15:34:55 -0800, Mark Dilger wrote:\n> > > > On Mar 11, 2023, at 3:22 PM, Andres Freund <andres@anarazel.de> wrote:\n> > > > \n> > > > Something like the attached.\n> > > \n> > > I like that your patch doesn't make the test longer. I assume you've already run the tests and that it works.\n> > \n> > I did check that, yes :). My process of writing perl is certainly, uh,\n> > iterative. No way I would get anything close to working without testing it.\n> > \n> > CI now finished the tests as well:\n> > https://cirrus-ci.com/build/6675457702100992\n> > \n> > So I'll go ahead and push that.\n> \n> There is a small issue with this commit (a4f23f9b3c).\n> \n> In src/bin/pg_amcheck/t/004_verify_heapam.pl, there is code to detect\n> whether the page layout matches expectations and if not it calls plan\n> skip_all.\n\nSome of these skip_all's don't seem like a good idea. Why is a broken\nrelfrozenxid a cause for skipping a test? But anyway, that's really unrelated\nto the topic at hand.\n\n\n> This commit adds a test\n> \n> is(scalar @lp_off, $ROWCOUNT, \"acquired row offsets\");\n> \n> *before* that skip_all call. This appears to be invalid. If the skip_all\n> happens, you get a complaint like\n> \n> t/004_verify_heapam.pl (Wstat: 0 Tests: 1 Failed: 0)\n> Parse errors: Bad plan. You planned 0 tests but ran 1.\n> \n> We could move the is() test after all the skip_all's. 
Any thoughts?\n\nI think the easiest fix is to just die if we can't get the offsets - it's not\nlike we can really continue afterwards...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 10 May 2023 11:04:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "On 10.05.23 20:04, Andres Freund wrote:\n>> This commit adds a test\n>>\n>> is(scalar @lp_off, $ROWCOUNT, \"acquired row offsets\");\n>>\n>> *before* that skip_all call. This appears to be invalid. If the skip_all\n>> happens, you get a complaint like\n>>\n>> t/004_verify_heapam.pl (Wstat: 0 Tests: 1 Failed: 0)\n>> Parse errors: Bad plan. You planned 0 tests but ran 1.\n>>\n>> We could move the is() test after all the skip_all's. Any thoughts?\n> \n> I think the easiest fix is to just die if we can't get the offsets - it's not\n> like we can really continue afterwards...\n\nThis should do it:\n\n-is(scalar @lp_off, $ROWCOUNT, \"acquired row offsets\");\n+scalar @lp_off == $ROWCOUNT or BAIL_OUT(\"row offset counts mismatch\");\n\nBut I'm not sure what the latest thinking on BAIL_OUT is. It is used \nnearby in a similar way though.\n\n\n\n", "msg_date": "Fri, 12 May 2023 10:28:00 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" }, { "msg_contents": "Hello Andres,\n\n12.03.2023 02:41, Andres Freund wrote:\n> CI now finished the tests as well:\n> https://cirrus-ci.com/build/6675457702100992\n>\n> So I'll go ahead and push that.\n\nAs I mentioned at [1], `meson test` fails on Windows x86 platform during\nthe test pg_amcheck/004_verify_heapam (I'm using VS 2022 Version 17.9.7):\nmeson setup build --wipe -Dcassert=true\ncd build & ninja & meson test\n\n... 
postgresql:pg_amcheck / pg_amcheck/004_verify_heapam ERROR             6.95s   exit status 25\n\n004_verify_heapam_test.log contains:\nTRAP: failed Assert(\"FullTransactionIdIsNormal(fxid)\"), File: \"../contrib/amcheck/verify_heapam.c\", Line: 1915, PID: 2560\n2024-07-04 20:56:54.592 PDT [9780] LOG:  server process (PID 2560) was terminated by exception 0xC0000409\n2024-07-04 20:56:54.592 PDT [9780] DETAIL:  Failed process was running: SELECT v.blkno, v.offnum, v.attnum, v.msg FROM \npg_catalog.pg_class c, \"public\".verify_heapam(\n     relation := c.oid, on_error_stop := false, check_toast := true, skip := 'none'\n     ) v WHERE c.oid = 16438 AND c.relpersistence != 't'\n\n`git bisect` for this anomaly pointed at 4f5d461e0.\n(I couldn't compile Postgres on that commit, but with\n`git show 53ea2b7ad | git apply` (see also [2]) it's possible.)\n\nThe Assert in question is:\n     else\n         fxid = FullTransactionIdFromU64(nextfxid_i - diff);\n\n     Assert(FullTransactionIdIsNormal(fxid));\n\nIt was not clear to me how it comes out that fxid is not normal, until I\nlooked at the disassembly:\n     else\n         fxid = FullTransactionIdFromU64(nextfxid_i - diff);\n751812D2  sub         ebx,eax\n751812D4  sbb         edi,edx\n\n     Assert(FullTransactionIdIsNormal(fxid));\n751812D6  jne         FullTransactionIdFromXidAndCtx+0E6h (751812F6h)\n751812D8  jb          FullTransactionIdFromXidAndCtx+0CFh (751812DFh)\n751812DA  cmp         ebx,3\n751812DD  jae         FullTransactionIdFromXidAndCtx+0E6h (751812F6h)\n751812DF  push        77Bh\n751812E4  push        offset string \"../contrib/amcheck/verify_heapa@\"... 
(7518C4A4h)\n751812E9  push        offset string \"FullTransactionIdIsNormal(fxid)\" (7518DB04h)\n751812EE  call        _ExceptionalCondition (75189FFEh)\n\nThe same code fragment for your convenience:\nhttps://ideone.com/8wiGRY\n\nCould you please look at this?\n\n[1] https://www.postgresql.org/message-id/72705e42-42d1-ac6e-e7d5-4baec8a0d2af%40gmail.com\n[2] https://postgr.es/m/17967-cd21e34a314141b2@postgresql.org\n\nBest regards,\nAlexander", "msg_date": "Fri, 5 Jul 2024 13:00:01 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BUG: Postgres 14 + vacuum_defer_cleanup_age + FOR UPDATE + UPDATE" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch proposal to $SUBJECT.\n\nThis is the same kind of work that has been done in 83a1a1b566 and 8018ffbf58 but this time for the\npg_stat_get_xact*() functions (as suggested by Andres in [1]).\n\nThe function names remain the same, but some fields have to be renamed.\n\nWhile at it, I also took the opportunity to create the macros for pg_stat_get_xact_function_total_time(),\npg_stat_get_xact_function_self_time() and pg_stat_get_function_total_time(), pg_stat_get_function_self_time()\n(even if the same code pattern is only repeated two 2 times).\n\nNow that this patch renames some fields, I think that, for consistency, those ones should be renamed too (aka remove the f_ and t_ prefixes):\n\nPgStat_FunctionCounts.f_numcalls\nPgStat_StatFuncEntry.f_numcalls\nPgStat_TableCounts.t_truncdropped\nPgStat_TableCounts.t_delta_live_tuples\nPgStat_TableCounts.t_delta_dead_tuples\nPgStat_TableCounts.t_changed_tuples\n\nBut I think it would be better to do it in a follow-up patch (once this one get committed).\n\n[1]: https://www.postgresql.org/message-id/20230105002733.ealhzubjaiqis6ua%40awork3.anarazel.de\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Jan 2023 14:48:39 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Thu, Jan 5, 2023 at 8:50 AM Drouvot, Bertrand <\nbertranddrouvot.pg@gmail.com> wrote:\n\n> Hi hackers,\n>\n> Please find attached a patch proposal to $SUBJECT.\n>\n> This is the same kind of work that has been done in 83a1a1b566 and\n> 8018ffbf58 but this time for the\n> pg_stat_get_xact*() functions (as suggested by Andres in [1]).\n>\n> The function names remain the same, but some fields have to be renamed.\n>\n> 
While at it, I also took the opportunity to create the macros for pg_stat_get_xact_function_total_time(),\npg_stat_get_xact_function_self_time() and pg_stat_get_function_total_time(), pg_stat_get_function_self_time()\n(even if the same code pattern is only repeated 2 times).\n\nNow that this patch renames some fields, I think that, for consistency, those ones should be renamed too (aka remove the f_ and t_ prefixes):\n\nPgStat_FunctionCounts.f_numcalls\nPgStat_StatFuncEntry.f_numcalls\nPgStat_TableCounts.t_truncdropped\nPgStat_TableCounts.t_delta_live_tuples\nPgStat_TableCounts.t_delta_dead_tuples\nPgStat_TableCounts.t_changed_tuples\n\nBut I think it would be better to do it in a follow-up patch (once this one get committed).\n\n[1]: https://www.postgresql.org/message-id/20230105002733.ealhzubjaiqis6ua%40awork3.anarazel.de\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 5 Jan 2023 14:48:39 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Thu, Jan 5, 2023 at 8:50 AM Drouvot, Bertrand <\nbertranddrouvot.pg@gmail.com> wrote:\n\n> Hi hackers,\n>\n> Please find attached a patch proposal to $SUBJECT.\n>\n> This is the same kind of work that has been done in 83a1a1b566 and\n> 8018ffbf58 but this time for the\n> pg_stat_get_xact*() functions (as suggested by Andres in [1]).\n>\n> The function names remain the same, but some fields have to be renamed.\n>\n> 
It makes sense, it results in less code,\nand anyone doing a `git grep pg_stat_get_live_tuples` will quickly find the\nmacro definition.\n\nUnsurprisingly, it passes `make check-world`.\n\nSo I think it's good to go as-is.\n\nIt does get me wondering, however, if we reordered the three typedefs to\ngroup like-typed registers together, we could make them an array with the\nnames becoming defined constant index values (or keeping them via a union),\nthen the typedefs effectively become:\n\ntypedef struct PgStat_FunctionCallUsage\n{\n PgStat_FunctionCounts *fs;\n instr_time time_counters[3];\n} PgStat_FunctionCallUsage;\n\n\ntypedef struct PgStat_BackendSubEntry\n{\n PgStat_Counter counters[2];\n} PgStat_BackendSubEntry;\n\n\ntypedef struct PgStat_TableCounts\n{\n bool t_truncdropped;\n PgStat_Counter counters[12];\n} PgStat_TableCounts;\n\n\nThen we'd only have 3 actual C functions:\n\npg_stat_get_xact_counter(oid, int)\npg_stat_get_xact_subtrans_counter(oid, int)\npg_stat_get_xact_function_time_counter(oid, int)\n\nand then the existing functions become SQL standard function body calls,\nsomething like this:\n\nCREATE OR REPLACE FUNCTION pg_stat_get_xact_numscans(oid)\n RETURNS bigint\n LANGUAGE sql\n STABLE PARALLEL RESTRICTED COST 1\nRETURN pg_stat_get_xact_counter($1, 0);\n\n\nCREATE OR REPLACE FUNCTION pg_stat_get_xact_tuples_returned(oid)\n RETURNS bigint\n LANGUAGE sql\n STABLE PARALLEL RESTRICTED COST 1\nRETURN pg_stat_get_xact_counter($1, 1);\n\n\n\nThe most obvious drawback to this approach is that the C functions would\nneed to do runtime bounds checking on the index parameter, and the amount\nof memory footprint saved by going from 17 short functions to 3 is not\nenough to make any real difference. 
So I think your approach is better, but\nI wanted to throw this idea out there.\n\nOn Thu, Jan 5, 2023 at 8:50 AM Drouvot, Bertrand <bertranddrouvot.pg@gmail.com> wrote:Hi hackers,\n\nPlease find attached a patch proposal to $SUBJECT.\n\nThis is the same kind of work that has been done in 83a1a1b566 and 8018ffbf58 but this time for the\npg_stat_get_xact*() functions (as suggested by Andres in [1]).\n\nThe function names remain the same, but some fields have to be renamed.\n\nWhile at it, I also took the opportunity to create the macros for pg_stat_get_xact_function_total_time(),\npg_stat_get_xact_function_self_time() and pg_stat_get_function_total_time(), pg_stat_get_function_self_time()\n(even if the same code pattern is only repeated two 2 times).\n\nNow that this patch renames some fields, I think that, for consistency, those ones should be renamed too (aka remove the f_ and t_ prefixes):\n\nPgStat_FunctionCounts.f_numcalls\nPgStat_StatFuncEntry.f_numcalls\nPgStat_TableCounts.t_truncdropped\nPgStat_TableCounts.t_delta_live_tuples\nPgStat_TableCounts.t_delta_dead_tuples\nPgStat_TableCounts.t_changed_tuples\n\nBut I think it would be better to do it in a follow-up patch (once this one get committed).\n\n[1]: https://www.postgresql.org/message-id/20230105002733.ealhzubjaiqis6ua%40awork3.anarazel.de\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.comI like code cleanups like this. 
It makes sense, it results in less code, and anyone doing a `git grep pg_stat_get_live_tuples` will quickly find the macro definition.Unsurprisingly, it passes `make check-world`.So I think it's good to go as-is.It does get me wondering, however, if we reordered the three typedefs to group like-typed registers together, we could make them an array with the names becoming defined constant index values (or keeping them via a union), then the typedefs effectively become:typedef struct PgStat_FunctionCallUsage{    PgStat_FunctionCounts *fs;    instr_time  time_counters[3];} PgStat_FunctionCallUsage;typedef struct PgStat_BackendSubEntry{    PgStat_Counter counters[2];} PgStat_BackendSubEntry;typedef struct PgStat_TableCounts{    bool        t_truncdropped;    PgStat_Counter counters[12];} PgStat_TableCounts;Then we'd only have 3 actual C functions:pg_stat_get_xact_counter(oid, int)pg_stat_get_xact_subtrans_counter(oid, int)pg_stat_get_xact_function_time_counter(oid, int)and then the existing functions become SQL standard function body calls, something like this:CREATE OR REPLACE FUNCTION pg_stat_get_xact_numscans(oid) RETURNS bigint LANGUAGE sql  STABLE PARALLEL RESTRICTED COST 1RETURN pg_stat_get_xact_counter($1, 0); CREATE OR REPLACE FUNCTION pg_stat_get_xact_tuples_returned(oid) RETURNS bigint LANGUAGE sql STABLE PARALLEL RESTRICTED COST 1RETURN pg_stat_get_xact_counter($1, 1);The most obvious drawback to this approach is that the C functions would need to do runtime bounds checking on the index parameter, and the amount of memory footprint saved by going from 17 short functions to 3 is not enough to make any real difference. 
So I think your approach is better, but I wanted to throw this idea out there.", "msg_date": "Thu, 5 Jan 2023 15:19:54 -0500", "msg_from": "Corey Huinker <corey.huinker@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 2023-01-05 15:19:54 -0500, Corey Huinker wrote:\n> It does get me wondering, however, if we reordered the three typedefs to\n> group like-typed registers together, we could make them an array with the\n> names becoming defined constant index values (or keeping them via a union),\n> then the typedefs effectively become:\n\nI think that'd make it substantially enough harder to work with the\ndatastructures that I don't want to go there.\n\n\nThe \"more fundamental\" approach would be to switch to using a table-returning\nfunction for accessing these stat values. When just accessing a single counter\nor two, the current approach avoids the overhead of having to construct a\ntuple. But after that the overhead of having to fetch the stats data (i.e. a\nhash table lookup, potentially some locking) multiple times takes over.\n\nUnfortunately there's currently no way to dynamically switch between those\nbehaviours.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 5 Jan 2023 15:21:40 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 1/5/23 9:19 PM, Corey Huinker wrote:\n> \n> \n> I like code cleanups like this. 
It makes sense, it results in less code, and anyone doing a `git grep pg_stat_get_live_tuples` will quickly find the macro definition.\n> \n> Unsurprisingly, it passes `make check-world`.\n> \n> So I think it's good to go as-is.\n\nThanks for the review!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 09:40:37 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 1/6/23 12:21 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-05 15:19:54 -0500, Corey Huinker wrote:\n>> It does get me wondering, however, if we reordered the three typedefs to\n>> group like-typed registers together, we could make them an array with the\n>> names becoming defined constant index values (or keeping them via a union),\n>> then the typedefs effectively become:\n> \n> I think that'd make it substantially enough harder to work with the\n> datastructures that I don't want to go there.\n> \n\nYeah, I think that's a good idea from a \"coding style\" point of view but harder to work with.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 09:44:25 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nMichael, CCing you because of the point about PG_STAT_GET_DBENTRY_FLOAT8\nbelow.\n\n\nOn 2023-01-05 14:48:39 +0100, Drouvot, Bertrand wrote:\n> While at it, I also took the opportunity to create the macros for pg_stat_get_xact_function_total_time(),\n> pg_stat_get_xact_function_self_time() and pg_stat_get_function_total_time(), pg_stat_get_function_self_time()\n> 
(even if the same code pattern is only repeated two 2 times).\n\nI'd split that up into a separate commit.\n\n\n> Now that this patch renames some fields\n\nI don't mind renaming the fields - the prefixes really don't provide anything\nuseful. But it's not clear why this is related to this patch? You could just\ninclude the f_ prefix in the macro, no?\n\n\n> , I think that, for consistency, those ones should be renamed too (aka remove the f_ and t_ prefixes):\n> \n> PgStat_FunctionCounts.f_numcalls\n> PgStat_StatFuncEntry.f_numcalls\n> PgStat_TableCounts.t_truncdropped\n> PgStat_TableCounts.t_delta_live_tuples\n> PgStat_TableCounts.t_delta_dead_tuples\n> PgStat_TableCounts.t_changed_tuples\n\nYea, without that the result is profoundly weird.\n\n\n> But I think it would be better to do it in a follow-up patch (once this one get committed).\n\nI don't mind committing it in a separate commit, but I think it should be done\nat least immediately following the other commit. I.e. developed together.\n\nI probably would go the other way, and rename all of them first. That'd make\nthis commit a lot more focused, and the renaming commit purely mechanical.\n\nProbably should remove PgStat_BackendFunctionEntry. PgStat_TableStatus has\nreason to exist, but that's not true for PgStat_BackendFunctionEntry.\n\n\n\n> @@ -168,19 +168,19 @@ pgstat_end_function_usage(PgStat_FunctionCallUsage *fcu, bool finalize)\n> \tINSTR_TIME_ADD(total_func_time, f_self);\n> \n> \t/*\n> -\t * Compute the new f_total_time as the total elapsed time added to the\n> -\t * pre-call value of f_total_time. This is necessary to avoid\n> +\t * Compute the new total_time as the total elapsed time added to the\n> +\t * pre-call value of total_time. This is necessary to avoid\n> \t * double-counting any time taken by recursive calls of myself. 
(We do\n> \t * not need any similar kluge for self time, since that already excludes\n> \t * any recursive calls.)\n> \t */\n> -\tINSTR_TIME_ADD(f_total, fcu->save_f_total_time);\n> +\tINSTR_TIME_ADD(f_total, fcu->save_total_time);\n> \n> \t/* update counters in function stats table */\n> \tif (finalize)\n> \t\tfs->f_numcalls++;\n> -\tfs->f_total_time = f_total;\n> -\tINSTR_TIME_ADD(fs->f_self_time, f_self);\n> +\tfs->total_time = f_total;\n> +\tINSTR_TIME_ADD(fs->self_time, f_self);\n> }\n\nI'd also rename f_self etc.\n\n\n> @@ -148,29 +148,24 @@ pg_stat_get_function_calls(PG_FUNCTION_ARGS)\n> \tPG_RETURN_INT64(funcentry->f_numcalls);\n> }\n> \n> -Datum\n> -pg_stat_get_function_total_time(PG_FUNCTION_ARGS)\n> -{\n> -\tOid\t\t\tfuncid = PG_GETARG_OID(0);\n> -\tPgStat_StatFuncEntry *funcentry;\n> -\n> -\tif ((funcentry = pgstat_fetch_stat_funcentry(funcid)) == NULL)\n> -\t\tPG_RETURN_NULL();\n> -\t/* convert counter from microsec to millisec for display */\n> -\tPG_RETURN_FLOAT8(((double) funcentry->f_total_time) / 1000.0);\n> +#define PG_STAT_GET_FUNCENTRY_FLOAT8(stat)\t\t\t\t\t\t\t\\\n> +Datum\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +CppConcat(pg_stat_get_function_,stat)(PG_FUNCTION_ARGS)\t\t\t\t\\\n> +{\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\tOid\t\t\tfuncid = PG_GETARG_OID(0);\t\t\t\t\t\t\t\\\n> +\tPgStat_StatFuncEntry *funcentry;\t\t\t\t\t\t\t\t\\\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n> +\tif ((funcentry = pgstat_fetch_stat_funcentry(funcid)) == NULL)\t\\\n> +\t\tPG_RETURN_NULL();\t\t\t\t\t\t\t\t\t\t\t\\\n> +\t/* convert counter from microsec to millisec for display */\t\t\\\n> +\tPG_RETURN_FLOAT8(((double) funcentry->stat) / 1000.0);\t\t\t\\\n> }\n\nHm. Given the conversion with / 1000, is PG_STAT_GET_FUNCENTRY_FLOAT8 an\naccurate name? Maybe PG_STAT_GET_FUNCENTRY_FLOAT8_MS?\n\nI now see that PG_STAT_GET_DBENTRY_FLOAT8 already exists, defined the same\nway. 
But the name feels misleading enough that I'd be inclined to rename it?\n\n\n> +#define PG_STAT_GET_XACT_WITH_SUBTRANS_RELENTRY_INT64(stat)\t\t\t\t\t\\\n\nHow about PG_STAT_GET_XACT_PLUS_SUBTRANS_INT64?\n\n\nAlthough I suspect this actually hints at an architectural thing that could be\nfixed better: Perhaps we should replace find_tabstat_entry() with a version\nreturning a fully \"reconciled\" PgStat_StatTabEntry? It feels quite wrong to\nhave that intimate knowledge of the subtransaction stuff in pgstatfuncs.c\nand about how the different counts get combined.\n\nI think that'd allow us to move the definition of PgStat_TableStatus to\nPgStat_TableXactStatus, PgStat_TableCounts to pgstat_internal.h. Which feels a\nheck of a lot cleaner.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 14:59:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 1/11/23 11:59 PM, Andres Freund wrote:\n> Hi,\n> \n> Michael, CCing you because of the point about PG_STAT_GET_DBENTRY_FLOAT8\n> below.\n> \n> \n> On 2023-01-05 14:48:39 +0100, Drouvot, Bertrand wrote:\n>> While at it, I also took the opportunity to create the macros for pg_stat_get_xact_function_total_time(),\n>> pg_stat_get_xact_function_self_time() and pg_stat_get_function_total_time(), pg_stat_get_function_self_time()\n>> (even if the same code pattern is only repeated two 2 times).\n> \n> I'd split that up into a separate commit.\n> \n> \n\nThanks for looking at it! Makes sense, will do.\n\n\n>> Now that this patch renames some fields\n> \n> I don't mind renaming the fields - the prefixes really don't provide anything\n> useful. But it's not clear why this is related to this patch? 
You could just\n> include the f_ prefix in the macro, no?\n> \n> \n\nRight, but the idea is to take the same approach as the one used in 8018ffbf58 (where placing the prefixes in the macro\nwould have been possible too).\n\n\n>> , I think that, for consistency, those ones should be renamed too (aka remove the f_ and t_ prefixes):\n>>\n>> PgStat_FunctionCounts.f_numcalls\n>> PgStat_StatFuncEntry.f_numcalls\n>> PgStat_TableCounts.t_truncdropped\n>> PgStat_TableCounts.t_delta_live_tuples\n>> PgStat_TableCounts.t_delta_dead_tuples\n>> PgStat_TableCounts.t_changed_tuples\n> \n> Yea, without that the result is profoundly weird.\n> \n> \n>> But I think it would be better to do it in a follow-up patch (once this one get committed).\n> \n> I don't mind committing it in a separate commit, but I think it should be done\n> at least immediately following the other commit. I.e. developed together.\n> \n> I probably would go the other way, and rename all of them first. That'd make\n> this commit a lot more focused, and the renaming commit purely mechanical.\n> \n\nYeah, makes sense. Let's proceed that way. I'll provide the \"rename\" patch.\n\n\n> Probably should remove PgStat_BackendFunctionEntry. \n\nI think that would be a 3rd patch, agree?\n\n>> @@ -168,19 +168,19 @@ pgstat_end_function_usage(PgStat_FunctionCallUsage *fcu, bool finalize)\n>> \tINSTR_TIME_ADD(total_func_time, f_self);\n>> \n>> \t/*\n>> -\t * Compute the new f_total_time as the total elapsed time added to the\n>> -\t * pre-call value of f_total_time. This is necessary to avoid\n>> +\t * Compute the new total_time as the total elapsed time added to the\n>> +\t * pre-call value of total_time. This is necessary to avoid\n>> \t * double-counting any time taken by recursive calls of myself. 
(We do\n>> \t * not need any similar kluge for self time, since that already excludes\n>> \t * any recursive calls.)\n>> \t */\n>> -\tINSTR_TIME_ADD(f_total, fcu->save_f_total_time);\n>> +\tINSTR_TIME_ADD(f_total, fcu->save_total_time);\n>> \n>> \t/* update counters in function stats table */\n>> \tif (finalize)\n>> \t\tfs->f_numcalls++;\n>> -\tfs->f_total_time = f_total;\n>> -\tINSTR_TIME_ADD(fs->f_self_time, f_self);\n>> +\tfs->total_time = f_total;\n>> +\tINSTR_TIME_ADD(fs->self_time, f_self);\n>> }\n> \n> I'd also rename f_self etc.\n> \n\nMakes sense, will do.\n\n>> @@ -148,29 +148,24 @@ pg_stat_get_function_calls(PG_FUNCTION_ARGS)\n>> \tPG_RETURN_INT64(funcentry->f_numcalls);\n>> }\n>> \n>> -Datum\n>> -pg_stat_get_function_total_time(PG_FUNCTION_ARGS)\n>> -{\n>> -\tOid\t\t\tfuncid = PG_GETARG_OID(0);\n>> -\tPgStat_StatFuncEntry *funcentry;\n>> -\n>> -\tif ((funcentry = pgstat_fetch_stat_funcentry(funcid)) == NULL)\n>> -\t\tPG_RETURN_NULL();\n>> -\t/* convert counter from microsec to millisec for display */\n>> -\tPG_RETURN_FLOAT8(((double) funcentry->f_total_time) / 1000.0);\n>> +#define PG_STAT_GET_FUNCENTRY_FLOAT8(stat)\t\t\t\t\t\t\t\\\n>> +Datum\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n>> +CppConcat(pg_stat_get_function_,stat)(PG_FUNCTION_ARGS)\t\t\t\t\\\n>> +{\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n>> +\tOid\t\t\tfuncid = PG_GETARG_OID(0);\t\t\t\t\t\t\t\\\n>> +\tPgStat_StatFuncEntry *funcentry;\t\t\t\t\t\t\t\t\\\n>> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\\\n>> +\tif ((funcentry = pgstat_fetch_stat_funcentry(funcid)) == NULL)\t\\\n>> +\t\tPG_RETURN_NULL();\t\t\t\t\t\t\t\t\t\t\t\\\n>> +\t/* convert counter from microsec to millisec for display */\t\t\\\n>> +\tPG_RETURN_FLOAT8(((double) funcentry->stat) / 1000.0);\t\t\t\\\n>> }\n> \n> Hm. Given the conversion with / 1000, is PG_STAT_GET_FUNCENTRY_FLOAT8 an\n> accurate name? Maybe PG_STAT_GET_FUNCENTRY_FLOAT8_MS?\n> \n> I now see that PG_STAT_GET_DBENTRY_FLOAT8 already exists, defined the same\n> way. 
But the name fields misleading enough that I'd be inclined to rename it?\n> \n\nPG_STAT_GET_FUNCENTRY_FLOAT8_MS looks good by me. Waiting on what we'll decide for\nthe existing PG_STAT_GET_DBENTRY_FLOAT8 (so that I can align for the PG_STAT_GET_FUNCENTRY_FLOAT8).\n\n>> +#define PG_STAT_GET_XACT_WITH_SUBTRANS_RELENTRY_INT64(stat)\t\t\t\t\t\\\n> \n> How about PG_STAT_GET_XACT_PLUS_SUBTRANS_INT64?\n> \n\nSounds better, thanks!\n\n> Although I suspect this actually hints at an architectural thing that could be\n> fixed better: Perhaps we should replace find_tabstat_entry() with a version\n> returning a fully \"reconciled\" PgStat_StatTabEntry? It feels quite wrong to\n> have that intimitate knowledge of the subtransaction stuff in pgstatfuncs.c\n> and about how the different counts get combined.\n> \n> I think that'd allow us to move the definition of PgStat_TableStatus to\n> PgStat_TableXactStatus, PgStat_TableCounts to pgstat_internal.h. Which feels a\n> heck of a lot cleaner.\n\nYeah, I think that would be for a 4th patch, agree?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 12 Jan 2023 08:38:57 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 2023-01-12 08:38:57 +0100, Drouvot, Bertrand wrote:\n> On 1/11/23 11:59 PM, Andres Freund wrote:\n> > > Now that this patch renames some fields\n> > \n> > I don't mind renaming the fields - the prefixes really don't provide anything\n> > useful. But it's not clear why this is related to this patch? 
You could just\n> > include the f_ prefix in the macro, no?\n> > \n> > \n> \n> Right, but the idea is to take the same approach that the one used in 8018ffbf58 (where placing the prefixes in the macro\n> would have been possible too).\n\nI'm not super happy about that patch tbh.\n\n\n> > Probably should remove PgStat_BackendFunctionEntry.\n> \n> I think that would be a 3rd patch, agree?\n\nYep.\n\n\n\n> > I now see that PG_STAT_GET_DBENTRY_FLOAT8 already exists, defined the same\n> > way. But the name fields misleading enough that I'd be inclined to rename it?\n> > \n> \n> PG_STAT_GET_FUNCENTRY_FLOAT8_MS looks good by me. Waiting on what we'll decide for\n> the existing PG_STAT_GET_DBENTRY_FLOAT8 (so that I can align for the PG_STAT_GET_FUNCENTRY_FLOAT8).\n\n+1\n\n\n\n> > Although I suspect this actually hints at an architectural thing that could be\n> > fixed better: Perhaps we should replace find_tabstat_entry() with a version\n> > returning a fully \"reconciled\" PgStat_StatTabEntry? It feels quite wrong to\n> > have that intimitate knowledge of the subtransaction stuff in pgstatfuncs.c\n> > and about how the different counts get combined.\n> > \n> > I think that'd allow us to move the definition of PgStat_TableStatus to\n> > PgStat_TableXactStatus, PgStat_TableCounts to pgstat_internal.h. Which feels a\n> > heck of a lot cleaner.\n> \n> Yeah, I think that would be for a 4th patch, agree?\n\nYea. I am of multiple minds about the ordering. I can see benefits on fixing\nthe architectural issue before reducing duplication in the accessor with a\nmacro. 
The reason is that if we addressed the architectural issue, the\ndifference between the xact and non-xact version will be very minimal, and\ncould even be generated by the same macro.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:24:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 1/12/23 7:24 PM, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-12 08:38:57 +0100, Drouvot, Bertrand wrote:\n>> On 1/11/23 11:59 PM, Andres Freund wrote:\n>>>> Now that this patch renames some fields\n>>>\n>>> I don't mind renaming the fields - the prefixes really don't provide anything\n>>> useful. But it's not clear why this is related to this patch? You could just\n>>> include the f_ prefix in the macro, no?\n>>>\n>>>\n>>\n>> Right, but the idea is to take the same approach that the one used in 8018ffbf58 (where placing the prefixes in the macro\n>> would have been possible too).\n> \n> I'm not super happy about that patch tbh.\n> \n> \n>>> Probably should remove PgStat_BackendFunctionEntry.\n>>\n>> I think that would be a 3rd patch, agree?\n> \n> Yep.\n> \n> \n> \n>>> I now see that PG_STAT_GET_DBENTRY_FLOAT8 already exists, defined the same\n>>> way. But the name fields misleading enough that I'd be inclined to rename it?\n>>>\n>>\n>> PG_STAT_GET_FUNCENTRY_FLOAT8_MS looks good by me. Waiting on what we'll decide for\n>> the existing PG_STAT_GET_DBENTRY_FLOAT8 (so that I can align for the PG_STAT_GET_FUNCENTRY_FLOAT8).\n> \n> +1\n> \n> \n> \n>>> Although I suspect this actually hints at an architectural thing that could be\n>>> fixed better: Perhaps we should replace find_tabstat_entry() with a version\n>>> returning a fully \"reconciled\" PgStat_StatTabEntry? 
It feels quite wrong to\n>>> have that intimitate knowledge of the subtransaction stuff in pgstatfuncs.c\n>>> and about how the different counts get combined.\n>>>\n>>> I think that'd allow us to move the definition of PgStat_TableStatus to\n>>> PgStat_TableXactStatus, PgStat_TableCounts to pgstat_internal.h. Which feels a\n>>> heck of a lot cleaner.\n>>\n>> Yeah, I think that would be for a 4th patch, agree?\n> \n> Yea. I am of multiple minds about the ordering. I can see benefits on fixing\n> the architectural issue before reducing duplication in the accessor with a\n> macro. The reason is that if we addressed the architectural issue, the\n> difference between the xact and non-xact version will be very minimal, and\n> could even be generated by the same macro.\n> \n\nYeah, I do agree and I'm in favor of this ordering:\n\n1) replace find_tabstat_entry() with a version returning a fully \"reconciled\" PgStat_StatTabEntry\n2) remove prefixes\n3) Introduce the new macros\n\nAnd it looks to me that removing PgStat_BackendFunctionEntry can be done independently\n\nI'll first look at 1).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 13 Jan 2023 10:36:49 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 2023-01-13 10:36:49 +0100, Drouvot, Bertrand wrote:\n> > > > Although I suspect this actually hints at an architectural thing that could be\n> > > > fixed better: Perhaps we should replace find_tabstat_entry() with a version\n> > > > returning a fully \"reconciled\" PgStat_StatTabEntry? 
It feels quite wrong to\n> > > > have that intimitate knowledge of the subtransaction stuff in pgstatfuncs.c\n> > > > and about how the different counts get combined.\n> > > > \n> > > > I think that'd allow us to move the definition of PgStat_TableStatus to\n> > > > PgStat_TableXactStatus, PgStat_TableCounts to pgstat_internal.h. Which feels a\n> > > > heck of a lot cleaner.\n> > > \n> > > Yeah, I think that would be for a 4th patch, agree?\n> > \n> > Yea. I am of multiple minds about the ordering. I can see benefits on fixing\n> > the architectural issue before reducing duplication in the accessor with a\n> > macro. The reason is that if we addressed the architectural issue, the\n> > difference between the xact and non-xact version will be very minimal, and\n> > could even be generated by the same macro.\n> > \n> \n> Yeah, I do agree and I'm in favor of this ordering:\n> \n> 1) replace find_tabstat_entry() with a version returning a fully \"reconciled\" PgStat_StatTabEntry\n> 2) remove prefixes\n> 3) Introduce the new macros\n\n> I'll first look at 1).\n\nMakes sense.\n\n\n> And it looks to me that removing PgStat_BackendFunctionEntry can be done independently\n\nIt's imo the function version of 1), just a bit simpler to implement due to\nthe much simpler reconciliation. It could be done together with it, or\nseparately.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Jan 2023 10:37:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Looks like you have a path forward on this and it's not ready to\ncommit yet? 
In which case I'll mark it Waiting on Author?\n\nOn Fri, 13 Jan 2023 at 13:38, Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-01-13 10:36:49 +0100, Drouvot, Bertrand wrote:\n>\n> > I'll first look at 1).\n>\n> Makes sense.\n>\n> > And it looks to me that removing PgStat_BackendFunctionEntry can be done independently\n>\n> It's imo the function version of 1), just a bit simpler to implement due to\n> the much simpler reconciliation. It could be done together with it, or\n> separately.\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Wed, 1 Mar 2023 14:54:16 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 3/1/23 8:54 PM, Gregory Stark (as CFM) wrote:\n> Looks like you have a path forward on this and it's not ready to\n> commit yet? In which case I'll mark it Waiting on Author?\n> \n\nYeah, there is some dependencies around this one.\n\n[1]: depends on it\nCurrent one depends of [2], [3] and [4]\n\nWaiting on Author is then the right state, thanks for having moved it to that state.\n\n[1]: https://www.postgresql.org/message-id/flat/f572abe7-a1bb-e13b-48c7-2ca150546822@gmail.com\n[2]: https://www.postgresql.org/message-id/flat/b9e1f543-ee93-8168-d530-d961708ad9d3@gmail.com\n[3]: https://www.postgresql.org/message-id/flat/11d531fe-52fc-c6ea-7e8e-62f1b6ec626e@gmail.com\n[4]: https://www.postgresql.org/message-id/flat/9142f62a-a422-145c-bde0-b5bc498a4ada@gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 2 Mar 2023 08:39:14 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Thu, Mar 02, 2023 at 08:39:14AM +0100, Drouvot, Bertrand 
wrote:\n> Yeah, there is some dependencies around this one.\n> \n> [1]: depends on it\n> Current one depends of [2], [3] and [4]\n> \n> Waiting on Author is then the right state, thanks for having moved it to that state.\n> \n> [1]: https://www.postgresql.org/message-id/flat/f572abe7-a1bb-e13b-48c7-2ca150546822@gmail.com\n> [2]: https://www.postgresql.org/message-id/flat/b9e1f543-ee93-8168-d530-d961708ad9d3@gmail.com\n> [3]: https://www.postgresql.org/message-id/flat/11d531fe-52fc-c6ea-7e8e-62f1b6ec626e@gmail.com\n> [4]: https://www.postgresql.org/message-id/flat/9142f62a-a422-145c-bde0-b5bc498a4ada@gmail.com\n\n[3] and [4] have been applied. [2] is more sensitive than it looks,\nand [1], for the split of index and table stats, can feed on the one in\nthis thread.\n\nRome wasn't built in a day, and from what I can see you can still make\nsome progress with the refactoring of pgstatfuncs.c with what's\nalready on HEAD. So how about doing that first, as much as we\ncan, based on the state of HEAD? That looks like 50~60% (?) 
of the\noriginal goal to switch pgstatfuncs.c to use more macros to generate\nthe definition of all these SQL functions.\n--\nMichael", "msg_date": "Fri, 24 Mar 2023 09:04:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 3/24/23 1:04 AM, Michael Paquier wrote:\n> On Thu, Mar 02, 2023 at 08:39:14AM +0100, Drouvot, Bertrand wrote:\n>> Yeah, there is some dependencies around this one.\n>>\n>> [1]: depends on it\n>> Current one depends of [2], [3] and [4]\n>>\n>> Waiting on Author is then the right state, thanks for having moved it to that state.\n>>\n>> [1]: https://www.postgresql.org/message-id/flat/f572abe7-a1bb-e13b-48c7-2ca150546822@gmail.com\n>> [2]: https://www.postgresql.org/message-id/flat/b9e1f543-ee93-8168-d530-d961708ad9d3@gmail.com\n>> [3]: https://www.postgresql.org/message-id/flat/11d531fe-52fc-c6ea-7e8e-62f1b6ec626e@gmail.com\n>> [4]: https://www.postgresql.org/message-id/flat/9142f62a-a422-145c-bde0-b5bc498a4ada@gmail.com\n> \n> [3] and [4] have been applied.\n\nThanks for your help on this!\n\n> [2] is more sensitive than it looks,\n> and [1] for the split of index and table stats can feed on the one of\n> this thread.\n> \n> Roma wasn't built in one day, and from what I can see you can still do\n> some progress with the refactoring of pgstatfuncs.c with what's\n> already on HEAD. So how about handling doing that first as much as we\n> can based on the state of HEAD? That looks like 50~60% (?) 
of the\n> original goal to switch pgstatfuncs.c to use more macros to generate\n> the definition of all these SQL functions.\n\nI think that's a good idea, so please find enclosed V2 which as compare to V1:\n\n- Does not include the refactoring for pg_stat_get_xact_tuples_inserted(), pg_stat_get_xact_tuples_updated()\nand pg_stat_get_xact_tuples_deleted() (as they depend of [2] mentioned above)\n\n- Does not include the refactoring for pg_stat_get_xact_function_total_time(), pg_stat_get_xact_function_self_time(),\npg_stat_get_function_total_time() and pg_stat_get_function_self_time(). I think they can be done in a dedicated commit once\nwe agree on the renaming for PG_STAT_GET_DBENTRY_FLOAT8 (see Andres's comment up-thread) so that the new macros can match the future agreement.\n\n- Does include the refactoring of the new pg_stat_get_xact_tuples_newpage_updated() function (added in ae4fdde135)\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 24 Mar 2023 06:58:30 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Fri, Mar 24, 2023 at 06:58:30AM +0100, Drouvot, Bertrand wrote:\n> - Does not include the refactoring for pg_stat_get_xact_tuples_inserted(),\n> pg_stat_get_xact_tuples_updated() and pg_stat_get_xact_tuples_deleted() (as\n> they depend of [2] mentioned above) \n> \n> - Does not include the refactoring for\n> pg_stat_get_xact_function_total_time(),\n> pg_stat_get_xact_function_self_time(), \n> pg_stat_get_function_total_time() and\n> pg_stat_get_function_self_time(). 
I think they can be done in a\n> dedicated commit once we agree on the renaming for\n> PG_STAT_GET_DBENTRY_FLOAT8 (see Andres's comment up-thread) so that\n> the new macros can match the future agreement.\n> \n> - Does include the refactoring of the new\n> - pg_stat_get_xact_tuples_newpage_updated() function (added in\n> - ae4fdde135) \n\nFine by me. One step is better than no steps, and this helps:\n 1 file changed, 29 insertions(+), 97 deletions(-)\n\nI'll go apply that if there are no objections.\n--\nMichael", "msg_date": "Sat, 25 Mar 2023 11:50:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Sat, Mar 25, 2023 at 11:50:50AM +0900, Michael Paquier wrote:\n> On Fri, Mar 24, 2023 at 06:58:30AM +0100, Drouvot, Bertrand wrote:\n>> - Does not include the refactoring for\n>> pg_stat_get_xact_function_total_time(),\n>> pg_stat_get_xact_function_self_time(), \n>> pg_stat_get_function_total_time() and\n>> pg_stat_get_function_self_time(). I think they can be done in a\n>> dedicated commit once we agree on the renaming for\n>> PG_STAT_GET_DBENTRY_FLOAT8 (see Andres's comment up-thread) so that\n>> the new macros can match the future agreement.\n\nThanks for the reminder. I have completely missed that this is\nmentioned in [1], and that it is all about 8018ffb. The suggestion to\nprefix the macro names with a \"_MS\" to outline the conversion sounds\nlike a good one seen from here. So please find attached a patch to do\nthis adjustment, completed with a similar switch for the two counters\nof the function entries.\n\n>> - Does include the refactoring of the new\n>> - pg_stat_get_xact_tuples_newpage_updated() function (added in\n>> - ae4fdde135) \n> \n> Fine by me. 
One step is better than no steps, and this helps:\n> 1 file changed, 29 insertions(+), 97 deletions(-)\n> \n> I'll go apply that if there are no objections.\n\nJust did this part to shave a bit more code.\n\n[1]: https://www.postgresql.org/message-id/20230111225907.6el6c5j3hukizqxc@awork3.anarazel.de\n--\nMichael", "msg_date": "Mon, 27 Mar 2023 10:20:34 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "Hi,\n\nOn 3/27/23 3:20 AM, Michael Paquier wrote:\n> On Sat, Mar 25, 2023 at 11:50:50AM +0900, Michael Paquier wrote:\n>> On Fri, Mar 24, 2023 at 06:58:30AM +0100, Drouvot, Bertrand wrote:\n>>> - Does not include the refactoring for\n>>> pg_stat_get_xact_function_total_time(),\n>>> pg_stat_get_xact_function_self_time(),\n>>> pg_stat_get_function_total_time() and\n>>> pg_stat_get_function_self_time(). I think they can be done in a\n>>> dedicated commit once we agree on the renaming for\n>>> PG_STAT_GET_DBENTRY_FLOAT8 (see Andres's comment up-thread) so that\n>>> the new macros can match the future agreement.\n> \n> Thanks for the reminder. I have completely missed that this is\n> mentioned in [1], and that it is all about 8018ffb. The suggestion to\n> prefix the macro names with a \"_MS\" to outline the conversion sounds\n> like a good one seen from here. So please find attached a patch to do\n> this adjustment, completed with a similar switch for the two counters\n> of the function entries.\n> \n\nThanks! LGTM, but what about also taking care of pg_stat_get_xact_function_total_time()\nand pg_stat_get_xact_function_self_time() while at it?\n\n>>> - Does include the refactoring of the new\n>>> - pg_stat_get_xact_tuples_newpage_updated() function (added in\n>>> - ae4fdde135)\n>>\n>> Fine by me. 
One step is better than no steps, and this helps:\n>> 1 file changed, 29 insertions(+), 97 deletions(-)\n>>\n>> I'll go apply that if there are no objections.\n> \n> Just did this part to shave a bit more code.\n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 27 Mar 2023 07:45:26 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Mon, Mar 27, 2023 at 07:45:26AM +0200, Drouvot, Bertrand wrote:\n> Thanks! LGTM, but what about also taking care of pg_stat_get_xact_function_total_time()\n> and pg_stat_get_xact_function_self_time() while at it?\n\nWith a macro that uses INSTR_TIME_GET_MILLISEC() to cope with\ninstr_time? Why not, that's one duplication less.\n--\nMichael", "msg_date": "Mon, 27 Mar 2023 15:40:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On 3/27/23 8:40 AM, Michael Paquier wrote:\n> On Mon, Mar 27, 2023 at 07:45:26AM +0200, Drouvot, Bertrand wrote:\n>> Thanks! LGTM, but what about also taking care of pg_stat_get_xact_function_total_time()\n>> and pg_stat_get_xact_function_self_time() while at it?\n> \n> With a macro that uses INSTR_TIME_GET_MILLISEC() to cope with\n> instr_time? Why not, that's one duplication less.\n\nYes, something like V1 up-thread was doing. 
I think it can be added with your current proposal.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 27 Mar 2023 08:54:13 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Mon, Mar 27, 2023 at 08:54:13AM +0200, Drouvot, Bertrand wrote:\n> Yes, something like V1 up-thread was doing. I think it can be added with your current proposal.\n\nSure, I can write that. Or perhaps you'd prefer write something\nyourself?\n--\nMichael", "msg_date": "Mon, 27 Mar 2023 16:23:38 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On 3/27/23 9:23 AM, Michael Paquier wrote:\n> On Mon, Mar 27, 2023 at 08:54:13AM +0200, Drouvot, Bertrand wrote:\n>> Yes, something like V1 up-thread was doing. I think it can be added with your current proposal.\n> \n> Sure, I can write that. 
Or perhaps you'd prefer write something\n> yourself?\n\nPlease find attached V2 adding pg_stat_get_xact_function_total_time()\nand pg_stat_get_xact_function_self_time() to the party.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 27 Mar 2023 10:11:21 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Mon, Mar 27, 2023 at 10:11:21AM +0200, Drouvot, Bertrand wrote:\n> Please find attached V2 adding pg_stat_get_xact_function_total_time()\n> and pg_stat_get_xact_function_self_time() to the party.\n\nThe patch has one mistake: PG_STAT_GET_XACT_FUNCENTRY_FLOAT8_MS does\nnot need a slash on its last line or it would include the next, empty\nline. This could lead to mistakes (no need to send a new patch just\nfor that).\n--\nMichael", "msg_date": "Mon, 27 Mar 2023 19:08:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "On Mon, Mar 27, 2023 at 07:08:51PM +0900, Michael Paquier wrote:\n> The patch has one mistake: PG_STAT_GET_XACT_FUNCENTRY_FLOAT8_MS does\n> not need a slash on its last line or it would include the next, empty\n> line. 
This could lead to mistakes (no need to send a new patch just\n> for that).\n\nAdjusted that, and the rest was fine after a second look, so applied.\nIt looks like we are done for now with this thread, so I have marked\nit as committed in the CF app.\n--\nMichael", "msg_date": "Tue, 28 Mar 2023 07:41:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" }, { "msg_contents": "\n\nOn 3/28/23 12:41 AM, Michael Paquier wrote:\n> On Mon, Mar 27, 2023 at 07:08:51PM +0900, Michael Paquier wrote:\n>> The patch has one mistake: PG_STAT_GET_XACT_FUNCENTRY_FLOAT8_MS does\n>> not need a slash on its last line or it would include the next, empty\n>> line. This could lead to mistakes (no need to send a new patch just\n>> for that).\n> \n> Adjusted that, and the rest was fine after a second look, so applied.\n> It looks like we are done for now with this thread, so I have marked\n> it as committed in the CF app.\n\nThanks for having corrected the mistake and applied the patch!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Mar 2023 07:19:53 +0200", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Generate pg_stat_get_xact*() functions with Macros" } ]
[ { "msg_contents": "Hi Team,\n\nIn order to restore dumped extended statistics (stxdndistinct, stxddependencies, stxdmcv) we need to provide input functions to parse pg_distinct/pg_dependency/pg_mcv_list strings.\n\nToday we get the ERROR \"cannot accept a value of type pg_ndistinct/pg_dependencies/pg_mcv_list\" when we try to do an insert of any type.\n\nApproach tried:\n- Using a yacc grammar file (statistics_gram.y) to parse the input string to its internal format for the types pg_distinct and pg_dependencies\n- We are just calling byteain() for serialized input text of type pg_mcv_list.\n\nCurrently the changes are working locally; I would like to push the changes upstream if there is any use case for Postgres, and would like to know if there is any interest from the Postgres side.\n\nRegards,\nHari Krishna", "msg_date": "Thu, 5 Jan 2023 18:29:03 +0000", "msg_from": "Hari krishna Maddileti <hmaddileti@vmware.com>", "msg_from_op": true, "msg_subject": "Support for dumping extended statistics" }, { "msg_contents": "On Thu, Jan 5, 2023 at 06:29:03PM +0000, Hari krishna Maddileti wrote:\n> Hi Team,\n> In order to restore dumped extended statistics (stxdndistinct,\n> stxddependencies, stxdmcv) we need to provide input functions to parse\n> pg_distinct/pg_dependency/pg_mcv_list strings.\n> \n> Today we get the ERROR \"cannot accept a value of type pg_ndistinct/\n> pg_dependencies/pg_mcv_list\" when we try to do an insert of any type.\n> \n> Approach tried:\n> \n> - Using a yacc grammar file (statistics_gram.y) to parse the input string to its\n> internal format for the types pg_distinct and pg_dependencies\n> \n> - We are just calling byteain() for serialized input text of type pg_mcv_list.\n> \n> Currently the changes are working locally, I would like to push the commit\n> changes to upstream if there is any use case for Postgres. Would like to know if\n> there is any interest from the Postgres side.\n\nThere is certainly interest in allowing the optimizer statistics to be\ndumped and reloaded. This could be used by pg_restore and pg_upgrade.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. 
They make you human, rather than perfect,\nwhich you will never be.\n\n\n", "msg_date": "Fri, 6 Jan 2023 21:39:59 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "Thanks Team for showing interest.\n\nPlease find the attached patch, which uses the same approach as mentioned in previous email to implement input functions to parse pg_distinct, pg_dependency and pg_mcv_list strings.\n\n\nRegards,\nHari\nFrom: Bruce Momjian <bruce@momjian.us>\nDate: Saturday, 7 January 2023 at 8:10 AM\nTo: Hari krishna Maddileti <hmaddileti@vmware.com>\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Support for dumping extended statistics\n!! External Email\n\nOn Thu, Jan 5, 2023 at 06:29:03PM +0000, Hari krishna Maddileti wrote:\n> Hi Team,\n> In order to restore dumped extended statistics (stxdndistinct,\n> stxddependencies, stxdmcv) we need to provide input functions to parse\n> pg_distinct/pg_dependency/pg_mcv_list strings.\n>\n> Today we get the ERROR \"cannot accept a value of type pg_ndistinct/\n> pg_dependencies/pg_mcv_list\" when we try to do an insert of any type.\n>\n> Approch tried:\n>\n> - Using yacc grammar file (statistics_gram.y) to parse the input string to its\n> internal format for the types pg_distinct and pg_dependencies\n>\n> - We are just calling byteain() for serialized input text of type pg_mcv_list.\n>\n> Currently the changes are working locally, I would like to push the commit\n> changes to upstream if there any usecase for postgres. Would like to know if\n> there any interest from postgres side.\n\nThere is certainly interest in allowing the optimizer statistics to be\ndumped and reloaded. 
This could be used by pg_restore and pg_upgrade.\n\n--\n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\nEmbrace your flaws. They make you human, rather than perfect,\nwhich you will never be.\n\n!! External Email: This email originated from outside of the organization. 
Do not click links or open attachments unless you recognize the sender.", "msg_date": "Tue, 10 Jan 2023 11:28:36 +0000", "msg_from": "Hari krishna Maddileti <hmaddileti@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "On Tue, Jan 10, 2023 at 11:28:36AM +0000, Hari krishna Maddileti wrote:\n> Thanks Team for showing interest.\n> \n> Please find the attached patch, which uses the same approach as mentioned in previous email to implement input functions to parse pg_distinct, pg_dependency and pg_mcv_list strings.\n\nThe patch is failing ; you need to make the corresponding update to\nmeson as you did for make.\n\nhttp://cfbot.cputube.org/david-kimura.html\nhttps://wiki.postgresql.org/wiki/Meson_for_patch_authors\nhttps://wiki.postgresql.org/wiki/Meson\n\nBut actually, it also fails to compile with \"make\".\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 14 Jan 2023 14:57:07 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "Hi Justin,\n Thanks for the update, I have attached the updated patch with meson compatible and addressed warnings from make file too.\n\n\nOn 15/01/23, 2:27 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\n\n!! 
External Email\n\nOn Tue, Jan 10, 2023 at 11:28:36AM +0000, Hari krishna Maddileti wrote:\n> Thanks Team for showing interest.\n>\n> Please find the attached patch, which uses the same approach as mentioned in previous email to implement input functions to parse pg_distinct, pg_dependency and pg_mcv_list strings.\n\nThe patch is failing ; you need to make the corresponding update to\nmeson as you did for make.\n\nhttp://cfbot.cputube.org/david-kimura.html\nhttps://wiki.postgresql.org/wiki/Meson_for_patch_authors\nhttps://wiki.postgresql.org/wiki/Meson\n\nBut actually, it also fails to compile with \"make\".\n\n--\nJustin\n\n!! External Email: This email originated from outside of the organization. 
Do not click links or open attachments unless you recognize the sender.", "msg_date": "Wed, 1 Feb 2023 04:38:17 +0000", "msg_from": "Hari krishna Maddileti <hmaddileti@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "+ pgsql-hackers\n\nHi Justin,\n Thanks for the update, I have attached the updated patch with meson compatible and addressed warnings from make file too.\n\n\nOn 15/01/23, 2:27 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\n\n!! External Email\n\nOn Tue, Jan 10, 2023 at 11:28:36AM +0000, Hari krishna Maddileti wrote:\n> Thanks Team for showing interest.\n>\n> Please find the attached patch, which uses the same approach as mentioned in previous email to implement input functions to parse pg_distinct, pg_dependency and pg_mcv_list strings.\n\nThe patch is failing ; you need to make the corresponding update to\nmeson as you did for make.\n\nhttp://cfbot.cputube.org/david-kimura.html\nhttps://wiki.postgresql.org/wiki/Meson_for_patch_authors\nhttps://wiki.postgresql.org/wiki/Meson\n\nBut actually, it also fails to compile with \"make\".\n\n--\nJustin\n\n!! External Email: This email originated from outside of the organization. 
Do not click links or open attachments unless you recognize the sender.", "msg_date": "Wed, 1 Feb 2023 05:12:23 +0000", "msg_from": "Hari krishna Maddileti <hmaddileti@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "On Wed, Feb 01, 2023 at 04:38:17AM +0000, Hari krishna Maddileti wrote:\n> Hi Justin,\n> Thanks for the update, I have attached the updated patch with meson compatible and addressed warnings from make file too.\n\nThanks - I see it compiles now under both build systems.\n\nBut there's build warnings, and it fails regression tests.\n\nhttp://cfbot.cputube.org/david-kimura.html\n\nOn 15/01/23, 2:27 AM, \"Justin Pryzby\" <pryzby@telsasoft.com> wrote:\n> \n> On Tue, Jan 10, 2023 at 11:28:36AM +0000, Hari krishna Maddileti wrote:\n> > Thanks Team for showing interest.\n> >\n> > Please find the attached patch, which uses the same approach as mentioned in previous email to implement input functions to parse pg_distinct, pg_dependency and pg_mcv_list strings.\n> \n> The patch is failing ; you need to make the corresponding update to\n> meson as you did for make.\n> \n> But actually, it also fails to compile with \"make\".\n\n\n", "msg_date": "Wed, 1 Feb 2023 07:07:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "\n\nOn 1/7/23 03:39, Bruce Momjian wrote:\n> On Thu, Jan 5, 2023 at 06:29:03PM +0000, Hari krishna Maddileti wrote:\n>> Hi Team,\n>> In order to restore dumped extended statistics (stxdndistinct,\n>> stxddependencies, stxdmcv) we need to provide input functions to parse\n>> pg_distinct/pg_dependency/pg_mcv_list strings.\n>>\n>> Today we get the ERROR \"cannot accept a value of type pg_ndistinct/\n>> pg_dependencies/pg_mcv_list\" when we try to do an insert of any type.\n>>\n>> Approch tried:\n>>\n>> - Using yacc grammar file (statistics_gram.y) to parse the input 
string to its\n>> internal format for the types pg_distinct and pg_dependencies\n>>\n>> - We are just calling byteain() for serialized input text of type pg_mcv_list.\n>>\n>> Currently the changes are working locally, I would like to push the commit\n>> changes to upstream if there any usecase for postgres. Would like to know if\n>> there any interest from postgres side.\n> \n> There is certainly interest in allowing the optimizer statistics to be\n> dumped and reloaded. This could be used by pg_restore and pg_upgrade.\n> \n\nIndeed, although I think it'd be better to deal with regular statistics\n(which is what 99% of systems use). Furthermore, we should probably\nthink about differences between major versions - until now we could\nchange on-disk format of the statistics, because we have reset them.\nIt'd be silly to do dump on version X, and then fail to restore it on\n(X+1) just because the statistics changed a bit.\n\nSo we need to be able to determine is the statistics has the correct\nformat/version, or what. And we need to do that for pg_upgrade.\n\nAt the very least we need an option to skip restoring statistics, or\nsomething like that.\n\nregards\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 1 Feb 2023 14:30:01 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 1/7/23 03:39, Bruce Momjian wrote:\n>> There is certainly interest in allowing the optimizer statistics to be\n>> dumped and reloaded. This could be used by pg_restore and pg_upgrade.\n\n> Indeed, although I think it'd be better to deal with regular statistics\n> (which is what 99% of systems use). 
Furthermore, we should probably\n> think about differences between major versions - until now we could\n> change on-disk format of the statistics, because we have reset them.\n\nYeah, it's extremely odd to be proposing dump/reload for extended\nstats when we don't yet have it for plain stats. And yes, the main\nstumbling block is that you need to have a plan for stats changing\nacross versions, or even just environmental issues. For example,\nwhat if the target DB doesn't use the same collation as the source?\nThat would affect string sorting and therefore at least partially\ninvalidate histograms for text columns.\n\nI actually did some work on this, probably close to ten years ago\nnow, and came up with some hacks that didn't pass community review.\nIt'd be a good idea to dig up those old discussions if you want to\nre-open the topic.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 01 Feb 2023 09:58:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Support for dumping extended statistics" }, { "msg_contents": "Hi,\n\nOn 2023-02-01 04:38:17 +0000, Hari krishna Maddileti wrote:\n> Thanks for the update, I have attached the updated patch with meson compatible and addressed warnings from make file too.\n\nThis patch consistently crashes in CI:\nhttps://cirrus-ci.com/github/postgresql-cfbot/postgresql/commitfest%2F42%2F4114\n\nExample crash:\nhttps://api.cirrus-ci.com/v1/task/4910781754507264/logs/cores.log\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 13 Feb 2023 16:16:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Support for dumping extended statistics" } ]
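An aside on what the input functions discussed in this thread have to consume: pg_ndistinct's text output is a JSON-style object whose keys are comma-separated attribute-number lists and whose values are distinct-group counts, e.g. {"1, 2": 33178, "1, 3": 13264}. As a rough sketch only — the function name below is invented for illustration, and the patch under discussion implements this in C with a yacc grammar (statistics_gram.y) rather than leaning on a JSON parser — the parsing job looks roughly like:

```python
import json

def parse_pg_ndistinct(text):
    """Parse a pg_ndistinct-style text value such as
    '{"1, 2": 33178, "1, 3": 13264}' into a dict mapping
    attribute-number tuples to distinct-group counts.

    Illustrative sketch only: the real input function must live in
    the backend and validate attribute numbers against the relation.
    """
    raw = json.loads(text)  # the text format happens to be valid JSON
    result = {}
    for key, count in raw.items():
        # keys are comma-separated attnum lists, e.g. "1, 2"
        attnums = tuple(int(a) for a in key.split(","))
        result[attnums] = count
    return result
```

This also shows why round-tripping is the easy half of the problem: as Tomas and Tom note above, the hard part is deciding what a restore should do when the statistics format or environment (collation, major version) differs from where the dump was taken.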
[ { "msg_contents": "This does not seem good:\n\nregression=# create table pp (a int, b int) partition by range(a);\nCREATE TABLE\nregression=# create table cc (a int generated always as (b+1) stored, b int);\nCREATE TABLE\nregression=# alter table pp attach partition cc for values from ('1') to ('10'); \nALTER TABLE\nregression=# insert into pp values(1,100);\nINSERT 0 1\nregression=# table pp;\n a | b \n-----+-----\n 101 | 100\n(1 row)\n\nI'm not sure to what extent it's sensible for partitions to have\nGENERATED columns that don't match their parent; but even if that's\nokay for payload columns I doubt we want to allow partitioning\ncolumns to be GENERATED.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Jan 2023 13:53:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "On Fri, Jan 6, 2023 at 3:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> This does not seem good:\n>\n> regression=# create table pp (a int, b int) partition by range(a);\n> CREATE TABLE\n> regression=# create table cc (a int generated always as (b+1) stored, b int);\n> CREATE TABLE\n> regression=# alter table pp attach partition cc for values from ('1') to ('10');\n> ALTER TABLE\n> regression=# insert into pp values(1,100);\n> INSERT 0 1\n> regression=# table pp;\n> a | b\n> -----+-----\n> 101 | 100\n> (1 row)\n\nThis indeed is broken and seems like an oversight. :-(\n\n> I'm not sure to what extent it's sensible for partitions to have\n> GENERATED columns that don't match their parent; but even if that's\n> okay for payload columns I doubt we want to allow partitioning\n> columns to be GENERATED.\n\nActually, I'm inclined to disallow partitions to have *any* generated\ncolumns that are not present in the parent table. 
The main reason for\nthat is the inconvenience of checking that a partition's generated\ncolumns doesn't override the partition key column of an ancestor that\nis not its immediate parent, which MergeAttributesIntoExisting() has\naccess to and would have been locked.\n\nPatch doing it that way is attached. Perhaps the newly added error\nmessage should match CREATE TABLE .. PARTITION OF's, but I found the\nlatter to be not detailed enough, or maybe that's just me.\n\n\n\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 6 Jan 2023 17:26:58 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Fri, Jan 6, 2023 at 3:53 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I'm not sure to what extent it's sensible for partitions to have\n>> GENERATED columns that don't match their parent; but even if that's\n>> okay for payload columns I doubt we want to allow partitioning\n>> columns to be GENERATED.\n\n> Actually, I'm inclined to disallow partitions to have *any* generated\n> columns that are not present in the parent table. The main reason for\n> that is the inconvenience of checking that a partition's generated\n> columns doesn't override the partition key column of an ancestor that\n> is not its immediate parent, which MergeAttributesIntoExisting() has\n> access to and would have been locked.\n\nAfter thinking about this awhile, I feel that we ought to disallow\nit in the traditional-inheritance case as well. 
The reason is that\nthere are semantic prohibitions on inserting or updating a generated\ncolumn, eg\n\nregression=# create table t (f1 int, f2 int generated always as (f1+1) stored);\nCREATE TABLE\nregression=# update t set f2=42;\nERROR: column \"f2\" can only be updated to DEFAULT\nDETAIL: Column \"f2\" is a generated column.\n\nIt's not very reasonable to have to recheck that for child tables,\nand we don't. But if one does this:\n\nregression=# create table pp (f1 int, f2 int);\nCREATE TABLE\nregression=# create table cc (f1 int, f2 int generated always as (f1+1) stored) inherits(pp);\nNOTICE: merging column \"f1\" with inherited definition\nNOTICE: merging column \"f2\" with inherited definition\nCREATE TABLE\nregression=# insert into cc values(1);\nINSERT 0 1\nregression=# update pp set f2 = 99 where f1 = 1;\nUPDATE 1\nregression=# table cc;\n f1 | f2 \n----+----\n 1 | 99\n(1 row)\n\nThat is surely just as broken as the partition-based case.\n\nI also note that the code adjacent to what you added is\n\n /*\n * If parent column is generated, child column must be, too.\n */\n if (attribute->attgenerated && !childatt->attgenerated)\n ereport(ERROR, ...\n\nwithout any exception for non-partition inheritance, and the\nfollowing check for equivalent generation expressions has\nno such exception either. So it's not very clear why this\ntest should have an exception.\n\n> Patch doing it that way is attached. Perhaps the newly added error\n> message should match CREATE TABLE .. 
PARTITION OF's, but I found the\n> latter to be not detailed enough, or maybe that's just me.\n\nMaybe we should improve the existing error message while we're at it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Jan 2023 12:32:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "I wrote:\n> After thinking about this awhile, I feel that we ought to disallow\n> it in the traditional-inheritance case as well. The reason is that\n> there are semantic prohibitions on inserting or updating a generated\n> column, eg\n\n> regression=# create table t (f1 int, f2 int generated always as (f1+1) stored);\n> CREATE TABLE\n> regression=# update t set f2=42;\n> ERROR: column \"f2\" can only be updated to DEFAULT\n> DETAIL: Column \"f2\" is a generated column.\n\n> It's not very reasonable to have to recheck that for child tables,\n> and we don't. But if one does this:\n\n> regression=# create table pp (f1 int, f2 int);\n> CREATE TABLE\n> regression=# create table cc (f1 int, f2 int generated always as (f1+1) stored) inherits(pp);\n> NOTICE: merging column \"f1\" with inherited definition\n> NOTICE: merging column \"f2\" with inherited definition\n> CREATE TABLE\n> regression=# insert into cc values(1);\n> INSERT 0 1\n> regression=# update pp set f2 = 99 where f1 = 1;\n> UPDATE 1\n> regression=# table cc;\n> f1 | f2 \n> ----+----\n> 1 | 99\n> (1 row)\n\n> That is surely just as broken as the partition-based case.\n\nSo what we need is about like this. This is definitely not something\nto back-patch, since it's taking away what had been a documented\nbehavior. You could imagine trying to prevent such UPDATEs instead,\nbut I judge it not worth the trouble. 
If anyone were actually using\nthis capability we'd have heard bug reports.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 09 Jan 2023 16:41:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "On Tue, Jan 10, 2023 at 6:41 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > After thinking about this awhile, I feel that we ought to disallow\n> > it in the traditional-inheritance case as well. The reason is that\n> > there are semantic prohibitions on inserting or updating a generated\n> > column, eg\n>\n> > regression=# create table t (f1 int, f2 int generated always as (f1+1) stored);\n> > CREATE TABLE\n> > regression=# update t set f2=42;\n> > ERROR: column \"f2\" can only be updated to DEFAULT\n> > DETAIL: Column \"f2\" is a generated column.\n>\n> > It's not very reasonable to have to recheck that for child tables,\n> > and we don't. But if one does this:\n>\n> > regression=# create table pp (f1 int, f2 int);\n> > CREATE TABLE\n> > regression=# create table cc (f1 int, f2 int generated always as (f1+1) stored) inherits(pp);\n> > NOTICE: merging column \"f1\" with inherited definition\n> > NOTICE: merging column \"f2\" with inherited definition\n> > CREATE TABLE\n> > regression=# insert into cc values(1);\n> > INSERT 0 1\n> > regression=# update pp set f2 = 99 where f1 = 1;\n> > UPDATE 1\n> > regression=# table cc;\n> > f1 | f2\n> > ----+----\n> > 1 | 99\n> > (1 row)\n>\n> > That is surely just as broken as the partition-based case.\n\nAgree that it looks bad.\n\n> So what we need is about like this. This is definitely not something\n> to back-patch, since it's taking away what had been a documented\n> behavior. You could imagine trying to prevent such UPDATEs instead,\n> but I judge it not worth the trouble. If anyone were actually using\n> this capability we'd have heard bug reports.\n\nThanks for the patch. 
It looks good, though I guess you said that we\nshould also change the error message that CREATE TABLE ... PARTITION\nOF emits to match the other cases while we're here. I've attached a\ndelta patch.\n\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 10 Jan 2023 11:30:16 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Thanks for the patch. It looks good, though I guess you said that we\n> should also change the error message that CREATE TABLE ... PARTITION\n> OF emits to match the other cases while we're here. I've attached a\n> delta patch.\n\nThanks. I hadn't touched that issue because I wasn't entirely sure\nwhich error message(s) you were unhappy with. These changes look\nfine offhand.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Jan 2023 21:38:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "I wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n>> Thanks for the patch. It looks good, though I guess you said that we\n>> should also change the error message that CREATE TABLE ... PARTITION\n>> OF emits to match the other cases while we're here. I've attached a\n>> delta patch.\n\n> Thanks. I hadn't touched that issue because I wasn't entirely sure\n> which error message(s) you were unhappy with. These changes look\n> fine offhand.\n\nSo, after playing with that a bit ... 
removing the block in\nparse_utilcmd.c allows you to do\n\nregression=# CREATE TABLE gtest_parent (f1 date NOT NULL, f2 bigint, f3 bigint GENERATED ALWAYS AS (f2 * 2) STORED) PARTITION BY RANGE (f1);\nCREATE TABLE\nregression=# CREATE TABLE gtest_child PARTITION OF gtest_parent (\nregression(# f3 WITH OPTIONS GENERATED ALWAYS AS (f2 * 3) STORED\nregression(# ) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');\nCREATE TABLE\nregression=# \\d gtest_child\n Table \"public.gtest_child\"\n Column | Type | Collation | Nullable | Default \n--------+--------+-----------+----------+-------------------------------------\n f1 | date | | not null | \n f2 | bigint | | | \n f3 | bigint | | | generated always as (f2 * 3) stored\nPartition of: gtest_parent FOR VALUES FROM ('2016-07-01') TO ('2016-08-01')\n\nregression=# insert into gtest_parent values('2016-07-01', 42);\nINSERT 0 1\nregression=# table gtest_parent;\n f1 | f2 | f3 \n------------+----+-----\n 2016-07-01 | 42 | 126\n(1 row)\n\nThat is, you can make a partition with a different generated expression\nthan the parent has. Do we really want to allow that? I think it works\nas far as the backend is concerned, but it breaks pg_dump, which tries\nto dump this state of affairs as\n\nCREATE TABLE public.gtest_parent (\n f1 date NOT NULL,\n f2 bigint,\n f3 bigint GENERATED ALWAYS AS ((f2 * 2)) STORED\n)\nPARTITION BY RANGE (f1);\n\nCREATE TABLE public.gtest_child (\n f1 date NOT NULL,\n f2 bigint,\n f3 bigint GENERATED ALWAYS AS ((f2 * 3)) STORED\n);\n\nALTER TABLE ONLY public.gtest_parent ATTACH PARTITION public.gtest_child FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');\n\nand that fails at reload because the ATTACH PARTITION code path\nchecks for equivalence of the generation expressions.\n\nThis different-generated-expression situation isn't really morally\ndifferent from different ordinary DEFAULT expressions, which we\ndo endeavor to support. 
So maybe we should deem this a supported\ncase and remove ATTACH PARTITION's insistence that the generation\nexpressions match ... which I think would be a good thing anyway,\nbecause that check-for-same-reverse-compiled-expression business\nis pretty cheesy in itself. AFAIK, 3f7836ff6 took care of the\nonly problem that the backend would have with this, and pg_dump\nlooks like it will work as long as the backend will take the\nATTACH command.\n\nI also looked into making CREATE TABLE ... PARTITION OF reject\nthis case; but that's much harder than it sounds, because what we\nhave at the relevant point is a raw (unanalyzed) expression for\nthe child's generation expression but a cooked one for the\nparent's, so there is no easy way to match the two.\n\nIn short, it's seeming like the rule for both partitioning and\ntraditional inheritance ought to be \"a child column must have\nthe same generated-or-not property as its parent, but their\ngeneration expressions need not be the same\". Thoughts?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 17:13:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "On Wed, Jan 11, 2023 at 7:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> >> Thanks for the patch. It looks good, though I guess you said that we\n> >> should also change the error message that CREATE TABLE ... PARTITION\n> >> OF emits to match the other cases while we're here. I've attached a\n> >> delta patch.\n>\n> > Thanks. I hadn't touched that issue because I wasn't entirely sure\n> > which error message(s) you were unhappy with. These changes look\n> > fine offhand.\n>\n> So, after playing with that a bit ... 
removing the block in\n> parse_utilcmd.c allows you to do\n>\n> regression=# CREATE TABLE gtest_parent (f1 date NOT NULL, f2 bigint, f3 bigint GENERATED ALWAYS AS (f2 * 2) STORED) PARTITION BY RANGE (f1);\n> CREATE TABLE\n> regression=# CREATE TABLE gtest_child PARTITION OF gtest_parent (\n> regression(# f3 WITH OPTIONS GENERATED ALWAYS AS (f2 * 3) STORED\n> regression(# ) FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');\n> CREATE TABLE\n> regression=# \\d gtest_child\n> Table \"public.gtest_child\"\n> Column | Type | Collation | Nullable | Default\n> --------+--------+-----------+----------+-------------------------------------\n> f1 | date | | not null |\n> f2 | bigint | | |\n> f3 | bigint | | | generated always as (f2 * 3) stored\n> Partition of: gtest_parent FOR VALUES FROM ('2016-07-01') TO ('2016-08-01')\n>\n> regression=# insert into gtest_parent values('2016-07-01', 42);\n> INSERT 0 1\n> regression=# table gtest_parent;\n> f1 | f2 | f3\n> ------------+----+-----\n> 2016-07-01 | 42 | 126\n> (1 row)\n>\n> That is, you can make a partition with a different generated expression\n> than the parent has. Do we really want to allow that? 
I think it works\n> as far as the backend is concerned, but it breaks pg_dump, which tries\n> to dump this state of affairs as\n>\n> CREATE TABLE public.gtest_parent (\n> f1 date NOT NULL,\n> f2 bigint,\n> f3 bigint GENERATED ALWAYS AS ((f2 * 2)) STORED\n> )\n> PARTITION BY RANGE (f1);\n>\n> CREATE TABLE public.gtest_child (\n> f1 date NOT NULL,\n> f2 bigint,\n> f3 bigint GENERATED ALWAYS AS ((f2 * 3)) STORED\n> );\n>\n> ALTER TABLE ONLY public.gtest_parent ATTACH PARTITION public.gtest_child FOR VALUES FROM ('2016-07-01') TO ('2016-08-01');\n>\n> and that fails at reload because the ATTACH PARTITION code path\n> checks for equivalence of the generation expressions.\n>\n> This different-generated-expression situation isn't really morally\n> different from different ordinary DEFAULT expressions, which we\n> do endeavor to support.\n\nAh, right, we are a bit more flexible in allowing that. Though,\npartition-specific defaults, unlike generated columns, are not\nrespected when inserting/updating via the parent:\n\ncreate table partp (a int, b int generated always as (a+1) stored, c\nint default 0) partition by list (a);\ncreate table partc1 partition of partp (b generated always as (a+2)\nstored, c default 1) for values in (1);\ninsert into partp values (1);\ntable partp;\n a | b | c\n---+---+---\n 1 | 3 | 0\n(1 row)\n\ncreate table partc2 partition of partp (b generated always as (a+2)\nstored) for values in (2);\nupdate partp set a = 2;\ntable partp;\n a | b | c\n---+---+---\n 2 | 4 | 0\n(1 row)\n\n> So maybe we should deem this a supported\n> case and remove ATTACH PARTITION's insistence that the generation\n> expressions match\n\nI tend to agree now that we have 3f7836ff6.\n\n> ... 
which I think would be a good thing anyway,\n> because that check-for-same-reverse-compiled-expression business\n> is pretty cheesy in itself.\n\nHmm, yeah, we usually transpose a parent's expression into one that\nhas a child's attribute numbers and vice versa when checking their\nequivalence.\n\n> AFAIK, 3f7836ff6 took care of the\n> only problem that the backend would have with this, and pg_dump\n> looks like it will work as long as the backend will take the\n> ATTACH command.\n\nRight.\n\n> I also looked into making CREATE TABLE ... PARTITION OF reject\n> this case; but that's much harder than it sounds, because what we\n> have at the relevant point is a raw (unanalyzed) expression for\n> the child's generation expression but a cooked one for the\n> parent's, so there is no easy way to match the two.\n>\n> In short, it's seeming like the rule for both partitioning and\n> traditional inheritance ought to be \"a child column must have\n> the same generated-or-not property as its parent, but their\n> generation expressions need not be the same\". Thoughts?\n\nAgreed.\n\nI've updated your disallow-generated-child-columns-2.patch to do this,\nand have also merged the delta post that I had attached with my last\nemail, whose contents you sound to agree with.\n--\nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 11 Jan 2023 12:43:50 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I've updated your disallow-generated-child-columns-2.patch to do this,\n> and have also merged the delta post that I had attached with my last\n> email, whose contents you sound to agree with.\n\nPushed with some further work to improve the handling of multiple-\ninheritance cases. 
We still need to insist that all or none of the\nparent columns are generated, but we don't have to require their\ngeneration expressions to be alike: that can be resolved by letting\nthe child table override the expression, much as we've long done for\nplain default expressions. (This did need some work in pg_dump\nafter all.) I'm pretty happy with where this turned out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 15:58:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "On Thu, Jan 12, 2023 at 5:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > I've updated your disallow-generated-child-columns-2.patch to do this,\n> > and have also merged the delta post that I had attached with my last\n> > email, whose contents you sound to agree with.\n>\n> Pushed with some further work to improve the handling of multiple-\n> inheritance cases. We still need to insist that all or none of the\n> parent columns are generated, but we don't have to require their\n> generation expressions to be alike: that can be resolved by letting\n> the child table override the expression, much as we've long done for\n> plain default expressions. (This did need some work in pg_dump\n> after all.) 
I'm pretty happy with where this turned out.\n\nThanks, that all looks more consistent now indeed.\n\nI noticed a typo in the doc additions, which I've attached a fix for.\n\n-- \nThanks, Amit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 12 Jan 2023 12:00:22 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I noticed a typo in the doc additions, which I've attached a fix for.\n\nDoh, right, pushed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 22:20:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "Hello,\n\n11.01.2023 23:58, Tom Lane wrote:\n> Amit Langote<amitlangote09@gmail.com> writes:\n>> I've updated your disallow-generated-child-columns-2.patch to do this,\n>> and have also merged the delta post that I had attached with my last\n>> email, whose contents you sound to agree with.\n> Pushed with some further work to improve the handling of multiple-\n> inheritance cases. We still need to insist that all or none of the\n> parent columns are generated, but we don't have to require their\n> generation expressions to be alike: that can be resolved by letting\n> the child table override the expression, much as we've long done for\n> plain default expressions. (This did need some work in pg_dump\n> after all.) 
I'm pretty happy with where this turned out.\nI've encountered a query that triggers an assert added in that commit:\nCREATE TABLE t(a int, b int GENERATED ALWAYS AS (a) STORED) PARTITION BY \nRANGE (a);\nCREATE TABLE tp PARTITION OF t(b DEFAULT 1) FOR VALUES FROM (0) to (1);\n\nCore was generated by `postgres: law regression [local] CREATE \nTABLE                                 '.\nProgram terminated with signal SIGABRT, Aborted.\n\nwarning: Section `.reg-xstate/3152655' in core file too small.\n#0  __pthread_kill_implementation (no_tid=0, signo=6, \nthreadid=140460372887360) at ./nptl/pthread_kill.c:44\n44      ./nptl/pthread_kill.c: No such file or directory.\n(gdb) bt\n#0  __pthread_kill_implementation (no_tid=0, signo=6, \nthreadid=140460372887360) at ./nptl/pthread_kill.c:44\n#1  __pthread_kill_internal (signo=6, threadid=140460372887360) at \n./nptl/pthread_kill.c:78\n#2  __GI___pthread_kill (threadid=140460372887360, signo=signo@entry=6) \nat ./nptl/pthread_kill.c:89\n#3  0x00007fbf79f0e476 in __GI_raise (sig=sig@entry=6) at \n../sysdeps/posix/raise.c:26\n#4  0x00007fbf79ef47f3 in __GI_abort () at ./stdlib/abort.c:79\n#5  0x000055e76b35b322 in ExceptionalCondition (\n     conditionName=conditionName@entry=0x55e76b4a2240 \n\"!(coldef->generated && !restdef->generated)\",\n     fileName=fileName@entry=0x55e76b49ec71 \"tablecmds.c\", \nlineNumber=lineNumber@entry=3028) at assert.c:66\n#6  0x000055e76afef8c3 in MergeAttributes (schema=0x55e76d480318, \nsupers=supers@entry=0x55e76d474c18,\n     relpersistence=112 'p', is_partition=true, \nsupconstr=supconstr@entry=0x7ffe945a3768) at tablecmds.c:3028\n#7  0x000055e76aff0ef2 in DefineRelation \n(stmt=stmt@entry=0x55e76d44b2d8, relkind=relkind@entry=114 'r', ownerId=10,\n     ownerId@entry=0, typaddress=typaddress@entry=0x0,\n     queryString=queryString@entry=0x55e76d44a408 \"CREATE TABLE tp \nPARTITION OF t(b DEFAULT 1) FOR VALUES FROM (0) to (1);\") at tablecmds.c:861\n...\n\nWithout asserts enables, the 
partition created successfully, and\nINSERT INTO t VALUES(0);\nSELECT * FROM t;\nyields:\na | b\n---+---\n0 | 1\n(1 row)\n\nIs this behavior expected?\n\nBest regards,\nAlexander", "msg_date": "Thu, 16 Feb 2023 21:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "On 2023-Feb-16, Alexander Lakhin wrote:\n\n> I've encountered a query that triggers an assert added in that commit:\n> CREATE TABLE t(a int, b int GENERATED ALWAYS AS (a) STORED) PARTITION BY\n> RANGE (a);\n> CREATE TABLE tp PARTITION OF t(b DEFAULT 1) FOR VALUES FROM (0) to (1);\n\nIt seems wrong that this command is accepted. 
It should have given an\nerror, because the partition is not allowed to override the generation\nof the value that is specified in the parent table.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"Pido que me den el Nobel por razones humanitarias\" (Nicanor Parra)\n\n\n", "msg_date": "Thu, 16 Feb 2023 19:05:42 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2023-Feb-16, Alexander Lakhin wrote:\n>> I've encountered a query that triggers an assert added in that commit:\n>> CREATE TABLE t(a int, b int GENERATED ALWAYS AS (a) STORED) PARTITION BY\n>> RANGE (a);\n>> CREATE TABLE tp PARTITION OF t(b DEFAULT 1) FOR VALUES FROM (0) to (1);\n\n> It seems wrong that this command is accepted. It should have given an\n> error, because the partition is not allowed to override the generation\n> of the value that is specified in the parent table.\n\nAgreed. We missed a check somewhere, will fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 16 Feb 2023 17:40:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: ATTACH PARTITION seems to ignore column generation status" } ]
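The rule the thread converges on — a partition may override a generated column's *expression*, but never its generation status, so `PARTITION OF t(b DEFAULT 1)` must be rejected when `t.b` is generated — can be sketched as a toy check. This is an illustrative Python model only; the invented `merge_generated()` helper stands in for the real C logic around the `Assert` in `MergeAttributes()` in `tablecmds.c`:

```python
def merge_generated(parent_generated: bool, child_generated: bool) -> str:
    """Toy version of the column-merge rule discussed above.

    A child may supply its own generation expression, but if the parent
    column is GENERATED the child column must be GENERATED too; likewise
    a child cannot make a plain parent column generated.
    """
    if parent_generated and not child_generated:
        # e.g. CREATE TABLE tp PARTITION OF t(b DEFAULT 1) ...
        raise ValueError("child column must be GENERATED because parent column is")
    if child_generated and not parent_generated:
        raise ValueError("child column cannot be GENERATED: parent column is not")
    return "ok"

# The problematic command from the report should now be an error:
try:
    merge_generated(parent_generated=True, child_generated=False)
    raise AssertionError("should have been rejected")
except ValueError:
    pass
# Overriding only the expression of a generated column remains fine:
assert merge_generated(parent_generated=True, child_generated=True) == "ok"
```

The model only captures the accept/reject decision, not catalog details such as where the overriding expression is stored.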
[ { "msg_contents": "57faaf376 added pg_truncate(const char *path, off_t length), but\n\"length\" is ignored under WIN32 and the file is unconditionally\ntruncated to 0.\n\nThere's no live bug, since the only caller passes 0:\n\n| src/backend/storage/smgr/md.c: ret = pg_truncate(path, 0);\n\nBut I guess extension users could be unhappy under win32, so maybe a fix\nshould be backpatched.\n\ndiff --git a/src/backend/storage/file/fd.c b/src/backend/storage/file/fd.c\nindex d4a46f01583..926d000f2ea 100644\n--- a/src/backend/storage/file/fd.c\n+++ b/src/backend/storage/file/fd.c\n@@ -638,7 +638,7 @@ pg_truncate(const char *path, off_t length)\n \tfd = OpenTransientFile(path, O_RDWR | PG_BINARY);\n \tif (fd >= 0)\n \t{\n-\t\tret = ftruncate(fd, 0);\n+\t\tret = ftruncate(fd, length);\n \t\tsave_errno = errno;\n \t\tCloseTransientFile(fd);\n \t\terrno = save_errno;\n\n\n", "msg_date": "Thu, 5 Jan 2023 21:16:53 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "pg_ftruncate hardcodes length=0 but only under windows" }, { "msg_contents": "On Fri, Jan 6, 2023 at 4:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> - ret = ftruncate(fd, 0);\n> + ret = ftruncate(fd, length);\n\nOops. Right. Thanks, pushed.\n\n\n", "msg_date": "Fri, 6 Jan 2023 17:07:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_ftruncate hardcodes length=0 but only under windows" } ]
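The behavioral difference the one-line fix makes is easy to see outside the tree, since `ftruncate()` honors an arbitrary length rather than always truncating to zero. A small standalone sketch using Python's binding to the same system call (file contents and lengths here are arbitrary; this is not PostgreSQL code):

```python
import os
import tempfile

def truncate_to(data: bytes, length: int) -> int:
    """Write data to a temp file, ftruncate(fd, length), return the new size."""
    fd, path = tempfile.mkstemp()
    try:
        os.write(fd, data)
        os.ftruncate(fd, length)  # the fixed code passes length through
        return os.fstat(fd).st_size
    finally:
        os.close(fd)
        os.unlink(path)

# A nonzero length keeps that many bytes of the file...
assert truncate_to(b"x" * 100, 42) == 42
# ...whereas the old WIN32 branch behaved as if length were always 0:
assert truncate_to(b"x" * 100, 0) == 0
```

As noted in the thread, the only in-tree caller passes 0, which is why the hardcoded value went unnoticed.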
[ { "msg_contents": "Hi,\n\nWhen developing another feature, I found an existing bug which was reported by Dilip[1].\n\nCurrently, it's possible that we only send a streaming block without sending an\nend of stream message (stream abort) when decoding and streaming a transaction\nthat was aborted due to a crash, because we might not WAL log a XLOG_XACT_ABORT\nfor such a crashed transaction. This will cause the subscriber (with\nstreaming=on) to create a stream file but not delete it until the apply\nworker restarts.\n\nBUG repro (borrowed from Dilip):\n---\n1. start 2 servers (config: logical_decoding_work_mem=64kB)\n./pg_ctl -D data/ -c -l pub_logs start\n./pg_ctl -D data1/ -c -l sub_logs start\n\n2. Publisher:\ncreate table t(a int PRIMARY KEY ,b text);\ncreate publication test_pub for table t\nwith(PUBLISH='insert,delete,update,truncate');\nalter table t replica identity FULL ;\n\n3. Subscription Server:\ncreate table t(a int,b text);\ncreate subscription test_sub CONNECTION 'host=localhost port=10000\ndbname=postgres' PUBLICATION test_pub WITH ( slot_name =\ntest_slot_sub1,streaming=on);\n\n4. Publication Server:\nbegin ;\ninsert into t values (generate_series(1,50000), 'zzzzzzzzz'); -- (while executing this restart publisher in 2-3 secs)\n\nRestart the publication server, while the transaction is still in an\nuncommitted state.\n./pg_ctl -D data/ -c -l pub_logs restart -mi\n---\n\nAfter restarting the publisher, we can see the subscriber receive a streaming\nblock and create a stream file (/base/pgsql_tmp/xxx.fileset).\n\nTo fix it, one idea is to send a stream abort message when we are cleaning up a\ncrashed transaction on the publisher (e.g. in ReorderBufferAbortOld()). Here is\na tiny patch which implements this. I have confirmed that the bug is fixed and\nall regression tests pass. 
I didn't add a testcase because we need to make sure\nthe crash happens before all the WAL logged transactions data are decoded which\ndoesn't seem easy to write a stable test for this.\n\nThoughts ?\n\n[1] https://www.postgresql.org/message-id/CAFiTN-sTYk%3Dh75Jn1a7ee%2B5hOcdQFjKGDvF_0NWQQXmoBv4A%2BA%40mail.gmail.com\n\nBest regards,\nHou zj", "msg_date": "Fri, 6 Jan 2023 03:54:53 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Notify downstream to discard the streamed transaction which was\n aborted due to crash." }, { "msg_contents": "On Fri, Jan 6, 2023 at 9:25 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n>\n> To fix it, One idea is to send a stream abort message when we are cleaning up\n> crashed transaction on publisher(e.g. in ReorderBufferAbortOld()). And here is\n> a tiny patch which changes the same. I have confirmed that the bug is fixed and\n> all regression tests pass. I didn't add a testcase because we need to make sure\n> the crash happens before all the WAL logged transactions data are decoded which\n> doesn't seem easy to write a stable test for this.\n>\n\nYour fix looks good to me. Have you tried this in PG-14 where it was introduced?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 6 Jan 2023 10:44:34 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Notify downstream to discard the streamed transaction which was\n aborted due to crash." }, { "msg_contents": "On Friday, January 6, 2023 1:15 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Fri, Jan 6, 2023 at 9:25 AM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com>\r\n> wrote:\r\n> >\r\n> >\r\n> > To fix it, One idea is to send a stream abort message when we are\r\n> > cleaning up crashed transaction on publisher(e.g. in\r\n> > ReorderBufferAbortOld()). And here is a tiny patch which changes the\r\n> > same. 
I have confirmed that the bug is fixed and all regression tests\r\n> > pass. I didn't add a testcase because we need to make sure the crash\r\n> > happens before all the WAL logged transactions data are decoded which\r\n> doesn't seem easy to write a stable test for this.\r\n> >\r\n> \r\n> Your fix looks good to me. Have you tried this in PG-14 where it was\r\n> introduced?\r\n\r\nYes, I have confirmed that PG-14 has the same problem and can be fixed after\r\napplying the patch.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Fri, 6 Jan 2023 05:44:16 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Notify downstream to discard the streamed transaction which was\n aborted due to crash." }, { "msg_contents": "On Fri, Jan 6, 2023 at 9:25 AM houzj.fnst@fujitsu.com\n<houzj.fnst@fujitsu.com> wrote:\n>\n> Hi,\n>\n> When developing another feature, I find an existing bug which was reported from Dilip[1].\n>\n> Currently, it's possible that we only send a streaming block without sending a\n> end of stream message(stream abort) when decoding and streaming a transaction\n> that was aborted due to crash because we might not WAL log a XLOG_XACT_ABORT\n> for such a crashed transaction. This will cause the subscriber(with\n> streaming=on) create a stream file but won't delete it until the apply\n> worker restart.\n>\n> BUG repro(borrowed from Dilip):\n> ---\n> 1. start 2 servers(config: logical_decoding_work_mem=64kB)\n> ./pg_ctl -D data/ -c -l pub_logs start\n> ./pg_ctl -D data1/ -c -l sub_logs start\n>\n> 2. Publisher:\n> create table t(a int PRIMARY KEY ,b text);\n> create publication test_pub for table t\n> with(PUBLISH='insert,delete,update,truncate');\n> alter table t replica identity FULL ;\n>\n> 3. 
Subscription Server:\n> create table t(a int,b text);\n> create subscription test_sub CONNECTION 'host=localhost port=10000\n> dbname=postgres' PUBLICATION test_pub WITH ( slot_name =\n> test_slot_sub1,streaming=on);\n>\n> 4. Publication Server:\n> begin ;\n> insert into t values (generate_series(1,50000), 'zzzzzzzzz'); -- (while executing this restart publisher in 2-3 secs)\n>\n> Restart the publication server, while the transaction is still in an\n> uncommitted state.\n> ./pg_ctl -D data/ -c -l pub_logs restart -mi\n> ---\n>\n> After restarting the publisher, we can see the subscriber receive a streaming\n> block and create a stream file(/base/pgsql_tmp/xxx.fileset).\n>\n> To fix it, One idea is to send a stream abort message when we are cleaning up\n> crashed transaction on publisher(e.g. in ReorderBufferAbortOld()). And here is\n> a tiny patch which changes the same. I have confirmed that the bug is fixed and\n> all regression tests pass. I didn't add a testcase because we need to make sure\n> the crash happens before all the WAL logged transactions data are decoded which\n> doesn't seem easy to write a stable test for this.\n>\n> Thoughts ?\n\nFix looks good to me. Thanks for working on this.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 6 Jan 2023 11:18:29 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Notify downstream to discard the streamed transaction which was\n aborted due to crash." }, { "msg_contents": "On Fri, Jan 6, 2023 at 11:18 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> >\n> > To fix it, One idea is to send a stream abort message when we are cleaning up\n> > crashed transaction on publisher(e.g. in ReorderBufferAbortOld()). And here is\n> > a tiny patch which changes the same. I have confirmed that the bug is fixed and\n> > all regression tests pass. 
I didn't add a testcase because we need to make sure\n> > the crash happens before all the WAL logged transactions data are decoded which\n> > doesn't seem easy to write a stable test for this.\n> >\n> > Thoughts ?\n>\n> Fix looks good to me. Thanks for working on this.\n>\n\nPushed!\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 7 Jan 2023 16:00:32 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Notify downstream to discard the streamed transaction which was\n aborted due to crash." } ]
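To make the failure mode in this thread concrete: the subscriber creates a per-transaction stream file on the first streamed block and removes it only when an end-of-stream message arrives, so a publisher that cleans up a crashed transaction without notifying downstream leaks the file until the apply worker restarts. A toy Python model of that protocol — class and function names are invented for illustration and are not taken from the actual worker.c/reorderbuffer.c code:

```python
class ApplyWorker:
    """Minimal stand-in for a subscriber's stream-file bookkeeping."""

    def __init__(self):
        self.stream_files = set()  # models base/pgsql_tmp/xxx.fileset entries

    def stream_block(self, xid: int) -> None:
        self.stream_files.add(xid)  # file created on first streamed change

    def stream_abort(self, xid: int) -> None:
        self.stream_files.discard(xid)  # end-of-stream message cleans it up


def abort_old_txn(worker: ApplyWorker, xid: int, notify_downstream: bool) -> None:
    """Publisher-side cleanup of a transaction that aborted due to a crash."""
    if notify_downstream:  # the fix: also send a stream abort message
        worker.stream_abort(xid)
    # without the notification, the subscriber never learns the txn is gone


worker = ApplyWorker()
worker.stream_block(xid=741)
abort_old_txn(worker, 741, notify_downstream=False)  # pre-fix behavior
assert worker.stream_files == {741}   # stream file leaked
abort_old_txn(worker, 741, notify_downstream=True)   # with the fix applied
assert worker.stream_files == set()
```

The model also suggests why a stable regression test is hard: the leak is only observable if the crash interrupts decoding mid-stream.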
[ { "msg_contents": "Hi all,\n\nI've attached the simple patch to add the progress reporting option to\npg_verifybackup. The progress information is displayed with --progress\noption only during the checksum verification, which is the most time\nconsuming task. It cannot be used together with --quiet option.\n\nFeedback is very welcome.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 6 Jan 2023 16:28:42 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Fri, Jan 06, 2023 at 04:28:42PM +0900, Masahiko Sawada wrote:\n> I've attached the simple patch to add the progress reporting option to\n> pg_verifybackup. The progress information is displayed with --progress\n> option only during the checksum verification, which is the most time\n> consuming task. It cannot be used together with --quiet option.\n\nThat looks helpful, particularly when a backup has many relation\nfiles. Calculating the total size when browsing the file list looks\nfine.\n\n+ /* Complain if the specified arguments conflict */\n+ if (show_progress && quiet)\n+ pg_fatal(\"cannot specify both --progress and --quiet\");\n\nNothing on HEAD proposes --progress and --quiet at the same time from\nwhat I can see, so just disabling the combination is fine by me. For\nthe error message, I would recommend to switch to a more generic\npattern, as of:\n\"cannot specify both %s and %s\", \"-P/--progress\", \"-q/--quiet\"\n\nCould you add a check based on command_fails_like() in 004_options.pl,\nat least? 
A second test to check after the output of --progress would\nbe a nice bonus, for example by sticking --progress into one of the\nexisting commands doing a command_like().\n--\nMichael", "msg_date": "Wed, 1 Feb 2023 10:25:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Wed, Feb 1, 2023 at 10:25 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 06, 2023 at 04:28:42PM +0900, Masahiko Sawada wrote:\n> > I've attached the simple patch to add the progress reporting option to\n> > pg_verifybackup. The progress information is displayed with --progress\n> > option only during the checksum verification, which is the most time\n> > consuming task. It cannot be used together with --quiet option.\n>\n> That looks helpful, particularly when a backup has many relation\n> files. Calculating the total size when browsing the file list looks\n> fine.\n>\n> + /* Complain if the specified arguments conflict */\n> + if (show_progress && quiet)\n> + pg_fatal(\"cannot specify both --progress and --quiet\");\n>\n> Nothing on HEAD proposes --progress and --quiet at the same time from\n> what I can see, so just disabling the combination is fine by me. 
For\n> the error message, I would recommend to switch to a more generic\n> pattern, as of:\n> \"cannot specify both %s and %s\", \"-P/--progress\", \"-q/--quiet\"\n\nAgreed.\n\n>\n> Could you add a check based on command_fails_like() in 004_options.pl,\n> at least?\n\nAgreed, done in v2 patch.\n\n> A second test to check after the output of --progress would\n> be a nice bonus, for example by sticking --progress into one of the\n> existing commands doing a command_like().\n\nIt seems that the --progress option doesn't work with command_like()\nsince the progress information is written in stderr but command_like()\ndoesn't want it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 2 Feb 2023 14:57:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Thu, Feb 02, 2023 at 02:57:44PM +0900, Masahiko Sawada wrote:\n> It seems that the --progress option doesn't work with command_like()\n> since the progress information is written in stderr but command_like()\n> doesn't want it.\n\nWhat about command_checks_all()? It should check for stderr, stdout\nas well as the expected error code.\n--\nMichael", "msg_date": "Thu, 2 Feb 2023 15:12:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Thu, Feb 2, 2023 at 3:12 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Feb 02, 2023 at 02:57:44PM +0900, Masahiko Sawada wrote:\n> > It seems that the --progress option doesn't work with command_like()\n> > since the progress information is written in stderr but command_like()\n> > doesn't want it.\n>\n> What about command_checks_all()? It should check for stderr, stdout\n> as well as the expected error code.\n\nSeems a good idea. 
Please find an attached patch.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 2 Feb 2023 17:56:47 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Thu, Feb 02, 2023 at 05:56:47PM +0900, Masahiko Sawada wrote:\n> Seems a good idea. Please find an attached patch.\n\nThat seems rather OK seen from here. I'll see about getting that\napplied except if there is an objection of any kind.\n--\nMichael", "msg_date": "Sat, 4 Feb 2023 12:32:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Sat, Feb 04, 2023 at 12:32:15PM +0900, Michael Paquier wrote:\n> That seems rather OK seen from here. I'll see about getting that\n> applied except if there is an objection of any kind.\n\nOkay, I have looked at that again this morning and I've spotted one\ntiny issue: specifying --progress with --skip-checksums does not\nreally make sense.\n\nIgnoring entries with a bad size would lead to incorrect progress\nreport (for example, say an entry in the manifest has a largely\noversized size number), so your approach on this side is correct. 
The\napplication of the ignore list via -i is also correct, as a path\nmatching with should_ignore_relpath() does not compute an extra size\nfor total_size.\n\nI was also wondering for a few minutes while on it whether it would\nhave been cleaner to move the check for should_ignore_relpath()\ndirectly in verify_file_checksum() and verify_backup_file(), but\nnobody has complained about that as being a problem, either.\n\nWhat do you think about the updated version attached?\n--\nMichael", "msg_date": "Mon, 6 Feb 2023 09:35:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Mon, Feb 6, 2023 at 9:35 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Feb 04, 2023 at 12:32:15PM +0900, Michael Paquier wrote:\n> > That seems rather OK seen from here. I'll see about getting that\n> > applied except if there is an objection of any kind.\n>\n> Okay, I have looked at that again this morning and I've spotted one\n> tiny issue: specifying --progress with --skip-checksums does not\n> really make sense.\n\nI thought that too, but I thought it's better to ignore it, instead of\nerroring out. 
For example, we can specify both --disable and\n> --progress options to pg_checksum commands, but we don't write any\n> progress information in this case.\n\nWell, if you don't feel strongly about that, that's fine by me as\nwell, so I have applied your v3 with the tweaks I posted previously,\nwithout the restriction on --skip-checksums.\n--\nMichael", "msg_date": "Mon, 6 Feb 2023 14:45:43 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Add progress reporting to pg_verifybackup" }, { "msg_contents": "On Mon, Feb 6, 2023 at 2:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Feb 06, 2023 at 12:27:51PM +0900, Masahiko Sawada wrote:\n> > I thought that too, but I thought it's better to ignore it, instead of\n> > erroring out. For example, we can specify both --disable and\n> > --progress options to pg_checksum commands, but we don't write any\n> > progress information in this case.\n>\n> Well, if you don't feel strongly about that, that's fine by me as\n> well, so I have applied your v3 with the tweaks I posted previously,\n> without the restriction on --skip-checksums.\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 6 Feb 2023 15:33:22 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add progress reporting to pg_verifybackup" } ]
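For reference, the progress reporting discussed in this thread boils down to simple arithmetic over sizes precomputed while reading the backup manifest. A hedged Python sketch (the helper name and exact output format are invented for illustration; the real tool prints a kB counter to stderr during checksum verification):

```python
def progress_line(done: int, total: int) -> str:
    """Render verified bytes as 'done/total kB (pct%)', pct capped at 100."""
    pct = min(100, (done * 100) // total) if total > 0 else 100
    return f"{done // 1024}/{total // 1024} kB ({pct}%)"

# total is summed over manifest entries up front, skipping ignored paths
# and entries with bad sizes, so the percentage can actually reach 100.
manifest_sizes = [8192 * 10, 8192 * 4, 8192 * 1]
total = sum(manifest_sizes)
done = 0
for size in manifest_sizes:
    done += size  # bump the counter after each file's checksum is verified
assert progress_line(done, total) == f"{total // 1024}/{total // 1024} kB (100%)"
```

This also shows why counting ignored paths into `total` would be wrong: the final line would never reach 100%.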
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/15/ddl-partitioning.html\nDescription:\n\nLink:\nhttps://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\r\n\r\n\"Using ONLY to add or drop a constraint on only the partitioned table is\nsupported as long as there are no partitions. Once partitions exist, using\nONLY will result in an error. Instead, constraints on the partitions\nthemselves can be added and (if they are not present in the parent table)\ndropped.\" This seems in contradiction to the example involving adding a\nunique constraint while minimizing locking at the bottom of \"5.11.2.2.\nPartition Maintenance\", which seems to run fine on my local Pg instance:\r\n\r\n\"\r\nThis technique can be used with UNIQUE and PRIMARY KEY constraints too; the\nindexes are created implicitly when the constraint is created. Example:\r\n\r\n```ALTER TABLE ONLY measurement ADD UNIQUE (city_id, logdate);\r\n\r\nALTER TABLE measurement_y2006m02 ADD UNIQUE (city_id, logdate);\r\nALTER INDEX measurement_city_id_logdate_key\r\n ATTACH PARTITION measurement_y2006m02_city_id_logdate_key;\r\n...\r\n```\r\n\"\r\n\r\nI might be misinterpreting something. Sorry if that's the case! \r\n\r\nThanks,\r\nBryce", "msg_date": "Fri, 06 Jan 2023 08:28:07 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Fri, 2023-01-06 at 08:28 +0000, PG Doc comments form wrote:\n> The following documentation comment has been logged on the website:\n> \n> Page: https://www.postgresql.org/docs/15/ddl-partitioning.html\n> Description:\n> \n> Link:\n> https://www.postgresql.org/docs/current/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE\n> \n> \"Using ONLY to add or drop a constraint on only the partitioned table is\n> supported as long as there are no partitions. 
Once partitions exist, using\n> ONLY will result in an error. Instead, constraints on the partitions\n> themselves can be added and (if they are not present in the parent table)\n> dropped.\" This seems in contradiction to the example involving adding a\n> unique constraint while minimizing locking at the bottom of \"5.11.2.2.\n> Partition Maintenance\", which seems to run fine on my local Pg instance:\n> \n> \"\n> This technique can be used with UNIQUE and PRIMARY KEY constraints too; the\n> indexes are created implicitly when the constraint is created. Example:\n> \n> ```ALTER TABLE ONLY measurement ADD UNIQUE (city_id, logdate);\n> \n> ALTER TABLE measurement_y2006m02 ADD UNIQUE (city_id, logdate);\n> ALTER INDEX measurement_city_id_logdate_key\n>     ATTACH PARTITION measurement_y2006m02_city_id_logdate_key;\n> ...\n> ```\n> \"\n> \n> I might be misinterpreting something. Sorry if that's the case! \n\nNo, that is actually an omission in the documentation.\n\nThe attached patch tries to improve that.\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 09 Jan 2023 16:40:10 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Mon, 2023-01-09 at 16:40 +0100, Laurenz Albe wrote:\n> > \"Using ONLY to add or drop a constraint on only the partitioned table is\n> > supported as long as there are no partitions. Once partitions exist, using\n> > ONLY will result in an error. 
Instead, constraints on the partitions\n> > themselves can be added and (if they are not present in the parent table)\n> > dropped.\" This seems in contradiction to the example involving adding a\n> > unique constraint while minimizing locking at the bottom of \"5.11.2.2.\n> > Partition Maintenance\", which seems to run fine on my local Pg instance:\n> > \n> > This technique can be used with UNIQUE and PRIMARY KEY constraints too; the\n> > indexes are created implicitly when the constraint is created. Example:\n> \n> No, that is actually an omission in the documentation.\n> \n> The attached patch tries to improve that.\n\nI am sending a reply to the hackers list, so that I can add the patch to the commitfest.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Fri, 27 Oct 2023 08:58:02 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "That looks good to me!\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 09 Nov 2023 16:29:12 +0000", "msg_from": "shihao zhong <zhong950419@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Thu, Nov 9, 2023 at 10:00 PM shihao zhong <zhong950419@gmail.com> wrote:\n>\n> That looks good to me!\n>\n> The new status of this patch is: Ready for Committer\n\n\nI have reviewed the patch and it is working fine.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Thu, 30 Nov 2023 09:41:51 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Fri, Oct 27, 2023 at 12:28 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Mon, 2023-01-09 at 16:40 +0100, Laurenz Albe wrote:\n> > > \"Using ONLY to add or drop a constraint on only the partitioned table is\n> > > supported as long as there are no 
partitions. Once partitions exist, using\n> > > ONLY will result in an error. Instead, constraints on the partitions\n> > > themselves can be added and (if they are not present in the parent table)\n> > > dropped.\" This seems in contradiction to the example involving adding a\n> > > unique constraint while minimizing locking at the bottom of \"5.11.2.2.\n> > > Partition Maintenance\", which seems to run fine on my local Pg instance:\n> > >\n> > > This technique can be used with UNIQUE and PRIMARY KEY constraints too; the\n> > > indexes are created implicitly when the constraint is created. Example:\n> >\n> > No, that is actually an omission in the documentation.\n> >\n> > The attached patch tries to improve that.\n>\n> I am sending a reply to the hackers list, so that I can add the patch to the commitfest.\n\nMay be attach the patch to hackers thread (this) as well?\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 30 Nov 2023 19:22:05 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Thu, 2023-11-30 at 19:22 +0530, Ashutosh Bapat wrote:\n> May be attach the patch to hackers thread (this) as well?\n\nIf you want, sure. I thought it was good enough if the thread\nis accessible via the commitfest app.\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 30 Nov 2023 17:59:04 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Thu, Nov 30, 2023 at 10:29 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Thu, 2023-11-30 at 19:22 +0530, Ashutosh Bapat wrote:\n> > May be attach the patch to hackers thread (this) as well?\n>\n> If you want, sure. 
I thought it was good enough if the thread\n> is accessible via the commitfest app.\n\nThe addition is long enough that it deserved to be outside of parentheses.\n\nI think it's worth mentioning the exception but in a way that avoids\nrepeating what's mentioned in the last paragraph of just the previous\nsection. I don't have brilliant ideas about how to rephrase it.\n\nMaybe \"Using ONLY to add or drop a constraint, other than PRIMARY and\nUNIQUE, on only the partitioned table is supported as long as there\nare no partitions. ...\".\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Fri, 1 Dec 2023 18:49:46 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Fri, 2023-12-01 at 18:49 +0530, Ashutosh Bapat wrote:\n> On Thu, Nov 30, 2023 at 10:29 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > \n> > On Thu, 2023-11-30 at 19:22 +0530, Ashutosh Bapat wrote:\n> > > May be attach the patch to hackers thread (this) as well?\n> > \n> > If you want, sure.  I thought it was good enough if the thread\n> > is accessible via the commitfest app.\n> \n> The addition is long enough that it deserved to be outside of parentheses.\n> \n> I think it's worth mentioning the exception but in a way that avoids\n> repeating what's mentioned in the last paragraph of just the previous\n> section. I don't have brilliant ideas about how to rephrase it.\n> \n> Maybe \"Using ONLY to add or drop a constraint, other than PRIMARY and\n> UNIQUE, on only the partitioned table is supported as long as there\n> are no partitions. ...\".\n\nI agree that the parenthesis is too long. I shortened it in the attached\npatch. 
Is that acceptable?\n\nYours,\nLaurenz Albe", "msg_date": "Mon, 04 Dec 2023 21:10:05 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Tue, Dec 5, 2023 at 1:40 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Fri, 2023-12-01 at 18:49 +0530, Ashutosh Bapat wrote:\n> > On Thu, Nov 30, 2023 at 10:29 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > >\n> > > On Thu, 2023-11-30 at 19:22 +0530, Ashutosh Bapat wrote:\n> > > > May be attach the patch to hackers thread (this) as well?\n> > >\n> > > If you want, sure. I thought it was good enough if the thread\n> > > is accessible via the commitfest app.\n> >\n> > The addition is long enough that it deserved to be outside of parentheses.\n> >\n> > I think it's worth mentioning the exception but in a way that avoids\n> > repeating what's mentioned in the last paragraph of just the previous\n> > section. I don't have brilliant ideas about how to rephrase it.\n> >\n> > Maybe \"Using ONLY to add or drop a constraint, other than PRIMARY and\n> > UNIQUE, on only the partitioned table is supported as long as there\n> > are no partitions. ...\".\n>\n> I agree that the parenthesis is too long. I shortened it in the attached\n> patch. Is that acceptable?\n\nIt's still longer than the actual sentence :). 
I am fine with it if\nsomebody else finds it acceptable.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 5 Dec 2023 20:27:18 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Tue, Dec 5, 2023 at 3:57 PM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Tue, Dec 5, 2023 at 1:40 AM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> >\n> > On Fri, 2023-12-01 at 18:49 +0530, Ashutosh Bapat wrote:\n> > > On Thu, Nov 30, 2023 at 10:29 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> > > >\n> > > > On Thu, 2023-11-30 at 19:22 +0530, Ashutosh Bapat wrote:\n> > > > > May be attach the patch to hackers thread (this) as well?\n> > > >\n> > > > If you want, sure. I thought it was good enough if the thread\n> > > > is accessible via the commitfest app.\n> > >\n> > > The addition is long enough that it deserved to be outside of parentheses.\n> > >\n> > > I think it's worth mentioning the exception but in a way that avoids\n> > > repeating what's mentioned in the last paragraph of just the previous\n> > > section. I don't have brilliant ideas about how to rephrase it.\n> > >\n> > > Maybe \"Using ONLY to add or drop a constraint, other than PRIMARY and\n> > > UNIQUE, on only the partitioned table is supported as long as there\n> > > are no partitions. ...\".\n> >\n> > I agree that the parenthesis is too long. I shortened it in the attached\n> > patch. Is that acceptable?\n>\n> It's still longer than the actual sentence :). I am fine with it if\n> somebody else finds it acceptable.\n\nIt still reads a bit weird to me. 
How about the attached wording instead?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 10 Jan 2024 13:41:23 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Wed, 2024-01-10 at 13:41 +0100, Magnus Hagander wrote:\n> It still reads a bit weird to me. How about the attached wording instead?\n\nThanks! I am fine with your wording.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Wed, 10 Jan 2024 18:08:37 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Wed, Jan 10, 2024 at 10:38 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n>\n> On Wed, 2024-01-10 at 13:41 +0100, Magnus Hagander wrote:\n> > It still reads a bit weird to me. How about the attached wording instead?\n>\n> Thanks! I am fine with your wording.\n\nWorks for me too.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Thu, 11 Jan 2024 15:54:22 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Thu, Jan 11, 2024 at 11:24 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n>\n> On Wed, Jan 10, 2024 at 10:38 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> >\n> > On Wed, 2024-01-10 at 13:41 +0100, Magnus Hagander wrote:\n> > > It still reads a bit weird to me. How about the attached wording instead?\n> >\n> > Thanks! 
I am fine with your wording.\n>\n> Works for me too.\n\nThanks, applied and backpatched all the way.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Thu, 11 Jan 2024 14:44:16 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" }, { "msg_contents": "On Thu, 2024-01-11 at 14:44 +0100, Magnus Hagander wrote:\n> Thanks, applied and backpatched all the way.\n\nThanks for taking care of that!\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 11 Jan 2024 16:05:51 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Postgres Partitions Limitations (5.11.2.3)" } ]
[ { "msg_contents": "Can we change 'convey' to 'confer' in these recent doc changes?\n\nMaybe 'convey a privilege' isn't exactly wrong but it leaves you \nwondering what exactly is meant.\n\nThanks,\n\nErik", "msg_date": "Fri, 6 Jan 2023 10:24:06 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "convey privileges -> confer privileges" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch to $SUBJECT.\n\nThe wrong comments have been discovered by Robert in [1].\n\nSubmitting this here as a separate thread so it does not get lost in the logical decoding\non standby thread.\n\n[1]: https://www.postgresql.org/message-id/CA%2BTgmoYTTsxP8y6uknZvCBNCRq%2B1FJ4zGbX8Px1TGW459fGsaQ%40mail.gmail.com\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 6 Jan 2023 11:05:07 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Fix comments in gistxlogDelete, xl_heap_freeze_page and\n xl_btree_delete" }, { "msg_contents": "Hi,\n\nOn 1/6/23 11:05 AM, Drouvot, Bertrand wrote:\n> Hi hackers,\n> \n> Please find attached a patch to $SUBJECT.\n> \n> The wrong comments have been discovered by Robert in [1].\n> \n> Submitting this here as a separate thread so it does not get lost in the logical decoding\n> on standby thread.\n> \n> [1]: https://www.postgresql.org/message-id/CA%2BTgmoYTTsxP8y6uknZvCBNCRq%2B1FJ4zGbX8Px1TGW459fGsaQ%40mail.gmail.com\n> \n> Looking forward to your feedback,\n> \n> Regards,\n> \n\nIt looks like I did not create a CF entry for this one: fixed with [1].\n\nAlso, while at it, adding a commit message in V2 attached.\n\n[1]: https://commitfest.postgresql.org/43/4235/\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 2 Mar 2023 14:04:58 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix comments in gistxlogDelete, xl_heap_freeze_page and\n xl_btree_delete" }, { "msg_contents": "On Thu, Mar 2, 2023 at 6:35 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> On 1/6/23 11:05 AM, Drouvot, Bertrand 
wrote:\n> > Hi hackers,\n> >\n> > Please find attached a patch to $SUBJECT.\n> >\n> > The wrong comments have been discovered by Robert in [1].\n> >\n> > Submitting this here as a separate thread so it does not get lost in the logical decoding\n> > on standby thread.\n> >\n> > [1]: https://www.postgresql.org/message-id/CA%2BTgmoYTTsxP8y6uknZvCBNCRq%2B1FJ4zGbX8Px1TGW459fGsaQ%40mail.gmail.com\n> >\n> > Looking forward to your feedback,\n> >\n> > Regards,\n> >\n>\n> It looks like I did not create a CF entry for this one: fixed with [1].\n>\n> Also, while at it, adding a commit message in V2 attached.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 3 Mar 2023 17:00:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix comments in gistxlogDelete,\n xl_heap_freeze_page and xl_btree_delete" }, { "msg_contents": "Hi,\n\nOn 3/3/23 12:30 PM, Amit Kapila wrote:\n> On Thu, Mar 2, 2023 at 6:35 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> On 1/6/23 11:05 AM, Drouvot, Bertrand wrote:\n>>> Hi hackers,\n>>>\n>>> Please find attached a patch to $SUBJECT.\n>>>\n>>> The wrong comments have been discovered by Robert in [1].\n>>>\n>>> Submitting this here as a separate thread so it does not get lost in the logical decoding\n>>> on standby thread.\n>>>\n>>> [1]: https://www.postgresql.org/message-id/CA%2BTgmoYTTsxP8y6uknZvCBNCRq%2B1FJ4zGbX8Px1TGW459fGsaQ%40mail.gmail.com\n>>>\n>>> Looking forward to your feedback,\n>>>\n>>> Regards,\n>>>\n>>\n>> It looks like I did not create a CF entry for this one: fixed with [1].\n>>\n>> Also, while at it, adding a commit message in V2 attached.\n>>\n> \n> LGTM.\n> \n\nThanks for having looked at it!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 3 Mar 2023 17:28:47 +0100", "msg_from": "\"Drouvot, Bertrand\" 
<bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix comments in gistxlogDelete, xl_heap_freeze_page and\n xl_btree_delete" }, { "msg_contents": "On Fri, Mar 3, 2023 at 11:28 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n> Thanks for having looked at it!\n\n+1. Committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 3 Mar 2023 12:53:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix comments in gistxlogDelete,\n xl_heap_freeze_page and xl_btree_delete" }, { "msg_contents": "Hi,\n\nOn 3/3/23 6:53 PM, Robert Haas wrote:\n> On Fri, Mar 3, 2023 at 11:28 AM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>> Thanks for having looked at it!\n> \n> +1. Committed.\n> \n\nThanks!\n\nNot a big deal, but the commit message that has been used is not 100% accurate.\n\nIndeed, for gistxlogDelete, that's the other way around (as\ncompare to what the commit message says).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 4 Mar 2023 09:33:45 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix comments in gistxlogDelete, xl_heap_freeze_page and\n xl_btree_delete" }, { "msg_contents": "On Sat, Mar 4, 2023 at 3:33 AM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n> Indeed, for gistxlogDelete, that's the other way around (as\n> compare to what the commit message says).\n\nWoops. Good point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 6 Mar 2023 10:08:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix comments in gistxlogDelete,\n xl_heap_freeze_page and xl_btree_delete" } ]
[ { "msg_contents": "Hello,\n\nOne of our customers has an issue with partitions and foreign keys. He\nworks on a v13, but the issue is also present on v15.\n\nI attach a SQL script showing the issue, and the results on 13.7, 13.9, and\n15.1. But I'll explain the script here, and its behaviour on 13.9.\n\nThere is one partitioned table, two partitions and a foreign key. The\nforeign key references the same table:\n\ncreate table t1 (\n c1 bigint not null,\n c1_old bigint null,\n c2 bigint not null,\n c2_old bigint null,\n primary key (c1, c2)\n )\n partition by list (c1);\ncreate table t1_a partition of t1 for values in (1);\ncreate table t1_def partition of t1 default;\nalter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\ndelete restrict on update restrict;\n\nI've a SQL function that shows me some information from pg_constraints\n(code of the function in the SQL script attached). Here is the result of\nthis function after creating the table, its partitions, and its foreign key:\n\nselect * from show_constraints();\n conname | t | tref | coparent\n------------------------+--------+--------+-----------------------\n t1_c1_old_c2_old_fkey | t1 | t1 |\n t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n(5 rows)\n\nThe constraint works great :\n\ninsert into t1 values(1, NULL, 2, NULL);\ninsert into t1 values(2, 1, 2, 2);\ndelete from t1 where c1 = 1;\npsql:ticket15010_v3.sql:34: ERROR: update or delete on table \"t1_a\"\nviolates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\nDETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n\nThis error is normal since the line I want to delete is referenced on the\nother line.\n\nIf I try to detach the partition, it also gives me an error.\n\nalter table t1 detach partition 
t1_a;\npsql:ticket15010_v3.sql:36: ERROR: removing partition \"t1_a\" violates\nforeign key constraint \"t1_c1_old_c2_old_fkey1\"\nDETAIL: Key (c1_old, c2_old)=(1, 2) is still referenced from table \"t1\".\n\nSounds good to me too (well, I'd like it to be smarter and find that the\nconstraint is still good after the detach, but I can understand why it\nwon't allow it).\n\nThe pg_constraint didn't change of course:\n\nselect * from show_constraints();\n conname | t | tref | coparent\n------------------------+--------+--------+-----------------------\n t1_c1_old_c2_old_fkey | t1 | t1 |\n t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n(5 rows)\n\nNow, I'll delete the whole table contents, and I'll detach the partition:\n\ndelete from t1;\nalter table t1 detach partition t1_a;\n\nIt seems to be working, but the content of pg_constraints is weird:\n\nselect * from show_constraints();\n conname | t | tref | coparent\n------------------------+--------+--------+-----------------------\n t1_c1_old_c2_old_fkey | t1 | t1 |\n t1_c1_old_c2_old_fkey | t1_a | t1 |\n t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n(4 rows)\n\nI understand why the ('t1_c1_old_c2_old_fkey1', 't1', 't1_a',\n't1_c1_old_c2_old_fkey') tuple has gone but I don't understand why the\n('t1_c1_old_c2_old_fkey', 't1_a', 't1', NULL) tuple is still there.\n\nAnyway, I attach the partition:\n\nalter table t1 attach partition t1_a for values in (1);\n\nBut pg_constraint has not changed:\n\nselect * from show_constraints();\n conname | t | tref | coparent\n------------------------+--------+--------+-----------------------\n t1_c1_old_c2_old_fkey | t1 | t1 |\n t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey | 
t1_def | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n(4 rows)\n\nI was expecting to see the fifth tuple coming back, but alas, no.\n\nAnd as a result, the foreign key doesn't work anymore:\n\ninsert into t1 values(1, NULL, 2, NULL);\ninsert into t1 values(2, 1, 2, 2);\ndelete from t1 where c1 = 1;\n\nWell, let's truncate the partitioned table, and drop the partition:\n\ntruncate t1;\ndrop table t1_a;\n\nThe content of pg_constraint looks good to me:\n\nselect * from show_constraints();\n conname | t | tref | coparent\n------------------------+--------+--------+-----------------------\n t1_c1_old_c2_old_fkey | t1 | t1 |\n t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n(3 rows)\n\nLet's create the partition to see if that works better:\n\ncreate table t1_a partition of t1 for values in (1);\n\nselect * from show_constraints();\n conname | t | tref | coparent\n------------------------+--------+--------+-----------------------\n t1_c1_old_c2_old_fkey | t1 | t1 |\n t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n(4 rows)\n\ninsert into t1 values(1, NULL, 2, NULL);\nINSERT 0 1\ninsert into t1 values(2, 1, 2, 2);\nINSERT 0 1\ndelete from t1 where c1 = 1;\nDELETE 1\n\nNope. 
I still miss the fifth tuple in pg_constraint, which results in a\nviolated foreign key.\n\nHow about dropping the foreign key to create it once more:\n\ntruncate t1;\nalter table t1 drop constraint t1_c1_old_c2_old_fkey;\nselect * from show_constraints();\n conname | t | tref | coparent\n---------+---+------+----------\n(0 rows)\n\ndrop table t1_a;\ncreate table t1_a partition of t1 for values in (1);\nalter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\ndelete restrict on update restrict;\nselect * from show_constraints();\n conname | t | tref | coparent\n------------------------+--------+--------+-----------------------\n t1_c1_old_c2_old_fkey | t1 | t1 |\n t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n(5 rows)\n\nI have my fifth row back! And now, the foreign key works as it should:\n\ninsert into t1 values(1, NULL, 2, NULL);\ninsert into t1 values(2, 1, 2, 2);\ndelete from t1 where c1 = 1;\npsql:ticket15010_v3.sql:87: ERROR: update or delete on table \"t1_a\"\nviolates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\nDETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n\nThis is what happens on 13.9 and 15.1. 13.7 shows another weird behaviour,\nbut I guess I'll stop there. Everything is in the attached files.\n\nI'd love to know if I did something wrong, if I didn't understand\nsomething, or if this is simply a bug.\n\nThanks.\n\nRegards.\n\n\n-- \nGuillaume.", "msg_date": "Fri, 6 Jan 2023 11:07:50 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Issue attaching a table to a partitioned table with an\n auto-referenced foreign key" }, { "msg_contents": "Quick ping, just to make sure someone can get a look at this issue :)\nThanks.\n\n\nLe ven. 6 janv. 
2023 à 11:07, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Hello,\n>\n> One of our customers has an issue with partitions and foreign keys. He\n> works on a v13, but the issue is also present on v15.\n>\n> I attach a SQL script showing the issue, and the results on 13.7, 13.9,\n> and 15.1. But I'll explain the script here, and its behaviour on 13.9.\n>\n> There is one partitioned table, two partitions and a foreign key. The\n> foreign key references the same table:\n>\n> create table t1 (\n> c1 bigint not null,\n> c1_old bigint null,\n> c2 bigint not null,\n> c2_old bigint null,\n> primary key (c1, c2)\n> )\n> partition by list (c1);\n> create table t1_a partition of t1 for values in (1);\n> create table t1_def partition of t1 default;\n> alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\n> delete restrict on update restrict;\n>\n> I've a SQL function that shows me some information from pg_constraints\n> (code of the function in the SQL script attached). 
Here is the result of\n> this function after creating the table, its partitions, and its foreign key:\n>\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ------------------------+--------+--------+-----------------------\n> t1_c1_old_c2_old_fkey | t1 | t1 |\n> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> (5 rows)\n>\n> The constraint works great :\n>\n> insert into t1 values(1, NULL, 2, NULL);\n> insert into t1 values(2, 1, 2, 2);\n> delete from t1 where c1 = 1;\n> psql:ticket15010_v3.sql:34: ERROR: update or delete on table \"t1_a\"\n> violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\n> DETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n>\n> This error is normal since the line I want to delete is referenced on the\n> other line.\n>\n> If I try to detach the partition, it also gives me an error.\n>\n> alter table t1 detach partition t1_a;\n> psql:ticket15010_v3.sql:36: ERROR: removing partition \"t1_a\" violates\n> foreign key constraint \"t1_c1_old_c2_old_fkey1\"\n> DETAIL: Key (c1_old, c2_old)=(1, 2) is still referenced from table \"t1\".\n>\n> Sounds good to me too (well, I'd like it to be smarter and find that the\n> constraint is still good after the detach, but I can understand why it\n> won't allow it).\n>\n> The pg_constraint didn't change of course:\n>\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ------------------------+--------+--------+-----------------------\n> t1_c1_old_c2_old_fkey | t1 | t1 |\n> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> (5 rows)\n>\n> 
Now, I'll delete the whole table contents, and I'll detach the partition:\n>\n> delete from t1;\n> alter table t1 detach partition t1_a;\n>\n> It seems to be working, but the content of pg_constraints is weird:\n>\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ------------------------+--------+--------+-----------------------\n> t1_c1_old_c2_old_fkey | t1 | t1 |\n> t1_c1_old_c2_old_fkey | t1_a | t1 |\n> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> (4 rows)\n>\n> I understand why the ('t1_c1_old_c2_old_fkey1', 't1', 't1_a',\n> 't1_c1_old_c2_old_fkey') tuple has gone but I don't understand why the\n> ('t1_c1_old_c2_old_fkey', 't1_a', 't1', NULL) tuple is still there.\n>\n> Anyway, I attach the partition:\n>\n> alter table t1 attach partition t1_a for values in (1);\n>\n> But pg_constraint has not changed:\n>\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ------------------------+--------+--------+-----------------------\n> t1_c1_old_c2_old_fkey | t1 | t1 |\n> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> (4 rows)\n>\n> I was expecting to see the fifth tuple coming back, but alas, no.\n>\n> And as a result, the foreign key doesn't work anymore:\n>\n> insert into t1 values(1, NULL, 2, NULL);\n> insert into t1 values(2, 1, 2, 2);\n> delete from t1 where c1 = 1;\n>\n> Well, let's truncate the partitioned table, and drop the partition:\n>\n> truncate t1;\n> drop table t1_a;\n>\n> The content of pg_constraint looks good to me:\n>\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ------------------------+--------+--------+-----------------------\n> t1_c1_old_c2_old_fkey | t1 | t1 |\n> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey2 | t1 | 
t1_def | t1_c1_old_c2_old_fkey\n> (3 rows)\n>\n> Let's create the partition to see if that works better:\n>\n> create table t1_a partition of t1 for values in (1);\n>\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ------------------------+--------+--------+-----------------------\n> t1_c1_old_c2_old_fkey | t1 | t1 |\n> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> (4 rows)\n>\n> insert into t1 values(1, NULL, 2, NULL);\n> INSERT 0 1\n> insert into t1 values(2, 1, 2, 2);\n> INSERT 0 1\n> delete from t1 where c1 = 1;\n> DELETE 1\n>\n> Nope. I still miss the fifth tuple in pg_constraint, which results in a\n> violated foreign key.\n>\n> How about dropping the foreign key to create it once more:\n>\n> truncate t1;\n> alter table t1 drop constraint t1_c1_old_c2_old_fkey;\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ---------+---+------+----------\n> (0 rows)\n>\n> drop table t1_a;\n> create table t1_a partition of t1 for values in (1);\n> alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\n> delete restrict on update restrict;\n> select * from show_constraints();\n> conname | t | tref | coparent\n> ------------------------+--------+--------+-----------------------\n> t1_c1_old_c2_old_fkey | t1 | t1 |\n> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> (5 rows)\n>\n> I have my fifth row back! 
And now, the foreign key works as it should:\n>\n> insert into t1 values(1, NULL, 2, NULL);\n> insert into t1 values(2, 1, 2, 2);\n> delete from t1 where c1 = 1;\n> psql:ticket15010_v3.sql:87: ERROR: update or delete on table \"t1_a\"\n> violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\n> DETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n>\n> This is what happens on 13.9 and 15.1. 13.7 shows another weird behaviour,\n> but I guess I'll stop there. Everything is in the attached files.\n>\n> I'd love to know if I did something wrong, if I didn't understand\n> something, or if this is simply a bug.\n>\n> Thanks.\n>\n> Regards.\n>\n>\n> --\n> Guillaume.\n>\n\n\n-- \nGuillaume.\n\nQuick ping, just to make sure someone can get a look at this issue :)Thanks.Le ven. 6 janv. 2023 à 11:07, Guillaume Lelarge <guillaume@lelarge.info> a écrit :Hello,One of our customers has an issue with partitions and foreign keys. He works on a v13, but the issue is also present on v15.I attach a SQL script showing the issue, and the results on 13.7, 13.9, and 15.1. But I'll explain the script here, and its behaviour on 13.9.There is one partitioned table, two partitions and a foreign key. The foreign key references the same table:create table t1 (  c1 bigint not null,  c1_old bigint null,  c2 bigint not null,  c2_old bigint null,  primary key (c1, c2)  )  partition by list (c1);create table t1_a   partition of t1 for values in (1);create table t1_def partition of t1 default;alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on delete restrict on update restrict;I've a SQL function that shows me some information from pg_constraints (code of the function in the SQL script attached). 
Here is the result of this function after creating the table, its partitions, and its foreign key:select * from show_constraints();        conname         |   t    |  tref  |       coparent        ------------------------+--------+--------+----------------------- t1_c1_old_c2_old_fkey  | t1     | t1     |  t1_c1_old_c2_old_fkey  | t1_a   | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey  | t1_def | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey1 | t1     | t1_a   | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey2 | t1     | t1_def | t1_c1_old_c2_old_fkey(5 rows)The constraint works great :insert into t1 values(1, NULL, 2, NULL);insert into t1 values(2, 1,    2, 2);delete from t1 where c1 = 1;psql:ticket15010_v3.sql:34: ERROR:  update or delete on table \"t1_a\" violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"DETAIL:  Key (c1, c2)=(1, 2) is still referenced from table \"t1\".This error is normal since the line I want to delete is referenced on the other line.If I try to detach the partition, it also gives me an error.alter table t1 detach partition t1_a;psql:ticket15010_v3.sql:36: ERROR:  removing partition \"t1_a\" violates foreign key constraint \"t1_c1_old_c2_old_fkey1\"DETAIL:  Key (c1_old, c2_old)=(1, 2) is still referenced from table \"t1\".Sounds good to me too (well, I'd like it to be smarter and find that the constraint is still good after the detach, but I can understand why it won't allow it).The pg_constraint didn't change of course:select * from show_constraints();        conname         |   t    |  tref  |       coparent        ------------------------+--------+--------+----------------------- t1_c1_old_c2_old_fkey  | t1     | t1     |  t1_c1_old_c2_old_fkey  | t1_a   | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey  | t1_def | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey1 | t1     | t1_a   | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey2 | t1     | t1_def | t1_c1_old_c2_old_fkey(5 rows)Now, I'll delete the 
whole table contents, and I'll detach the partition:delete from t1;alter table t1 detach partition t1_a;It seems to be working, but the content of pg_constraints is weird:select * from show_constraints();        conname         |   t    |  tref  |       coparent        ------------------------+--------+--------+----------------------- t1_c1_old_c2_old_fkey  | t1     | t1     |  t1_c1_old_c2_old_fkey  | t1_a   | t1     |  t1_c1_old_c2_old_fkey  | t1_def | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey2 | t1     | t1_def | t1_c1_old_c2_old_fkey(4 rows)I understand why the ('t1_c1_old_c2_old_fkey1', 't1', 't1_a', 't1_c1_old_c2_old_fkey') tuple has gone but I don't understand why the ('t1_c1_old_c2_old_fkey', 't1_a', 't1', NULL) tuple is still there.Anyway, I attach the partition:alter table t1 attach partition t1_a for values in (1);But pg_constraint has not changed:select * from show_constraints();        conname         |   t    |  tref  |       coparent        ------------------------+--------+--------+----------------------- t1_c1_old_c2_old_fkey  | t1     | t1     |  t1_c1_old_c2_old_fkey  | t1_a   | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey  | t1_def | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey2 | t1     | t1_def | t1_c1_old_c2_old_fkey(4 rows)I was expecting to see the fifth tuple coming back, but alas, no.And as a result, the foreign key doesn't work anymore:insert into t1 values(1, NULL, 2, NULL);insert into t1 values(2, 1,    2, 2);delete from t1 where c1 = 1;Well, let's truncate the partitioned table, and drop the partition:truncate t1;drop table t1_a;The content of pg_constraint looks good to me:select * from show_constraints();        conname         |   t    |  tref  |       coparent        ------------------------+--------+--------+----------------------- t1_c1_old_c2_old_fkey  | t1     | t1     |  t1_c1_old_c2_old_fkey  | t1_def | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey2 | t1     | t1_def | 
t1_c1_old_c2_old_fkey(3 rows)Let's create the partition to see if that works better:create table t1_a   partition of t1 for values in (1);select * from show_constraints();        conname         |   t    |  tref  |       coparent        ------------------------+--------+--------+----------------------- t1_c1_old_c2_old_fkey  | t1     | t1     |  t1_c1_old_c2_old_fkey  | t1_a   | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey  | t1_def | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey2 | t1     | t1_def | t1_c1_old_c2_old_fkey(4 rows)insert into t1 values(1, NULL, 2, NULL);INSERT 0 1insert into t1 values(2, 1,    2, 2);INSERT 0 1delete from t1 where c1 = 1;DELETE 1Nope. I still miss the fifth tuple in pg_constraint, which results in a violated foreign key.How about dropping the foreign key to create it once more:truncate t1;alter table t1 drop constraint t1_c1_old_c2_old_fkey;select * from show_constraints(); conname | t | tref | coparent ---------+---+------+----------(0 rows)drop table t1_a;create table t1_a   partition of t1 for values in (1);alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on delete restrict on update restrict;select * from show_constraints();        conname         |   t    |  tref  |       coparent        ------------------------+--------+--------+----------------------- t1_c1_old_c2_old_fkey  | t1     | t1     |  t1_c1_old_c2_old_fkey  | t1_a   | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey  | t1_def | t1     | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey1 | t1     | t1_a   | t1_c1_old_c2_old_fkey t1_c1_old_c2_old_fkey2 | t1     | t1_def | t1_c1_old_c2_old_fkey(5 rows)I have my fifth row back! 
And now, the foreign key works as it should:insert into t1 values(1, NULL, 2, NULL);insert into t1 values(2, 1,    2, 2);delete from t1 where c1 = 1;psql:ticket15010_v3.sql:87: ERROR:  update or delete on table \"t1_a\" violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"DETAIL:  Key (c1, c2)=(1, 2) is still referenced from table \"t1\".This is what happens on 13.9 and 15.1. 13.7 shows another weird behaviour, but I guess I'll stop there. Everything is in the attached files.I'd love to know if I did something wrong, if I didn't understand something, or if this is simply a bug.Thanks.Regards.-- Guillaume.\n-- Guillaume.", "msg_date": "Tue, 17 Jan 2023 16:53:27 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Issue attaching a table to a partitioned table with an\n auto-referenced foreign key" }, { "msg_contents": "One last ping, hoping someone will have more time now than in january.\n\nPerhaps my test is wrong, but I'd like to know why.\n\nThanks.\n\nLe mar. 17 janv. 2023 à 16:53, Guillaume Lelarge <guillaume@lelarge.info> a\nécrit :\n\n> Quick ping, just to make sure someone can get a look at this issue :)\n> Thanks.\n>\n>\n> Le ven. 6 janv. 2023 à 11:07, Guillaume Lelarge <guillaume@lelarge.info>\n> a écrit :\n>\n>> Hello,\n>>\n>> One of our customers has an issue with partitions and foreign keys. He\n>> works on a v13, but the issue is also present on v15.\n>>\n>> I attach a SQL script showing the issue, and the results on 13.7, 13.9,\n>> and 15.1. But I'll explain the script here, and its behaviour on 13.9.\n>>\n>> There is one partitioned table, two partitions and a foreign key. 
The\n>> foreign key references the same table:\n>>\n>> create table t1 (\n>> c1 bigint not null,\n>> c1_old bigint null,\n>> c2 bigint not null,\n>> c2_old bigint null,\n>> primary key (c1, c2)\n>> )\n>> partition by list (c1);\n>> create table t1_a partition of t1 for values in (1);\n>> create table t1_def partition of t1 default;\n>> alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\n>> delete restrict on update restrict;\n>>\n>> I've a SQL function that shows me some information from pg_constraints\n>> (code of the function in the SQL script attached). Here is the result of\n>> this function after creating the table, its partitions, and its foreign key:\n>>\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ------------------------+--------+--------+-----------------------\n>> t1_c1_old_c2_old_fkey | t1 | t1 |\n>> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n>> (5 rows)\n>>\n>> The constraint works great :\n>>\n>> insert into t1 values(1, NULL, 2, NULL);\n>> insert into t1 values(2, 1, 2, 2);\n>> delete from t1 where c1 = 1;\n>> psql:ticket15010_v3.sql:34: ERROR: update or delete on table \"t1_a\"\n>> violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\n>> DETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n>>\n>> This error is normal since the line I want to delete is referenced on the\n>> other line.\n>>\n>> If I try to detach the partition, it also gives me an error.\n>>\n>> alter table t1 detach partition t1_a;\n>> psql:ticket15010_v3.sql:36: ERROR: removing partition \"t1_a\" violates\n>> foreign key constraint \"t1_c1_old_c2_old_fkey1\"\n>> DETAIL: Key (c1_old, c2_old)=(1, 2) is still referenced from table \"t1\".\n>>\n>> Sounds good to me too (well, I'd like it to be 
smarter and find that the\n>> constraint is still good after the detach, but I can understand why it\n>> won't allow it).\n>>\n>> The pg_constraint didn't change of course:\n>>\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ------------------------+--------+--------+-----------------------\n>> t1_c1_old_c2_old_fkey | t1 | t1 |\n>> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n>> (5 rows)\n>>\n>> Now, I'll delete the whole table contents, and I'll detach the partition:\n>>\n>> delete from t1;\n>> alter table t1 detach partition t1_a;\n>>\n>> It seems to be working, but the content of pg_constraints is weird:\n>>\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ------------------------+--------+--------+-----------------------\n>> t1_c1_old_c2_old_fkey | t1 | t1 |\n>> t1_c1_old_c2_old_fkey | t1_a | t1 |\n>> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n>> (4 rows)\n>>\n>> I understand why the ('t1_c1_old_c2_old_fkey1', 't1', 't1_a',\n>> 't1_c1_old_c2_old_fkey') tuple has gone but I don't understand why the\n>> ('t1_c1_old_c2_old_fkey', 't1_a', 't1', NULL) tuple is still there.\n>>\n>> Anyway, I attach the partition:\n>>\n>> alter table t1 attach partition t1_a for values in (1);\n>>\n>> But pg_constraint has not changed:\n>>\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ------------------------+--------+--------+-----------------------\n>> t1_c1_old_c2_old_fkey | t1 | t1 |\n>> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n>> (4 rows)\n>>\n>> I was expecting to see 
the fifth tuple coming back, but alas, no.\n>>\n>> And as a result, the foreign key doesn't work anymore:\n>>\n>> insert into t1 values(1, NULL, 2, NULL);\n>> insert into t1 values(2, 1, 2, 2);\n>> delete from t1 where c1 = 1;\n>>\n>> Well, let's truncate the partitioned table, and drop the partition:\n>>\n>> truncate t1;\n>> drop table t1_a;\n>>\n>> The content of pg_constraint looks good to me:\n>>\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ------------------------+--------+--------+-----------------------\n>> t1_c1_old_c2_old_fkey | t1 | t1 |\n>> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n>> (3 rows)\n>>\n>> Let's create the partition to see if that works better:\n>>\n>> create table t1_a partition of t1 for values in (1);\n>>\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ------------------------+--------+--------+-----------------------\n>> t1_c1_old_c2_old_fkey | t1 | t1 |\n>> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n>> (4 rows)\n>>\n>> insert into t1 values(1, NULL, 2, NULL);\n>> INSERT 0 1\n>> insert into t1 values(2, 1, 2, 2);\n>> INSERT 0 1\n>> delete from t1 where c1 = 1;\n>> DELETE 1\n>>\n>> Nope. 
I still miss the fifth tuple in pg_constraint, which results in a\n>> violated foreign key.\n>>\n>> How about dropping the foreign key to create it once more:\n>>\n>> truncate t1;\n>> alter table t1 drop constraint t1_c1_old_c2_old_fkey;\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ---------+---+------+----------\n>> (0 rows)\n>>\n>> drop table t1_a;\n>> create table t1_a partition of t1 for values in (1);\n>> alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\n>> delete restrict on update restrict;\n>> select * from show_constraints();\n>> conname | t | tref | coparent\n>> ------------------------+--------+--------+-----------------------\n>> t1_c1_old_c2_old_fkey | t1 | t1 |\n>> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n>> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n>> (5 rows)\n>>\n>> I have my fifth row back! And now, the foreign key works as it should:\n>>\n>> insert into t1 values(1, NULL, 2, NULL);\n>> insert into t1 values(2, 1, 2, 2);\n>> delete from t1 where c1 = 1;\n>> psql:ticket15010_v3.sql:87: ERROR: update or delete on table \"t1_a\"\n>> violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\n>> DETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n>>\n>> This is what happens on 13.9 and 15.1. 13.7 shows another weird\n>> behaviour, but I guess I'll stop there. Everything is in the attached files.\n>>\n>> I'd love to know if I did something wrong, if I didn't understand\n>> something, or if this is simply a bug.\n>>\n>> Thanks.\n>>\n>> Regards.\n>>\n>>\n>> --\n>> Guillaume.\n>>\n>\n>\n> --\n> Guillaume.\n>\n\n\n-- \nGuillaume.\n", "msg_date": "Wed, 22 Mar 2023 11:14:19 +0100", "msg_from": "Guillaume Lelarge <guillaume@lelarge.info>", "msg_from_op": true, "msg_subject": "Re: Issue attaching a table to a partitioned table with an\n auto-referenced foreign key" }, { "msg_contents": "So I gave a look at this one... And it's a tricky one.\n\nThe current policy about DETACHing a partition is to keep/adjust all FK\nreferencing it or referenced by it.\n\nHowever, in this exact self-referencing usecase, we can have rows referencing\nrows from the same partition OR another one.
It seems like an\nimpossible issue to solve.\n\nHere is an example based on Guillaume's scenario ([c1_old, c2_old] -> [c1, c2]):\n\n t1:\n t1_a:\n c1 | c1_old | c2 | c2_old\n ----+--------+----+--------\n 1 | NULL | 2 | NULL\n 1 | 1 | 3 | 2\n 1 | 2 | 4 | 2 \n t1_b:\n c1 | c1_old | c2 | c2_old\n ----+--------+----+--------\n 2 | 1 | 2 | 3\n\nNow, what happens with the FK when we DETACH t1_a?\n * it's not enough for t1_a to keep only a self-FK, as it references some\n rows from t1_b:\n (1, 2, 4, 2) -> (2, 1, 2, 3)\n * and t1_a cannot keep only a FK referencing t1 either, as it references some\n rows from itself:\n (1, 1, 3, 2) -> (1, NULL, 2, NULL)\n\nI'm currently not able to think of a constraint we could build to address\nthis situation after the DETACH.\n\nThe only clean way out would be to drop the FK between the old partition and\nthe partitioned table. But then, that breaks the current policy of keeping the\nconstraint after DETACH. Not to mention the nightmare of distinguishing this situation\nfrom some other ones.\n\nThoughts?\n\nOn Wed, 22 Mar 2023 11:14:19 +0100\nGuillaume Lelarge <guillaume@lelarge.info> wrote:\n\n> One last ping, hoping someone will have more time now than in january.\n> \n> Perhaps my test is wrong, but I'd like to know why.\n> \n> Thanks.\n> \n> Le mar. 17 janv. 2023 à 16:53, Guillaume Lelarge <guillaume@lelarge.info> a\n> écrit :\n> \n> > Quick ping, just to make sure someone can get a look at this issue :)\n> > Thanks.\n> >\n> >\n> > Le ven. 6 janv. 2023 à 11:07, Guillaume Lelarge <guillaume@lelarge.info>\n> > a écrit :\n> > \n> >> Hello,\n> >>\n> >> One of our customers has an issue with partitions and foreign keys. He\n> >> works on a v13, but the issue is also present on v15.\n> >>\n> >> I attach a SQL script showing the issue, and the results on 13.7, 13.9,\n> >> and 15.1. But I'll explain the script here, and its behaviour on 13.9.\n> >>\n> >> There is one partitioned table, two partitions and a foreign key.
The\n> >> foreign key references the same table:\n> >>\n> >> create table t1 (\n> >> c1 bigint not null,\n> >> c1_old bigint null,\n> >> c2 bigint not null,\n> >> c2_old bigint null,\n> >> primary key (c1, c2)\n> >> )\n> >> partition by list (c1);\n> >> create table t1_a partition of t1 for values in (1);\n> >> create table t1_def partition of t1 default;\n> >> alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\n> >> delete restrict on update restrict;\n> >>\n> >> I've a SQL function that shows me some information from pg_constraints\n> >> (code of the function in the SQL script attached). Here is the result of\n> >> this function after creating the table, its partitions, and its foreign\n> >> key:\n> >>\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ------------------------+--------+--------+-----------------------\n> >> t1_c1_old_c2_old_fkey | t1 | t1 |\n> >> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> >> (5 rows)\n> >>\n> >> The constraint works great :\n> >>\n> >> insert into t1 values(1, NULL, 2, NULL);\n> >> insert into t1 values(2, 1, 2, 2);\n> >> delete from t1 where c1 = 1;\n> >> psql:ticket15010_v3.sql:34: ERROR: update or delete on table \"t1_a\"\n> >> violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\n> >> DETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n> >>\n> >> This error is normal since the line I want to delete is referenced on the\n> >> other line.\n> >>\n> >> If I try to detach the partition, it also gives me an error.\n> >>\n> >> alter table t1 detach partition t1_a;\n> >> psql:ticket15010_v3.sql:36: ERROR: removing partition \"t1_a\" violates\n> >> foreign key constraint \"t1_c1_old_c2_old_fkey1\"\n> >> DETAIL: Key (c1_old, 
c2_old)=(1, 2) is still referenced from table \"t1\".\n> >>\n> >> Sounds good to me too (well, I'd like it to be smarter and find that the\n> >> constraint is still good after the detach, but I can understand why it\n> >> won't allow it).\n> >>\n> >> The pg_constraint didn't change of course:\n> >>\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ------------------------+--------+--------+-----------------------\n> >> t1_c1_old_c2_old_fkey | t1 | t1 |\n> >> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> >> (5 rows)\n> >>\n> >> Now, I'll delete the whole table contents, and I'll detach the partition:\n> >>\n> >> delete from t1;\n> >> alter table t1 detach partition t1_a;\n> >>\n> >> It seems to be working, but the content of pg_constraints is weird:\n> >>\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ------------------------+--------+--------+-----------------------\n> >> t1_c1_old_c2_old_fkey | t1 | t1 |\n> >> t1_c1_old_c2_old_fkey | t1_a | t1 |\n> >> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> >> (4 rows)\n> >>\n> >> I understand why the ('t1_c1_old_c2_old_fkey1', 't1', 't1_a',\n> >> 't1_c1_old_c2_old_fkey') tuple has gone but I don't understand why the\n> >> ('t1_c1_old_c2_old_fkey', 't1_a', 't1', NULL) tuple is still there.\n> >>\n> >> Anyway, I attach the partition:\n> >>\n> >> alter table t1 attach partition t1_a for values in (1);\n> >>\n> >> But pg_constraint has not changed:\n> >>\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ------------------------+--------+--------+-----------------------\n> >> t1_c1_old_c2_old_fkey | t1 | t1 |\n> >> t1_c1_old_c2_old_fkey | 
t1_a | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> >> (4 rows)\n> >>\n> >> I was expecting to see the fifth tuple coming back, but alas, no.\n> >>\n> >> And as a result, the foreign key doesn't work anymore:\n> >>\n> >> insert into t1 values(1, NULL, 2, NULL);\n> >> insert into t1 values(2, 1, 2, 2);\n> >> delete from t1 where c1 = 1;\n> >>\n> >> Well, let's truncate the partitioned table, and drop the partition:\n> >>\n> >> truncate t1;\n> >> drop table t1_a;\n> >>\n> >> The content of pg_constraint looks good to me:\n> >>\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ------------------------+--------+--------+-----------------------\n> >> t1_c1_old_c2_old_fkey | t1 | t1 |\n> >> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> >> (3 rows)\n> >>\n> >> Let's create the partition to see if that works better:\n> >>\n> >> create table t1_a partition of t1 for values in (1);\n> >>\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ------------------------+--------+--------+-----------------------\n> >> t1_c1_old_c2_old_fkey | t1 | t1 |\n> >> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> >> (4 rows)\n> >>\n> >> insert into t1 values(1, NULL, 2, NULL);\n> >> INSERT 0 1\n> >> insert into t1 values(2, 1, 2, 2);\n> >> INSERT 0 1\n> >> delete from t1 where c1 = 1;\n> >> DELETE 1\n> >>\n> >> Nope. 
I still miss the fifth tuple in pg_constraint, which results in a\n> >> violated foreign key.\n> >>\n> >> How about dropping the foreign key to create it once more:\n> >>\n> >> truncate t1;\n> >> alter table t1 drop constraint t1_c1_old_c2_old_fkey;\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ---------+---+------+----------\n> >> (0 rows)\n> >>\n> >> drop table t1_a;\n> >> create table t1_a partition of t1 for values in (1);\n> >> alter table t1 add foreign key (c1_old, c2_old) references t1 (c1, c2) on\n> >> delete restrict on update restrict;\n> >> select * from show_constraints();\n> >> conname | t | tref | coparent\n> >> ------------------------+--------+--------+-----------------------\n> >> t1_c1_old_c2_old_fkey | t1 | t1 |\n> >> t1_c1_old_c2_old_fkey | t1_a | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey | t1_def | t1 | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey1 | t1 | t1_a | t1_c1_old_c2_old_fkey\n> >> t1_c1_old_c2_old_fkey2 | t1 | t1_def | t1_c1_old_c2_old_fkey\n> >> (5 rows)\n> >>\n> >> I have my fifth row back! And now, the foreign key works as it should:\n> >>\n> >> insert into t1 values(1, NULL, 2, NULL);\n> >> insert into t1 values(2, 1, 2, 2);\n> >> delete from t1 where c1 = 1;\n> >> psql:ticket15010_v3.sql:87: ERROR: update or delete on table \"t1_a\"\n> >> violates foreign key constraint \"t1_c1_old_c2_old_fkey1\" on table \"t1\"\n> >> DETAIL: Key (c1, c2)=(1, 2) is still referenced from table \"t1\".\n> >>\n> >> This is what happens on 13.9 and 15.1. 13.7 shows another weird\n> >> behaviour, but I guess I'll stop there. 
Everything is in the attached\n> >> files.\n> >>\n> >> I'd love to know if I did something wrong, if I didn't understand\n> >> something, or if this is simply a bug.\n> >>\n> >> Thanks.\n> >>\n> >> Regards.\n> >>\n> >>\n> >> --\n> >> Guillaume.\n> >> \n> >\n> >\n> > --\n> > Guillaume.\n> > \n> \n> \n\n\n\n", "msg_date": "Fri, 7 Jul 2023 17:58:59 +0200", "msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>", "msg_from_op": false, "msg_subject": "Re: Issue attaching a table to a partitioned table with an\n auto-referenced foreign key" } ]
[ { "msg_contents": "Hello,\n\nwe have the great fuzzy string match that comes up with suggestions in the case of a typo in a column name.\n\nSince underscores are the de facto standard for separating words, it would also make sense to generate suggestions if the order of words gets mixed up. Example: If the user types timstamp_entry instead of entry_timestamp, the suggestion shows up.\n\nThe attached patch does that for up to three segments that are separated by underscores. The permutation of two segments is treated the same way a wrongly typed char would be.\n\nThe permutation is skipped if the typed column name contains more than 6 underscores, to prevent a meaningful (measured on my development machine) slowdown if the user types too many underscores. In terms of the number of underscores m and the lengths of the individual strings n_att and n_col, the trivial upper bound is O(n_att * n_col * m^2). Considering that strings with a lot of underscores have a bigger likelihood of being long as well, I simply decided to add it. I still wonder a bit whether it should be disabled entirely (as this patch does) or only the swap-three-sections part, as the rest would be bounded by O(n_att * n_col * m). But the utility of only swapping two sections seems a bit dubious to me if I have 7 or more of them.\n\nTo me this patch seems simple (if string handling in C can be called that way) and self-contained. Despite my calculations above, it resides in a non-performance-critical piece of code. I think of it as a quality-of-life thing.\nLet me know what you think.
Thank you!\n\nRegards\nArne", "msg_date": "Fri, 6 Jan 2023 21:29:12 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": true, "msg_subject": "Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "Hello Arne,\n\nThe goal of supporting words-switching hints sounds interesting and I've\ntried to apply your patch.\nThe patch was applied smoothly to the latest master and check-world\nreported no problems. Although I had problems after trying to test the new\nfunctionality.\n\nI tried to simply mix words in pg_stat_activity.wait_event_type:\n\npostgres=# select wait_type_event from pg_stat_activity ;\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write 
past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 
0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\nin MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] ERROR: column \"wait_type_event\" does\nnot exist at character 8\n2023-07-06 14:12:35.968 MSK [1480] HINT: Perhaps you meant to reference\nthe column \"pg_stat_activity.wait_event_type\".\n2023-07-06 14:12:35.968 MSK [1480] STATEMENT: select wait_type_event from\npg_stat_activity ;\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nERROR: column \"wait_type_event\" does not exist\nLINE 1: select wait_type_event from pg_stat_activity ;\n ^\nHINT: Perhaps you meant to reference the column\n\"pg_stat_activity.wait_event_type\".\npostgres=#\n\nSo the desired hint is really there, but thgether with looots of warnings.\nFor sure these should not be be encountered.\n\nAnd no, this is not some kind of side problem brought by some other commit.\nThe same request on a plain master branch performs without these warnings:\n\npostgres=# select wait_type_event from pg_stat_activity ;\n2023-07-06 14:10:17.171 MSK [22431] ERROR: column \"wait_type_event\" does\nnot exist at character 8\n2023-07-06 14:10:17.171 MSK [22431] STATEMENT: select wait_type_event from\npg_stat_activity ;\nERROR: column \"wait_type_event\" does not exist\nLINE 1: select wait_type_event from pg_stat_activity ;\n--\n best regards,\n Mikhail A. 
Gribkov\n\ne-mail: youzhick@gmail.com\n*http://www.flickr.com/photos/youzhick/albums\n<http://www.flickr.com/photos/youzhick/albums>*\nhttp://www.strava.com/athletes/5085772\nphone: +7(916)604-71-12\nTelegram: @youzhick", "msg_date": "Thu, 6 Jul 2023 14:31:00 +0300", "msg_from": "Mikhail Gribkov <youzhick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, failed\nDocumentation: tested, failed\n\nHello Arne,\r\n\r\nThe goal of supporting words-switching hints sounds interesting and I've tried to apply your patch.\r\nThe patch was applied smoothly to the latest master and check-world reported no problems. Although I had problems after trying to test the new functionality.\r\n\r\nI tried to simply mix words in pg_stat_activity.wait_event_type:\r\n\r\npostgres=# select wait_type_event from pg_stat_activity ;\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 
0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected 
write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\n2023-07-06 14:12:35.968 MSK [1480] ERROR: column \"wait_type_event\" does not exist at character 8\r\n2023-07-06 14:12:35.968 MSK [1480] HINT: Perhaps you meant to reference the column \"pg_stat_activity.wait_event_type\".\r\n2023-07-06 14:12:35.968 MSK [1480] STATEMENT: select wait_type_event from pg_stat_activity ;\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\r\nERROR: column \"wait_type_event\" does not exist\r\nLINE 1: select wait_type_event from pg_stat_activity ;\r\n ^\r\nHINT: Perhaps you meant to reference the column \"pg_stat_activity.wait_event_type\".\r\npostgres=#\r\n\r\nSo the desired hint is really there, but thgether with looots of warnings. For sure these should not be be encountered.\r\n\r\nAnd no, this is not some kind of side problem brought by some other commit. 
The same request on a plain master branch performs without these warnings:\r\n\r\npostgres=# select wait_type_event from pg_stat_activity ;\r\n2023-07-06 14:10:17.171 MSK [22431] ERROR: column \"wait_type_event\" does not exist at character 8\r\n2023-07-06 14:10:17.171 MSK [22431] STATEMENT: select wait_type_event from pg_stat_activity ;\r\nERROR: column \"wait_type_event\" does not exist\r\nLINE 1: select wait_type_event from pg_stat_activity ;\n\nThe new status of this patch is: Waiting on Author\n", "msg_date": "Thu, 06 Jul 2023 11:52:41 +0000", "msg_from": "Mikhail Gribkov <youzhick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "Hello Mikhail,\n\nI'm sorry. Please try attached patch instead.\n\nThank you for having a look!\n\nRegards\nArne\n\n________________________________\nFrom: Mikhail Gribkov <youzhick@gmail.com>\nSent: Thursday, July 6, 2023 13:31\nTo: Arne Roland <A.Roland@index.de>\nCc: Pg Hackers <pgsql-hackers@lists.postgresql.org>\nSubject: Re: Permute underscore separated components of columns before fuzzy matching\n\nHello Arne,\n\nThe goal of supporting words-switching hints sounds interesting and I've tried to apply your patch.\nThe patch was applied smoothly to the latest master and check-world reported no problems. 
Although I had problems after trying to test the new functionality.\n\nI tried to simply mix words in pg_stat_activity.wait_event_type:\n\npostgres=# select wait_type_event from pg_stat_activity ;\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past 
chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk 
end in MessageContext 0x559d668aaf30\n2023-07-06 14:12:35.968 MSK [1480] ERROR: column \"wait_type_event\" does not exist at character 8\n2023-07-06 14:12:35.968 MSK [1480] HINT: Perhaps you meant to reference the column \"pg_stat_activity.wait_event_type\".\n2023-07-06 14:12:35.968 MSK [1480] STATEMENT: select wait_type_event from pg_stat_activity ;\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nWARNING: detected write past chunk end in MessageContext 0x559d668aaf30\nERROR: column \"wait_type_event\" does not exist\nLINE 1: select wait_type_event from pg_stat_activity ;\n ^\nHINT: Perhaps you meant to reference the column \"pg_stat_activity.wait_event_type\".\npostgres=#\n\nSo the desired hint is really there, but thgether with looots of warnings. For sure these should not be be encountered.\n\nAnd no, this is not some kind of side problem brought by some other commit. The same request on a plain master branch performs without these warnings:\n\npostgres=# select wait_type_event from pg_stat_activity ;\n2023-07-06 14:10:17.171 MSK [22431] ERROR: column \"wait_type_event\" does not exist at character 8\n2023-07-06 14:10:17.171 MSK [22431] STATEMENT: select wait_type_event from pg_stat_activity ;\nERROR: column \"wait_type_event\" does not exist\nLINE 1: select wait_type_event from pg_stat_activity ;\n--\n best regards,\n Mikhail A. Gribkov\n\ne-mail: youzhick@gmail.com<mailto:youzhick@gmail.com>\nhttp://www.flickr.com/photos/youzhick/albums\nhttp://www.strava.com/athletes/5085772\nphone: +7(916)604-71-12\nTelegram: @youzhick", "msg_date": "Sun, 16 Jul 2023 22:42:42 +0000", "msg_from": "Arne Roland <A.Roland@index.de>", "msg_from_op": true, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "Hello Arne,\n\nyep, now the warnings have gone. 
And I must thank you for quite a fun time\nI had here testing your patch :) I tried even some weird combinations like\nthis:\npostgres=# create table t(\"_ __ ___\" int);\nCREATE TABLE\npostgres=# select \"__ _ ___\" from t;\nERROR: column \"__ _ ___\" does not exist\nLINE 1: select \"__ _ ___\" from t;\n ^\nHINT: Perhaps you meant to reference the column \"t._ __ ___\".\npostgres=# select \"___ __ _\" from t;\nERROR: column \"___ __ _\" does not exist\nLINE 1: select \"___ __ _\" from t;\n ^\nHINT: Perhaps you meant to reference the column \"t._ __ ___\".\npostgres=#\n\n... and it still worked fine.\nHonestly I'm not entirely sure fixing only two switched words is worth the\neffort, but the declared goal is clearly achieved.\n\nI think the patch is good to go, although you need to fix code formatting.\nAt least the char*-definition and opening \"{\" brackets are conspicuous.\nMaybe there are more: it is worth running the pgindent tool.\n\nAnd it would be much more convenient to work with your patch if every next\nversion file will have a unique name (maybe something like \"_v2\", \"_v3\"\netc. suffixes)\n\n--\n best regards,\n Mikhail A. Gribkov\n\ne-mail: youzhick@gmail.com\n*http://www.flickr.com/photos/youzhick/albums\n<http://www.flickr.com/photos/youzhick/albums>*\nhttp://www.strava.com/athletes/5085772\nphone: +7(916)604-71-12\nTelegram: @youzhick\n\n\n\nOn Mon, Jul 17, 2023 at 1:42 AM Arne Roland <A.Roland@index.de> wrote:\n\n> Hello Mikhail,\n>\n> I'm sorry. 
Please try attached patch instead.\n>\n> Thank you for having a look!\n>\n> Regards\n> Arne\n>\n> ------------------------------\n> *From:* Mikhail Gribkov <youzhick@gmail.com>\n> *Sent:* Thursday, July 6, 2023 13:31\n> *To:* Arne Roland <A.Roland@index.de>\n> *Cc:* Pg Hackers <pgsql-hackers@lists.postgresql.org>\n> *Subject:* Re: Permute underscore separated components of columns before\n> fuzzy matching\n>\n> Hello Arne,\n>\n> The goal of supporting words-switching hints sounds interesting and I've\n> tried to apply your patch.\n> The patch was applied smoothly to the latest master and check-world\n> reported no problems. Although I had problems after trying to test the new\n> functionality.\n>\n> I tried to simply mix words in pg_stat_activity.wait_event_type:\n>\n> postgres=# select wait_type_event from pg_stat_activity ;\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 
0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 
0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] WARNING: detected write past chunk end\n> in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> 2023-07-06 14:12:35.968 MSK [1480] ERROR: column \"wait_type_event\" does\n> not exist at character 8\n> 2023-07-06 14:12:35.968 MSK [1480] HINT: Perhaps you meant to reference\n> the column \"pg_stat_activity.wait_event_type\".\n> 2023-07-06 14:12:35.968 MSK [1480] STATEMENT: select wait_type_event from\n> pg_stat_activity ;\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> WARNING: detected write past chunk end in MessageContext 0x559d668aaf30\n> ERROR: column \"wait_type_event\" does not exist\n> LINE 1: select wait_type_event from pg_stat_activity ;\n> ^\n> HINT: Perhaps you meant to reference the column\n> \"pg_stat_activity.wait_event_type\".\n> postgres=#\n>\n> So the desired hint is really there, but thgether with looots of warnings.\n> For sure these should not be be encountered.\n>\n> And no, this is not some kind of side problem brought by some other\n> commit. 
The same request on a plain master branch performs without these\n> warnings:\n>\n> postgres=# select wait_type_event from pg_stat_activity ;\n> 2023-07-06 14:10:17.171 MSK [22431] ERROR: column \"wait_type_event\" does\n> not exist at character 8\n> 2023-07-06 14:10:17.171 MSK [22431] STATEMENT: select wait_type_event\n> from pg_stat_activity ;\n> ERROR: column \"wait_type_event\" does not exist\n> LINE 1: select wait_type_event from pg_stat_activity ;\n> --\n> best regards,\n> Mikhail A. Gribkov\n>\n> e-mail: youzhick@gmail.com\n> *http://www.flickr.com/photos/youzhick/albums\n> <http://www.flickr.com/photos/youzhick/albums>*\n> http://www.strava.com/athletes/5085772\n> phone: +7(916)604-71-12\n> Telegram: @youzhick\n>\n>\n", "msg_date": "Mon, 24 Jul 2023 20:12:16 +0300", "msg_from": "Mikhail Gribkov <youzhick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" }, { "msg_contents": "Mikhail Gribkov <youzhick@gmail.com> writes:\n> Honestly I'm not entirely sure fixing only two switched words is worth the\n> effort, but the declared goal is clearly achieved.\n\n> I think the patch is good to go, although you need to fix code formatting.\n\nI took a brief look at this. I concur that we shouldn't need to be\nhugely concerned about the speed of this code path. However, we *do*\nneed to be concerned about its maintainability, and I think the patch\nfalls down badly there: it adds a chunk of very opaque and essentially\nundocumented code, that people will need to reverse-engineer anytime\nthey are studying this function. That could be alleviated perhaps\nwith more work on comments, but I have to wonder whether it's worth\ncarrying this logic at all. It's a rather strange behavior to add,\nand I wonder if many users will want it.\n\nOne thing that struck me is that no care is being taken for adjacent\nunderscores (that is, \"foo__bar\" and similar cases). 
It seems\nunlikely that treating the zero-length substring between the\nunderscores as a word to permute is helpful; moreover, it adds\nan edge case that the string-moving logic could easily get wrong.\nI wonder if the code should treat any number of consecutive\nunderscores as a single separator. (Somewhat related: I think it\nwill behave oddly when the first or last character is '_', since the\nouter loop ignores those positions.)\n\n> And it would be much more convenient to work with your patch if every next\n> version file will have a unique name (maybe something like \"_v2\", \"_v3\"\n> etc. suffixes)\n\nPlease. It's very confusing when there are multiple identically-named\npatches in a thread.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 17 Nov 2023 16:23:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Permute underscore separated components of columns before fuzzy\n matching" } ]
[ { "msg_contents": "Dear all\n\nMobilityDB (https://github.com/MobilityDB/MobilityDB) defines at the C\nlevel four template types: Set, Span, SpanSet, and Temporal. The type Set\nis akin to PostgreSQL's ArrayType restricted to one dimension, but enforces\nthe constraint that sets do not have duplicates, the types Span and SpanSet\nare akin to PostgreSQL's RangeType and MultirangeType but enforce the\nconstraints that span types are of fixed length and that empty spans and\ninfinite bounds are not allowed, and the typeTemporal is used to\nmanipulate time-varying values.\n\nThese template types need to be instantiated at the SQL level with base\ntypes (int, bigint, float, timestamptz, text, ...) and because of this,\nMobilityDB needs to define numerous SQL functions that all call the same\nfunction in C. Taking as example the Set type, we need to define, e.g.,\n\nCREATE FUNCTION intset_eq(intset, intset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_eq' ...\nCREATE FUNCTION bigintset_eq(bigintset, bigintset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_eq' ...\nCREATE FUNCTION floatset_eq(floatset, floatset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_eq' ...\nCREATE FUNCTION textset_eq(textset, textset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_eq' ...\n...\n\nCREATE FUNCTION intset_ne(intset, intset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_ne' ...\nCREATE FUNCTION bigintset_ne(bigintset, bigintset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_ne' ...\nCREATE FUNCTION floatset_ne(floatset, floatset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_ne' ...\nCREATE FUNCTION textset_ne(textset, textset) RETURNS bool AS\n'MODULE_PATHNAME', 'Set_ne' ...\n...\n\nIn the case of arrays, ranges, and multiranges, PostgreSQL avoids this\nredundancy using pseudo-types such as anyarray, anyrange, anymultirange, ...\n\nIs there a possibility that we can also define pseudo types such as anyset,\nanyspan, anyspanset, anytemporal, .... 
?\n\nThis will considerably reduce the number of SQL functions to define.\nCurrently, given the high number of functions in MobilityDB, creating the\nextension takes a loooong time ....\n\nRegards\n\nEsteban\n", "msg_date": "Sat, 7 Jan 2023 10:31:55 +0100", "msg_from": "Esteban Zimanyi <esteban.zimanyi@ulb.be>", "msg_from_op": true, "msg_subject": "How to define template types in PostgreSQL" }, { "msg_contents": "Hi!\n\nI'd suggest creating an API that defines a general function set with\nvariable input,\nand calling implementation defined on the input type?\n\nOn Sat, Jan 7, 2023 at 12:32 PM Esteban Zimanyi <esteban.zimanyi@ulb.be>\nwrote:\n\n> Dear all\n>\n> MobilityDB (https://github.com/MobilityDB/MobilityDB) defines at the C\n> level four template types: Set, Span, SpanSet, and Temporal. The type Set\n> is akin to PostgreSQL's ArrayType restricted to one dimension, but enforces\n> the constraint that sets do not have duplicates, the types Span and SpanSet\n> are akin to PostgreSQL's RangeType and MultirangeType but enforce the\n> constraints that span types are of fixed length and that empty spans and\n> infinite bounds are not allowed, and the typeTemporal is used to\n> manipulate time-varying values.\n>\n> These template types need to be instantiated at the SQL level with base\n> types (int, bigint, float, timestamptz, text, ...) and because of this,\n> MobilityDB needs to define numerous SQL functions that all call the same\n> function in C. 
Taking as example the Set type, we need to define, e.g.,\n>\n> CREATE FUNCTION intset_eq(intset, intset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_eq' ...\n> CREATE FUNCTION bigintset_eq(bigintset, bigintset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_eq' ...\n> CREATE FUNCTION floatset_eq(floatset, floatset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_eq' ...\n> CREATE FUNCTION textset_eq(textset, textset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_eq' ...\n> ...\n>\n> CREATE FUNCTION intset_ne(intset, intset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_ne' ...\n> CREATE FUNCTION bigintset_ne(bigintset, bigintset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_ne' ...\n> CREATE FUNCTION floatset_ne(floatset, floatset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_ne' ...\n> CREATE FUNCTION textset_ne(textset, textset) RETURNS bool AS\n> 'MODULE_PATHNAME', 'Set_ne' ...\n> ...\n>\n> In the case of arrays, ranges, and multiranges, PostgreSQL avoids this\n> redundancy using pseudo-types such as anyarray, anyrange, anymultirange, ...\n>\n> Is there a possibility that we can also define pseudo types such as\n> anyset, anyspan, anyspanset, anytemporal, .... ?\n>\n> This will considerably reduce the number of SQL functions to define.\n> Currently, given the high number of functions in MobilityDB, creating the\n> extension takes a loooong time ....\n>\n> Regards\n>\n> Esteban\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n", "msg_date": "Sun, 8 Jan 2023 00:03:15 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: How to define template types in PostgreSQL" } ]
[ { "msg_contents": "While working on the regression tests added in a14a58329, I noticed\nthat DISTINCT does not make use of Incremental Sort. It'll only ever\ndo full sorts on the cheapest input path or make use of a path that's\nalready got the required pathkeys. Also, I see that\ncreate_final_distinct_paths() is a little quirky and if the cheapest\ninput path happens to be sorted, it'll add_path() the same path twice,\nwhich seems like a bit of a waste of effort. That could happen if say\nenable_seqscan is off or if a Merge Join is the cheapest join method.\n\nAdditionally, the parallel DISTINCT code looks like it should also get\nthe same treatment. I see that I'd coded this to only add a unique\npath atop of a presorted path and it never considers sorting the\ncheapest partial path. I've adjusted that in the attached and also\nmade it consider incremental sorting any path with presorted keys.\n\nPlease see the attached patch.\n\nDavid", "msg_date": "Sat, 7 Jan 2023 22:46:43 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Allow DISTINCT to use Incremental Sort" }, { "msg_contents": "On Sat, Jan 7, 2023 at 5:47 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> While working on the regression tests added in a14a58329, I noticed\n> that DISTINCT does not make use of Incremental Sort. It'll only ever\n> do full sorts on the cheapest input path or make use of a path that's\n> already got the required pathkeys. Also, I see that\n> create_final_distinct_paths() is a little quirky and if the cheapest\n> input path happens to be sorted, it'll add_path() the same path twice,\n> which seems like a bit of a waste of effort. That could happen if say\n> enable_seqscan is off or if a Merge Join is the cheapest join method.\n>\n> Additionally, the parallel DISTINCT code looks like it should also get\n> the same treatment. 
I see that I'd coded this to only add a unique\n> path atop of a presorted path and it never considers sorting the\n> cheapest partial path. I've adjusted that in the attached and also\n> made it consider incremental sorting any path with presorted keys.\n>\n> Please see the attached patch.\n\n\n+1 for the changes. A minor comment is that previously on HEAD for\nSELECT DISTINCT case, if we have to do an explicit full sort atop the\ncheapest path, we try to make sure to always use the more rigorous\nordering.\n\n /* For explicit-sort case, always use the more rigorous clause */\n if (list_length(root->distinct_pathkeys) <\n list_length(root->sort_pathkeys))\n {\n needed_pathkeys = root->sort_pathkeys;\n /* Assert checks that parser didn't mess up... */\n Assert(pathkeys_contained_in(root->distinct_pathkeys,\n needed_pathkeys));\n }\n else\n needed_pathkeys = root->distinct_pathkeys;\n\nI'm not sure if this is necessary, as AFAIU the parser should have\nensured that the sortClause is always a prefix of distinctClause.\n\nIn the patch this code has been removed. I think we should also remove\nthe related comments in create_final_distinct_paths.\n\n * When we have DISTINCT ON, we must sort by the more rigorous of\n * DISTINCT and ORDER BY, else it won't have the desired behavior.\n- * Also, if we do have to do an explicit sort, we might as well use\n- * the more rigorous ordering to avoid a second sort later. (Note\n- * that the parser will have ensured that one clause is a prefix of\n- * the other.)\n\nAlso, the comment just above this one is outdated too.\n\n * First, if we have any adequately-presorted paths, just stick a\n * Unique node on those. Then consider doing an explicit sort of the\n * cheapest input path and Unique'ing that.\n\nThe two-step workflow is what is the case on HEAD but not any more in\nthe patch. 
And I think we should mention incremental sort on any paths\nwith presorted keys.\n\nThanks\nRichard\n", "msg_date": "Mon, 9 Jan 2023 21:28:16 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow DISTINCT to use Incremental Sort" }, { "msg_contents": "Thanks for having a look at this.\n\nOn Tue, 10 Jan 2023 at 02:28, Richard Guo <guofenglinux@gmail.com> wrote:\n> +1 for the changes.  A minor comment is that previously on HEAD for\n> SELECT DISTINCT case, if we have to do an explicit full sort atop the\n> cheapest path, we try to make sure to always use the more rigorous\n> ordering.\n\nI'm not quite sure I follow what's changed here. As far as I see it\nthe code still does what it did and uses the more rigorous sort.\n\npostgres=# explain (costs off) select distinct on (relname) * from\npg_Class order by relname, oid;\n          QUERY PLAN\n----------------------------------\n Unique\n   ->  Sort\n         Sort Key: relname, oid\n         ->  Seq Scan on pg_class\n\nIf it didn't, then there'd have been a Sort atop of the Unique to\nORDER BY relname,oid in the above.\n\nMaybe you were looking at create_partial_distinct_paths()? 
We don't do\nanything there for DISTINCT ON, although perhaps we could. Just not\nfor this patch.\n\n>\n> /* For explicit-sort case, always use the more rigorous clause */\n> if (list_length(root->distinct_pathkeys) <\n> list_length(root->sort_pathkeys))\n> {\n> needed_pathkeys = root->sort_pathkeys;\n> /* Assert checks that parser didn't mess up... */\n> Assert(pathkeys_contained_in(root->distinct_pathkeys,\n> needed_pathkeys));\n> }\n> else\n> needed_pathkeys = root->distinct_pathkeys;\n>\n> I'm not sure if this is necessary, as AFAIU the parser should have\n> ensured that the sortClause is always a prefix of distinctClause.\n\nI think that's true for standard DISTINCT, but it's not for DISTINCT ON.\n\n> In the patch this code has been removed. I think we should also remove\n> the related comments in create_final_distinct_paths.\n>\n> * When we have DISTINCT ON, we must sort by the more rigorous of\n> * DISTINCT and ORDER BY, else it won't have the desired behavior.\n> - * Also, if we do have to do an explicit sort, we might as well use\n> - * the more rigorous ordering to avoid a second sort later. (Note\n> - * that the parser will have ensured that one clause is a prefix of\n> - * the other.)\n\nI'm not quite following what you think has changed here.\n\n> Also, the comment just above this one is outdated too.\n>\n> * First, if we have any adequately-presorted paths, just stick a\n> * Unique node on those. Then consider doing an explicit sort of the\n> * cheapest input path and Unique'ing that.\n>\n> The two-step workflow is what is the case on HEAD but not any more in\n> the patch. 
And I think we should mention incremental sort on any paths\n> with presorted keys.\n\nYeah, I agree, that comment should mention incremental sort.\n\nI've attached an updated patch\n\nDavid", "msg_date": "Tue, 10 Jan 2023 15:13:43 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow DISTINCT to use Incremental Sort" }, { "msg_contents": "On Tue, Jan 10, 2023 at 10:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> > /* For explicit-sort case, always use the more rigorous clause */\n> > if (list_length(root->distinct_pathkeys) <\n> > list_length(root->sort_pathkeys))\n> > {\n> > needed_pathkeys = root->sort_pathkeys;\n> > /* Assert checks that parser didn't mess up... */\n> > Assert(pathkeys_contained_in(root->distinct_pathkeys,\n> > needed_pathkeys));\n> > }\n> > else\n> > needed_pathkeys = root->distinct_pathkeys;\n> >\n> > I'm not sure if this is necessary, as AFAIU the parser should have\n> > ensured that the sortClause is always a prefix of distinctClause.\n>\n> I think that's true for standard DISTINCT, but it's not for DISTINCT ON.\n>\n> > In the patch this code has been removed. I think we should also remove\n> > the related comments in create_final_distinct_paths.\n> >\n> > * When we have DISTINCT ON, we must sort by the more rigorous of\n> > * DISTINCT and ORDER BY, else it won't have the desired behavior.\n> > - * Also, if we do have to do an explicit sort, we might as well use\n> > - * the more rigorous ordering to avoid a second sort later. (Note\n> > - * that the parser will have ensured that one clause is a prefix of\n> > - * the other.)\n>\n> I'm not quite following what you think has changed here.\n\n\nSorry I didn't make myself clear. I mean currently on HEAD in planner.c\nfrom line 4847 to line 4857, we have the code to make sure we always use\nthe more rigorous clause for explicit-sort case. 
I think this code is\nnot necessary, because we have already done that in line 4791 to 4796,\nfor both DISTINCT ON and standard DISTINCT.\n\nIn this patch that code (line 4847 to line 4857) has been removed, which\nI agree with. I just wondered if the related comment should be removed\ntoo. But now after a second thought, I think it's OK to keep that\ncomment, as it still holds true in the new patch.\n\n\n> I've attached an updated patch\n\n\nThe patch looks good to me.\n\nThanks\nRichard\n\nOn Tue, Jan 10, 2023 at 10:14 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>         /* For explicit-sort case, always use the more rigorous clause */\n>         if (list_length(root->distinct_pathkeys) <\n>             list_length(root->sort_pathkeys))\n>         {\n>             needed_pathkeys = root->sort_pathkeys;\n>             /* Assert checks that parser didn't mess up... */\n>             Assert(pathkeys_contained_in(root->distinct_pathkeys,\n>                                          needed_pathkeys));\n>         }\n>         else\n>             needed_pathkeys = root->distinct_pathkeys;\n>\n> I'm not sure if this is necessary, as AFAIU the parser should have\n> ensured that the sortClause is always a prefix of distinctClause.\n\nI think that's true for standard DISTINCT, but it's not for DISTINCT ON.\n\n> In the patch this code has been removed.  I think we should also remove\n> the related comments in create_final_distinct_paths.\n>\n>        * When we have DISTINCT ON, we must sort by the more rigorous of\n>        * DISTINCT and ORDER BY, else it won't have the desired behavior.\n> -      * Also, if we do have to do an explicit sort, we might as well use\n> -      * the more rigorous ordering to avoid a second sort later.  (Note\n> -      * that the parser will have ensured that one clause is a prefix of\n> -      * the other.)\n\nI'm not quite following what you think has changed here. Sorry I didn't make myself clear.  
I mean currently on HEAD in planner.cfrom line 4847 to line 4857, we have the code to make sure we always usethe more rigorous clause for explicit-sort case.  I think this code isnot necessary, because we have already done that in line 4791 to 4796,for both DISTINCT ON and standard DISTINCT.In this patch that code (line 4847 to line 4857) has been removed, whichI agree with.  I just wondered if the related comment should be removedtoo.  But now after a second thought, I think it's OK to keep thatcomment, as it still holds true in the new patch. \nI've attached an updated patch The patch looks good to me.ThanksRichard", "msg_date": "Tue, 10 Jan 2023 11:07:09 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow DISTINCT to use Incremental Sort" }, { "msg_contents": "On Tue, 10 Jan 2023 at 16:07, Richard Guo <guofenglinux@gmail.com> wrote:\n> Sorry I didn't make myself clear. I mean currently on HEAD in planner.c\n> from line 4847 to line 4857, we have the code to make sure we always use\n> the more rigorous clause for explicit-sort case. I think this code is\n> not necessary, because we have already done that in line 4791 to 4796,\n> for both DISTINCT ON and standard DISTINCT.\n\nThanks for explaining. I'm unsure if that code ever did anything\nuseful, but I agree that it does nothing now.\n\n>> I've attached an updated patch\n>\n>\n> The patch looks good to me.\n\nThanks for having another look. I've now pushed the patch.\n\nDavid\n\n\n", "msg_date": "Wed, 11 Jan 2023 10:29:06 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Allow DISTINCT to use Incremental Sort" } ]
[ { "msg_contents": "Hi,\n\nwhile looking at fixing [1], I again came across the fact that we don't\ninitialize the projection etc during ExecInitWorkTableScan(), but do so during\nthe first call to ExecWorkTableScan().\n\nThis is explained with the following comment:\n\n\t/*\n\t * On the first call, find the ancestor RecursiveUnion's state via the\n\t * Param slot reserved for it. (We can't do this during node init because\n\t * there are corner cases where we'll get the init call before the\n\t * RecursiveUnion does.)\n\t */\n\nI remember being confused about this before. I think I dug out the relevant\ncommit back then 0a7abcd4c983 [2], but didn't end up finding the relevant\nthread. This time I did: [3].\n\n\nBasically the issue is that in queries with two CTEs we can, at least\ncurrently, end up with WorkTable scans on a CTE we've not yet initialized,\ndue to the WorkTable scan of one CTE appearing in the other. Thus\nExecInitRecursiveUnion() hasn't yet set up the param we use in\nnodeWorktablescan.c to find the tuplestore and the type of the tuples it\ncontains.\n\nI don't think this is a huge issue, but it surprised me multiple times, so I'd\nlike to expand the comment. At least for me it's hard to get from \"corner\ncases\" to one worktable scan appearing in another CTE and to mutually recursive\nCTEs.\n\n\nI did go down the rabbit hole of trying to avoid this issue because it \"feels\nwrong\" to not know the return type of an executor node during initialization.\nThe easiest approach I could see was to add the \"scan type\" to WorkTableScan\n(vs the target list, which includes the projection). Most of the other scan\nnodes get the scan type from the underlying relation etc, and thus don't need\nit in the plan node ([4]). That way the projection can be built normally\nduring ExecInitWorkTableScan(), but we still need to defer the lookup of\nnode->rustate. But that bothers me a lot less.\n\nI'm not sure it's worth changing this. 
Or whether that'd be the right approach.\n\n\nI'm also wondering if Tom's first instinct from back then making this an error\nwould have been the right call. But that ship has sailed.\n\n\nTo be clear, this \"issue\" is largely independent of [1] / not a blocker\nwhatsoever. Partially I wrote this to have an email to find the next time I\nencounter this.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/17737-55a063e3e6c41b4f%40postgresql.org\n[2] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=0a7abcd4c983\n[3] https://postgr.es/m/87zllbxngg.fsf%40oxford.xeocode.com\n[4] I don't particularly like that, we spend a lot of time converting memory\n inefficient target lists into tupledescs during executor initialization,\n even though we rely on the tuple types not to be materially different\n anyway. But that's a separate issue.\n\n\n", "msg_date": "Sat, 7 Jan 2023 13:35:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "delayed initialization in worktable scan" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Basically the issue is that in queries with two CTEs we can, at least\n> currently, end up with a WorkTable scans on a CTE we've not yet initialized,\n> due to the WorkTable scan of one CTE appearing in the other. Thus\n> ExecInitRecursiveUnion() hasn't yet set up the param we use in\n> nodeWorktablescan.c to find the tuplestore and the type of the tuples it\n> contains.\n\n> I don't think this is a huge issue, but it surprised multiple times, so I'd\n> like to expand the comment. At least for me it's hard to get from \"corner\n> cases\" to one worktable scan appearing in another CTE and to mutally recursive\n> CTEs.\n\nSure. I think I wrote the comment the way I did because I hadn't done\nenough analysis to be sure that mutually recursive CTEs was the only\nway to trigger it. 
But as long as you say \"for example\" or words to\nthat effect, I don't have a problem with giving an example case here.\n\n> I did go down the rabbit hole of trying to avoid this issue because it \"feels\n> wrong\" to not know the return type of an executor node during initialization.\n> ...\n> I'm not sure it's worth changing this. Or whether that'd be the right approach.\n\nI wouldn't bother unless we find a compelling reason to need the info\nearlier.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Jan 2023 17:17:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: delayed initialization in worktable scan" } ]
[ { "msg_contents": "thorntail failed some recovery tests in 2022-10:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-11-02%2004%3A25%3A43\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-31%2013%3A32%3A42\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-29%2017%3A48%3A15\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-24%2013%3A48%3A16\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-24%2010%3A08%3A30\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-21%2000%3A58%3A14\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-16%2000%3A08%3A17\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-15%2020%3A48%3A18\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-14%2020%3A13%3A35\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=thorntail&dt=2022-10-14%2006%3A58%3A15\n\nthorntail has long seen fsync failures, due to a driver bug[1]. On\n2022-09-28, its OS updated coreutils from 8.32-4.1, 9.1-1. That brought in\n\"cp\" use of the FICLONE ioctl. FICLONE internally syncs its source file,\nreporting EIO if that fails. A bug[2] in \"cp\" allowed it to silently make a\ndefective copy instead of reporting that EIO. Since the recovery suite\narchive_command uses \"cp\", these test failures emerged. The kernel may\nchange[3] to make such userspace bugs harder to add.\n\nFor thorntail, my workaround was to replace \"cp\" with a wrapper doing 'exec\n/usr/bin/cp --reflink=never \"$@\"'. I might eventually propose the ability to\ndisable FICLONE calls in PostgreSQL code. 
So far, those calls (in pg_upgrade)\nhave not caused thorntail failures.\n\n[1] https://postgr.es/m/flat/20210508001418.GA3076445@rfd.leadboat.com\n[2] https://github.com/coreutils/coreutils/commit/f6c93f334ef5dbc5c68c299785565ec7b9ba5180\n[3] https://lore.kernel.org/linux-xfs/20221108172436.GA3613139@rfd.leadboat.com\n\n\n", "msg_date": "Sat, 7 Jan 2023 15:29:24 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "FYI: 2022-10 thorntail failures from coreutils FICLONE" }, { "msg_contents": "Noah Misch <noah@leadboat.com> writes:\n> thorntail failed some recovery tests in 2022-10:\n\nSpeaking of which ... thorntail hasn't reported in for nearly\nthree weeks. Is it stuck?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Jan 2023 22:49:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: FYI: 2022-10 thorntail failures from coreutils FICLONE" }, { "msg_contents": "On Mon, Jan 09, 2023 at 10:49:26PM -0500, Tom Lane wrote:\n> Noah Misch <noah@leadboat.com> writes:\n> > thorntail failed some recovery tests in 2022-10:\n> \n> Speaking of which ... thorntail hasn't reported in for nearly\n> three weeks. Is it stuck?\n\nIts machine has been unresponsive to ssh for those weeks.\n\n\n", "msg_date": "Mon, 9 Jan 2023 23:24:33 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": true, "msg_subject": "Re: FYI: 2022-10 thorntail failures from coreutils FICLONE" } ]
[ { "msg_contents": "I've been thinking about adding RETURNING support to MERGE in order to\nlet the user see what changed.\n\nI considered allowing a separate RETURNING list at the end of each\naction, but rapidly dismissed that idea. Firstly, it introduces\nshift/reduce conflicts to the grammar. These can be resolved by making\nthe \"AS\" before column aliases non-optional, but that's pretty ugly,\nand there may be a better way. More serious drawbacks are that this\nsyntax is much more cumbersome for the end user, having to repeat the\nRETURNING clause several times, and the implementation is likely to be\npretty complex, so I didn't pursue it.\n\nA much simpler approach is to just have a single RETURNING list at the\nend of the command. That's much easier to implement, and easier for\nthe end user. The main drawback is that it's impossible for the user\nto work out from the values returned which action was actually taken,\nand I think that's a pretty essential piece of information (at least\nit seems pretty limiting to me, not being able to work that out).\n\nSo playing around with it (and inspired by the WITH ORDINALITY syntax\nfor SRFs), I had the idea of allowing \"WITH WHEN CLAUSE\" at the end of\nthe returning list, which adds an integer column to the list, whose\nvalue is set to the index of the when clause executed, as in the\nattached very rough patch.\n\nSo, quoting an example from the tests, this allows things like:\n\nWITH t AS (\n MERGE INTO sq_target t USING v ON tid = sid\n WHEN MATCHED AND tid > 2 THEN UPDATE SET balance = t.balance + delta\n WHEN NOT MATCHED THEN INSERT (balance, tid) VALUES (balance + delta, sid)\n WHEN MATCHED AND tid < 2 THEN DELETE\n RETURNING t.* WITH WHEN CLAUSE\n)\nSELECT CASE when_clause\n WHEN 1 THEN 'UPDATE'\n WHEN 2 THEN 'INSERT'\n WHEN 3 THEN 'DELETE'\n END, *\nFROM t;\n\n case | tid | balance | when_clause\n--------+-----+---------+-------------\n INSERT | -1 | -11 | 2\n DELETE | 1 | 100 | 3\n(2 rows)\n\n1 row is 
returned for each merge action executed (other than DO\nNOTHING actions), and as usual, the values represent old target values\nfor DELETE actions, and new target values for INSERT/UPDATE actions.\n\nIt's also possible to return the source values, and a bare \"*\" in the\nreturning list expands to all the source columns, followed by all the\ntarget columns.\n\nThe name of the added column, if included, can be changed by\nspecifying \"WITH WHEN CLAUSE [AS] col_alias\". I chose the syntax \"WHEN\nCLAUSE\" and \"when_clause\" as the default column name because those\nmatch the existing terminology used in the docs.\n\nAnyway, this feels like a good point to stop playing around and get\nfeedback on whether this seems useful, or if anyone has other ideas.\n\nRegards,\nDean", "msg_date": "Sun, 8 Jan 2023 12:28:02 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "MERGE ... RETURNING" }, { "msg_contents": "On Sun, 8 Jan 2023 at 07:28, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n\nSo playing around with it (and inspired by the WITH ORDINALITY syntax\n> for SRFs), I had the idea of allowing \"WITH WHEN CLAUSE\" at the end of\n> the returning list, which adds an integer column to the list, whose\n> value is set to the index of the when clause executed, as in the\n> attached very rough patch.\n>\n\nWould it be useful to have just the action? Perhaps \"WITH ACTION\"? My idea\nis that this would return an enum of INSERT, UPDATE, DELETE (so is \"action\"\nthe right word?). It seems to me in many situations I would be more likely\nto care about which of these 3 happened rather than the exact clause that\napplied. This isn't necessarily meant to be instead of your suggestion\nbecause I can imagine wanting to know the exact clause, just an alternative\nthat might suffice in many situations. 
Using it would also avoid problems\narising from editing the query in a way which changes the numbers of the\nclauses.\n\nSo, quoting an example from the tests, this allows things like:\n>\n> WITH t AS (\n> MERGE INTO sq_target t USING v ON tid = sid\n> WHEN MATCHED AND tid > 2 THEN UPDATE SET balance = t.balance + delta\n> WHEN NOT MATCHED THEN INSERT (balance, tid) VALUES (balance + delta,\n> sid)\n> WHEN MATCHED AND tid < 2 THEN DELETE\n> RETURNING t.* WITH WHEN CLAUSE\n> )\n> SELECT CASE when_clause\n> WHEN 1 THEN 'UPDATE'\n> WHEN 2 THEN 'INSERT'\n> WHEN 3 THEN 'DELETE'\n> END, *\n> FROM t;\n>\n> case | tid | balance | when_clause\n> --------+-----+---------+-------------\n> INSERT | -1 | -11 | 2\n> DELETE | 1 | 100 | 3\n> (2 rows)\n>\n> 1 row is returned for each merge action executed (other than DO\n> NOTHING actions), and as usual, the values represent old target values\n> for DELETE actions, and new target values for INSERT/UPDATE actions.\n>\n\nWould it be feasible to allow specifying old.column or new.column? These\nwould always be NULL for INSERT and DELETE respectively but more useful\nwith UPDATE. Actually I've been meaning to ask this question about UPDATE …\nRETURNING.\n\nActually, even with DELETE/INSERT, I can imagine wanting, for example, to\nget only the new values associated with INSERT or UPDATE and not the values\nremoved by a DELETE. So I can imagine specifying new.column to get NULLs\nfor any row that was deleted but still get the new values for other rows.\n\nIt's also possible to return the source values, and a bare \"*\" in the\n> returning list expands to all the source columns, followed by all the\n> target columns.\n>\n\nDoes this lead to a problem in the event there are same-named columns\nbetween source and target?\n\nThe name of the added column, if included, can be changed by\n> specifying \"WITH WHEN CLAUSE [AS] col_alias\". 
I chose the syntax \"WHEN\n> CLAUSE\" and \"when_clause\" as the default column name because those\nmatch the existing terminology used in the docs.\n>", "msg_date": "Sun, 8 Jan 2023 15:09:36 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sun, 8 Jan 2023 at 20:09, Isaac Morland <isaac.morland@gmail.com> wrote:\n>\n> Would it be useful to have just the action? Perhaps \"WITH ACTION\"? My idea is that this would return an enum of INSERT, UPDATE, DELETE (so is \"action\" the right word?). It seems to me in many situations I would be more likely to care about which of these 3 happened rather than the exact clause that applied. This isn't necessarily meant to be instead of your suggestion because I can imagine wanting to know the exact clause, just an alternative that might suffice in many situations. Using it would also avoid problems arising from editing the query in a way which changes the numbers of the clauses.\n>\n\nHmm, perhaps that's something that can be added as well. Both use\ncases seem useful.\n\n>> 1 row is returned for each merge action executed (other than DO\n>> NOTHING actions), and as usual, the values represent old target values\n>> for DELETE actions, and new target values for INSERT/UPDATE actions.\n>\n> Would it be feasible to allow specifying old.column or new.column? These would always be NULL for INSERT and DELETE respectively but more useful with UPDATE. Actually I've been meaning to ask this question about UPDATE … RETURNING.\n>\n\nI too have wished for the ability to do that with UPDATE ...\nRETURNING, though I'm not sure how feasible it is.\n\nI think it's something best considered separately though. I haven't\ngiven any thought as to how to make it work, so there might be\ntechnical difficulties.
But if it could be made to work for UPDATE, it\nshouldn't be much more effort to make it work for MERGE.\n\n>> It's also possible to return the source values, and a bare \"*\" in the\n>> returning list expands to all the source columns, followed by all the\n>> target columns.\n>\n> Does this lead to a problem in the event there are same-named columns between source and target?\n>\n\nNot really. It's exactly the same as doing \"SELECT * FROM src JOIN tgt\nON ...\". That may lead to duplicate column names in the result, but\nthat's not necessarily a problem.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 9 Jan 2023 12:29:05 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 1/9/23 13:29, Dean Rasheed wrote:\n> On Sun, 8 Jan 2023 at 20:09, Isaac Morland <isaac.morland@gmail.com> wrote:\n>>\n>> Would it be useful to have just the action? Perhaps \"WITH ACTION\"? My idea is that this would return an enum of INSERT, UPDATE, DELETE (so is \"action\" the right word?). It seems to me in many situations I would be more likely to care about which of these 3 happened rather than the exact clause that applied. This isn't necessarily meant to be instead of your suggestion because I can imagine wanting to know the exact clause, just an alternative that might suffice in many situations. Using it would also avoid problems arising from editing the query in a way which changes the numbers of the clauses.\n>>\n> \n> Hmm, perhaps that's something that can be added as well. Both use\n> cases seem useful.\n\nBikeshedding here. 
Instead of Yet Another WITH Clause, could we perhaps \nmake a MERGING() function analogous to the GROUPING() function that goes \nwith grouping sets?\n\nMERGE ...\nRETURNING *, MERGING('clause'), MERGING('action');\n\nOr something.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Mon, 9 Jan 2023 17:23:04 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Mon, 9 Jan 2023 at 16:23, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> Bikeshedding here. Instead of Yet Another WITH Clause, could we perhaps\n> make a MERGING() function analogous to the GROUPING() function that goes\n> with grouping sets?\n>\n> MERGE ...\n> RETURNING *, MERGING('clause'), MERGING('action');\n>\n\nHmm, possibly, but I think that would complicate the implementation quite a bit.\n\nGROUPING() is not really a function (in the sense that there is no\npg_proc entry for it, you can't do \"\\df grouping\", and it isn't\nexecuted with its arguments like a normal function). Rather, it\nrequires special-case handling in the parser, through to the executor,\nand I think MERGING() would be similar.\n\nAlso, it masks any user function with the same name, and would\nprobably require MERGING to be some level of reserved keyword.\n\nI'm not sure that's worth it, just to have a more standard-looking\nRETURNING list, without a WITH clause.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 9 Jan 2023 17:44:36 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Mon, 9 Jan 2023 at 17:44, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Mon, 9 Jan 2023 at 16:23, Vik Fearing <vik@postgresfriends.org> wrote:\n> >\n> > Bikeshedding here. 
Instead of Yet Another WITH Clause, could we perhaps\n> > make a MERGING() function analogous to the GROUPING() function that goes\n> > with grouping sets?\n> >\n> > MERGE ...\n> > RETURNING *, MERGING('clause'), MERGING('action');\n> >\n>\n> Hmm, possibly, but I think that would complicate the implementation quite a bit.\n>\n> GROUPING() is not really a function (in the sense that there is no\n> pg_proc entry for it, you can't do \"\\df grouping\", and it isn't\n> executed with its arguments like a normal function). Rather, it\n> requires special-case handling in the parser, through to the executor,\n> and I think MERGING() would be similar.\n>\n> Also, it masks any user function with the same name, and would\n> probably require MERGING to be some level of reserved keyword.\n>\n\nI thought about this some more, and I think functions do make more\nsense here, rather than inventing a special WITH syntax. However,\nrather than using a special MERGING() function like GROUPING(), which\nisn't really a function at all, I think it's better (and much simpler\nto implement) to have a pair of normal functions (one returning int,\nand one text).\n\nThe example from the tests shows the sort of thing this allows:\n\nMERGE INTO sq_target t USING sq_source s ON tid = sid\n WHEN MATCHED AND tid >= 2 THEN UPDATE SET balance = t.balance + delta\n WHEN NOT MATCHED THEN INSERT (balance, tid) VALUES (balance + delta, sid)\n WHEN MATCHED AND tid < 2 THEN DELETE\n RETURNING pg_merge_when_clause() AS when_clause,\n pg_merge_action() AS merge_action,\n t.*,\n CASE pg_merge_action()\n WHEN 'INSERT' THEN 'Inserted '||t\n WHEN 'UPDATE' THEN 'Added '||delta||' to balance'\n WHEN 'DELETE' THEN 'Removed '||t\n END AS description;\n\n when_clause | merge_action | tid | balance | description\n-------------+--------------+-----+---------+---------------------\n 3 | DELETE | 1 | 100 | Removed (1,100)\n 1 | UPDATE | 2 | 220 | Added 20 to balance\n 2 | INSERT | 4 | 40 | Inserted (4,40)\n(3 rows)\n\nI 
think this is easier to use than the WITH syntax, and more flexible,\nsince the new functions can be used anywhere in the RETURNING list,\nincluding in expressions.\n\nThere is one limitation though. Due to the way these functions need\naccess to the originating query, they need to appear directly in\nMERGE's RETURNING list, not in subqueries, plpgsql function bodies, or\nanything else that amounts to a different query. Maybe there's a way\nround that, but it looks tricky. In practice though, it's easy to work\naround, if necessary (e.g., by wrapping the MERGE in a CTE).\n\nRegards,\nDean", "msg_date": "Sun, 22 Jan 2023 10:09:00 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "> diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n> new file mode 100644\n> index e34f583..aa3cca0\n> --- a/src/backend/commands/copy.c\n> +++ b/src/backend/commands/copy.c\n> @@ -274,12 +274,6 @@ DoCopy(ParseState *pstate, const CopyStm\n> \t{\n> \t\tAssert(stmt->query);\n> \n> -\t\t/* MERGE is allowed by parser, but unimplemented. Reject for now */\n> -\t\tif (IsA(stmt->query, MergeStmt))\n> -\t\t\tereport(ERROR,\n> -\t\t\t\t\terrcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> -\t\t\t\t\terrmsg(\"MERGE not supported in COPY\"));\n\nDoes this COPY stuff come from another branch where you're adding\nsupport for MERGE in COPY? I see that you add a test that MERGE without\nRETURNING fails, but you didn't add any tests that it works with\nRETURNING. Anyway, I suspect these small changes shouldn't be here.\n\n\nOverall, the idea of using Postgres-specific functions for extracting\ncontext in the RETURNING clause looks acceptable to me. We can change\nthat to add support to whatever clauses the SQL committee offers, when\nthey get around to offering something. 
(We do have to keep our fingers\ncrossed that they will decide to use the same RETURNING syntax as we do\nin this patch, of course.)\n\nRegarding mas_action_idx, I would have thought that it belongs in\nMergeAction rather than MergeActionState. After all, you determine it\nonce at parse time, and it is a constant from there onwards, right?\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 22 Jan 2023 19:58:39 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sun, 22 Jan 2023 at 19:08, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> > diff --git a/src/backend/commands/copy.c b/src/backend/commands/copy.c\n> > new file mode 100644\n> > index e34f583..aa3cca0\n> > --- a/src/backend/commands/copy.c\n> > +++ b/src/backend/commands/copy.c\n> > @@ -274,12 +274,6 @@ DoCopy(ParseState *pstate, const CopyStm\n> > {\n> > Assert(stmt->query);\n> >\n> > - /* MERGE is allowed by parser, but unimplemented. Reject for now */\n> > - if (IsA(stmt->query, MergeStmt))\n> > - ereport(ERROR,\n> > - errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > - errmsg(\"MERGE not supported in COPY\"));\n>\n> Does this COPY stuff come from another branch where you're adding\n> support for MERGE in COPY? I see that you add a test that MERGE without\n> RETURNING fails, but you didn't add any tests that it works with\n> RETURNING. Anyway, I suspect these small changes shouldn't be here.\n>\n\nA few of the code changes I made were entirely untested (this was\nafter all just a proof-of-concept, intended to gather opinions and get\nconsensus about the overall shape of the feature). They serve as\nuseful reminders of things to test. In fact, since then, I've been\ndoing more testing, and so far everything I have tried has just\nworked, including COPY (MERGE ... RETURNING ...) TO ... 
Thinking about\nit, I can't see any reason why it wouldn't.\n\nStill, there's a lot more testing to do. Just going through the docs\nlooking for references to RETURNING gave me a lot more ideas of things\nto test.\n\n> Overall, the idea of using Postgres-specific functions for extracting\n> context in the RETURNING clause looks acceptable to me.\n\nCool.\n\n> We can change\n> that to add support to whatever clauses the SQL committee offers, when\n> they get around to offering something. (We do have to keep our fingers\n> crossed that they will decide to use the same RETURNING syntax as we do\n> in this patch, of course.)\n>\n\nYes indeed. At least, done this way, the only non-SQL-standard syntax\nis the RETURNING keyword itself, which we've already settled on for\nINSERT/UPDATE/DELETE. Let's just hope they don't decide to use\nRETURNING in an incompatible way in the future.\n\n> Regarding mas_action_idx, I would have thought that it belongs in\n> MergeAction rather than MergeActionState. After all, you determine it\n> once at parse time, and it is a constant from there onwards, right?\n>\n\nOh, yes that makes sense (and removes the need for a couple of the\nexecutor changes). Thanks for looking.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 23 Jan 2023 16:54:00 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Mon, 23 Jan 2023 at 16:54, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Sun, 22 Jan 2023 at 19:08, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > Regarding mas_action_idx, I would have thought that it belongs in\n> > MergeAction rather than MergeActionState. After all, you determine it\n> > once at parse time, and it is a constant from there onwards, right?\n>\n> Oh, yes that makes sense (and removes the need for a couple of the\n> executor changes). 
Thanks for looking.\n>\n\nAttached is a more complete patch, with that change and more doc and\ntest updates.\n\nA couple of latest changes from this patch look like they represent\npre-existing issues with MERGE that should really be extracted from\nthis patch and applied to HEAD+v15. I'll take a closer look at that,\nand start new threads for those.\n\nRegards,\nDean", "msg_date": "Tue, 7 Feb 2023 10:56:52 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Tue, 7 Feb 2023 at 10:56, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Attached is a more complete patch\n>\n\nRebased version attached.\n\nRegards,\nDean", "msg_date": "Fri, 24 Feb 2023 05:46:46 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, 24 Feb 2023 at 05:46, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Rebased version attached.\n>\n\nAnother rebase.\n\nRegards,\nDean", "msg_date": "Sun, 26 Feb 2023 09:50:37 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sun, 26 Feb 2023 at 09:50, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Another rebase.\n>\n\nAnd another rebase.\n\nRegards,\nDean", "msg_date": "Mon, 13 Mar 2023 13:36:40 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Mon, 13 Mar 2023 at 13:36, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> And another rebase.\n>\n\nI ran out of cycles to pursue the MERGE patches in v16, but hopefully\nI can make more progress in v17.\n\nLooking at this one with fresh eyes, it looks mostly in good shape. 
To\nrecap, this adds support for the RETURNING clause in MERGE, together\nwith new support functions pg_merge_action() and\npg_merge_when_clause() that can be used in the RETURNING list of MERGE\nto retrieve the kind of action (INSERT/UPDATE/DELETE), and the index\nof the WHEN clause executed for each row merged. In addition,\nRETURNING support allows MERGE to be used as the source query in COPY\nTO and WITH queries.\n\nOne minor annoyance is that, due to the way that pg_merge_action() and\npg_merge_when_clause() require access to the MergeActionState, they\nonly work if they appear directly in the RETURNING list. They can't,\nfor example, appear in a subquery in the RETURNING list, and I don't\nsee an easy way round that limitation.\n\nAttached is an updated patch with some cosmetic updates, plus updated\nruleutils support.\n\nRegards,\nDean", "msg_date": "Sat, 1 Jul 2023 12:07:42 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sat, Jul 1, 2023 at 4:08 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Mon, 13 Mar 2023 at 13:36, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > And another rebase.\n> >\n>\n> I ran out of cycles to pursue the MERGE patches in v16, but hopefully\n> I can make more progress in v17.\n>\n> Looking at this one with fresh eyes, it looks mostly in good shape.\n\n+1\n\nMost of the review was done with the v6 of the patch, minus the doc\nbuild step. The additional changes in v7 of the patch were eyeballed,\nand tested with `make check`.\n\n> To\n> recap, this adds support for the RETURNING clause in MERGE, together\n> with new support functions pg_merge_action() and\n> pg_merge_when_clause() that can be used in the RETURNING list of MERGE\n\nnit: s/can be used in/can be used only in/\n\n> to retrieve the kind of action (INSERT/UPDATE/DELETE), and the index\n> of the WHEN clause executed for each row merged. 
In addition,\n> RETURNING support allows MERGE to be used as the source query in COPY\n> TO and WITH queries.\n>\n> One minor annoyance is that, due to the way that pg_merge_action() and\n> pg_merge_when_clause() require access to the MergeActionState, they\n> only work if they appear directly in the RETURNING list. They can't,\n> for example, appear in a subquery in the RETURNING list, and I don't\n> see an easy way round that limitation.\n\nI believe that's a serious limitation, and would be a blocker for the feature.\n\n> Attached is an updated patch with some cosmetic updates, plus updated\n> ruleutils support.\n\nWith each iteration of your patch, it is becoming increasingly clear\nthat this will be a documentation-heavy patch :-)\n\nI think the name of function pg_merge_when_clause() can be improved.\nHow about pg_merge_when_clause_ordinal().\n\nIn the doc page of MERGE, you've moved the 'with_query' from the\nbottom of Parameters section to the top of that section. Any reason\nfor this? Since the Parameters section is quite large, for a moment I\nthought the patch was _adding_ the description of 'with_query'.\n\n\n+ /*\n+ * Merge support functions should only be called directly from a MERGE\n+ * command, and need access to the parent ModifyTableState. The parser\n+ * should have checked that such functions only appear in the RETURNING\n+ * list of a MERGE, so this should never fail.\n+ */\n+ if (IsMergeSupportFunction(funcid))\n+ {\n+ if (!state->parent ||\n+ !IsA(state->parent, ModifyTableState) ||\n+ ((ModifyTableState *) state->parent)->operation != CMD_MERGE)\n+ elog(ERROR, \"merge support function called in non-merge context\");\n\nAs the comment says, this is an unexpected condition, and should have\nbeen caught and reported by the parser. So I think it'd be better to\nuse an Assert() here. 
Moreover, there's ERROR being raised by the\npg_merge_action() and pg_merge_when_clause() functions, when they\ndetect the function context is not appropriate.\n\nI found it very innovative to allow these functions to be called only\nin a certain part of certain SQL command. I don't think there's a\nprecedence for this in Postgres; I'd be glad to learn if there are\nother functions that Postgres exposes this way.\n\n- EXPR_KIND_RETURNING, /* RETURNING */\n+ EXPR_KIND_RETURNING, /* RETURNING in INSERT/UPDATE/DELETE */\n+ EXPR_KIND_MERGE_RETURNING, /* RETURNING in MERGE */\n\nHaving to invent a whole new ParseExprKind enum, feels like a bit of\nan overkill. My only objection is that this is exactly like\nEXPR_KIND_RETURNING, hence EXPR_KIND_MERGE_RETURNING needs to be dealt\nwith exactly in as many places. But I don't have any alternative\nproposals.\n\n--- a/src/include/catalog/pg_proc.h\n+++ b/src/include/catalog/pg_proc.h\n+/* Is this a merge support function? (Requires fmgroids.h) */\n+#define IsMergeSupportFunction(funcid) \\\n+ ((funcid) == F_PG_MERGE_ACTION || \\\n+ (funcid) == F_PG_MERGE_WHEN_CLAUSE)\n\nIf it doesn't cause recursion or other complications, I think we\nshould simply include the fmgroids.h in pg_proc.h. I understand that\nnot all .c files/consumers that include pg_proc.h may need fmgroids.h,\nbut #include'ing it will eliminate the need to keep the \"Requires...\"\nnote above, and avoid confusion, too.\n\n--- a/src/test/regress/expected/merge.out\n+++ b/src/test/regress/expected/merge.out\n\n-WHEN MATCHED AND tid > 2 THEN\n+WHEN MATCHED AND tid >= 2 THEN\n\nThis change can be treated as a bug fix :-)\n\n+-- COPY (MERGE ... 
RETURNING) TO ...\n+BEGIN;\n+COPY (\n+ MERGE INTO sq_target t\n+ USING v\n+ ON tid = sid\n+ WHEN MATCHED AND tid > 2 THEN\n\nFor consistency, the check should be tid >= 2, like you've fixed in\ncommand referenced above.\n\n+BEGIN;\n+COPY (\n+ MERGE INTO sq_target t\n+ USING v\n+ ON tid = sid\n+ WHEN MATCHED AND tid > 2 THEN\n+ UPDATE SET balance = t.balance + delta\n+ WHEN NOT MATCHED THEN\n+ INSERT (balance, tid) VALUES (balance + delta, sid)\n+ WHEN MATCHED AND tid < 2 THEN\n+ DELETE\n+ RETURNING pg_merge_action(), t.*\n+) TO stdout;\n+DELETE 1 100\n+ROLLBACK;\n\nI expected the .out file to have captured the stdout. I'm gradually,\nand gladly, re-learning bits of the test infrastructure.\n\nThe DELETE command tag in the output does not feel appropriate for a\nCOPY command that's using MERGE as the source of the data.\n\n+CREATE FUNCTION merge_into_sq_target(sid int, balance int, delta int,\n+ OUT action text, OUT tid int,\nOUT new_balance int)\n+LANGUAGE sql AS\n+$$\n+ MERGE INTO sq_target t\n+ USING (VALUES ($1, $2, $3)) AS v(sid, balance, delta)\n+ ON tid = v.sid\n+ WHEN MATCHED AND tid > 2 THEN\n\nAgain, for consistency, the comparison operator should be >=. There\nare a few more occurrences of this comparison in the rest of the file,\n that need the same treatment.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Wed, 5 Jul 2023 22:12:46 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 2023-Jul-05, Gurjeet Singh wrote:\n\n> +BEGIN;\n> +COPY (\n> + MERGE INTO sq_target t\n> + USING v\n> + ON tid = sid\n> + WHEN MATCHED AND tid > 2 THEN\n> + UPDATE SET balance = t.balance + delta\n> + WHEN NOT MATCHED THEN\n> + INSERT (balance, tid) VALUES (balance + delta, sid)\n> + WHEN MATCHED AND tid < 2 THEN\n> + DELETE\n> + RETURNING pg_merge_action(), t.*\n> +) TO stdout;\n> +DELETE 1 100\n> +ROLLBACK;\n> \n> I expected the .out file to have captured the stdout. 
I'm gradually,\n> and gladly, re-learning bits of the test infrastructure.\n> \n> The DELETE command tag in the output does not feel appropriate for a\n> COPY command that's using MERGE as the source of the data.\n\nYou misread this one :-) The COPY output is there, the tag is not. So\nDELETE is the value from pg_merge_action(), and \"1 100\" correspond to\nthe columns in the the sq_target row that was deleted. The command tag\nis presumably MERGE 1.\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 6 Jul 2023 12:39:30 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, Jul 6, 2023 at 1:13 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> On Sat, Jul 1, 2023 at 4:08 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> >\n> > On Mon, 13 Mar 2023 at 13:36, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n> > >\n> > > And another rebase.\n> > >\n> >\n> > I ran out of cycles to pursue the MERGE patches in v16, but hopefully\n> > I can make more progress in v17.\n> >\n> > Looking at this one with fresh eyes, it looks mostly in good shape.\n>\n> +1\n>\n> Most of the review was done with the v6 of the patch, minus the doc\n> build step. The additional changes in v7 of the patch were eyeballed,\n> and tested with `make check`.\n>\n> > To\n> > recap, this adds support for the RETURNING clause in MERGE, together\n> > with new support functions pg_merge_action() and\n> > pg_merge_when_clause() that can be used in the RETURNING list of MERGE\n>\n> nit: s/can be used in/can be used only in/\n>\n> > to retrieve the kind of action (INSERT/UPDATE/DELETE), and the index\n> > of the WHEN clause executed for each row merged. 
In addition,\n> > RETURNING support allows MERGE to be used as the source query in COPY\n> > TO and WITH queries.\n> >\n> > One minor annoyance is that, due to the way that pg_merge_action() and\n> > pg_merge_when_clause() require access to the MergeActionState, they\n> > only work if they appear directly in the RETURNING list. They can't,\n> > for example, appear in a subquery in the RETURNING list, and I don't\n> > see an easy way round that limitation.\n>\n> I believe that's a serious limitation, and would be a blocker for the feature.\n>\n> > Attached is an updated patch with some cosmetic updates, plus updated\n> > ruleutils support.\n>\n> With each iteration of your patch, it is becoming increasingly clear\n> that this will be a documentation-heavy patch :-)\n>\n> I think the name of function pg_merge_when_clause() can be improved.\n> How about pg_merge_when_clause_ordinal().\n>\n\n> I think the name of function pg_merge_when_clause() can be improved.\n> How about pg_merge_when_clause_ordinal().\n\nanother idea: pg_merge_action_ordinal()\n\n\n", "msg_date": "Thu, 6 Jul 2023 19:07:38 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, Jul 6, 2023 at 3:39 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jul-05, Gurjeet Singh wrote:\n\n> > I expected the .out file to have captured the stdout. I'm gradually,\n> > and gladly, re-learning bits of the test infrastructure.\n> >\n> > The DELETE command tag in the output does not feel appropriate for a\n> > COPY command that's using MERGE as the source of the data.\n>\n> You misread this one :-) The COPY output is there, the tag is not. So\n> DELETE is the value from pg_merge_action(), and \"1 100\" correspond to\n> the columns in the the sq_target row that was deleted. The command tag\n> is presumably MERGE 1.\n\n:-) That makes more sense. It matches my old mental model. 
Thanks for\nclarifying!\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 6 Jul 2023 10:14:31 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, Jul 6, 2023 at 4:07 AM jian he <jian.universality@gmail.com> wrote:\n>\n> On Thu, Jul 6, 2023 at 1:13 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> > I think the name of function pg_merge_when_clause() can be improved.\n> > How about pg_merge_when_clause_ordinal().\n>\n> another idea: pg_merge_action_ordinal()\n\nSince there can be many occurrences of the same action\n(INSERT/UPDATE/DELETE) in a MERGE command associated with different\nconditions, I don't think action_ordinal would make sense for this\nfunction name.\n\ne.g.\nWHEN MATCHED and src.col1 = val1 THEN UPDATE col2 = someval1\nWHEN MATCHED and src.col1 = val2 THEN UPDATE col2 = someval2\n...\n\nWhen looking at the implementation code, as well, we see that the code\nin this function tracks and reports the lexical position of the WHEN\nclause, irrespective of the action associated with that WHEN clause.\n\n foreach(l, stmt->mergeWhenClauses)\n {\n...\n action->index = foreach_current_index(l) + 1;\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 6 Jul 2023 10:37:21 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, 6 Jul 2023 at 06:13, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> Most of the review was done with the v6 of the patch, minus the doc\n> build step. The additional changes in v7 of the patch were eyeballed,\n> and tested with `make check`.\n>\n\nThanks for the review!\n\n> > One minor annoyance is that, due to the way that pg_merge_action() and\n> > pg_merge_when_clause() require access to the MergeActionState, they\n> > only work if they appear directly in the RETURNING list. 
They can't,\n> > for example, appear in a subquery in the RETURNING list, and I don't\n> > see an easy way round that limitation.\n>\n> I believe that's a serious limitation, and would be a blocker for the feature.\n>\n\nYes, that was bugging me for quite a while.\n\nAttached is a new version that removes that restriction, allowing the\nmerge support functions to appear anywhere. This requires a bit of\nplanner support, to deal with merge support functions in subqueries,\nin a similar way to how aggregates and GROUPING() expressions are\nhandled. But a number of the changes from the previous patch are no\nlonger necessary, so overall, this version of the patch is slightly\nsmaller.\n\n> I think the name of function pg_merge_when_clause() can be improved.\n> How about pg_merge_when_clause_ordinal().\n>\n\nThat's a bit of a mouthful, but I don't have a better idea right now.\nI've left the names alone for now, in case something better occurs to\nanyone.\n\n> In the doc page of MERGE, you've moved the 'with_query' from the\n> bottom of Parameters section to the top of that section. Any reason\n> for this? Since the Parameters section is quite large, for a moment I\n> thought the patch was _adding_ the description of 'with_query'.\n>\n\nAh yes, I was just making the order consistent with the\nINSERT/UPDATE/DELETE pages. That could probably be committed\nseparately.\n\n> + /*\n> + * Merge support functions should only be called directly from a MERGE\n> + * command, and need access to the parent ModifyTableState. 
The parser\n> + * should have checked that such functions only appear in the RETURNING\n> + * list of a MERGE, so this should never fail.\n> + */\n> + if (IsMergeSupportFunction(funcid))\n> + {\n> + if (!state->parent ||\n> + !IsA(state->parent, ModifyTableState) ||\n> + ((ModifyTableState *) state->parent)->operation != CMD_MERGE)\n> + elog(ERROR, \"merge support function called in non-merge context\");\n>\n> As the comment says, this is an unexpected condition, and should have\n> been caught and reported by the parser. So I think it'd be better to\n> use an Assert() here. Moreover, there's ERROR being raised by the\n> pg_merge_action() and pg_merge_when_clause() functions, when they\n> detect the function context is not appropriate.\n>\n> I found it very innovative to allow these functions to be called only\n> in a certain part of certain SQL command. I don't think there's a\n> precedence for this in Postgres; I'd be glad to learn if there are\n> other functions that Postgres exposes this way.\n>\n> - EXPR_KIND_RETURNING, /* RETURNING */\n> + EXPR_KIND_RETURNING, /* RETURNING in INSERT/UPDATE/DELETE */\n> + EXPR_KIND_MERGE_RETURNING, /* RETURNING in MERGE */\n>\n> Having to invent a whole new ParseExprKind enum, feels like a bit of\n> an overkill. My only objection is that this is exactly like\n> EXPR_KIND_RETURNING, hence EXPR_KIND_MERGE_RETURNING needs to be dealt\n> with exactly in as many places. But I don't have any alternative\n> proposals.\n>\n\nThat's all gone now from the new patch, since there is no longer any\nrestriction on where these functions can appear.\n\n> --- a/src/include/catalog/pg_proc.h\n> +++ b/src/include/catalog/pg_proc.h\n> +/* Is this a merge support function? (Requires fmgroids.h) */\n> +#define IsMergeSupportFunction(funcid) \\\n> + ((funcid) == F_PG_MERGE_ACTION || \\\n> + (funcid) == F_PG_MERGE_WHEN_CLAUSE)\n>\n> If it doesn't cause recursion or other complications, I think we\n> should simply include the fmgroids.h in pg_proc.h. 
I understand that\n> not all .c files/consumers that include pg_proc.h may need fmgroids.h,\n> but #include'ing it will eliminate the need to keep the \"Requires...\"\n> note above, and avoid confusion, too.\n>\n\nThere's now only one place that uses this macro, whereas there are a\nlot of places that include pg_proc.h and not fmgroids.h, so I don't\nthink forcing them all to include fmgroids.h is a good idea. (BTW,\nthis approach and comment is borrowed from IsTrueArrayType() in\npg_type.h)\n\n> --- a/src/test/regress/expected/merge.out\n> +++ b/src/test/regress/expected/merge.out\n>\n> -WHEN MATCHED AND tid > 2 THEN\n> +WHEN MATCHED AND tid >= 2 THEN\n>\n> This change can be treated as a bug fix :-)\n>\n> +-- COPY (MERGE ... RETURNING) TO ...\n> +BEGIN;\n> +COPY (\n> + MERGE INTO sq_target t\n> + USING v\n> + ON tid = sid\n> + WHEN MATCHED AND tid > 2 THEN\n>\n> For consistency, the check should be tid >= 2, like you've fixed in\n> command referenced above.\n>\n> +CREATE FUNCTION merge_into_sq_target(sid int, balance int, delta int,\n> + OUT action text, OUT tid int,\n> OUT new_balance int)\n> +LANGUAGE sql AS\n> +$$\n> + MERGE INTO sq_target t\n> + USING (VALUES ($1, $2, $3)) AS v(sid, balance, delta)\n> + ON tid = v.sid\n> + WHEN MATCHED AND tid > 2 THEN\n>\n> Again, for consistency, the comparison operator should be >=. There\n> are a few more occurrences of this comparison in the rest of the file,\n> that need the same treatment.\n>\n\nI changed the new tests to use \">= 2\" (and the COPY test now returns 3\nrows, with an action of each type, which is easier to read), but I\ndon't think it's really necessary to change all the existing tests\nfrom \"> 2\". 
There's nothing wrong with the \"= 2\" case having no\naction, as long as the tests give decent coverage.\n\nThanks again for all the feedback.\n\nRegards,\nDean", "msg_date": "Fri, 7 Jul 2023 23:48:00 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, Jul 7, 2023 at 3:48 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 6 Jul 2023 at 06:13, Gurjeet Singh <gurjeet@singh.im> wrote:\n> >\n> > > One minor annoyance is that, due to the way that pg_merge_action() and\n> > > pg_merge_when_clause() require access to the MergeActionState, they\n> > > only work if they appear directly in the RETURNING list. They can't,\n> > > for example, appear in a subquery in the RETURNING list, and I don't\n> > > see an easy way round that limitation.\n> >\n> > I believe that's a serious limitation, and would be a blocker for the feature.\n>\n> Yes, that was bugging me for quite a while.\n>\n> Attached is a new version that removes that restriction, allowing the\n> merge support functions to appear anywhere. This requires a bit of\n> planner support, to deal with merge support functions in subqueries,\n> in a similar way to how aggregates and GROUPING() expressions are\n> handled. But a number of the changes from the previous patch are no\n> longer necessary, so overall, this version of the patch is slightly\n> smaller.\n\n+1\n\n> > I think the name of function pg_merge_when_clause() can be improved.\n> > How about pg_merge_when_clause_ordinal().\n> >\n>\n> That's a bit of a mouthful, but I don't have a better idea right now.\n> I've left the names alone for now, in case something better occurs to\n> anyone.\n\n+1. How do we make sure we don't forget that it needs to be named\nbetter. Perhaps a TODO item within the patch will help.\n\n> > In the doc page of MERGE, you've moved the 'with_query' from the\n> > bottom of Parameters section to the top of that section. 
Any reason\n> > for this? Since the Parameters section is quite large, for a moment I\n> > thought the patch was _adding_ the description of 'with_query'.\n> >\n>\n> Ah yes, I was just making the order consistent with the\n> INSERT/UPDATE/DELETE pages. That could probably be committed\n> separately.\n\nI don't think that's necessary, if it improves consistency with related docs.\n\n> > + /*\n> > + * Merge support functions should only be called directly from a MERGE\n> > + * command, and need access to the parent ModifyTableState. The parser\n> > + * should have checked that such functions only appear in the RETURNING\n> > + * list of a MERGE, so this should never fail.\n> > + */\n> > + if (IsMergeSupportFunction(funcid))\n> > + {\n> > + if (!state->parent ||\n> > + !IsA(state->parent, ModifyTableState) ||\n> > + ((ModifyTableState *) state->parent)->operation != CMD_MERGE)\n> > + elog(ERROR, \"merge support function called in non-merge context\");\n> >\n> > As the comment says, this is an unexpected condition, and should have\n> > been caught and reported by the parser. So I think it'd be better to\n> > use an Assert() here. Moreover, there's ERROR being raised by the\n> > pg_merge_action() and pg_merge_when_clause() functions, when they\n> > detect the function context is not appropriate.\n> >\n> > I found it very innovative to allow these functions to be called only\n> > in a certain part of certain SQL command. I don't think there's a\n> > precedence for this in Postgres; I'd be glad to learn if there are\n> > other functions that Postgres exposes this way.\n> >\n> > - EXPR_KIND_RETURNING, /* RETURNING */\n> > + EXPR_KIND_RETURNING, /* RETURNING in INSERT/UPDATE/DELETE */\n> > + EXPR_KIND_MERGE_RETURNING, /* RETURNING in MERGE */\n> >\n> > Having to invent a whole new ParseExprKind enum, feels like a bit of\n> > an overkill. 
My only objection is that this is exactly like\n> > EXPR_KIND_RETURNING, hence EXPR_KIND_MERGE_RETURNING needs to be dealt\n> > with exactly in as many places. But I don't have any alternative\n> > proposals.\n> >\n>\n> That's all gone now from the new patch, since there is no longer any\n> restriction on where these functions can appear.\n\nI believe this elog can be safely turned into an Assert.\n\n+ switch (mergeActionCmdType)\n {\n- CmdType commandType = relaction->mas_action->commandType;\n....\n+ case CMD_INSERT:\n....\n+ default:\n+ elog(ERROR, \"unrecognized commandType: %d\", (int)\nmergeActionCmdType);\n\nThe same treatment can be applied to the elog(ERROR) in pg_merge_action().\n\n> > +CREATE FUNCTION merge_into_sq_target(sid int, balance int, delta int,\n> > + OUT action text, OUT tid int,\n> > OUT new_balance int)\n> > +LANGUAGE sql AS\n> > +$$\n> > + MERGE INTO sq_target t\n> > + USING (VALUES ($1, $2, $3)) AS v(sid, balance, delta)\n> > + ON tid = v.sid\n> > + WHEN MATCHED AND tid > 2 THEN\n> >\n> > Again, for consistency, the comparison operator should be >=. There\n> > are a few more occurrences of this comparison in the rest of the file,\n> > that need the same treatment.\n> >\n>\n> I changed the new tests to use \">= 2\" (and the COPY test now returns 3\n> rows, with an action of each type, which is easier to read), but I\n> don't think it's really necessary to change all the existing tests\n> from \"> 2\". There's nothing wrong with the \"= 2\" case having no\n> action, as long as the tests give decent coverage.\n\nI was just trying to drive these tests towards a consistent pattern.\nAs a reader, if I see these differences, the first and the\nconservative thought I have is that these differences must be there\nfor a reason, and then I have to work to find out what those reasons\nmight be. 
And that's a lot of wasted effort, just in case someone\nintends to change something in these tests.\n\nI performed this round of review by comparing the diff between the v7\nand v8 patches (after applying to commit 4f4d73466d)\n\n-ExecProcessReturning(ResultRelInfo *resultRelInfo,\n+ExecProcessReturning(ModifyTableContext *context,\n+ ResultRelInfo *resultRelInfo,\n...\n+ TupleTableSlot *rslot;\n...\n+ if (context->relaction)\n+ {\n...\n+ PG_TRY();\n+ {\n+ rslot = ExecProject(projectReturning);\n+ }\n+ PG_FINALLY();\n+ {\n+ mergeActionCmdType = saved_mergeActionCmdType;\n+ mergeActionIdx = saved_mergeActionIdx;\n+ }\n+ PG_END_TRY();\n+ }\n+ else\n+ rslot = ExecProject(projectReturning);\n+\n+ return rslot;\n\nIn the above hunk, if there's an exception/ERROR, I believe we should\nPG_RE_THROW(). If there's a reason to continue, we should at least set\nrslot = NULL, otherwise we may be returning an uninitialized value to\nthe caller.\n\n { oid => '9499', descr => 'command type of current MERGE action',\n- proname => 'pg_merge_action', provolatile => 'v',\n+ proname => 'pg_merge_action', provolatile => 'v', proparallel => 'r',\n....\n { oid => '9500', descr => 'index of current MERGE WHEN clause',\n- proname => 'pg_merge_when_clause', provolatile => 'v',\n+ proname => 'pg_merge_when_clause', provolatile => 'v', proparallel => 'r',\n....\n\nI see that you've now set proparallel of these functions to 'r'. I'd\njust like to understand how you got to that conclusion.\n\n--- error when using MERGE support functions outside MERGE\n-SELECT pg_merge_action() FROM sq_target;\n\nI believe it would be worthwhile to keep a record of the expected\noutputs of these invocations in the tests, just in case they change\nover time.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Tue, 11 Jul 2023 13:43:13 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Tue, Jul 11, 2023 at 1:43 PM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> In the above hunk, if there's an exception/ERROR, I believe we should\n> PG_RE_THROW(). If there's a reason to continue, we should at least set\n> rslot = NULL, otherwise we may be returning an uninitialized value to\n> the caller.\n\nExcuse the brain-fart on my part. There's no need to PG_RE_THROW(),\nsince there's no PG_CATCH(). Re-learning the code's infrastructure\nslowly :-)\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Tue, 11 Jul 2023 13:58:20 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sun, 2023-01-22 at 19:58 +0100, Alvaro Herrera wrote:\n> \n> (We do have to keep our fingers\n> crossed that they will decide to use the same RETURNING syntax as we\n> do\n> in this patch, of course.)\n\nDo we have a reason to think that they will accept something similar?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 11 Jul 2023 17:43:14 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 7/12/23 02:43, Jeff Davis wrote:\n> On Sun, 2023-01-22 at 19:58 +0100, Alvaro Herrera wrote:\n>>\n>> (We do have to keep our fingers\n>> crossed that they will decide to use the same RETURNING syntax as we\n>> do\n>> in this patch, of course.)\n> \n> Do we have a reason to think that they will accept something similar?\n\nWe have reason to think that they won't care at all.\n\nThere is no RETURNING clause in Standard SQL, and the way they would do \nthis is:\n\n SELECT ...\n FROM OLD TABLE (\n MERGE ...\n ) AS m\n\nThe rules for that for MERGE are well defined.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Wed, 12 Jul 2023 03:47:24 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Wed, 2023-07-12 at 03:47 +0200, Vik Fearing wrote:\n\n> There is no RETURNING clause in Standard SQL, and the way they would\n> do\n> this is:\n>\n>     SELECT ...\n>     FROM OLD TABLE (\n>         MERGE ...\n>     ) AS m\n>\n> The rules for that for MERGE are well defined.\n\nI only see OLD TABLE referenced as part of a trigger definition. Where\nis it defined for MERGE?\n\nIn any case, as long as the SQL standard doesn't conflict, then we're\nfine. And it looks unlikely to cause a conflict right now that wouldn't\nalso be a conflict with our existing RETURNING clause elsewhere, so I'm\nnot seeing a problem here.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Wed, 12 Jul 2023 16:48:29 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 7/13/23 01:48, Jeff Davis wrote:\n> On Wed, 2023-07-12 at 03:47 +0200, Vik Fearing wrote:\n> \n>> There is no RETURNING clause in Standard SQL, and the way they would\n>> do\n>> this is:\n>>\n>>     SELECT ...\n>>     FROM OLD TABLE (\n>>         MERGE ...\n>>     ) AS m\n>>\n>> The rules for that for MERGE are well defined.\n> \n> I only see OLD TABLE referenced as part of a trigger definition. Where\n> is it defined for MERGE?\n\nLook up <data change delta table> for that syntax. For how MERGE \ngenerates those, see 9075-2:2023 Section 14.12 <merge statement> General \nRules 6.b and 6.c.\n\n> In any case, as long as the SQL standard doesn't conflict, then we're\n> fine. And it looks unlikely to cause a conflict right now that wouldn't\n> also be a conflict with our existing RETURNING clause elsewhere, so I'm\n> not seeing a problem here.\n\nI do not see a problem either, which was what I was trying to express \n(perhaps poorly). At least not with the syntax. 
I have not yet tested \nthat the returned rows match the standard.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 02:03:26 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sun, 2023-01-08 at 12:28 +0000, Dean Rasheed wrote:\n> I considered allowing a separate RETURNING list at the end of each\n> action, but rapidly dismissed that idea.\n\nOne potential benefit of that approach is that it would be more natural\nto specify output specific to the action, e.g. \n\n WHEN MATCHED THEN UPDATE ... RETURNING 'UPDATE', ...\n\nwhich would be an alternative to the special function pg_merge_action()\nor \"WITH WHEN\".\n\nI agree that it can be awkward to specify multiple RETURNING clauses\nand get the columns to match up, but it's hard for me to say whether\nit's better or worse without knowing more about the use cases.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 08:30:03 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Tue, 11 Jul 2023 at 21:43, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> > > I think the name of function pg_merge_when_clause() can be improved.\n> > > How about pg_merge_when_clause_ordinal().\n> >\n> > That's a bit of a mouthful, but I don't have a better idea right now.\n> > I've left the names alone for now, in case something better occurs to\n> > anyone.\n>\n> +1. How do we make sure we don't forget that it needs to be named\n> better. Perhaps a TODO item within the patch will help.\n>\n\nThinking about that some more, I think the word \"number\" is more\nfamiliar to most people than \"ordinal\". There's the row_number()\nfunction, for example. So perhaps pg_merge_when_clause_number() would\nbe a better name. 
It's still quite long, but it's the best I can think\nof.\n\n> I believe this elog can be safely turned into an Assert.\n>\n> + switch (mergeActionCmdType)\n> {\n> - CmdType commandType = relaction->mas_action->commandType;\n> ....\n> + case CMD_INSERT:\n> ....\n> + default:\n> + elog(ERROR, \"unrecognized commandType: %d\", (int)\n> mergeActionCmdType);\n>\n> The same treatment can be applied to the elog(ERROR) in pg_merge_action().\n>\n\nHmm, that's a very common code pattern used in dozens, if not hundreds\nof places throughout the backend code, so I don't think this should be\ndifferent.\n\n> > > +CREATE FUNCTION merge_into_sq_target(sid int, balance int, delta int,\n> > > + OUT action text, OUT tid int,\n> > > OUT new_balance int)\n> > > +LANGUAGE sql AS\n> > > +$$\n> > > + MERGE INTO sq_target t\n> > > + USING (VALUES ($1, $2, $3)) AS v(sid, balance, delta)\n> > > + ON tid = v.sid\n> > > + WHEN MATCHED AND tid > 2 THEN\n> > >\n> > > Again, for consistency, the comparison operator should be >=. There\n> > > are a few more occurrences of this comparison in the rest of the file,\n> > > that need the same treatment.\n> > >\n> >\n> > I changed the new tests to use \">= 2\" (and the COPY test now returns 3\n> > rows, with an action of each type, which is easier to read), but I\n> > don't think it's really necessary to change all the existing tests\n> > from \"> 2\". There's nothing wrong with the \"= 2\" case having no\n> > action, as long as the tests give decent coverage.\n>\n> I was just trying to drive these tests towards a consistent pattern.\n> As a reader, if I see these differences, the first and the\n> conservative thought I have is that these differences must be there\n> for a reason, and then I have to work to find out what those reasons\n> might be. And that's a lot of wasted effort, just in case someone\n> intends to change something in these tests.\n>\n\nOK, I see what you're saying. 
I think it should be a follow-on patch\nthough, because I don't like the idea of this patch to be making\nchanges to tests not related to the feature being added.\n\n> { oid => '9499', descr => 'command type of current MERGE action',\n> - proname => 'pg_merge_action', provolatile => 'v',\n> + proname => 'pg_merge_action', provolatile => 'v', proparallel => 'r',\n> ....\n> { oid => '9500', descr => 'index of current MERGE WHEN clause',\n> - proname => 'pg_merge_when_clause', provolatile => 'v',\n> + proname => 'pg_merge_when_clause', provolatile => 'v', proparallel => 'r',\n> ....\n>\n> I see that you've now set proparallel of these functions to 'r'. I'd\n> just like to understand how you got to that conclusion.\n>\n\nNow that these functions can appear in subqueries in the RETURNING\nlist, there exists the theoretical possibility that the subquery might\nuse a parallel plan (actually that can't happen today, for any query\nthat modifies data, but maybe someday it may become a possibility),\nand it's possible to use these functions in a SELECT query inside a\nPL/pgSQL function called from the RETURNING list, which might consider\na parallel plan. Since these functions rely on access to executor\nstate that isn't copied to parallel workers, they must be run on the\nleader, hence I think PARALLEL RESTRICTED is the right level to use. A\nsimilar example is pg_trigger_depth().\n\n> --- error when using MERGE support functions outside MERGE\n> -SELECT pg_merge_action() FROM sq_target;\n>\n> I believe it would be worthwhile to keep a record of the expected\n> outputs of these invocations in the tests, just in case they change\n> over time.\n>\n\nYeah, that makes sense. I'll post an update soon.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 13 Jul 2023 16:38:10 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Mon, 2023-01-09 at 12:29 +0000, Dean Rasheed wrote:\n> > Would it be feasible to allow specifying old.column or new.column?\n> > These would always be NULL for INSERT and DELETE respectively but\n> > more useful with UPDATE. Actually I've been meaning to ask this\n> > question about UPDATE … RETURNING.\n> > \n> \n> I too have wished for the ability to do that with UPDATE ...\n> RETURNING, though I'm not sure how feasible it is.\n> \n> I think it's something best considered separately though. I haven't\n> given any thought as to how to make it work, so there might be\n> technical difficulties. But if it could be made to work for UPDATE,\n> it\n> shouldn't be much more effort to make it work for MERGE.\n\nMERGE can end up combining old and new values in a way that doesn't\nhappen with INSERT/UPDATE/DELETE. For instance, a \"MERGE ... RETURNING\nid\" would return a mix of NEW.id (for INSERT/UPDATE actions) and OLD.id\n(for DELETE actions).\n\nThe pg_merge_action() can differentiate the old and new values, but\nit's a bit more awkward.\n\nI'm fine considering that as a separate patch, but it does seem worth\ndiscussing briefly here.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 09:01:04 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Mon, 2023-01-09 at 12:29 +0000, Dean Rasheed wrote:\n> > Would it be useful to have just the action? Perhaps \"WITH ACTION\"?\n> > My idea is that this would return an enum of INSERT, UPDATE, DELETE\n> > (so is \"action\" the right word?). It seems to me in many situations\n> > I would be more likely to care about which of these 3 happened\n> > rather than the exact clause that applied. This isn't necessarily\n> > meant to be instead of your suggestion because I can imagine\n> > wanting to know the exact clause, just an alternative that might\n> > suffice in many situations. 
Using it would also avoid problems\n> > arising from editing the query in a way which changes the numbers\n> > of the clauses.\n> > \n> \n> Hmm, perhaps that's something that can be added as well. Both use\n> cases seem useful.\n\nCan you expand a bit on the use cases for identifying individual WHEN\nclauses? I see that it offers a new capability beyond just the action\ntype, but I'm having trouble thinking of real use cases.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 09:36:58 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, Jul 13, 2023 at 8:38 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Tue, 11 Jul 2023 at 21:43, Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> > { oid => '9499', descr => 'command type of current MERGE action',\n> > - proname => 'pg_merge_action', provolatile => 'v',\n> > + proname => 'pg_merge_action', provolatile => 'v', proparallel => 'r',\n> > ....\n> > { oid => '9500', descr => 'index of current MERGE WHEN clause',\n> > - proname => 'pg_merge_when_clause', provolatile => 'v',\n> > + proname => 'pg_merge_when_clause', provolatile => 'v', proparallel => 'r',\n> > ....\n> >\n> > I see that you've now set proparallel of these functions to 'r'. I'd\n> > just like to understand how you got to that conclusion.\n> >\n>\n> Now that these functions can appear in subqueries in the RETURNING\n> list, there exists the theoretical possibility that the subquery might\n> use a parallel plan (actually that can't happen today, for any query\n> that modifies data, but maybe someday it may become a possibility),\n> and it's possible to use these functions in a SELECT query inside a\n> PL/pgSQL function called from the RETURNING list, which might consider\n> a parallel plan. 
Since these functions rely on access to executor\n> state that isn't copied to parallel workers, they must be run on the\n> leader, hence I think PARALLEL RESTRICTED is the right level to use. A\n> similar example is pg_trigger_depth().\n\nThanks for the explanation. That helps.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 13 Jul 2023 09:38:03 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 7/13/23 17:38, Dean Rasheed wrote:\n> On Tue, 11 Jul 2023 at 21:43, Gurjeet Singh <gurjeet@singh.im> wrote:\n>>\n>>>> I think the name of function pg_merge_when_clause() can be improved.\n>>>> How about pg_merge_when_clause_ordinal().\n>>>\n>>> That's a bit of a mouthful, but I don't have a better idea right now.\n>>> I've left the names alone for now, in case something better occurs to\n>>> anyone.\n>>\n>> +1. How do we make sure we don't forget that it needs to be named\n>> better. Perhaps a TODO item within the patch will help.\n>>\n> \n> Thinking about that some more, I think the word \"number\" is more\n> familiar to most people than \"ordinal\". There's the row_number()\n> function, for example. \n\n\nThere is also the WITH ORDINALITY and FOR ORDINALITY examples.\n\n\nSo perhaps pg_merge_when_clause_number() would\n> be a better name. It's still quite long, but it's the best I can think\n> of.\n\n\nHow about pg_merge_match_number() or pg_merge_ordinality()?\n-- \nVik Fearing\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 18:43:37 +0200", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, 13 Jul 2023 at 17:01, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> MERGE can end up combining old and new values in a way that doesn't\n> happen with INSERT/UPDATE/DELETE. For instance, a \"MERGE ... 
RETURNING\n> id\" would return a mix of NEW.id (for INSERT/UPDATE actions) and OLD.id\n> (for DELETE actions).\n>\n\nRight, but allowing OLD/NEW.colname in the RETURNING list would remove\nthat complication, and it shouldn't change how a bare colname\nreference behaves.\n\n> The pg_merge_action() can differentiate the old and new values, but\n> it's a bit more awkward.\n>\n\nFor some use cases, I can imagine allowing OLD/NEW.colname would mean\nyou wouldn't need pg_merge_action() (if the column was NOT NULL), so I\nthink the features should work well together.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 13 Jul 2023 18:01:44 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, 13 Jul 2023 at 17:43, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> There is also the WITH ORDINALITY and FOR ORDINALITY examples.\n>\n\nTrue. I just think \"number\" is a friendlier, more familiar word than \"ordinal\".\n\n> So perhaps pg_merge_when_clause_number() would\n> > be a better name. It's still quite long, but it's the best I can think\n> > of.\n>\n> How about pg_merge_match_number() or pg_merge_ordinality()?\n\nI think \"match_number\" is problematic, because it might be a \"matched\"\nor a \"not matched\" action. \"when_clause\" is the term used on the MERGE\ndoc page.\n\nRegards,\nDean\n\n\n", "msg_date": "Thu, 13 Jul 2023 18:30:59 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Thu, 2023-07-13 at 18:01 +0100, Dean Rasheed wrote:\n> For some use cases, I can imagine allowing OLD/NEW.colname would mean\n> you wouldn't need pg_merge_action() (if the column was NOT NULL), so\n> I\n> think the features should work well together.\n\nFor use cases where a user could do it either way, which would you\nexpect to be the \"typical\" way (assuming we supported the new/old)?\n\n MERGE ... RETURNING pg_merge_action(), id, val;\n\nor\n\n MERGE ... RETURNING id, OLD.val, NEW.val;\n\n?\n\nI am still bothered that pg_merge_action() is so context-sensitive.\n\"SELECT pg_merge_action()\" by itself doesn't make any sense, but it's\nallowed in the v8 patch. We could make that a runtime error, which\nwould be better, but it feels like it's structurally wrong. This is not\nan objection, but it's just making me think harder about alternatives.\n\nMaybe instead of a function it could be a special table reference like:\n\n MERGE ... RETURNING MERGE.action, MERGE.action_number, id, val?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 12:14:52 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, 13 Jul 2023 at 20:14, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Thu, 2023-07-13 at 18:01 +0100, Dean Rasheed wrote:\n> > For some use cases, I can imagine allowing OLD/NEW.colname would mean\n> > you wouldn't need pg_merge_action() (if the column was NOT NULL), so\n> > I\n> > think the features should work well together.\n>\n> For use cases where a user could do it either way, which would you\n> expect to be the \"typical\" way (assuming we supported the new/old)?\n>\n> MERGE ... RETURNING pg_merge_action(), id, val;\n>\n> or\n>\n> MERGE ... 
RETURNING id, OLD.val, NEW.val;\n>\n> ?\n>\n\nI think it might depend on whether OLD.val and NEW.val were actually\nrequired, but I think I would still probably use pg_merge_action() to\nget the action, since it doesn't rely on specific table columns being\nNOT NULL. It's a little like writing a trigger function that handles\nmultiple command types. You could use OLD and NEW to deduce whether it\nwas an INSERT, UPDATE or DELETE, or you could use TG_OP. I tend to use\nTG_OP, but maybe there are situations where using OLD and NEW is more\nnatural.\n\nI found a 10-year-old thread discussing adding support for OLD/NEW to\nRETURNING [1], but it doesn't look like anything close to a\ncommittable solution was developed, or even a design that might lead\nto one. That's a shame, because there seemed to be a lot of demand for\nthe feature, but it's not clear how much effort it would be to\nimplement.\n\n> I am still bothered that pg_merge_action() is so context-sensitive.\n> \"SELECT pg_merge_action()\" by itself doesn't make any sense, but it's\n> allowed in the v8 patch. We could make that a runtime error, which\n> would be better, but it feels like it's structurally wrong. This is not\n> an objection, but it's just making me think harder about alternatives.\n>\n> Maybe instead of a function it could be a special table reference like:\n>\n> MERGE ... RETURNING MERGE.action, MERGE.action_number, id, val?\n>\n\nWell, that's a little more concise, but I'm not sure that it really\nbuys us that much, to be worth the extra complication. Presumably\nsomething in the planner would turn that into something the executor\ncould handle, which might just end up being the existing functions\nanyway.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/51822C0F.5030807%40gmail.com\n\n\n", "msg_date": "Fri, 14 Jul 2023 09:55:28 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Fri, 2023-07-14 at 09:55 +0100, Dean Rasheed wrote:\n> I found a 10-year-old thread discussing adding support for OLD/NEW to\n> RETURNING [1], but it doesn't look like anything close to a\n> committable solution was developed, or even a design that might lead\n> to one. That's a shame, because there seemed to be a lot of demand\n> for\n> the feature, but it's not clear how much effort it would be to\n> implement.\n\nIt looks like progress was made in the direction of using a table alias\nwith executor support to bring the right attributes along.\n\nThere was some concern about how exactly the table alias should work\nsuch that it doesn't look too much like a join. Not sure how much of a\nproblem that is.\n\n> > Maybe instead of a function it could be a special table reference\n> > like:\n> > \n> >   MERGE ... RETURNING MERGE.action, MERGE.action_number, id, val?\n> > \n> \n> Well, that's a little more concise, but I'm not sure that it really\n> buys us that much, to be worth the extra complication. Presumably\n> something in the planner would turn that into something the executor\n> could handle, which might just end up being the existing functions\n> anyway.\n\nThe benefits are:\n\n1. It is naturally constrained to the right context. It doesn't require\nglobal variables and the PG_TRY/PG_FINALLY, and can't be called in the\nwrong contexts (like SELECT).\n\n2. More likely to be consistent with eventual support for NEW/OLD\n(actually BEFORE/AFTER for reasons the prior thread discussed).\n\nI'm not sure how much extra complication it would cause, though.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 17 Jul 2023 12:43:10 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Fri, Jul 14, 2023 at 1:55 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Thu, 13 Jul 2023 at 20:14, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > On Thu, 2023-07-13 at 18:01 +0100, Dean Rasheed wrote:\n> > > For some use cases, I can imagine allowing OLD/NEW.colname would mean\n> > > you wouldn't need pg_merge_action() (if the column was NOT NULL), so\n> > > I\n> > > think the features should work well together.\n> >\n> > For use cases where a user could do it either way, which would you\n> > expect to be the \"typical\" way (assuming we supported the new/old)?\n> >\n> > MERGE ... RETURNING pg_merge_action(), id, val;\n> >\n> > or\n> >\n> > MERGE ... RETURNING id, OLD.val, NEW.val;\n> >\n> > ?\n> >\n>\n> I think it might depend on whether OLD.val and NEW.val were actually\n> required, but I think I would still probably use pg_merge_action() to\n> get the action, since it doesn't rely on specific table columns being\n> NOT NULL.\n\n+1. It would be better to expose the action explicitly, rather than\nasking the user to deduce it based on the old and new values of a\ncolumn. The server providing that value is better than letting users\nrely on error-prone methods.\n\n> I found a 10-year-old thread discussing adding support for OLD/NEW to\n> RETURNING [1],\n\nThanks for digging up that thread. An important concern brought up in\nthat thread was how the use of names OLD and NEW will affect plpgsql\n(an possibly other PLs) trigger functions, which rely on specific\nmeaning for those names. The names BEFORE and AFTER, proposed there\nare not as intuitive as OLD/NEW for the purpose of identifying old and\nnew versions of the row, but I don't have a better proposal. Perhaps\nPREVIOUS and CURRENT?\n\n> but it doesn't look like anything close to a\n> committable solution was developed, or even a design that might lead\n> to one. 
That's a shame, because there seemed to be a lot of demand for\n> the feature,\n\n+1\n\n> > I am still bothered that pg_merge_action() is so context-sensitive.\n> > \"SELECT pg_merge_action()\" by itself doesn't make any sense, but it's\n> > allowed in the v8 patch. We could make that a runtime error, which\n> > would be better, but it feels like it's structurally wrong. This is not\n> > an objection, but it's just making me think harder about alternatives.\n> >\n> > Maybe instead of a function it could be a special table reference like:\n> >\n> > MERGE ... RETURNING MERGE.action, MERGE.action_number, id, val?\n\nI believe Jeff meant s/action_number/when_number/. Not that we've\nsettled on a name for this virtual column.\n\n> Well, that's a little more concise, but I'm not sure that it really\n> buys us that much, to be worth the extra complication.\n\nAfter considering the options, and their pros and cons (ease of\nimplementation, possibility of conflict with SQL spec, intuitiveness\nof syntax), I'm now strongly leaning towards the SQL syntax variant.\nExposing the action taken via a context-sensitive function feels\nkludgy, when compared to Jeff's proposed SQL syntax. Don't get me\nwrong, I still feel it was very clever how you were able to make the\nfunction context sensitive, and make it work in expressions deeper in\nthe subqueries.\n\nPlus, if we were able to make it work as SQL syntax, it's very likely\nwe can use the same technique to implement BEFORE and AFTER behaviour\nin UPDATE ... 
RETURNING that the old thread could not accomplish a\ndecade ago.\n\n> Presumably\n> something in the planner would turn that into something the executor\n> could handle, which might just end up being the existing functions\n> anyway.\n\nIf the current patch's functions can serve the needs of the SQL syntax\nvariant, that'd be a neat win!\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 20 Jul 2023 23:19:54 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Mon, Jul 17, 2023 at 12:43 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2023-07-14 at 09:55 +0100, Dean Rasheed wrote:\n> > I found a 10-year-old thread discussing adding support for OLD/NEW to\n> > RETURNING [1], but it doesn't look like anything close to a\n> > committable solution was developed, or even a design that might lead\n> > to one. That's a shame, because there seemed to be a lot of demand\n> > for\n> > the feature, but it's not clear how much effort it would be to\n> > implement.\n>\n> It looks like progress was made in the direction of using a table alias\n> with executor support to bring the right attributes along.\n\nThat patch introduced RTE_ALIAS to carry that info through to the\nexecutor, and having to special-case that that in many places was seen\nas a bad thing.\n\n> There was some concern about how exactly the table alias should work\n> such that it doesn't look too much like a join. Not sure how much of a\n> problem that is.\n\nMy understanding of that thread is that the join example Robert shared\nwas for illustrative purposes only, to show that executor already has\nenough information to produce the desired output, and to show that\nit's a matter of tweaking the parser and planner to tell the executor\nwhat output to produce. 
But later reviewers pointed out that it's not\nthat simple (example was given of ExecDelete performing\npullups/working hard to get the correct values of the old version of\nthe row).\n\nThe concerns were mainly around use of OLD.* and NEW.*, too much\nspecial-casing of RTE_ALIAS, and then the code quality of the patch\nitself.\n\n> > > Maybe instead of a function it could be a special table reference\n> > > like:\n> > >\n> > > MERGE ... RETURNING MERGE.action, MERGE.action_number, id, val?\n> > >\n> >\n> > Well, that's a little more concise, but I'm not sure that it really\n> > buys us that much, to be worth the extra complication. Presumably\n> > something in the planner would turn that into something the executor\n> > could handle, which might just end up being the existing functions\n> > anyway.\n>\n> The benefits are:\n>\n> 1. It is naturally constrained to the right context.\n\n+1\n\n> I'm not sure how much extra complication it would cause, though.\n\nIf that attempt with UPDATE RETURNING a decade ago is any indication,\nit's probably a tough one.\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 20 Jul 2023 23:37:48 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, 2023-07-20 at 23:19 -0700, Gurjeet Singh wrote:\n> Plus, if we were able to make it work as SQL syntax, it's very likely\n> we can use the same technique to implement BEFORE and AFTER behaviour\n> in UPDATE ... RETURNING that the old thread could not accomplish a\n> decade ago.\n\nTo clarify, I don't think having a special table alias will require any\nchanges in gram.y and I don't consider it a syntactical change.\n\nI haven't looked into the implementation yet.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Fri, 21 Jul 2023 11:30:22 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Mon, 17 Jul 2023 at 20:43, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> > > Maybe instead of a function it could be a special table reference\n> > > like:\n> > >\n> > > MERGE ... RETURNING MERGE.action, MERGE.action_number, id, val?\n> > >\n> The benefits are:\n>\n> 1. It is naturally constrained to the right context. It doesn't require\n> global variables and the PG_TRY/PG_FINALLY, and can't be called in the\n> wrong contexts (like SELECT).\n>\n> 2. More likely to be consistent with eventual support for NEW/OLD\n> (actually BEFORE/AFTER for reasons the prior thread discussed).\n>\n\nThinking about this some more, I think that the point about\nconstraining these functions to the right context is a reasonable one,\nand earlier versions of this patch did that better, without needing\nglobal variables or a PG_TRY/PG_FINALLY block.\n\nHere is an updated patch that goes back to doing it that way. This is\nmore like the way that aggregate functions and GROUPING() work, in\nthat the parser constrains the location from which the functions can\nbe used, and at execution time, the functions rely on the relevant\ncontext being passed via the FunctionCallInfo context.\n\nIt's still possible to use these functions in subqueries in the\nRETURNING list, but attempting to use them anywhere else (like a\nSELECT on its own) will raise an error at parse time. If they do\nsomehow get invoked in a non-MERGE context, they will elog an error\n(again, just like aggregate functions), because that's a \"shouldn't\nhappen\" error.\n\nThis does nothing to be consistent with eventual support for\nBEFORE/AFTER, but I think that's really an entirely separate thing,\nand likely to work quite differently, internally.\n\n From a user perspective, writing something like \"BEFORE.id\" is quite\nnatural, because it's clear that \"id\" is a column, and \"BEFORE\" is the\nold state of the table. 
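For comparison, the two styles under discussion would look roughly like this (the pg_merge_action() call is what this version of the patch implements; the MERGE.action reference and the BEFORE/AFTER qualifiers are hypothetical sketches, and the table and column names are invented):

```sql
-- Function style, as implemented by the patch under discussion:
MERGE INTO t USING s ON t.id = s.id
WHEN MATCHED THEN UPDATE SET val = s.val
WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
RETURNING pg_merge_action(), t.id, t.val;

-- Hypothetical alternatives mentioned in this thread (not implemented):
--   RETURNING MERGE.action, t.id, t.val
--   RETURNING BEFORE.val, AFTER.val
```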
Writing something like \"MERGE.action\" seems a\nlot more counter-intuitive, because \"action\" isn't a column of\nanything (and if it was, I think this syntax would potentially cause\neven more confusion).\n\nSo really, I think \"MERGE.action\" is an abuse of the syntax,\ninconsistent with any other SQL syntax, and using functions is much\nmore natural, akin to GROUPING(), for example.\n\nRegards,\nDean", "msg_date": "Sat, 22 Jul 2023 03:10:04 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, Jul 21, 2023 at 7:17 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> On Mon, 17 Jul 2023 at 20:43, Jeff Davis <pgsql@j-davis.com> wrote:\n> >\n> > > > Maybe instead of a function it could be a special table reference\n> > > > like:\n> > > >\n> > > > MERGE ... RETURNING MERGE.action, MERGE.action_number, id, val?\n> > > >\n> > The benefits are:\n> >\n> > 1. It is naturally constrained to the right context. It doesn't require\n> > global variables and the PG_TRY/PG_FINALLY, and can't be called in the\n> > wrong contexts (like SELECT).\n> >\n> > 2. More likely to be consistent with eventual support for NEW/OLD\n> > (actually BEFORE/AFTER for reasons the prior thread discussed).\n> >\n>\n> Thinking about this some more, I think that the point about\n> constraining these functions to the right context is a reasonable one,\n> and earlier versions of this patch did that better, without needing\n> global variables or a PG_TRY/PG_FINALLY block.\n>\n> Here is an updated patch that goes back to doing it that way. 
This is\n> more like the way that aggregate functions and GROUPING() work, in\n> that the parser constrains the location from which the functions can\n> be used, and at execution time, the functions rely on the relevant\n> context being passed via the FunctionCallInfo context.\n>\n> It's still possible to use these functions in subqueries in the\n> RETURNING list, but attempting to use them anywhere else (like a\n> SELECT on its own) will raise an error at parse time. If they do\n> somehow get invoked in a non-MERGE context, they will elog an error\n> (again, just like aggregate functions), because that's a \"shouldn't\n> happen\" error.\n>\n> This does nothing to be consistent with eventual support for\n> BEFORE/AFTER, but I think that's really an entirely separate thing,\n\n+1\n\n> From a user perspective, writing something like \"BEFORE.id\" is quite\n> natural, because it's clear that \"id\" is a column, and \"BEFORE\" is the\n> old state of the table. Writing something like \"MERGE.action\" seems a\n> lot more counter-intuitive, because \"action\" isn't a column of\n> anything (and if it was, I think this syntax would potentially cause\n> even more confusion).\n>\n> So really, I think \"MERGE.action\" is an abuse of the syntax,\n> inconsistent with any other SQL syntax, and using functions is much\n> more natural, akin to GROUPING(), for example.\n\nThere seem to be other use cases which need us to invent a method to\nexpose a command-specific alias. See Tatsuo Ishii's call for help in\nhis patch for Row Pattern Recognition [1].\n\n<quote>\nI was not able to find a way to implement expressions like START.price\n(START is not a table alias). 
Any suggestion is greatly appreciated.\n</quote>\n\nIt looks like the SQL standard has started using more such\ncontext-specific keywords, and I'd expect such keywords to only\nincrease in future, as the SQL committee tries to introduce more\nfeatures into the standard.\n\nSo if MERGE.action is not to your taste, perhaps you/someone can\nsuggest an alternative that doesn't cause confusion, and yet\nimplementing it would open up the way for more such context-specific\nkeywords.\n\nI performed the review of the v9 patch by comparing it to v8 and v7\npatches, and then comparing to HEAD.\n\nThe v9 patch is more or less a complete revert to v7 patch, plus the\nplanner support for calling the merge-support functions in subqueries,\nparser catching use of merge-support functions outside MERGE command,\nand name change for one of the support functions.\n\nBut reverting to v7 also means that some of my gripes with that\nversion also return; e.g. invention of EXPR_KIND_MERGE_RETURNING. And\nas noted in v7 review, I don't have a better proposal.\n\nFunction name changed from pg_merge_when_clause() to\npg_merge_when_clause_number(). That's better, even though it's a bit\nof a mouthful.\n\nDoc changes (compared to v7) look good.\n\nThe changes made to tests (compared to v7) are for the better.\n\n- * Uplevel PlaceHolderVars and aggregates are replaced, too.\n+ * Uplevel PlaceHolderVars, aggregates, GROUPING() expressions and merge\n+ * support functions are replaced, too.\n\nNeeds Oxford comma: s/GROUPING() expressions and/GROUPING() expressions, and/\n\n+pg_merge_action(PG_FUNCTION_ARGS)\n+{\n...\n+ relaction = mtstate->mt_merge_action;\n+ if (relaction)\n+ {\n..\n+ }\n+\n+ PG_RETURN_NULL();\n+}\n\nUnder what circumstances would the relaction be null? Is it okay to\nreturn NULL from this function if relaction is null, or is it better\nto throw an error? 
These questions apply to the\npg_merge_when_clause_number() function as well.\n\n[1]: Row pattern recognition\nhttps://www.postgresql.org/message-id/flat/20230625.210509.1276733411677577841.t-ishii%40sranhm.sra.co.jp\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Tue, 25 Jul 2023 13:46:38 -0700", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Tue, 25 Jul 2023 at 21:46, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> There seems to be other use cases which need us to invent a method to\n> expose a command-specific alias. See Tatsuo Ishii's call for help in\n> his patch for Row Pattern Recognition [1].\n>\n\nI think that's different though, because in that example \"START\" is a\nrow from the table, and \"price\" is a table column, so using the table\nalias syntax \"START.price\" makes sense, to refer to a value from the\ntable.\n\nIn this case \"MERGE\" and \"action\" have nothing to do with table rows\nor columns, so saying \"MERGE.action\" doesn't fit the pattern.\n\n\n> I performed the review vo v9 patch by comparing it to v8 and v7\n> patches, and then comparing to HEAD.\n>\n\nMany thanks for looking at this.\n\n\n> The v9 patch is more or less a complete revert to v7 patch, plus the\n> planner support for calling the merge-support functions in subqueries,\n> parser catching use of merge-support functions outside MERGE command,\n> and name change for one of the support functions.\n>\n\nYes, that's a fair summary.\n\n\n> But reverting to v7 also means that some of my gripes with that\n> version also return; e.g. invention of EXPR_KIND_MERGE_RETURNING. And\n> as noted in v7 review, I don't have a better proposal.\n>\n\nTrue, but I think that it's in keeping with the purpose of the\nParseExprKind enumeration:\n\n/*\n * Expression kinds distinguished by transformExpr(). 
Many of these are not\n * semantically distinct so far as expression transformation goes; rather,\n * we distinguish them so that context-specific error messages can be printed.\n */\n\nwhich matches what the patch is using EXPR_KIND_MERGE_RETURNING for.\n\n\n> - * Uplevel PlaceHolderVars and aggregates are replaced, too.\n> + * Uplevel PlaceHolderVars, aggregates, GROUPING() expressions and merge\n> + * support functions are replaced, too.\n>\n> Needs Oxford comma: s/GROUPING() expressions and/GROUPING() expressions, and/\n>\n\nAdded.\n\n\n> +pg_merge_action(PG_FUNCTION_ARGS)\n> +{\n> ...\n> + relaction = mtstate->mt_merge_action;\n> + if (relaction)\n> + {\n> ..\n> + }\n> +\n> + PG_RETURN_NULL();\n> +}\n>\n> Under what circumstances would the relaction be null? Is it okay to\n> return NULL from this function if relaction is null, or is it better\n> to throw an error? These questions apply to the\n> pg_merge_when_clause_number() function as well.\n>\n\nYes, it's really a \"should never happen\" situation, so I've converted\nit to elog() an error. Similarly, commandType should never be\nCMD_NOTHING in pg_merge_action(), so that also now throws an error.\nAlso, the planner code now throws an error if it sees a merge support\nfunction outside a MERGE. Again, that should never happen, due to the\nparser check, but it seems better to be sure, and catch it early.\n\nWhile at it, I tidied up the planner code a bit, making the merge\nsupport function handling more like the other cases in\nreplace_correlation_vars_mutator(), and making\nreplace_outer_merge_support_function() more like its neighbouring\nfunctions, such as replace_outer_grouping(). 
In particular, it is now\nonly called if it is a reference to an upper-level MERGE, not for\nlocal references, which matches the pattern used in the neighbouring\nfunctions.\n\nFinally, I have added some new RLS code and tests, to apply SELECT\npolicies to new rows inserted by MERGE INSERT actions, if a RETURNING\nclause is specified, to make it consistent with a plain INSERT ...\nRETURNING command (see commit c2e08b04c9).\n\nRegards,\nDean", "msg_date": "Wed, 23 Aug 2023 09:20:23 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "Updated version attached, fixing an uninitialized-variable warning\nfrom the cfbot.\n\nRegards,\nDean", "msg_date": "Wed, 23 Aug 2023 11:58:30 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Wed, 2023-08-23 at 11:58 +0100, Dean Rasheed wrote:\n> Updated version attached, fixing an uninitialized-variable warning\n> from the cfbot.\n\nI took another look and I'm still not comfortable with the special\nIsMergeSupportFunction() functions. I don't object necessarily -- if\nsomeone else wants to commit it, they can -- but I don't plan to commit\nit in this form.\n\nCan we revisit the idea of a per-WHEN RETURNING clause? The returning\nclauses could be treated kind of like a UNION, which makes sense\nbecause it really is a union of different results (the returned tuples\nfrom an INSERT are different than the returned tuples from a DELETE).\nYou can just add constants to the target lists to distinguish which\nWHEN clause they came from.\n\nI know you rejected that approach early on, but perhaps it's worth\ndiscussing further?\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 24 Oct 2023 12:10:59 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Tue, Oct 24, 2023 at 2:11 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n> On Wed, 2023-08-23 at 11:58 +0100, Dean Rasheed wrote:\n> > Updated version attached, fixing an uninitialized-variable warning\n> > from the cfbot.\n>\n> I took another look and I'm still not comfortable with the special\n> IsMergeSupportFunction() functions. I don't object necessarily -- if\n> someone else wants to commit it, they can -- but I don't plan to commit\n> it in this form.\n>\n> Can we revisit the idea of a per-WHEN RETURNING clause? The returning\n> clauses could be treated kind of like a UNION, which makes sense\n> because it really is a union of different results (the returned tuples\n> from an INSERT are different than the returned tuples from a DELETE).\n> You can just add constants to the target lists to distinguish which\n> WHEN clause they came from.\n>\n> I know you rejected that approach early on, but perhaps it's worth\n> discussing further?\n>\n\n Yeah. Side benefit, the 'action_number' felt really out of place, and\nthat neatly might solve it. It doesn't match tg_op, for example. With the\ncurrent approach, return a text, or an enum? Why doesn't it match concepts\nthat are pretty well established elsewhere? SQL has a pretty good track\nrecord for not inventing weird numbers with no real meaning (sadly, not so\nmuch the developers). Having said that, pg_merge_action() doesn't feel\ntoo bad if the syntax issues can be worked out.\n\nVery supportive of the overall goal.\n\nmerlin\n\nOn Tue, Oct 24, 2023 at 2:11 PM Jeff Davis <pgsql@j-davis.com> wrote:On Wed, 2023-08-23 at 11:58 +0100, Dean Rasheed wrote:\n> Updated version attached, fixing an uninitialized-variable warning\n> from the cfbot.\n\nI took another look and I'm still not comfortable with the special\nIsMergeSupportFunction() functions. 
I don't object necessarily -- if\nsomeone else wants to commit it, they can -- but I don't plan to commit\nit in this form.\n\nCan we revisit the idea of a per-WHEN RETURNING clause? The returning\nclauses could be treated kind of like a UNION, which makes sense\nbecause it really is a union of different results (the returned tuples\nfrom an INSERT are different than the returned tuples from a DELETE).\nYou can just add constants to the target lists to distinguish which\nWHEN clause they came from.\n\nI know you rejected that approach early on, but perhaps it's worth\ndiscussing further? Yeah.  Side benefit, the 'action_number' felt really out of place, and that neatly might solve it.  It doesn't match tg_op, for example.  With the current approach, return a text, or an enum? Why doesn't it match concepts that are pretty well established elsewhere?  SQL has a pretty good track record for not inventing weird numbers with no real meaning (sadly, not so much the developers).   Having said that, pg_merge_action() doesn't feel too bad if the syntax issues can be worked out.Very supportive of the overall goal.merlin", "msg_date": "Tue, 24 Oct 2023 20:07:20 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Wed, 25 Oct 2023 at 02:07, Merlin Moncure <mmoncure@gmail.com> wrote:\n>\n> On Tue, Oct 24, 2023 at 2:11 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>>\n>> Can we revisit the idea of a per-WHEN RETURNING clause? The returning\n>> clauses could be treated kind of like a UNION, which makes sense\n>> because it really is a union of different results (the returned tuples\n>> from an INSERT are different than the returned tuples from a DELETE).\n>> You can just add constants to the target lists to distinguish which\n>> WHEN clause they came from.\n>>\n> Yeah. Side benefit, the 'action_number' felt really out of place, and that neatly might solve it. 
It doesn't match tg_op, for example. With the\ncurrent approach, return a text, or an enum? Why doesn't it match concepts\nthat are pretty well established elsewhere? SQL has a pretty good track\nrecord for not inventing weird numbers with no real meaning (sadly, not so\nmuch the developers). Having said that, pg_merge_action() doesn't feel\ntoo bad if the syntax issues can be worked out.\n\nVery supportive of the overall goal.\n\nmerlin", "msg_date": "Tue, 24 Oct 2023 20:07:20 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Wed, 25 Oct 2023 at 02:07, Merlin Moncure <mmoncure@gmail.com> wrote:\n>\n> On Tue, Oct 24, 2023 at 2:11 PM Jeff Davis <pgsql@j-davis.com> wrote:\n>>\n>> Can we revisit the idea of a per-WHEN RETURNING clause? The returning\n>> clauses could be treated kind of like a UNION, which makes sense\n>> because it really is a union of different results (the returned tuples\n>> from an INSERT are different than the returned tuples from a DELETE).\n>> You can just add constants to the target lists to distinguish which\n>> WHEN clause they came from.\n>>\n> Yeah. Side benefit, the 'action_number' felt really out of place, and that neatly might solve it. It doesn't match tg_op, for example. With the current approach, return a text, or an enum? Why doesn't it match concepts that are pretty well established elsewhere? SQL has a pretty good track record for not inventing weird numbers with no real meaning (sadly, not so much the developers). Having said that, pg_merge_action() doesn't feel too bad if the syntax issues can be worked out.\n>\n\nI've been playing around a little with per-action RETURNING lists, and\nattached is a working proof-of-concept (no docs yet).\n\nThe implementation is simplified a little by not needing special merge\nsupport functions, but overall this approach introduces a little more\ncomplexity, which is perhaps not surprising.\n\nOne fiddly part is resolving the shift/reduce conflicts in the\ngrammar. Specifically, on seeing \"RETURNING expr when ...\", there is\nambiguity over whether the \"when\" is a column alias or the start of\nthe next merge action. I've resolved that by assigning a slightly\nhigher precedence to an expression without an alias, so WHEN is\nassumed to not be an alias. It seems pretty ugly though (in terms of\nhaving to duplicate so much code), and I'd be interested to know if\nthere's a neater way to do it.\n\n From a usability perspective, I'm still somewhat sceptical about this\napproach. It's a much more verbose syntax, and it gets quite tedious\nhaving to repeat the RETURNING list for every action, and keep them in\nsync. I also note that other database vendors seem to have opted for\nthe single RETURNING list approach (not that we necessarily need to\ncopy them).\n\nThe patch enforces the rule that if any action has a RETURNING list,\nthey all must have a RETURNING list. Not doing that leads to the\nnumber of rows returned not matching the command tag, or the number of\nrows modified, which I think would just lead to confusion. Also, it\nwould likely be a source of easy-to-overlook mistakes. 
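As a sketch, a query using the proof-of-concept per-action grammar would look something like this (uncommitted syntax; the string constants play the row-tagging role Jeff described above, and all table and column names are invented):

```sql
-- Per-action RETURNING lists (proof-of-concept grammar only):
MERGE INTO target t
USING source s ON t.id = s.id
WHEN MATCHED THEN
    UPDATE SET val = s.val
    RETURNING 'update' AS act, t.id, t.val
WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (s.id, s.val)
    RETURNING 'insert' AS act, t.id, t.val;
```

Each action's list has to be kept in sync by hand, which is the usability cost being weighed in this message.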
One such\nmistake would be assuming that a RETURNING list at the very end\napplied to all actions, though it would also be easy to accidentally\nomit a RETURNING list in the middle of the command.\n\nHaving said that, I wonder if it would make sense to also support\nhaving a RETURNING list at the very end, if there are no other\nRETURNING lists. If we see that, we could automatically apply it to\nall actions, which I think would be much more convenient in situations\nwhere you don't care about the action executed, and just want the\nresults from the table. That would go a long way towards addressing my\nusability concerns.\n\nRegards,\nDean", "msg_date": "Fri, 27 Oct 2023 15:46:37 +0100", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, 2023-10-27 at 15:46 +0100, Dean Rasheed wrote:\n> \n> One fiddly part is resolving the shift/reduce conflicts in the\n> grammar. Specifically, on seeing \"RETURNING expr when ...\", there is\n> ambiguity over whether the \"when\" is a column alias or the start of\n> the next merge action. I've resolved that by assigning a slightly\n> higher precedence to an expression without an alias, so WHEN is\n> assumed to not be an alias. It seems pretty ugly though (in terms of\n> having to duplicate so much code), and I'd be interested to know if\n> there's a neater way to do it.\n\nCan someone else comment on whether this is a reasonable solution to\nthe grammar problem?\n\n> From a usability perspective, I'm still somewhat sceptical about this\n> approach. 
It's a much more verbose syntax, and it gets quite tedious\n> having to repeat the RETURNING list for every action, and keep them\n> in\n> sync.\n\nIf we go with the single RETURNING-clause-at-the-end approach, how\nimportant is it that the action can be a part of an arbitrary\nexpression?\n\nPerhaps something closer to your original proposal would be a good\ncompromise (sorry to backtrack yet again...)? It couldn't be used in an\narbitrary expression, but that also means that it couldn't end up in\nthe wrong kind of expression.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Mon, 30 Oct 2023 13:08:37 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 10/24/23 21:10, Jeff Davis wrote:\n> Can we revisit the idea of a per-WHEN RETURNING clause?\n\nFor the record, I dislike this idea.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Tue, 31 Oct 2023 12:45:20 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Tue, 2023-10-31 at 12:45 +0100, Vik Fearing wrote:\n> On 10/24/23 21:10, Jeff Davis wrote:\n> > Can we revisit the idea of a per-WHEN RETURNING clause?\n> \n> For the record, I dislike this idea.\n\nI agree that it makes things awkward, and if it creates grammatical\nproblems as well, then it's not very appealing.\n\nThere are only so many approaches though, so it would be helpful if you\ncould say which approach you prefer.\n\nAssuming we have one RETURNING clause at the end, then it creates the\nproblem of how to communicate which WHEN clause a tuple came from,\nwhether it's the old or the new version, and/or which action was\nperformed on that tuple.\n\nHow do we communicate any of those things? We need to get that\ninformation into the result table somehow, so it should probably be\nsome kind of expression that can exist in the RETURNING clause. 
But\nwhat kind of expression?\n\n(a) It could be a totally new expression kind with a new keyword (or\nrecycling some existing keywords for the same effect, or something that\nlooks superficially like a function call but isn't) that's only valid\nin the RETURNING clause of a MERGE statement. If you use it in another\nexpression (say the targetlist of a SELECT statement), then you'd get a\nfailure at parse analysis time.\n\n(b) It could be a FuncExpr that is passed the information out-of-band\n(i.e. not as an argument) and would fail at runtime if called in the\nwrong context.\n\n(c) It could be a Var (or perhaps a Param?), but to which table would\nit refer? The source or the target table could be used, but we would\nalso need a special table reference to represent additional context\n(WHEN clause number, action, or \"after\" version of the tuple).\n\nDean's v11 patch had kind of a combination of (a) and (b). It's raises\nan error at parse analysis time like (a), but without any grammar\nchanges or new expr kind because it's a function. I must admit that\nmight be a very reasonable compromise and I certainly won't reject it\nwithout a clearly better alternative. It does feel like a hack though\nin the sense that it's hard-coding function OIDs into the parse\nanalysis and I'm not sure that's a great thing to do. I wonder if it\nwould be worth thinking about a way to make it generic by really making\nit into a different kind of function with pg_proc support? That feels\nlike over-engineering, and I hate to generalize from a single use case,\nbut it might be a good thought exercise.\n\nThe cleanest from a SQL perspective (in my opinion) would be something\nmore like (c), because the merge action and WHEN clause number would be\npassed in tuple data. 
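Purely as a hypothetical illustration of the shape (c) could take (no
such table reference exists today; the name is invented):

```sql
MERGE INTO target t USING source s ON t.id = s.id
  WHEN MATCHED THEN UPDATE SET val = s.val
  WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val)
  RETURNING merge_action.kind, merge_action.clause_number, t.*;
```

Here "merge_action" would be a system-supplied table reference,
visible only in the RETURNING list, carrying the per-row context.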
It also would be good precedent for something\nlike BEFORE/AFTER aliases, which could be useful for UPDATE actions.\nBut given the implementation complexities brought up earlier (I haven't\nlooked into the details, but others have), that might be over-\nengineering.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Tue, 31 Oct 2023 11:28:43 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 10/31/23 19:28, Jeff Davis wrote:\n> On Tue, 2023-10-31 at 12:45 +0100, Vik Fearing wrote:\n>> On 10/24/23 21:10, Jeff Davis wrote:\n>>> Can we revisit the idea of a per-WHEN RETURNING clause?\n>>\n>> For the record, I dislike this idea.\n> \n> I agree that it makes things awkward, and if it creates grammatical\n> problems as well, then it's not very appealing.\n> \n> There are only so many approaches though, so it would be helpful if you\n> could say which approach you prefer.\n\n\nThis isn't as easy to answer for me as it seems because I care deeply \nabout respecting the standard. The standard does not have RETURNING at \nall but instead has <data change delta table> and the results for that \nfor a MERGE statement are clearly defined.\n\nOn the other hand, we don't have that and we do have RETURNING so we \nshould improve upon that if we can.\n\nOne thing I don't like about either solution is that you cannot get at \nboth the old row versions and the new row versions at the same time. I \ndon't see how <data change delta table>s can be fixed to support that, \nbut RETURNING certainly can be with some spelling of OLD/NEW or \nBEFORE/AFTER or whatever.\n\n\n> Assuming we have one RETURNING clause at the end, then it creates the\n> problem of how to communicate which WHEN clause a tuple came from,\n> whether it's the old or the new version, and/or which action was\n> performed on that tuple.\n> \n> How do we communicate any of those things? 
We need to get that\n> information into the result table somehow, so it should probably be\n> some kind of expression that can exist in the RETURNING clause. But\n> what kind of expression?\n> \n> (a) It could be a totally new expression kind with a new keyword (or\n> recycling some existing keywords for the same effect, or something that\n> looks superficially like a function call but isn't) that's only valid\n> in the RETURNING clause of a MERGE statement. If you use it in another\n> expression (say the targetlist of a SELECT statement), then you'd get a\n> failure at parse analysis time.\n\n\nThis would be my choice, the same as how the standard GROUPING() \n\"function\" for grouping sets is implemented by GroupingFunc.\n\n-- \nVik Fearing\n\n\n\n", "msg_date": "Wed, 1 Nov 2023 00:19:04 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Tue, 31 Oct 2023 at 23:19, Vik Fearing <vik@postgresfriends.org> wrote:\n>\n> On 10/31/23 19:28, Jeff Davis wrote:\n>\n> > Assuming we have one RETURNING clause at the end, then it creates the\n> > problem of how to communicate which WHEN clause a tuple came from,\n> > whether it's the old or the new version, and/or which action was\n> > performed on that tuple.\n> >\n> > How do we communicate any of those things? We need to get that\n> > information into the result table somehow, so it should probably be\n> > some kind of expression that can exist in the RETURNING clause. But\n> > what kind of expression?\n> >\n> > (a) It could be a totally new expression kind with a new keyword (or\n> > recycling some existing keywords for the same effect, or something that\n> > looks superficially like a function call but isn't) that's only valid\n> > in the RETURNING clause of a MERGE statement. 
If you use it in another\n> > expression (say the targetlist of a SELECT statement), then you'd get a\n> > failure at parse analysis time.\n>\n> This would be my choice, the same as how the standard GROUPING()\n> \"function\" for grouping sets is implemented by GroupingFunc.\n>\n\nSomething I'm wondering about is to what extent this discussion is\ndriven by concerns about aspects of the implementation (specifically,\nreferences to function OIDs in code), versus a desire for a different\nuser-visible syntax. To a large extent, those are orthogonal\nquestions.\n\n(As an aside, I would note that there are already around a dozen\nreferences to specific function OIDs in the parse analysis code, and a\nlot more if you grep more widely across the whole of the backend\ncode.)\n\nAt one point, as I was writing this patch, I went part-way down the\nroute of adding a new node type (I think I called it MergeFunc), for\nthese merge support functions, somewhat inspired by GroupingFunc. In\nthe end, I backed out of that approach, because it seemed to be\nintroducing a lot of unnecessary additional complexity, and I decided\nthat a regular FuncExpr would suffice.\n\nIf pg_merge_action() and pg_merge_when_clause_number() were\nimplemented using a MergeFunc node, it would reduce the number of\nplaces that refer to specific function OIDs. Basically, a MergeFunc\nnode would be very much like a FuncExpr node, except that it would\nhave a \"levels up\" field, set during parse analysis, at the point\nwhere we check that it is being used in a merge returning clause, and\nthis field would be used during subselect planning. Note, however,\nthat that doesn't entirely eliminate references to specific function\nOIDs -- the parse analysis code would still do that. Also, additional\nspecial-case code in the executor would be required to handle\nMergeFunc nodes. 
Also, code like IncrementVarSublevelsUp() would need\nadjusting, and anything else like that.\n\nA separate question is what the syntax should be. We could invent a\nnew syntax, like GROUPING(). Perhaps:\n\n MERGE(ACTION) instead of pg_merge_action()\n MERGE(CLAUSE NUMBER) instead of pg_merge_when_clause_number()\n\nBut note that those could equally well generate either FuncExpr nodes\nor MergeFunc nodes, so the syntax question remains orthogonal to that\ninternal implementation question.\n\nIf MERGE(...) (or MERGING(...), or whatever) were part of the SQL\nstandard, then that would be the clear choice. But since it's not, I\ndon't see any real advantage to inventing special syntax here, rather\nthan just using a regular function call. In fact, it's worse, because\nif this were to work like GROUPING(), it would require MERGE (or\nMERGING, or whatever) to be a COL_NAME_KEYWORD, where currently MERGE\nis an UNRESERVED_KEYWORD, and that would break any existing\nuser-defined functions with that name, whereas the \"pg_\" prefix of my\nfunctions makes that much less likely.\n\nSo on the syntax question, in the absence of anything specific from\nthe SQL standard, I think we should stick to builtin functions,\nwithout inventing special syntax. That doesn't preclude adding special\nsyntax later, if the SQL standard mandates it, but that might be\nharder, if we invent our own syntax now.\n\nOn the implementation question, I'm not completely against the idea of\na MergeFunc node, but it does feel a little over-engineered.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 1 Nov 2023 10:12:09 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On 11/1/23 11:12, Dean Rasheed wrote:\n> On Tue, 31 Oct 2023 at 23:19, Vik Fearing <vik@postgresfriends.org> wrote:\n>>\n>> On 10/31/23 19:28, Jeff Davis wrote:\n>>\n>>> Assuming we have one RETURNING clause at the end, then it creates the\n>>> problem of how to communicate which WHEN clause a tuple came from,\n>>> whether it's the old or the new version, and/or which action was\n>>> performed on that tuple.\n>>>\n>>> How do we communicate any of those things? We need to get that\n>>> information into the result table somehow, so it should probably be\n>>> some kind of expression that can exist in the RETURNING clause. But\n>>> what kind of expression?\n>>>\n>>> (a) It could be a totally new expression kind with a new keyword (or\n>>> recycling some existing keywords for the same effect, or something that\n>>> looks superficially like a function call but isn't) that's only valid\n>>> in the RETURNING clause of a MERGE statement. If you use it in another\n>>> expression (say the targetlist of a SELECT statement), then you'd get a\n>>> failure at parse analysis time.\n>>\n>> This would be my choice, the same as how the standard GROUPING()\n>> \"function\" for grouping sets is implemented by GroupingFunc.\n>>\n> \n> Something I'm wondering about is to what extent this discussion is\n> driven by concerns about aspects of the implementation (specifically,\n> references to function OIDs in code), versus a desire for a different\n> user-visible syntax. To a large extent, those are orthogonal\n> questions.\n\n\nFor my part, I am most concerned about the language level. I am \nsympathetic to the implementers' issues, but that is not my main focus.\n\nSo please do not take my implementation advice into account when I voice \nmy opinions.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Wed, 1 Nov 2023 12:17:50 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Wed, Nov 1, 2023 at 5:12 AM Dean Rasheed <dean.a.rasheed@gmail.com>\nwrote:\n\n> On Tue, 31 Oct 2023 at 23:19, Vik Fearing <vik@postgresfriends.org> wrote:\n> >\n> > On 10/31/23 19:28, Jeff Davis wrote:\n> >\n> > > Assuming we have one RETURNING clause at the end, then it creates the\n> > > problem of how to communicate which WHEN clause a tuple came from,\n> > > whether it's the old or the new version, and/or which action was\n> > > performed on that tuple.\n> > >\n> > > How do we communicate any of those things? We need to get that\n> > > information into the result table somehow, so it should probably be\n> > > some kind of expression that can exist in the RETURNING clause. But\n> > > what kind of expression?\n> > >\n> > > (a) It could be a totally new expression kind with a new keyword (or\n> > > recycling some existing keywords for the same effect, or something that\n> > > looks superficially like a function call but isn't) that's only valid\n> > > in the RETURNING clause of a MERGE statement. If you use it in another\n> > > expression (say the targetlist of a SELECT statement), then you'd get a\n> > > failure at parse analysis time.\n> >\n> > This would be my choice, the same as how the standard GROUPING()\n> > \"function\" for grouping sets is implemented by GroupingFunc.\n> >\n>\n> Something I'm wondering about is to what extent this discussion is\n> driven by concerns about aspects of the implementation (specifically,\n> references to function OIDs in code), versus a desire for a different\n> user-visible syntax. 
To a large extent, those are orthogonal\n> questions.\n>\n> (As an aside, I would note that there are already around a dozen\n> references to specific function OIDs in the parse analysis code, and a\n> lot more if you grep more widely across the whole of the backend\n> code.)\n>\n> At one point, as I was writing this patch, I went part-way down the\n> route of adding a new node type (I think I called it MergeFunc), for\n> these merge support functions, somewhat inspired by GroupingFunc. In\n> the end, I backed out of that approach, because it seemed to be\n> introducing a lot of unnecessary additional complexity, and I decided\n> that a regular FuncExpr would suffice.\n>\n> If pg_merge_action() and pg_merge_when_clause_number() were\n> implemented using a MergeFunc node, it would reduce the number of\n> places that refer to specific function OIDs. Basically, a MergeFunc\n> node would be very much like a FuncExpr node, except that it would\n> have a \"levels up\" field, set during parse analysis, at the point\n> where we check that it is being used in a merge returning clause, and\n> this field would be used during subselect planning. Note, however,\n> that that doesn't entirely eliminate references to specific function\n> OIDs -- the parse analysis code would still do that. Also, additional\n> special-case code in the executor would be required to handle\n> MergeFunc nodes. Also, code like IncrementVarSublevelsUp() would need\n> adjusting, and anything else like that.\n>\n> A separate question is what the syntax should be. We could invent a\n> new syntax, like GROUPING(). 
Perhaps:\n>\n> MERGE(ACTION) instead of pg_merge_action()\n> MERGE(CLAUSE NUMBER) instead of pg_merge_when_clause_number()\n>\n\nHm, still struggling with this merge action and (especially) number stuff.\nCurrently we have:\n\n WHEN MATCHED [ AND *condition* ] THEN { *merge_update* |\n*merge_delete* | DO NOTHING } |\n WHEN NOT MATCHED [ AND *condition* ] THEN { *merge_insert* | DO NOTHING } }\n\nWhat about extending to something like:\n\nWHEN MATCHED [ AND *condition* ] [ AS *merge_clause_name ]*\n\nWHEN MATCHED AND tid > 2 AS giraffes THEN UPDATE SET balance = t.balance +\ndelta\n\n...and have pg_merge_clause() return 'giraffes' (of name type). If merge\nclause is not identified, maybe don't return any data for that clause\nthrough returning,, or return NULL. Maybe 'returning' clause doesn't have\nto be extended or molested in any way, it would follow mechanics as per\n'update', and could not refer to identified merge_clauses, but would allow\nfor pg_merge_clause() functioning. You wouldn't need to identify action or\nnumber. 
Food for thought, -- may have missed some finer details upthread.\n\nfor example,\nwith r as (\n merge into x using y on x.a = y.a\n when matched and x.c > 0 as good then do nothing\n when matched and x.c <= 0 as bad then do nothing\n returning pg_merge_clause(), x.*\n) ...\n\nyielding\npg_merge_clause a c\ngood 1 5\ngood 2 7\nbad 3 0\n...\n\n...maybe allow pg_merge_clause() take to optionally yield column name:\n returning pg_merge_clause('result'), x.*\n) ...\n\nyielding\nresult a c\ngood 1 5\ngood 2 7\nbad 3 0\n...\n\nmerlin", "msg_date": "Wed, 1 Nov 2023 12:19:37 -0500", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
It doesn't\nadd that much more complexity, and I think the new code is much\nneater.\n\nAlso, I think this makes it easier / more natural to add additional\nreturning options, like Merlin's suggestion to return a user-defined\nlabel value, though I haven't implemented that.\n\nI have gone with the name originally suggested by Vik -- MERGING(),\nwhich means that that has to be a new col-name keyword. I'm not\nespecially wedded to that name, but I think that it's not a bad\nchoice, and I think going with that is preferable to making MERGE a\ncol-name keyword.\n\nSo (quoting the example from the docs), the new syntax looks like this:\n\nMERGE INTO products p\n USING stock s ON p.product_id = s.product_id\n WHEN MATCHED AND s.quantity > 0 THEN\n UPDATE SET in_stock = true, quantity = s.quantity\n WHEN MATCHED THEN\n UPDATE SET in_stock = false, quantity = 0\n WHEN NOT MATCHED THEN\n INSERT (product_id, in_stock, quantity)\n VALUES (s.product_id, true, s.quantity)\n RETURNING MERGING(ACTION), MERGING(CLAUSE_NUMBER), p.*;\n\n action | clause_number | product_id | in_stock | quantity\n--------+---------------+------------+----------+----------\n UPDATE | 1 | 1001 | t | 50\n UPDATE | 2 | 1002 | f | 0\n INSERT | 3 | 1003 | t | 10\n\nBy default, the returned column names are automatically taken from the\nargument to the MERGING() function (which isn't actually a function\nanymore).\n\nThere's one bug that I know about, to do with cross-partition updates,\nbut since that's a pre-existing bug, I'll start a new thread for it.\n\nRegards,\nDean", "msg_date": "Sun, 5 Nov 2023 11:52:09 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sun, 5 Nov 2023 at 11:52, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> OK, that's a fair point. Attached is a new version, replacing those\n> parts of the implementation with a new MergingFunc node. 
It doesn't\n> add that much more complexity, and I think the new code is much\n> neater.\n>\n\nRebased version attached, following the changes made in 615f5f6faa and\na4f7d33a90.\n\nRegards,\nDean", "msg_date": "Thu, 9 Nov 2023 13:24:49 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "Hi.\nv13 works fine. all tests passed. The code is very intuitive. played\nwith multi WHEN clauses, even with before/after row triggers, work as\nexpected.\n\nI don't know when replace_outer_merging will be invoked. even set a\nbreakpoint on it. coverage shows replace_outer_merging only called\nonce.\n\nsql-merge.html miss mentioned RETURNING need select columns privilege?\nin sql-insert.html, we have:\n\"Use of the RETURNING clause requires SELECT privilege on all columns\nmentioned in RETURNING. If you use the query clause to insert rows\nfrom a query, you of course need to have SELECT privilege on any table\nor column used in the query.\"\n\nI saw the change in src/sgml/glossary.sgml, So i looked around. in the\n\"Materialized view (relation)\" part. \"It cannot be modified via\nINSERT, UPDATE, or DELETE operations.\". Do we need to put \"MERGE\" into\nthat sentence?\nalso there is SELECT, INSERT, UPDATE, DELETE, do we need to add a\nMERGE entry in glossary.sgml?\n\n\n", "msg_date": "Mon, 13 Nov 2023 13:29:01 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Mon, 13 Nov 2023 at 05:29, jian he <jian.universality@gmail.com> wrote:\n>\n> v13 works fine. all tests passed. The code is very intuitive. played\n> with multi WHEN clauses, even with before/after row triggers, work as\n> expected.\n>\n\nThanks for the review and testing!\n\n> I don't know when replace_outer_merging will be invoked. even set a\n> breakpoint on it. 
coverage shows replace_outer_merging only called\n> once.\n>\n\nIt's used when MERGING() is used in a subquery in the RETURNING list.\nThe MergingFunc node in the subquery is replaced by a Param node,\nreferring to the outer MERGE query, so that the result from MERGING()\nis available in the SELECT subquery (under any other circumstances,\nyou're not allowed to use MERGING() in a SELECT). This is similar to\nwhat happens when a subquery contains an aggregate over columns from\nan outer query only -- for example, see:\n\nhttps://www.postgresql.org/docs/current/sql-expressions.html#SYNTAX-AGGREGATES:~:text=When%20an%20aggregate,aggregate%20belongs%20to.\n\nhttps://github.com/postgres/postgres/commit/e649796f128bd8702ba5744d36f4e8cb81f0b754\n\nA MERGING() expression in a subquery in the RETURNING list is\nanalogous, in that it belongs to the outer MERGE query, not the SELECT\nsubquery.\n\n> sql-merge.html miss mentioned RETURNING need select columns privilege?\n> in sql-insert.html, we have:\n> \"Use of the RETURNING clause requires SELECT privilege on all columns\n> mentioned in RETURNING. If you use the query clause to insert rows\n> from a query, you of course need to have SELECT privilege on any table\n> or column used in the query.\"\n>\n\nAh, good point. I don't think I looked at the privileges paragraph on\nthe MERGE page. Currently it says:\n\n You will require the SELECT privilege on the data_source\n and any column(s) of the target_table_name referred to in a\n condition.\n\nBeing pedantic, there are 2 problems with that:\n\n1. It might be taken to imply that you need the SELECT privilege on\nevery column of the data_source, which isn't the case.\n\n2. 
It mentions conditions, but not expressions (such as those that can\nappear in INSERT and UPDATE actions).\n\nA more accurate statement would be:\n\n You will require the SELECT privilege and any column(s)\n of the data_source and target_table_name referred to in any\n condition or expression.\n\nwhich is also consistent with the wording used on the UPDATE manual page.\n\nDone that way, I don't think it would need to be updated to mention\nRETURNING, because RETURNING just returns a list of expressions.\nAgain, that would be consistent with the UPDATE page, which doesn't\nmention RETURNING in its discussion of privileges.\n\n> I saw the change in src/sgml/glossary.sgml, So i looked around. in the\n> \"Materialized view (relation)\" part. \"It cannot be modified via\n> INSERT, UPDATE, or DELETE operations.\". Do we need to put \"MERGE\" into\n> that sentence?\n> also there is SELECT, INSERT, UPDATE, DELETE, do we need to add a\n> MERGE entry in glossary.sgml?\n\nYes, that makes sense.\n\nAttached is a separate patch with those doc updates, intended to be\napplied and back-patched independently of the main RETURNING patch.\n\nRegards,\nDean", "msg_date": "Wed, 15 Nov 2023 11:36:59 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": ">\n> Attached is a separate patch with those doc updates, intended to be\n> applied and back-patched independently of the main RETURNING patch.\n>\n> Regards,\n> Dean\n\n+ You will require the <literal>SELECT</literal> privilege and any column(s)\n+ of the <replaceable class=\"parameter\">data_source</replaceable> and\n+ <replaceable class=\"parameter\">target_table_name</replaceable> referred to\n+ in any <literal>condition</literal> or <literal>expression</literal>.\n\nI think it should be:\n+ You will require the <literal>SELECT</literal> privilege on any column(s)\n+ of the <replaceable class=\"parameter\">data_source</replaceable> and\n+ <replaceable class=\"parameter\">target_table_name</replaceable> referred to\n+ in any <literal>condition</literal> or <literal>expression</literal>.\n\nOther than that, it looks fine.\n\n\n", "msg_date": "Fri, 17 Nov 2023 12:30:36 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, 17 Nov 2023 at 04:30, jian he <jian.universality@gmail.com> wrote:\n>\n> I think it should be:\n> + You will require the <literal>SELECT</literal> privilege on any column(s)\n> + of the <replaceable class=\"parameter\">data_source</replaceable> and\n> + <replaceable class=\"parameter\">target_table_name</replaceable> referred to\n> + in any <literal>condition</literal> or <literal>expression</literal>.\n>\n\nAh, of course. As always, I'm blind to grammatical errors in my own\ntext, no matter how many times I read it. Thanks for checking!\n\nPushed.\n\nThe v13 patch still applies on top of this, so I won't re-post it.\n\nRegards,\nDean\n\n\n", "msg_date": "Sat, 18 Nov 2023 12:54:57 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Sat, Nov 18, 2023 at 8:55 PM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> The v13 patch still applies on top of this, so I won't re-post it.\n>\n\nHi.\nminor issues based on v13.\n\n+<synopsis>\n+<function id=\"function-merging\">MERGING</function> (\n<replaceable>property</replaceable> )\n+</synopsis>\n+ The following are valid property values specifying what to return:\n+\n+ <variablelist>\n+ <varlistentry>\n+ <term><literal>ACTION</literal></term>\n+ <listitem>\n+ <para>\n+ The merge action command executed for the current row\n+ (<literal>'INSERT'</literal>, <literal>'UPDATE'</literal>, or\n+ <literal>'DELETE'</literal>).\n+ </para>\n+ </listitem>\n+ </varlistentry>\ndo we change to <literal>property</literal>?\nMaybe the main para should be two sentences like:\nThe merge action command executed for the current row. Possible values\nare: <literal>'INSERT'</literal>, <literal>'UPDATE'</literal>,\n<literal>'DELETE'</literal>.\n\n static Node *\n+transformMergingFunc(ParseState *pstate, MergingFunc *f)\n+{\n+ /*\n+ * Check that we're in the RETURNING list of a MERGE command.\n+ */\n+ if (pstate->p_expr_kind != EXPR_KIND_MERGE_RETURNING)\n+ {\n+ ParseState *parent_pstate = pstate->parentParseState;\n+\n+ while (parent_pstate &&\n+ parent_pstate->p_expr_kind != EXPR_KIND_MERGE_RETURNING)\n+ parent_pstate = parent_pstate->parentParseState;\n+\n+ if (!parent_pstate ||\n+ parent_pstate->p_expr_kind != EXPR_KIND_MERGE_RETURNING)\n+ ereport(ERROR,\n+ errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"MERGING() can only be used in the RETURNING list of a MERGE command\"),\n+ parser_errposition(pstate, f->location));\n+ }\n+\n+ return (Node *) f;\n+}\n+\nthe object is correct, but not in the right place.\nmaybe we should change errcode(ERRCODE_WRONG_OBJECT_TYPE) to\nerrcode(ERRCODE_INVALID_OBJECT_DEFINITION)\nalso do we need to add some comments explain that why we return it as\nis when it's EXPR_KIND_MERGE_RETURNING.\n(my 
understanding is that, if key words not match, then it will fail\nat gram.y, like syntax error, else MERGING will based on keywords make\na MergingFunc node and assign mfop, mftype, location to it)\n\nin src/backend/executor/functions.c\n/*\n* Break from loop if we didn't shut down (implying we got a\n* lazily-evaluated row). Otherwise we'll press on till the whole\n* function is done, relying on the tuplestore to keep hold of the\n* data to eventually be returned. This is necessary since an\n* INSERT/UPDATE/DELETE RETURNING that sets the result might be\n* followed by additional rule-inserted commands, and we want to\n* finish doing all those commands before we return anything.\n*/\nDoes the above comments need to change to INSERT/UPDATE/DELETE/MERGE?\n\nin src/backend/nodes/nodeFuncs.c\ncase T_UpdateStmt:\n{\nUpdateStmt *stmt = (UpdateStmt *) node;\nif (WALK(stmt->relation))\nreturn true;\nif (WALK(stmt->targetList))\nreturn true;\nif (WALK(stmt->whereClause))\nreturn true;\nif (WALK(stmt->fromClause))\nreturn true;\nif (WALK(stmt->returningList))\nreturn true;\nif (WALK(stmt->withClause))\nreturn true;\n}\nbreak;\ncase T_MergeStmt:\n{\nMergeStmt *stmt = (MergeStmt *) node;\nif (WALK(stmt->relation))\nreturn true;\nif (WALK(stmt->sourceRelation))\nreturn true;\nif (WALK(stmt->joinCondition))\nreturn true;\nif (WALK(stmt->mergeWhenClauses))\nreturn true;\nif (WALK(stmt->withClause))\nreturn true;\n}\nbreak;\n\nyou add \"returningList\" to MergeStmt.\ndo you need to do the following similar to UpdateStmt, even though\nit's so abstract, i have no idea what's going on.\n`\nif (WALK(stmt->returningList))\nreturn true;\n`\n\n\n", "msg_date": "Wed, 17 Jan 2024 22:42:50 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Wed, 17 Jan 2024 at 14:43, jian he <jian.universality@gmail.com> wrote:\n>\n> +<synopsis>\n> +<function id=\"function-merging\">MERGING</function> (\n> <replaceable>property</replaceable> )\n> +</synopsis>\n> + The following are valid property values specifying what to return:\n> +\n> + <variablelist>\n> + <varlistentry>\n> + <term><literal>ACTION</literal></term>\n> + <listitem>\n> + <para>\n> + The merge action command executed for the current row\n> + (<literal>'INSERT'</literal>, <literal>'UPDATE'</literal>, or\n> + <literal>'DELETE'</literal>).\n> + </para>\n> + </listitem>\n> + </varlistentry>\n> do we change to <literal>property</literal>?\n> Maybe the main para should be two sentences like:\n> The merge action command executed for the current row. Possible values\n> are: <literal>'INSERT'</literal>, <literal>'UPDATE'</literal>,\n> <literal>'DELETE'</literal>.\n>\n\nOK, though actually it should be <parameter>property</parameter>.\nAlso, the parameter should be described as a key word, to match\nsimilar existing keyword function parameters (e.g., the normalize()\nfunction's second parameter).\n\n\n> + if (!parent_pstate ||\n> + parent_pstate->p_expr_kind != EXPR_KIND_MERGE_RETURNING)\n> + ereport(ERROR,\n> + errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"MERGING() can only be used in the RETURNING list of a MERGE command\"),\n> + parser_errposition(pstate, f->location));\n> +\n> the object is correct, but not in the right place.\n> maybe we should change errcode(ERRCODE_WRONG_OBJECT_TYPE) to\n> errcode(ERRCODE_INVALID_OBJECT_DEFINITION)\n> also do we need to add some comments explain that why we return it as\n> is when it's EXPR_KIND_MERGE_RETURNING.\n> (my understanding is that, if key words not match, then it will fail\n> at gram.y, like syntax error, else MERGING will based on keywords make\n> a MergingFunc node and assign mfop, mftype, location to it)\n>\n\nAh yes, that error code dates back to an earlier version of 
the patch,\nwhen this check was done in ParseFuncOrColumn(), when I think I just\ncopied the error code from nearby checks. I agree that\nERRCODE_WRONG_OBJECT_TYPE isn't really right, but I'm not convinced\nthat ERRCODE_INVALID_OBJECT_DEFINITION is right either, since the\nobject is valid, but it's not allowed in that part of the query. I\nthink ERRCODE_SYNTAX_ERROR is probably better (similar to when we find\n\"DEFAULT\" where we shouldn't, for example).\n\n\n> in src/backend/executor/functions.c\n> /*\n> * Break from loop if we didn't shut down (implying we got a\n> * lazily-evaluated row). Otherwise we'll press on till the whole\n> * function is done, relying on the tuplestore to keep hold of the\n> * data to eventually be returned. This is necessary since an\n> * INSERT/UPDATE/DELETE RETURNING that sets the result might be\n> * followed by additional rule-inserted commands, and we want to\n> * finish doing all those commands before we return anything.\n> */\n> Does the above comments need to change to INSERT/UPDATE/DELETE/MERGE?\n>\n\nNo, because MERGE doesn't support tables with rules, so this can't\napply to MERGE. I suppose the comment could be updated to say that,\nbut I don't think it's worth it, because I think it would distract the\nreader from the main point of the comment. I think that function is\ncomplex enough as it is, and since this patch isn't touching it, it\nshould probably be left alone.\n\n\n> in src/backend/nodes/nodeFuncs.c\n> you add \"returningList\" to MergeStmt.\n> do you need to do the following similar to UpdateStmt, even though\n> it's so abstract, i have no idea what's going on.\n> `\n> if (WALK(stmt->returningList))\n> return true;\n> `\n\nAh yes, good point. This can be triggered using a recursive CTE\ncontaining a MERGE ... RETURNING that returns an expression containing\na subquery with a recursive reference to the outer CTE, which should\nbe an error. 
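For illustration, the problematic shape is roughly this (a sketch only -- the table and CTE names here are made up, and the query is expected to fail, not to run):

```sql
-- Hypothetical sketch: a recursive reference to the outer CTE from a
-- subquery in the MERGE ... RETURNING list, which must be rejected.
WITH RECURSIVE cte(x) AS (
    MERGE INTO tgt t
    USING src s ON t.id = s.id
    WHEN MATCHED THEN UPDATE SET val = s.val
    RETURNING (SELECT count(*) FROM cte)  -- recursive reference: error
)
SELECT * FROM cte;
```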
I've added a regression test to ensure that this walker\npath gets coverage.\n\nThanks for reviewing. Updated patch attached.\n\nThe wider question is whether people are happy with the overall\napproach this patch now takes, and the new MERGING() function and\nMergingFunc node.\n\nRegards,\nDean", "msg_date": "Thu, 18 Jan 2024 17:44:23 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, Jan 19, 2024 at 1:44 AM Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n>\n> Thanks for reviewing. Updated patch attached.\n>\n> The wider question is whether people are happy with the overall\n> approach this patch now takes, and the new MERGING() function and\n> MergingFunc node.\n>\n\none minor white space issue:\n\ngit diff --check\ndoc/src/sgml/func.sgml:22482: trailing whitespace.\n+ action | clause_number | product_id | in_stock | quantity\n\n\n@@ -3838,7 +3904,7 @@ ExecModifyTable(PlanState *pstate)\n }\n slot = ExecGetUpdateNewTuple(resultRelInfo, context.planSlot,\n oldSlot);\n- context.relaction = NULL;\n+ node->mt_merge_action = NULL;\n\nI wonder what's the purpose of setting node->mt_merge_action to null ?\nI add `node->mt_merge_action = NULL;` at the end of each branch in\n`switch (operation)`.\nAll the tests still passed.\nOther than this question, this patch is very good.\n\n\n", "msg_date": "Mon, 29 Jan 2024 07:50:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Sun, 28 Jan 2024 at 23:50, jian he <jian.universality@gmail.com> wrote:\n>\n> one minor white space issue:\n>\n> git diff --check\n> doc/src/sgml/func.sgml:22482: trailing whitespace.\n> + action | clause_number | product_id | in_stock | quantity\n>\n\nAh, well spotted! 
I'm not in the habit of running git diff --check.\n\n> @@ -3838,7 +3904,7 @@ ExecModifyTable(PlanState *pstate)\n> }\n> slot = ExecGetUpdateNewTuple(resultRelInfo, context.planSlot,\n> oldSlot);\n> - context.relaction = NULL;\n> + node->mt_merge_action = NULL;\n>\n> I wonder what's the purpose of setting node->mt_merge_action to null ?\n> I add `node->mt_merge_action = NULL;` at the end of each branch in\n> `switch (operation)`.\n> All the tests still passed.\n\nGood question. It was necessary to set it to NULL there, because code\nunder ExecUpdate() reads it, and context.relaction would otherwise be\nuninitialised. Now though, mtstate->mt_merge_action is automatically\ninitialised to NULL when the ModifyTableState node is first built, and\nonly the MERGE code sets it to non-NULL, so it's no longer necessary\nto set it to NULL for other types of operation, because it will never\nbecome non-NULL unless mtstate->operation is CMD_MERGE. So we can\nsafely remove that line.\n\nHaving said that, it seems a bit ugly to be relying on mt_merge_action\nin so many places anyway. The places that test if it's non-NULL should\nmore logically be testing whether mtstate->operation is CMD_MERGE.\nDoing that, reduces the number of places in nodeModifyTable.c that\nread mt_merge_action down to one, and that one place only reads it\nafter testing that mtstate->operation is CMD_MERGE, which seems neater\nand safer.\n\nRegards,\nDean", "msg_date": "Mon, 29 Jan 2024 11:38:10 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "I didn't find any issue with v15.\nno commit message in the patch, If a commit message is there, I can\nhelp proofread.\n\n\n", "msg_date": "Tue, 30 Jan 2024 18:50:00 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "Attached is a rebased version on top of 5f2e179bd3 (support for MERGE\ninto views), with a few additional tests to confirm that MERGE ...\nRETURNING works for views as well as tables.\n\nI see that this patch was discussed at the PostgreSQL Developers\nMeeting. Did anything new come out of that discussion?\n\nRegards,\nDean", "msg_date": "Thu, 29 Feb 2024 19:24:21 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "\nCan we get some input on whether the current MERGE ... RETURNING patch\nis the right approach from a language standpoint?\n\nWe've gone through a lot of iterations -- thank you Dean, for\nimplementing so many variations.\n\nTo summarize, most of the problem has been in retrieving the action\n(INSERT/UPDATE/DELETE) taken or the WHEN-clause number applied to a\nparticular matched row. The reason this is important is because the row\nreturned is the old row for a DELETE action, and the new row for an\nINSERT or UPDATE action. Without a way to distinguish the particular\naction, the RETURNING clause returns a mixture of old and new rows,\nwhich would be hard to use sensibly.\n\nGranted, DELETE in a MERGE may be a less common case. But given that we\nalso have INSERT ... ON CONFLICT, MERGE commands are more likely to be\nthe complicated cases where distinguishing the action or clause number\nis important.\n\nBut linguistically it's not clear where the action or clause number\nshould come from. The clauses don't have assigned numbers, and even if\nthey did, linguistically it's not clear how to refer to the clause\nnumber in a language like SQL. Would it be a special identifier, a\nfunction, a special function, or be a column in a special table\nreference? 
Or, do we just have one RETURNING-clause per WHEN-clause,\nand let the user use a literal of their choice in the RETURNING clause?\n\nThe current implementation uses a special function MERGING (a\ngrammatical construct without an OID that parses into a new MergingFunc\nexpr), which takes keywords ACTION or CLAUSE_NUMBER in the argument\npositions. That's not totally unprecedented in SQL -- the XML and JSON\nfunctions are kind of similar. But it's different in the sense that\nMERGING is also context-sensitive: grammatically, it fits pretty much\nanywhere a function fits, but then gets rejected at parse analysis time\n(or perhaps even execution time?) if it's not called from the right\nplace.\n\nIs that a reasonable thing to do?\n\nAnother related topic came up, which is that the RETURNING clause (for\nUPDATE as well as MERGE) should probably accept some kind of alias like\nNEW/OLD or BEFORE/AFTER to address the version of the row that you\nwant. That doesn't eliminate the need for the MERGING function, but\nit's good to think about how that might fit in with whatever we do\nhere.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 29 Feb 2024 11:49:29 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, 2024-02-29 at 19:24 +0000, Dean Rasheed wrote:\n> Attached is a rebased version on top of 5f2e179bd3 (support for MERGE\n> into views), with a few additional tests to confirm that MERGE ...\n> RETURNING works for views as well as tables.\n\nThank you for the rebase. I just missed your message (race condition),\nso I replied to v15:\n\nhttps://www.postgresql.org/message-id/e03a87eb4e728c5e475b360b5845979f78d49020.camel%40j-davis.com\n\n> I see that this patch was discussed at the PostgreSQL Developers\n> Meeting. 
Did anything new come out of that discussion?\n\nI don't think we made any conclusions at the meeting, but I expressed\nthat we need input from one of them on this patch.\n\nRegards,\n\tJeff Davis\n\n\n\n", "msg_date": "Thu, 29 Feb 2024 11:56:38 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 29.02.24 20:49, Jeff Davis wrote:\n> To summarize, most of the problem has been in retrieving the action\n> (INSERT/UPDATE/DELETE) taken or the WHEN-clause number applied to a\n> particular matched row. The reason this is important is because the row\n> returned is the old row for a DELETE action, and the new row for an\n> INSERT or UPDATE action. Without a way to distinguish the particular\n> action, the RETURNING clause returns a mixture of old and new rows,\n> which would be hard to use sensibly.\n\nFor comparison with standard SQL (see <data change delta table>):\n\nFor an INSERT you could write\n\nSELECT whatever FROM NEW TABLE (INSERT statement here)\n\nor for an DELETE\n\nSELECT whatever FROM OLD TABLE (DELETE statement here)\n\nAnd for an UPDATE could can pick either OLD or NEW.\n\n(There is also FINAL, which appears to be valid in cases where NEW is \nvalid. Here is an explanation: \n<https://www.ibm.com/docs/en/db2oc?topic=statement-result-sets-from-sql-data-changes>)\n\nFor a MERGE statement, whether you can specify OLD or NEW (or FINAL) \ndepends on what actions appear in the MERGE statement.\n\nSo if we were to translate that to our syntax, it might be something like\n\n MERGE ... RETURNING OLD *\n\nor\n\n MERGE ... RETURNING NEW *\n\nThis wouldn't give you the ability to return both old and new. (Is that \nuseful?) But maybe you could also do something like\n\n MERGE ... 
RETURNING OLD 'old'::text, * RETURNING NEW 'new'::text, *\n\n(I mean here you could insert your own constants into the returning lists.)\n\n> The current implementation uses a special function MERGING (a\n> grammatical construct without an OID that parses into a new MergingFunc\n> expr), which takes keywords ACTION or CLAUSE_NUMBER in the argument\n> positions. That's not totally unprecedented in SQL -- the XML and JSON\n> functions are kind of similar. But it's different in the sense that\n> MERGING is also context-sensitive: grammatically, it fits pretty much\n> anywhere a function fits, but then gets rejected at parse analysis time\n> (or perhaps even execution time?) if it's not called from the right\n> place.\n\nAn analogy here might be that MATCH_RECOGNIZE (row-pattern recognition) \nhas a magic function MATCH_NUMBER() that can be used inside that clause. \n So a similar zero-argument magic function might make sense. I don't \nlike the MERGING(ACTION) syntax, but something like MERGE_ACTION() might \nmake sense. (This is just in terms of what kind of syntax might be \npalatable. Depending on where the syntax of the overall clause ends up, \nwe might not need it (see above).)\n\n\n\n", "msg_date": "Wed, 6 Mar 2024 09:51:08 +0100", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Wed, 6 Mar 2024 at 08:51, Peter Eisentraut <peter@eisentraut.org> wrote:\n>\n> For comparison with standard SQL (see <data change delta table>):\n>\n> For an INSERT you could write\n>\n> SELECT whatever FROM NEW TABLE (INSERT statement here)\n>\n> or for an DELETE\n>\n> SELECT whatever FROM OLD TABLE (DELETE statement here)\n>\n> And for an UPDATE could can pick either OLD or NEW.\n>\n\nThanks, that's very interesting. I hadn't seen that syntax before.\n\nOver on [1], I have a patch in the works that extends RETURNING,\nallowing it to return OLD.colname, NEW.colname, OLD.*, and NEW.*. 
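For instance, under that in-progress patch one might write something like this (a sketch only, with made-up names -- not committed syntax):

```sql
-- Hypothetical sketch of OLD/NEW qualifiers in RETURNING:
UPDATE stock
   SET quantity = quantity - 1
 WHERE item_id = 42
RETURNING OLD.quantity AS qty_before, NEW.quantity AS qty_after;
```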
It\nlooks like this new SQL standard syntax could be built on top of that\n(perhaps by having the rewriter turn queries of the above form into\nCTEs).\n\nHowever, the RETURNING syntax is more powerful, because it allows OLD\nand NEW to be used together in arbitrary expressions, for example:\n\n RETURNING ..., NEW.val - OLD.val AS delta, ...\n\n> > The current implementation uses a special function MERGING (a\n> > grammatical construct without an OID that parses into a new MergingFunc\n> > expr), which takes keywords ACTION or CLAUSE_NUMBER in the argument\n> > positions. That's not totally unprecedented in SQL -- the XML and JSON\n> > functions are kind of similar. But it's different in the sense that\n> > MERGING is also context-sensitive: grammatically, it fits pretty much\n> > anywhere a function fits, but then gets rejected at parse analysis time\n> > (or perhaps even execution time?) if it's not called from the right\n> > place.\n>\n> An analogy here might be that MATCH_RECOGNIZE (row-pattern recognition)\n> has a magic function MATCH_NUMBER() that can be used inside that clause.\n> So a similar zero-argument magic function might make sense. I don't\n> like the MERGING(ACTION) syntax, but something like MERGE_ACTION() might\n> make sense. (This is just in terms of what kind of syntax might be\n> palatable. Depending on where the syntax of the overall clause ends up,\n> we might not need it (see above).)\n>\n\nIt could be that having the ability to return OLD and NEW values, as\nin [1], is sufficient for use in MERGE, to identify the action\nperformed. However, I still think that dedicated functions would be\nuseful, if we can agree on names/syntax.\n\nI think that I prefer the names MERGE_ACTION() and\nMERGE_CLAUSE_NUMBER() from an aesthetic point of view, but it requires\n2 new COL_NAME_KEYWORD keywords. 
Maybe that's OK, I don't know.\n\nAlternatively, we could avoid adding new keywords by going back to\nmaking these regular functions, as they were in an earlier version of\nthis patch, and then use some special-case code during parse analysis\nto turn them into MergeFunc nodes (not quite a complete revert back to\nan earlier version of the patch, but not far off).\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com\n\n\n", "msg_date": "Wed, 6 Mar 2024 16:20:04 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, Feb 29, 2024 at 1:49 PM Jeff Davis <pgsql@j-davis.com> wrote:\n\n>\n> Can we get some input on whether the current MERGE ... RETURNING patch\n> is the right approach from a language standpoint?\n>\n\nMERGE_CLAUSE_NUMBER() seems really out of place to me, it feels out of\nplace to identify output set by number rather than some kind of name. Did\nnot see a lot of support for that position though.\n\nmerlin", "msg_date": "Wed, 6 Mar 2024 14:03:37 -0600", "msg_from": "Merlin Moncure <mmoncure@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "Jeff Davis:\n> To summarize, most of the problem has been in retrieving the action\n> (INSERT/UPDATE/DELETE) taken or the WHEN-clause number applied to a\n> particular matched row. 
The reason this is important is because the row\n> returned is the old row for a DELETE action, and the new row for an\n> INSERT or UPDATE action. Without a way to distinguish the particular\n> action, the RETURNING clause returns a mixture of old and new rows,\n> which would be hard to use sensibly.\n\nIt seems to me that all of this is only a problem, because there is only\none RETURNING clause.\n\nDean Rasheed wrote in the very first post to this thread:\n> I considered allowing a separate RETURNING list at the end of each\n> action, but rapidly dismissed that idea. Firstly, it introduces\n> shift/reduce conflicts to the grammar. These can be resolved by making\n> the \"AS\" before column aliases non-optional, but that's pretty ugly,\n> and there may be a better way. More serious drawbacks are that this\n> syntax is much more cumbersome for the end user, having to repeat the\n> RETURNING clause several times, and the implementation is likely to be\n> pretty complex, so I didn't pursue it.\n\nI can't judge the grammar and complexity issues, but as a potential user\nit seems to me to be less complex to have multiple RETURNING clauses, \nwhere I could inject my own constants about the specific actions, than \nto have to deal with any of the suggested functions / clauses. More \nrepetitive, yes - but not more complex.\n\nMore importantly, I could add RETURNING to only some of the actions and \nnot always all at the same time - which seems pretty useful to me.\n\nBest,\n\nWolfgang\n\n\n", "msg_date": "Fri, 8 Mar 2024 09:41:53 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Fri, 8 Mar 2024 at 08:41, <walther@technowledgy.de> wrote:\n>\n> I can't judge the grammar and complexity issues, but as a potential user\n> it seems to me to be less complex to have multiple RETURNING clauses,\n> where I could inject my own constants about the specific actions, than\n> to have to deal with any of the suggested functions / clauses. More\n> repetitive, yes - but not more complex.\n>\n> More importantly, I could add RETURNING to only some of the actions and\n> not always all at the same time - which seems pretty useful to me.\n>\n\nI think that would be a bad idea, since it would mean the number of\nrows returned would no longer match the number of rows modified, which\nis a general property of all data-modifying commands that support\nRETURNING. It would also increase the chances of bugs for users who\nmight accidentally miss a WHEN clause.\n\nLooking back over the thread the majority opinion seems to be:\n\n1). Have a single RETURNING list, rather than one per action\n2). Drop the \"clause number\" function\n3). Call the other function MERGE_ACTION()\n\nAnd from an implementation point-of-view, it seems better to stick\nwith having a new node type to handle MERGE_ACTION(), and make\nMERGE_ACTION a COL_NAME_KEYWORD.\n\nThis seems like a reasonable compromise, and it still allows the\nspecific WHEN clause that was executed to be identified by using a\ncombination of MERGE_ACTION() and the attributes from the source and\ntarget relations. More functions can always be added later, if there\nis demand.\n\nAttached is a rebased patch, with those changes.\n\nRegards,\nDean", "msg_date": "Sun, 10 Mar 2024 15:22:49 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "Hi, some minor issues:\n\n<synopsis>\n[ WITH <replaceable class=\"parameter\">with_query</replaceable> [, ...] 
]\nMERGE INTO [ ONLY ] <replaceable\nclass=\"parameter\">target_table_name</replaceable> [ * ] [ [ AS ]\n<replaceable class=\"parameter\">target_alias</replaceable> ]\nUSING <replaceable class=\"parameter\">data_source</replaceable> ON\n<replaceable class=\"parameter\">join_condition</replaceable>\n<replaceable class=\"parameter\">when_clause</replaceable> [...]\n[ RETURNING * | <replaceable\nclass=\"parameter\">output_expression</replaceable> [ [ AS ]\n<replaceable class=\"parameter\">output_name</replaceable> ] [, ...] ]\n\nhere the \"WITH\" part should have \"[ RECURSIVE ]\" like:\n[ WITH [ RECURSIVE ] <replaceable\nclass=\"parameter\">with_query</replaceable> [, ...] ]\n\n+ An expression to be computed and returned by the <command>MERGE</command>\n+ command after each row is merged. The expression can use any columns of\n+ the source or target tables, or the <xref linkend=\"merge_action\"/>\n+ function to return additional information about the action executed.\n+ </para>\nshould be:\n+ An expression to be computed and returned by the <command>MERGE</command>\n+ command after each row is changed.\n\n\none minor issue:\nadd\n`\ntable sq_target;\ntable sq_source;\n`\nbefore `-- RETURNING` in src/test/regress/sql/merge.sql, so we can\neasily understand the tests.\n\n\n", "msg_date": "Wed, 13 Mar 2024 14:44:12 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Wed, 13 Mar 2024 at 06:44, jian he <jian.universality@gmail.com> wrote:\n>\n> <synopsis>\n> [ WITH <replaceable class=\"parameter\">with_query</replaceable> [, ...] ]\n> MERGE INTO [ ONLY ] <replaceable\n>\n> here the \"WITH\" part should have \"[ RECURSIVE ]\"\n\nActually, no. MERGE doesn't support WITH RECURSIVE.\n\nIt's not entirely clear to me why though. I did a quick test, removing\nthat restriction in the parse analysis code, and it seemed to work\nfine. 
Alvaro, do you remember why that restriction is there?\n\nIt's probably worth noting it in the docs, since it's different from\nINSERT, UPDATE and DELETE. I think this would suffice:\n\n <varlistentry>\n <term><replaceable class=\"parameter\">with_query</replaceable></term>\n <listitem>\n <para>\n The <literal>WITH</literal> clause allows you to specify one or more\n subqueries that can be referenced by name in the <command>MERGE</command>\n query. See <xref linkend=\"queries-with\"/> and <xref linkend=\"sql-select\"/>\n for details. Note that <literal>WITH RECURSIVE</literal> is not supported\n by <command>MERGE</command>.\n </para>\n </listitem>\n </varlistentry>\n\nAnd then maybe we can remove that restriction in HEAD, if there really\nisn't any need for it anymore.\n\nI also noticed that the \"UPDATE SET ...\" syntax in the synopsis is\nmissing a couple of options that are supported -- the optional \"ROW\"\nkeyword in the multi-column assignment syntax, and the syntax to\nassign from a subquery that returns multiple columns. So this should\nbe updated to match update.sgml:\n\nUPDATE SET { <replaceable class=\"parameter\">column_name</replaceable>\n= { <replaceable class=\"parameter\">expression</replaceable> | DEFAULT\n} |\n ( <replaceable\nclass=\"parameter\">column_name</replaceable> [, ...] ) = [ ROW ] ( {\n<replaceable class=\"parameter\">expression</replaceable> | DEFAULT } [,\n...] ) |\n ( <replaceable\nclass=\"parameter\">column_name</replaceable> [, ...] ) = ( <replaceable\nclass=\"parameter\">sub-SELECT</replaceable> )\n } [, ...]\n\nand then in the parameter section:\n\n <varlistentry>\n <term><replaceable class=\"parameter\">sub-SELECT</replaceable></term>\n <listitem>\n <para>\n A <literal>SELECT</literal> sub-query that produces as many output columns\n as are listed in the parenthesized column list preceding it. The\n sub-query must yield no more than one row when executed. 
If it\n yields one row, its column values are assigned to the target columns;\n if it yields no rows, NULL values are assigned to the target columns.\n The sub-query can refer to values from the original row in the\ntarget table,\n and values from the <replaceable>data_source</replaceable>.\n </para>\n </listitem>\n </varlistentry>\n\n(basically copied verbatim from update.sgml)\n\nI think I'll go make those doc changes, and back-patch them\nseparately, since they're not related to this patch.\n\nRegards,\nDean\n\n\n", "msg_date": "Wed, 13 Mar 2024 08:58:13 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Wed, 13 Mar 2024 at 08:58, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> I think I'll go make those doc changes, and back-patch them\n> separately, since they're not related to this patch.\n>\n\nOK, I've done that. Here is a rebased patch on top of that, with the\nother changes you suggested.\n\nRegards,\nDean", "msg_date": "Wed, 13 Mar 2024 14:12:05 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "Hi\nmainly document issues. 
Other than that, it looks good!\n\nMERGE not supported in COPY\nMERGE not supported in WITH query\nThese entries in src/backend/po.* need to be deleted if this patch is\ncommitted?\n------------------------------------------------------\n <indexterm zone=\"dml-returning\">\n <primary>RETURNING</primary>\n </indexterm>\n\n <indexterm zone=\"dml-returning\">\n <primary>INSERT</primary>\n <secondary>RETURNING</secondary>\n </indexterm>\n\n <indexterm zone=\"dml-returning\">\n <primary>UPDATE</primary>\n <secondary>RETURNING</secondary>\n </indexterm>\n\n <indexterm zone=\"dml-returning\">\n <primary>DELETE</primary>\n <secondary>RETURNING</secondary>\n </indexterm>\n\n <indexterm zone=\"dml-returning\">\n <primary>MERGE</primary>\n <secondary>RETURNING</secondary>\n </indexterm>\n\nin doc/src/sgml/dml.sgml, what is the point of these?\nIt is not rendered in the html file, deleting it still generates all the\nhtml file.\n------------------------------------------------------\nThe following part is about doc/src/sgml/plpgsql.sgml.\n\n <para>\n The <replaceable>query</replaceable> used in this type of\n<literal>FOR</literal>\n statement can be any SQL command that returns rows to the caller:\n <command>SELECT</command> is the most common case,\n but you can also use <command>INSERT</command>,\n<command>UPDATE</command>, or\n <command>DELETE</command> with a <literal>RETURNING</literal> clause.\nSome utility\n commands such as <command>EXPLAIN</command> will work too.\n </para>\nhere we need to add <command>MERGE</command>?\n\n\n <para>\n Row-level triggers fired <literal>BEFORE</literal> can return null to\nsignal the\n trigger manager to skip the rest of the operation for this row\n (i.e., subsequent triggers are not fired, and the\n\n<command>INSERT</command>/<command>UPDATE</command>/<command>DELETE</command>\ndoes not occur\n for this row). 
If a nonnull\nhere we need to add <command>MERGE</command>?\n\n\n   <para>\n    Variable substitution currently works only in <command>SELECT</command>,\n    <command>INSERT</command>, <command>UPDATE</command>,\n    <command>DELETE</command>, and commands containing one of\n    these (such as <command>EXPLAIN</command> and <command>CREATE TABLE\n    ... AS SELECT</command>),\n    because the main SQL engine allows query parameters only in these\n    commands. To use a non-constant name or value in other statement\n    types (generically called utility statements), you must construct\n    the utility statement as a string and <command>EXECUTE</command> it.\n   </para>\nhere we need to add <command>MERGE</command>?\ndemo:\nCREATE OR REPlACE FUNCTION stamp_user2(id int, comment text) RETURNS void\nAS $$\n    <<fn>>\n    DECLARE\n        curtime timestamp := now();\n    BEGIN\n        MERGE INTO users\n        USING (SELECT 1)\n        ON true\n        WHEN MATCHED and (users.id = stamp_user2.id) THEN\n          update SET last_modified = fn.curtime, comment =\nstamp_user2.comment;\n        raise notice 'test';\n    END;\n$$ LANGUAGE plpgsql;\n\n\n    <literal>INSTEAD OF</literal> triggers (which are always row-level\ntriggers,\n    and may only be used on views) can return null to signal that they did\n    not perform any updates, and that the rest of the operation for this\n    row should be skipped (i.e., subsequent triggers are not fired, and the\n    row is not counted in the rows-affected status for the surrounding\n\n<command>INSERT</command>/<command>UPDATE</command>/<command>DELETE</command>).\nI am not sure we need to add <command>MERGE</command >. Maybe not.", "msg_date": "Thu, 14 Mar 2024 13:30:36 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On 2024-Mar-13, Dean Rasheed wrote:\n\n> On Wed, 13 Mar 2024 at 06:44, jian he <jian.universality@gmail.com> wrote:\n> >\n> > <synopsis>\n> > [ WITH <replaceable class=\"parameter\">with_query</replaceable> [, ...] 
]\n> > MERGE INTO [ ONLY ] <replaceable\n> >\n> > here the \"WITH\" part should have \"[ RECURSIVE ]\"\n> \n> Actually, no. MERGE doesn't support WITH RECURSIVE.\n> \n> It's not entirely clear to me why though. I did a quick test, removing\n> that restriction in the parse analysis code, and it seemed to work\n> fine. Alvaro, do you remember why that restriction is there?\n\nThere's no real reason for it, other than I didn't want to have to think\nit through; I did suspect that it might Just Work, but I felt I would\nhave had to come up with more nontrivial test cases than I wanted to\nwrite at the time.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"People get annoyed when you try to debug them.\" (Larry Wall)\n\n\n", "msg_date": "Thu, 14 Mar 2024 15:04:12 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Thu, 14 Mar 2024 at 05:30, jian he <jian.universality@gmail.com> wrote:\n>\n> Hi\n> mainly document issues. 
Other than that, it looks good!\n\nThanks for the review.\n\n\n> MERGE not supported in COPY\n> MERGE not supported in WITH query\n> These entries in src/backend/po.* need to be deleted if this patch is committed?\n\nNo, translation updates are handled separately.\n\n\n> <indexterm zone=\"dml-returning\">\n> <primary>MERGE</primary>\n> <secondary>RETURNING</secondary>\n> </indexterm>\n>\n> in doc/src/sgml/dml.sgml, what is the point of these?\n> It is not rendered in the html file, deleting it still generates all the html file.\n\nThese generate entries in the index -- see\nhttps://www.postgresql.org/docs/current/bookindex.html\n\n\n> The following part is about doc/src/sgml/plpgsql.sgml.\n>\n> <para>\n> The <replaceable>query</replaceable> used in this type of <literal>FOR</literal>\n> statement can be any SQL command that returns rows to the caller:\n> <command>SELECT</command> is the most common case,\n> but you can also use <command>INSERT</command>, <command>UPDATE</command>, or\n> <command>DELETE</command> with a <literal>RETURNING</literal> clause. Some utility\n> commands such as <command>EXPLAIN</command> will work too.\n> </para>\n> here we need to add <command>MERGE</command>?\n\nAh yes. I'm not sure how I missed that one.\n\n\n> <para>\n> Row-level triggers fired <literal>BEFORE</literal> can return null to signal the\n> trigger manager to skip the rest of the operation for this row\n> (i.e., subsequent triggers are not fired, and the\n> <command>INSERT</command>/<command>UPDATE</command>/<command>DELETE</command> does not occur\n> for this row). If a nonnull\n> here we need to add <command>MERGE</command>?\n\nNo, because there are no MERGE triggers. I suppose it could be updated\nto mention that this also applies to INSERT, UPDATE, and DELETE\nactions in a MERGE, but I'm not sure it's really necessary. 
In any\ncase, that's not something changed in this patch, so if we want to do\nthis, it should be a separate doc patch.\n\n\n> <para>\n> Variable substitution currently works only in <command>SELECT</command>,\n> <command>INSERT</command>, <command>UPDATE</command>,\n> <command>DELETE</command>, and commands containing one of\n> these (such as <command>EXPLAIN</command> and <command>CREATE TABLE\n> ... AS SELECT</command>),\n> because the main SQL engine allows query parameters only in these\n> commands. To use a non-constant name or value in other statement\n> types (generically called utility statements), you must construct\n> the utility statement as a string and <command>EXECUTE</command> it.\n> </para>\n> here we need to add <command>MERGE</command>?\n\nYes, I suppose so (though arguably it falls into the category of\n\"commands containing\" one of INSERT, UPDATE or DELETE). As above, this\nisn't something changed by this patch, so it should be done\nseparately.\n\n\n> <literal>INSTEAD OF</literal> triggers (which are always row-level triggers,\n> and may only be used on views) can return null to signal that they did\n> not perform any updates, and that the rest of the operation for this\n> row should be skipped (i.e., subsequent triggers are not fired, and the\n> row is not counted in the rows-affected status for the surrounding\n> <command>INSERT</command>/<command>UPDATE</command>/<command>DELETE</command>).\n> I am not sure we need to add <command>MERGE</command >. Maybe not.\n\nDitto.\n\nUpdated patch attached.\n\nRegards,\nDean", "msg_date": "Fri, 15 Mar 2024 11:06:42 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, 15 Mar 2024 at 11:06, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Updated patch attached.\n>\n\nI have gone over this patch again in detail, and I believe that the\ncode is in good shape. 
All review comments have been addressed, and\nthe only thing remaining is the syntax question.\n\nTo recap, this adds support for a single RETURNING list at the end of\na MERGE command, and a special MERGE_ACTION() function that may be\nused in the RETURNING list to return the action command string\n('INSERT', 'UPDATE', or 'DELETE') that was executed.\n\nLooking for similar precedents in other databases, SQL Server uses a\nslightly different (non-standard) syntax for MERGE, and uses \"OUTPUT\"\ninstead of \"RETURNING\" to return rows. But it does allow \"$action\" in\nthe output list, which is functionally equivalent to MERGE_ACTION():\n\nhttps://learn.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver16#output_clause\n\nIn the future, we may choose to support the SQL standard syntax for\nreturning rows modified by INSERT, UPDATE, DELETE, and MERGE commands,\nbut I don't think that this patch needs to do that.\n\nWhat this patch does is to make MERGE more consistent with INSERT,\nUPDATE, and DELETE, by allowing RETURNING. And if the patch to add\nsupport for returning OLD/NEW values [1] makes it in too, it will be\nmore powerful than the SQL standard syntax, since it will allow both\nold and new values to be returned at the same time, in arbitrary\nexpressions.\n\nSo barring any further objections, I'd like to go ahead and get this\npatch committed.\n\nRegards,\nDean\n\n[1] https://www.postgresql.org/message-id/flat/CAEZATCWx0J0-v=Qjc6gXzR=KtsdvAE7Ow=D=mu50AgOe+pvisQ@mail.gmail.com\n\n\n", "msg_date": "Fri, 15 Mar 2024 11:20:10 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... 
RETURNING" }, { "msg_contents": "On Fri, 2024-03-15 at 11:20 +0000, Dean Rasheed wrote:\n> To recap, this adds support for a single RETURNING list at the end of\n> a MERGE command, and a special MERGE_ACTION() function that may be\n> used in the RETURNING list to return the action command string\n> ('INSERT', 'UPDATE', or 'DELETE') that was executed.\n\n...\n\n> So barring any further objections, I'd like to go ahead and get this\n> patch committed.\n\nAll of my concerns have been extensively discussed and it seems like\nthey are just the cost of having a good feature. Thank you for going\nthrough so many alternative approaches, I think the one you've arrived\nat is consistent with what Vik endorsed[1].\n\nThe MERGE_ACTION keyword is added to the 'col_name_keyword' and the\n'bare_label_keyword' lists. That has some annoying effects, like:\n\n CREATE FUNCTION merge_action() RETURNS TEXT\n LANGUAGE SQL AS $$ SELECT 'asdf'; $$;\n ERROR: syntax error at or near \"(\"\n LINE 1: CREATE FUNCTION merge_action() RETURNS TEXT\n\nI didn't see any affirmative endorsement of exactly how the keyword is\nimplemented, but that patch has been around for a while, and I didn't\nsee any objection, either.\n\nI like this feature from a user perspective. So +1 from me.\n\nRegards,\n\tJeff Davis\n\n[1]\nhttps://www.postgresql.org/message-id/7db39b45-821f-4894-ada9-c19570b11b63@postgresfriends.org\n\n\n", "msg_date": "Fri, 15 Mar 2024 10:14:15 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: MERGE ... RETURNING" }, { "msg_contents": "On Fri, 15 Mar 2024 at 17:14, Jeff Davis <pgsql@j-davis.com> wrote:\n>\n> On Fri, 2024-03-15 at 11:20 +0000, Dean Rasheed wrote:\n>\n> > So barring any further objections, I'd like to go ahead and get this\n> > patch committed.\n>\n> I like this feature from a user perspective. So +1 from me.\n>\n\nI have committed this. 
Thanks for all the feedback everyone.\n\nRegards,\nDean\n\n\n", "msg_date": "Mon, 18 Mar 2024 08:01:03 +0000", "msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>", "msg_from_op": true, "msg_subject": "Re: MERGE ... RETURNING" } ]
[ { "msg_contents": "In-Reply-To: <20220327205020.GM28503@telsasoft.com>\n\nOn Sun, Mar 27, 2022 at 03:50:20PM -0500, Justin Pryzby wrote:\n> Here's a patch for zstd --long mode.\n\nRebased. I'll add this to the CF.", "msg_date": "Sun, 8 Jan 2023 14:27:37 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] basebackup: support zstd long distance matching" } ]
[ { "msg_contents": "Hi,\n\nA recent commit [1] added --save-fullpage option to pg_waldump to\nextract full page images (FPI) from WAL records and save them into\nfiles (one file per FPI) under a specified directory. While it added\ntests to check the LSN from the FPI file name and the FPI file\ncontents, it missed to further check the FPI contents like the tuples\non the page. I'm attaching a patch that basically reads the FPI file\n(saved by pg_waldump) contents and raw page from the table file (using\npageinspect extension) and compares the tuples from both of them. This\ntest ensures that the pg_waldump outputs the correct FPI. This idea is\nalso discussed elsewhere [2].\n\nThoughts?\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=d497093cbecccf6df26365e06a5f8f8614b591c8\n[2] https://www.postgresql.org/message-id/CALj2ACXesN9DTjgsekM8fig7CxhhxQfQP4fCiSJgcmp9wrZOvA@mail.gmail.com\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 9 Jan 2023 08:30:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "On Mon, Jan 09, 2023 at 08:30:00AM +0530, Bharath Rupireddy wrote:\n> A recent commit [1] added --save-fullpage option to pg_waldump to\n> extract full page images (FPI) from WAL records and save them into\n> files (one file per FPI) under a specified directory. While it added\n> tests to check the LSN from the FPI file name and the FPI file\n> contents, it missed to further check the FPI contents like the tuples\n> on the page. I'm attaching a patch that basically reads the FPI file\n> (saved by pg_waldump) contents and raw page from the table file (using\n> pageinspect extension) and compares the tuples from both of them. This\n> test ensures that the pg_waldump outputs the correct FPI. 
This idea is\n> also discussed elsewhere [2].\n> \n> Thoughts?\n\nI am not sure that it is necessary to expand this set of tests to have\ndependencies on heap and pageinspect (if we do so, what of index AMs)\nand spend more cycles on that, while we already have something in\nplace to cross-check ReadRecPtr with what's stored in the page header\nwritten on top of the block size.\n--\nMichael", "msg_date": "Tue, 10 Jan 2023 10:22:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "On Tue, Jan 10, 2023 at 6:52 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jan 09, 2023 at 08:30:00AM +0530, Bharath Rupireddy wrote:\n> > A recent commit [1] added --save-fullpage option to pg_waldump to\n> > extract full page images (FPI) from WAL records and save them into\n> > files (one file per FPI) under a specified directory. While it added\n> > tests to check the LSN from the FPI file name and the FPI file\n> > contents, it missed to further check the FPI contents like the tuples\n> > on the page. I'm attaching a patch that basically reads the FPI file\n> > (saved by pg_waldump) contents and raw page from the table file (using\n> > pageinspect extension) and compares the tuples from both of them. This\n> > test ensures that the pg_waldump outputs the correct FPI. This idea is\n> > also discussed elsewhere [2].\n> >\n> > Thoughts?\n>\n> I am not sure that it is necessary to expand this set of tests to have\n> dependencies on heap and pageinspect (if we do so, what of index AMs)\n> and spend more cycles on that, while we already have something in\n> place to cross-check ReadRecPtr with what's stored in the page header\n> written on top of the block size.\n\nWhile checking for a page LSN is enough here, there's no harm in\nverifying the whole FPI fetched from WAL record with that of the raw\npage data. 
Also, this test illustrates how one can make use of the\nfetched FPI - like reading the contents using pg_read_binary_file()\n(of course on can also use COPY command to load the FPI data to\npostgres) and using pageinspect functions to make sense of the raw\ndata.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 09:40:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "Hi,\n\nOn 1/10/23 2:22 AM, Michael Paquier wrote:\n> On Mon, Jan 09, 2023 at 08:30:00AM +0530, Bharath Rupireddy wrote:\n>> A recent commit [1] added --save-fullpage option to pg_waldump to\n>> extract full page images (FPI) from WAL records and save them into\n>> files (one file per FPI) under a specified directory. While it added\n>> tests to check the LSN from the FPI file name and the FPI file\n>> contents, it missed to further check the FPI contents like the tuples\n>> on the page. I'm attaching a patch that basically reads the FPI file\n>> (saved by pg_waldump) contents and raw page from the table file (using\n>> pageinspect extension) and compares the tuples from both of them. This\n>> test ensures that the pg_waldump outputs the correct FPI. 
This idea is\n>> also discussed elsewhere [2].\n>>\n>> Thoughts?\n> \n> I am not sure that it is necessary to expand this set of tests to have\n> dependencies on heap and pageinspect (if we do so, what of index AMs)\n> and spend more cycles on that, while we already have something in\n> place to cross-check ReadRecPtr with what's stored in the page header\n> written on top of the block size.\n\nI like the idea of comparing the full page (and not just the LSN) but\nI'm not sure that adding the pageinspect dependency is a good thing.\n\nWhat about extracting the block directly from the relation file and\ncomparing it with the one extracted from the WAL? (We'd need to skip the\nfirst 8 bytes to skip the LSN though).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 17:25:44 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "On Tue, Jan 10, 2023 at 05:25:44PM +0100, Drouvot, Bertrand wrote:\n> I like the idea of comparing the full page (and not just the LSN) but\n> I'm not sure that adding the pageinspect dependency is a good thing.\n> \n> What about extracting the block directly from the relation file and\n> comparing it with the one extracted from the WAL? (We'd need to skip the\n> first 8 bytes to skip the LSN though).\n\nByte-by-byte counting for the page hole? The page checksum would\nmatter as well, FWIW, as it is not set in WAL and a FPW logged in WAL\nmeans that the page got modified. 
It means that it could have been\nflushed, updating its pd_lsn and its pd_checksum on the way.\n--\nMichael", "msg_date": "Wed, 11 Jan 2023 10:02:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "On Wed, Jan 11, 2023 at 6:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 10, 2023 at 05:25:44PM +0100, Drouvot, Bertrand wrote:\n> > I like the idea of comparing the full page (and not just the LSN) but\n> > I'm not sure that adding the pageinspect dependency is a good thing.\n> >\n> > What about extracting the block directly from the relation file and\n> > comparing it with the one extracted from the WAL? (We'd need to skip the\n> > first 8 bytes to skip the LSN though).\n>\n> Byte-by-byte counting for the page hole? The page checksum would\n> matter as well, FWIW, as it is not set in WAL and a FPW logged in WAL\n> means that the page got modified. It means that it could have been\n> flushed, updating its pd_lsn and its pd_checksum on the way.\n\nRight. LSN of FPI from the WAL record and page from the table won't be\nthe same, essentially FPI LSN <= table page. Since the LSNs are\ndifferent, checksums too. This is the reason we have masking functions\ncommon/bufmask.c and rm_mask functions defined for some of the\nresource managers while verifying FPI consistency in\nverifyBackupPageConsistency(). 
Note that pageinspect can give only\nunmasked/raw page data, which means, byte-by-byte comparison isn't\npossible with pageinspect too, hence I was comparing only the rows\nwith tuple_data_split().\n\nTherefore, reading bytes from the table page and comparing\nbyte-by-byte with FPI requires us to invent new masking functions in\nthe tests - simply a no-go IMO.\n\nAs the concern here is to not establish pageinspect dependency with\npg_waldump, I'm fine to withdraw this patch and be happy with what we\nhave today.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 09:47:50 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "Hi,\n\nOn 1/11/23 5:17 AM, Bharath Rupireddy wrote:\n> On Wed, Jan 11, 2023 at 6:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Tue, Jan 10, 2023 at 05:25:44PM +0100, Drouvot, Bertrand wrote:\n>>> I like the idea of comparing the full page (and not just the LSN) but\n>>> I'm not sure that adding the pageinspect dependency is a good thing.\n>>>\n>>> What about extracting the block directly from the relation file and\n>>> comparing it with the one extracted from the WAL? (We'd need to skip the\n>>> first 8 bytes to skip the LSN though).\n>>\n>> Byte-by-byte counting for the page hole? \n\nI've in mind to use diff on the whole page (minus the LSN).\n\n>> The page checksum would\n>> matter as well,\n\nRight, but the TAP test is done without checksum and we could also\nskip the checksum from the page if we really want to.\n\n> Right. LSN of FPI from the WAL record and page from the table won't be\n> the same, essentially FPI LSN <= table page. 
\n\nRight, that's why I proposed to exclude it for the comparison.\n\nWhat about something like the attached?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 11 Jan 2023 10:56:54 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "On Wed, Jan 11, 2023 at 3:28 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> Hi,\n>\n> On 1/11/23 5:17 AM, Bharath Rupireddy wrote:\n> > On Wed, Jan 11, 2023 at 6:32 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >>\n> >> On Tue, Jan 10, 2023 at 05:25:44PM +0100, Drouvot, Bertrand wrote:\n> >>> I like the idea of comparing the full page (and not just the LSN) but\n> >>> I'm not sure that adding the pageinspect dependency is a good thing.\n> >>>\n> >>> What about extracting the block directly from the relation file and\n> >>> comparing it with the one extracted from the WAL? (We'd need to skip the\n> >>> first 8 bytes to skip the LSN though).\n> >>\n> >> Byte-by-byte counting for the page hole?\n>\n> I've in mind to use diff on the whole page (minus the LSN).\n>\n> >> The page checksum would\n> >> matter as well,\n>\n> Right, but the TAP test is done without checksum and we could also\n> skip the checksum from the page if we really want to.\n>\n> > Right. LSN of FPI from the WAL record and page from the table won't be\n> > the same, essentially FPI LSN <= table page.\n>\n> Right, that's why I proposed to exclude it for the comparison.\n>\n> What about something like the attached?\n\nNote that the raw page on the table might differ not just in page LSN\nbut also in other fields, for instance see heap_mask for instance. It\nmasks lsn, checksum, hint bits, unused space etc. 
before verifying FPI\nconsistency during recovery in\nverifyBackupPageConsistency().\n\nI think the job of verifying FPI from WAL record with the page LSN is\nbetter left to the core - via verifyBackupPageConsistency(). Honestly,\npg_waldump is good with what it has currently - LSN checks.\n\n+# Extract the binary data without the LSN from the relation's block\n+sysseek($frel, 8, 0); #bypass the LSN\n+sysread($frel, $blk, 8184) or die \"sysread failed: $!\";\n+syswrite($blkfrel, $blk) or die \"syswrite failed: $!\";\n\nI suspect that these tests are portable with the hardcoded values such as above.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 19:17:47 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "On Wed, Jan 11, 2023 at 07:17:47PM +0530, Bharath Rupireddy wrote:\n> Note that the raw page on the table might differ not just in page LSN\n> but also in other fields, for instance see heap_mask for instance. It\n> masks lsn, checksum, hint bits, unused space etc. before verifying FPI\n> consistency during recovery in\n> verifyBackupPageConsistency().\n\nFWIW, I don't really want to enter in this business here. That feels\nlike a good addition of technical debt compared to the potential\ngain.\n--\nMichael", "msg_date": "Thu, 12 Jan 2023 13:44:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "On Thu, Jan 12, 2023 at 10:14 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 11, 2023 at 07:17:47PM +0530, Bharath Rupireddy wrote:\n> > Note that the raw page on the table might differ not just in page LSN\n> > but also in other fields, for instance see heap_mask for instance. 
It\n> > masks lsn, checksum, hint bits, unused space etc. before verifying FPI\n> > consistency during recovery in\n> > verifyBackupPageConsistency().\n>\n> FWIW, I don't really want to enter in this business here. That feels\n> like a good addition of technical debt compared to the potential\n> gain.\n\nI couldn't agree more.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:16:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" }, { "msg_contents": "Hi,\n\nOn 1/12/23 5:44 AM, Michael Paquier wrote:\n> On Wed, Jan 11, 2023 at 07:17:47PM +0530, Bharath Rupireddy wrote:\n>> Note that the raw page on the table might differ not just in page LSN\n>> but also in other fields, for instance see heap_mask for instance. It\n>> masks lsn, checksum, hint bits, unused space etc. before verifying FPI\n>> consistency during recovery in\n>> verifyBackupPageConsistency().\n> \n> FWIW, I don't really want to enter in this business here. That feels\n> like a good addition of technical debt compared to the potential\n> gain.\n\nAgree, let's forget about it.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Jan 2023 08:09:03 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Strengthen pg_waldump's --save-fullpage tests" } ]
[ { "msg_contents": "\nHi,\n\nCommit 216a784829 changed the src/backend/replication/logical/worker.c file mode\nfrom 0644 to 0755, which is unwanted, right?\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=216a784829c2c5f03ab0c43e009126cbb819e9b2\n\n-- \nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Mon, 09 Jan 2023 15:51:10 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": true, "msg_subject": "Unwanted file mode modification?" }, { "msg_contents": "On Mon, Jan 9, 2023 at 1:21 PM Japin Li <japinli@hotmail.com> wrote:\n>\n> Commit 216a784829 changed the src/backend/replication/logical/worker.c file mode\n> from 0644 to 0755, which is unwanted, right?\n>\n\nRight, it is by mistake. I'll fix it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 9 Jan 2023 13:51:00 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unwanted file mode modification?" } ]
[ { "msg_contents": "Hi hackers,\n\nI noticed that there is a problem about system view pg_publication_tables when\nlooking into [1]. The column \"attnames\" contains generated columns when no\ncolumn list is specified, but generated columns shouldn't be included because\nthey are not replicated (see send_relation_and_attrs()).\n\nI think one way to fix it is to modify pg_publication_tables query to exclude\ngenerated columns. But in this way, we need to bump catalog version when fixing\nit in back-branch. Another way is to modify function\npg_get_publication_tables()'s return value to contain all supported columns if\nno column list is specified, and we don't need to change system view.\n\nAttach the patch for HEAD, and we can ignore the changes of the system view in\nPG15.\n\n[1] https://www.postgresql.org/message-id/OSZPR01MB631087C65BA81E1FEE5A60D2FDF59%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\nRegards,\nShi yu", "msg_date": "Mon, 9 Jan 2023 11:59:11 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "On Mon, Jan 9, 2023 at 5:29 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> I noticed that there is a problem about system view pg_publication_tables when\n> looking into [1]. The column \"attnames\" contains generated columns when no\n> column list is specified, but generated columns shouldn't be included because\n> they are not replicated (see send_relation_and_attrs()).\n>\n> I think one way to fix it is to modify pg_publication_tables query to exclude\n> generated columns. But in this way, we need to bump catalog version when fixing\n> it in back-branch. 
Another way is to modify function\n> pg_get_publication_tables()'s return value to contain all supported columns if\n> no column list is specified, and we don't need to change system view.\n>\n\nThat sounds like a reasonable approach to fix the issue.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 9 Jan 2023 19:02:15 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> On Mon, Jan 9, 2023 at 5:29 PM shiy.fnst@fujitsu.com\n> <shiy.fnst@fujitsu.com> wrote:\n>> I think one way to fix it is to modify pg_publication_tables query to exclude\n>> generated columns. But in this way, we need to bump catalog version when fixing\n>> it in back-branch. Another way is to modify function\n>> pg_get_publication_tables()'s return value to contain all supported columns if\n>> no column list is specified, and we don't need to change system view.\n\n> That sounds like a reasonable approach to fix the issue.\n\nWe could just not fix it in the back branches. I'd argue that this is\nas much a definition change as a bug fix, so it doesn't really feel\nlike something to back-patch anyway.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Jan 2023 10:06:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "On Mon, Jan 9, 2023 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> > On Mon, Jan 9, 2023 at 5:29 PM shiy.fnst@fujitsu.com\n> > <shiy.fnst@fujitsu.com> wrote:\n> >> I think one way to fix it is to modify pg_publication_tables query to exclude\n> >> generated columns. But in this way, we need to bump catalog version when\n> fixing\n> >> it in back-branch. 
Another way is to modify function\n> >> pg_get_publication_tables()'s return value to contain all supported columns\n> if\n> >> no column list is specified, and we don't need to change system view.\n> \n> > That sounds like a reasonable approach to fix the issue.\n> \n> We could just not fix it in the back branches. I'd argue that this is\n> as much a definition change as a bug fix, so it doesn't really feel\n> like something to back-patch anyway.\n> \n\nIf this is not fixed in back-branch, in some cases we will get an error when\ncreating/refreshing subscription because we query pg_publication_tables in\ncolumn list check.\n\ne.g.\n\n-- publisher\nCREATE TABLE test_mix_4 (a int PRIMARY KEY, b int, c int, d int GENERATED ALWAYS AS (a + 1) STORED);\nCREATE PUBLICATION pub_mix_7 FOR TABLE test_mix_4 (a, b, c);\nCREATE PUBLICATION pub_mix_8 FOR TABLE test_mix_4;\n\n-- subscriber\nCREATE TABLE test_mix_4 (a int PRIMARY KEY, b int, c int, d int);\n\npostgres=# CREATE SUBSCRIPTION sub1 CONNECTION 'port=5432' PUBLICATION pub_mix_7, pub_mix_8;\nERROR: cannot use different column lists for table \"public.test_mix_4\" in different publications\n\nI think it might be better to fix it in back-branch. 
And if we fix it by\nmodifying pg_get_publication_tables(), we don't need to bump catalog version in\nback-branch, I think this seems acceptable.\n\nRegards,\nShi yu\n\n\n", "msg_date": "Tue, 10 Jan 2023 03:08:04 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "On Tue, Jan 10, 2023 at 8:38 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Mon, Jan 9, 2023 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > On Mon, Jan 9, 2023 at 5:29 PM shiy.fnst@fujitsu.com\n> > > <shiy.fnst@fujitsu.com> wrote:\n> > >> I think one way to fix it is to modify pg_publication_tables query to exclude\n> > >> generated columns. But in this way, we need to bump catalog version when\n> > fixing\n> > >> it in back-branch. Another way is to modify function\n> > >> pg_get_publication_tables()'s return value to contain all supported columns\n> > if\n> > >> no column list is specified, and we don't need to change system view.\n> >\n> > > That sounds like a reasonable approach to fix the issue.\n> >\n> > We could just not fix it in the back branches. 
I'd argue that this is\n> > as much a definition change as a bug fix, so it doesn't really feel\n> > like something to back-patch anyway.\n> >\n>\n> If this is not fixed in back-branch, in some cases we will get an error when\n> creating/refreshing subscription because we query pg_publication_tables in\n> column list check.\n>\n> e.g.\n>\n> -- publisher\n> CREATE TABLE test_mix_4 (a int PRIMARY KEY, b int, c int, d int GENERATED ALWAYS AS (a + 1) STORED);\n> CREATE PUBLICATION pub_mix_7 FOR TABLE test_mix_4 (a, b, c);\n> CREATE PUBLICATION pub_mix_8 FOR TABLE test_mix_4;\n>\n> -- subscriber\n> CREATE TABLE test_mix_4 (a int PRIMARY KEY, b int, c int, d int);\n>\n> postgres=# CREATE SUBSCRIPTION sub1 CONNECTION 'port=5432' PUBLICATION pub_mix_7, pub_mix_8;\n> ERROR: cannot use different column lists for table \"public.test_mix_4\" in different publications\n>\n> I think it might be better to fix it in back-branch. And if we fix it by\n> modifying pg_get_publication_tables(), we don't need to bump catalog version in\n> back-branch, I think this seems acceptable.\n>\n\nSo, if we don't backpatch then it could lead to an error when it\nshouldn't have which is clearly a bug. I think we should backpatch\nthis unless Tom or others are against it.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 11 Jan 2023 09:56:46 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n>> On Mon, Jan 9, 2023 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> We could just not fix it in the back branches. I'd argue that this is\n>>> as much a definition change as a bug fix, so it doesn't really feel\n>>> like something to back-patch anyway.\n\n> So, if we don't backpatch then it could lead to an error when it\n> shouldn't have which is clearly a bug. 
I think we should backpatch\n> this unless Tom or others are against it.\n\nThis isn't a hill that I'm ready to die on ... but do we have any field\ncomplaints about this? If not, I still lean against a back-patch.\nI think there's a significant risk of breaking case A while fixing\ncase B when we change this behavior, and that's something that's\nbetter done only in a major release.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 23:37:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "On Wed, Jan 11, 2023 at 10:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> writes:\n> >> On Mon, Jan 9, 2023 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> We could just not fix it in the back branches. I'd argue that this is\n> >>> as much a definition change as a bug fix, so it doesn't really feel\n> >>> like something to back-patch anyway.\n>\n> > So, if we don't backpatch then it could lead to an error when it\n> > shouldn't have which is clearly a bug. I think we should backpatch\n> > this unless Tom or others are against it.\n>\n> This isn't a hill that I'm ready to die on ... but do we have any field\n> complaints about this? If not, I still lean against a back-patch.\n> I think there's a significant risk of breaking case A while fixing\n> case B when we change this behavior, and that's something that's\n> better done only in a major release.\n>\n\nFair enough, but note that there is a somewhat related problem for\ndropped columns [1] as well. While reviewing that it occurred to me\nthat generated columns also have a similar problem which leads to this\nthread (it would have been better if there is a mention of the same in\nthe initial email). Now, as symptoms are similar, I think we shouldn't\nback-patch that as well, otherwise, it will appear to be partially\nfixed. 
What do you think?\n\n[1] - https://www.postgresql.org/message-id/OSZPR01MB631087C65BA81E1FEE5A60D2FDF59%40OSZPR01MB6310.jpnprd01.prod.outlook.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 11 Jan 2023 12:09:31 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "On Wed, Jan 11, 2023 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> \r\n> On Wed, Jan 11, 2023 at 10:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> >\r\n> > Amit Kapila <amit.kapila16@gmail.com> writes:\r\n> > >> On Mon, Jan 9, 2023 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n> > >>> We could just not fix it in the back branches. I'd argue that this is\r\n> > >>> as much a definition change as a bug fix, so it doesn't really feel\r\n> > >>> like something to back-patch anyway.\r\n> >\r\n> > > So, if we don't backpatch then it could lead to an error when it\r\n> > > shouldn't have which is clearly a bug. I think we should backpatch\r\n> > > this unless Tom or others are against it.\r\n> >\r\n> > This isn't a hill that I'm ready to die on ... but do we have any field\r\n> > complaints about this? If not, I still lean against a back-patch.\r\n> > I think there's a significant risk of breaking case A while fixing\r\n> > case B when we change this behavior, and that's something that's\r\n> > better done only in a major release.\r\n> >\r\n> \r\n> Fair enough, but note that there is a somewhat related problem for\r\n> dropped columns [1] as well. While reviewing that it occurred to me\r\n> that generated columns also have a similar problem which leads to this\r\n> thread (it would have been better if there is a mention of the same in\r\n> the initial email). Now, as symptoms are similar, I think we shouldn't\r\n> back-patch that as well, otherwise, it will appear to be partially\r\n> fixed. 
What do you think?\r\n> \r\n> [1] - https://www.postgresql.org/message-\r\n> id/OSZPR01MB631087C65BA81E1FEE5A60D2FDF59%40OSZPR01MB6310.jpnpr\r\n> d01.prod.outlook.com\r\n> \r\n\r\nI agree to only fix them on HEAD.\r\n\r\nI merged this patch and the one in [1] as they are similar problems. Please\r\nsee the attached patch.\r\n\r\nI removed the changes in tablesync.c which simplified the query in\r\nfetch_remote_table_info(), because it only works for publishers of v16. Those\r\nchanges are based on pg_get_publication_tables() returning all columns when no\r\ncolumn list is specified, but publishers of v15 return NULL in that case.\r\n\r\n[1] https://www.postgresql.org/message-id/OSZPR01MB631087C65BA81E1FEE5A60D2FDF59%40OSZPR01MB6310.jpnprd01.prod.outlook.com\r\n\r\n\r\nRegards,\r\nShi yu", "msg_date": "Thu, 12 Jan 2023 07:03:38 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Fix pg_publication_tables to exclude generated columns" }, { "msg_contents": "On Thu, Jan 12, 2023 at 12:33 PM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n>\n> On Wed, Jan 11, 2023 2:40 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Wed, Jan 11, 2023 at 10:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> writes:\n> > > >> On Mon, Jan 9, 2023 11:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > >>> We could just not fix it in the back branches. I'd argue that this is\n> > > >>> as much a definition change as a bug fix, so it doesn't really feel\n> > > >>> like something to back-patch anyway.\n> > >\n> > > > So, if we don't backpatch then it could lead to an error when it\n> > > > shouldn't have which is clearly a bug. I think we should backpatch\n> > > > this unless Tom or others are against it.\n> > >\n> > > This isn't a hill that I'm ready to die on ... but do we have any field\n> > > complaints about this? 
If not, I still lean against a back-patch.\n> > > I think there's a significant risk of breaking case A while fixing\n> > > case B when we change this behavior, and that's something that's\n> > > better done only in a major release.\n> > >\n> >\n> > Fair enough, but note that there is a somewhat related problem for\n> > dropped columns [1] as well. While reviewing that it occurred to me\n> > that generated columns also have a similar problem which leads to this\n> > thread (it would have been better if there is a mention of the same in\n> > the initial email). Now, as symptoms are similar, I think we shouldn't\n> > back-patch that as well, otherwise, it will appear to be partially\n> > fixed. What do you think?\n> >\n> > [1] - https://www.postgresql.org/message-\n> > id/OSZPR01MB631087C65BA81E1FEE5A60D2FDF59%40OSZPR01MB6310.jpnpr\n> > d01.prod.outlook.com\n> >\n>\n> I agree to only fix them on HEAD.\n>\n> I merged this patch and the one in [1] as they are similar problems. Please\n> see the attached patch.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Jan 2023 16:55:53 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix pg_publication_tables to exclude generated columns" } ]
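The approach settled on in this thread — having pg_get_publication_tables() expand an absent column list to every replicable column, so that a publication without a list and one listing (a, b, c) can be compared safely during the subscription-side check — can be modelled outside SQL like this. All names and the column metadata below are invented for illustration; only the idea of excluding generated (and dropped) columns mirrors the actual fix.

```python
from dataclasses import dataclass


@dataclass
class Column:
    name: str
    is_generated: bool = False
    is_dropped: bool = False


def effective_column_list(columns, explicit=None):
    """Expand a missing column list to all replicable columns,
    since generated and dropped columns are never replicated."""
    if explicit is not None:
        return set(explicit)
    return {c.name for c in columns if not (c.is_generated or c.is_dropped)}


def lists_compatible(columns, lists):
    """Several publications of one table conflict only if their
    *effective* column lists differ -- the check that used to raise
    'cannot use different column lists' spuriously."""
    effective = [effective_column_list(columns, lst) for lst in lists]
    return all(e == effective[0] for e in effective[1:])
```

Under this model, pub_mix_7 with (a, b, c) and pub_mix_8 with no list on a table whose column d is generated resolve to the same set, matching the behaviour after the fix.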
[ { "msg_contents": "Over at [1] I speculated that it might be a good idea to allow\n+grouprole type user names in pg_ident.conf. The use case I have in mind\nis where the user authenticates to pgbouncer and then pgbouncer connects\nas the user using a client certificate. Without this mechanism that\nmeans that you need a mapping rule for each user in pg_ident.conf, which\ndoesn't scale very well, but with this mechanism all you have to do is\ngrant the specified role to users. So here's a small patch for that.\n\nComments welcome.\n\n\ncheers\n\n\nandrew\n\n\n[1] https://postgr.es/m/6912eb9c-4905-badb-ad87-eeca8ace13e7@dunslane.net\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 9 Jan 2023 08:00:26 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Allow +group in pg_ident.conf" }, { "msg_contents": "On Mon, Jan 09, 2023 at 08:00:26AM -0500, Andrew Dunstan wrote:\n> + If the <replaceable>database-username</replaceable> begins with a\n> + <literal>+</literal> character, then the operating system user can login as\n> + any user belonging to that role, similarly to how user names beginning with\n> + <literal>+</literal> are treated in <literal>pg_hba.conf</literal>.\n\nI would ѕuggest making it clear that this means role membership and not\nprivileges via INHERIT.\n\n> -\t\tif (case_insensitive)\n> +\t\tif (regexp_pgrole[0] == '+')\n> +\t\t{\n> +\t\t\tOid roleid = get_role_oid(pg_role, true);\n> +\t\t\tif (is_member(roleid, regexp_pgrole +1))\n> +\t\t\t\t*found_p = true;\n> +\t\t}\n> +\t\telse if (case_insensitive)\n\nIt looks like the is_member() check will always be case-sensitive. Should\nit respect the value of case_insensitive? 
If not, I think there should be\na brief comment explaining why.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 9 Jan 2023 10:24:08 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "On 2023-01-09 Mo 13:24, Nathan Bossart wrote:\n> On Mon, Jan 09, 2023 at 08:00:26AM -0500, Andrew Dunstan wrote:\n>> + If the <replaceable>database-username</replaceable> begins with a\n>> + <literal>+</literal> character, then the operating system user can login as\n>> + any user belonging to that role, similarly to how user names beginning with\n>> + <literal>+</literal> are treated in <literal>pg_hba.conf</literal>.\n> I would ѕuggest making it clear that this means role membership and not\n> privileges via INHERIT.\n\n\nI've adapted a sentence from the pg_hba.conf documentation so we stay\nconsistent.\n\n\n>> -\t\tif (case_insensitive)\n>> +\t\tif (regexp_pgrole[0] == '+')\n>> +\t\t{\n>> +\t\t\tOid roleid = get_role_oid(pg_role, true);\n>> +\t\t\tif (is_member(roleid, regexp_pgrole +1))\n>> +\t\t\t\t*found_p = true;\n>> +\t\t}\n>> +\t\telse if (case_insensitive)\n> It looks like the is_member() check will always be case-sensitive. Should\n> it respect the value of case_insensitive? If not, I think there should be\n> a brief comment explaining why.\n\n\nIt's not really relevant. We're not comparing role names here; rather we\nlook up two roles and then ask if one is a member of the other. 
I've\nadded a comment.\n\nThanks for the review (I take it you're generally in favor).\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 9 Jan 2023 17:33:14 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "This seems very much related to my patch here:\nhttps://commitfest.postgresql.org/41/4081/ (yes, somehow the thread\ngot split. I blame outlook)\n\nI'll try to review this one soonish.\n\n\n", "msg_date": "Tue, 10 Jan 2023 00:25:14 +0100", "msg_from": "Jelte Fennema <me@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "On Mon, Jan 09, 2023 at 05:33:14PM -0500, Andrew Dunstan wrote:\n> I've adapted a sentence from the pg_hba.conf documentation so we stay\n> consistent.\n\n+ <para>\n+ If the <replaceable>database-username</replaceable> begins with a\n+ <literal>+</literal> character, then the operating system user can login as\n+ any user belonging to that role, similarly to how user names beginning with\n+ <literal>+</literal> are treated in <literal>pg_hba.conf</literal>.\n+ Thus, a <literal>+</literal> mark means <quote>match any of the roles that\n+ are directly or indirectly members of this role</quote>, while a name\n+ without a <literal>+</literal> mark matches only that specific role.\n+ </para>\n\nShould this also mention that the behavior is enforced even in cases\nwhere we expect a case-sensitive match?\n\n> It's not really relevant. We're not comparing role names here; rather we\n> look up two roles and then ask if one is a member of the other. 
I've\n> added a comment.\n> \n> Thanks for the review (I take it you're generally in favor).\n\n- if (case_insensitive)\n+ if (regexp_pgrole[0] == '+')\n+ {\n+ /*\n+ * Since we're not comparing role names here, use of case\n+ * insensitive matching doesn't really apply.\n+ */\n+ Oid roleid = get_role_oid(pg_role, true);\n+ Assert(false);\n+ if (is_member(roleid, regexp_pgrole +1))\n+ *found_p = true;\n+ }\n+ else if (case_insensitive)\n\nHmm. As check_ident_usermap() is now coded, it means that the case of\na syntax substitution could be enforced to use a group with the user\nname given by the client. For example, take this ident entry:\nmymap /^(.*)@mydomain\\.com$ \\1\n\nThen, if we attempt to access Postgres with \"+testrole@mydomain.com\",\nwe would get a substitution to \"+testrole\", which would be enforced to\ncheck on is_member() with this expected role name. I am not sure what\nshould be the correct behavior here.\n--\nMichael", "msg_date": "Tue, 10 Jan 2023 10:14:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "Having looked closer now, I'm pretty sure you should base this patch\non top of my patch: https://commitfest.postgresql.org/41/4081/\nMainly because you also need the token version of pg_role, which is\none of the things my patch adds.\n\n> if (regexp_pgrole[0] == '+')\n\nFor these lines you'll need to check if the original token was quoted.\nIf it's quoted it shouldn't use the group behaviour, and instead\ncompare the + character as part of the literal role.\n\n> if (is_member(roleid, regexp_pgrole +1))\n> if (is_member(roleid, ++map_role))\n\nYou use these two checks to do the same, so it's best if they are\nwritten consistently.\n\n> if (regexp_pgrole[0] == '+')\n\nThis check can be moved before the following line and do an early\nreturn (like I do for \"all\" in my patch). 
Since if the first character\nis a + we know that it's not \\1 and thus we don't have to worry about\ngetting the regex match.\n\n> if ((ofs = strstr(identLine->pg_role->string, \"\\\\1\")) != NULL)\n\n\n", "msg_date": "Tue, 10 Jan 2023 13:09:44 +0100", "msg_from": "Jelte Fennema <me@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "\nOn 2023-01-10 Tu 07:09, Jelte Fennema wrote:\n> Having looked closer now, I'm pretty sure you should base this patch\n> on top of my patch: https://commitfest.postgresql.org/41/4081/\n> Mainly because you also need the token version of pg_role, which is\n> one of the things my patch adds.\n\n\nOk, that sounds reasonable, but the cfbot doesn't like patches that\ndepend on other patches in a different email. Maybe you can roll this up\nas an extra patch in your next version? It's pretty small.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 10 Jan 2023 09:42:19 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "On Tue, Jan 10, 2023 at 09:42:19AM -0500, Andrew Dunstan wrote:\n> Ok, that sounds reasonable, but the cfbot doesn't like patches that\n> depend on other patches in a different email. Maybe you can roll this up\n> as an extra patch in your next version? It's pretty small.\n\nThis can go two ways if both of you agree, by sending an updated patch\non this thread based on the other one.. And actually, Jelte's patch\nhas less C code than this thread's proposal, eventhough it lacks\ntests.\n--\nMichael", "msg_date": "Wed, 11 Jan 2023 10:14:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "I'm working on a new patchset for my commitfest entry. 
I'll make sure\nto include a third patch for the +group support, and I'll include you\n(Andrew) in the thread when I send it.\n\nOn Wed, 11 Jan 2023 at 02:14, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 10, 2023 at 09:42:19AM -0500, Andrew Dunstan wrote:\n> > Ok, that sounds reasonable, but the cfbot doesn't like patches that\n> > depend on other patches in a different email. Maybe you can roll this up\n> > as an extra patch in your next version? It's pretty small.\n>\n> This can go two ways if both of you agree, by sending an updated patch\n> on this thread based on the other one.. And actually, Jelte's patch\n> has less C code than this thread's proposal, eventhough it lacks\n> tests.\n> --\n> Michael\n\n\n", "msg_date": "Wed, 11 Jan 2023 09:59:27 +0100", "msg_from": "Jelte Fennema <me@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "On Wed, 11 Jan 2023 at 04:00, Jelte Fennema <me@jeltef.nl> wrote:\n>\n> I'm working on a new patchset for my commitfest entry.\n\nSo I'll set it to \"Waiting on Author\" pending that new patchset...\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Wed, 1 Mar 2023 14:26:21 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" }, { "msg_contents": "On Wed, Mar 01, 2023 at 02:26:21PM -0500, Gregory Stark (as CFM) wrote:\n> So I'll set it to \"Waiting on Author\" pending that new patchset...\n\nThere is still an entry as of https://commitfest.postgresql.org/42/4112/.\nSupport for group detection in pg_ident.conf has been added in efb6f4a\nalready, so I have switched this entry as committed. 
We were also\naware of Andrew's proposal on the other commit, and it was much easier\nto just group everything together.\n--\nMichael", "msg_date": "Thu, 2 Mar 2023 09:56:21 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Allow +group in pg_ident.conf" } ]
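The matching rule discussed in this thread — a database-username beginning with + meaning "any role that is directly or indirectly a member of this role" rather than a literal name — can be sketched as follows. This is a deliberately simplified model: the membership dictionary stands in for the get_role_oid()/is_member() catalog lookups, and, as noted in the review, the group form bypasses case-insensitive name comparison because the real check works on role OIDs, not names.

```python
def usermap_matches(map_role, requested_role, membership, case_insensitive=False):
    """Decide whether a pg_ident map entry allows requested_role.

    membership maps a group role name to the set of roles that belong
    to it (directly or indirectly) -- a stand-in for catalog lookups.
    """
    if map_role.startswith("+"):
        # Group form: test membership, not name equality.  Case
        # insensitivity does not apply, since the backend compares
        # role OIDs here rather than role names.
        return requested_role in membership.get(map_role[1:], set())
    if case_insensitive:
        return map_role.lower() == requested_role.lower()
    return map_role == requested_role
```

With a map entry of +admins, any member of admins is accepted without needing one pg_ident.conf line per user — the pgbouncer use case that motivated the patch.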
[ { "msg_contents": "Hi,\n\nPostgres verifies consistency of FPI from WAL record with the replayed\npage during recovery in verifyBackupPageConsistency() when either\nwal_consistency_checking for the resource manager is enabled or a WAL\nrecord with XLR_CHECK_CONSISTENCY flag is inserted. While doing so, it\nuses two intermediate pages primary_image_masked (FPI from WAL record)\nand replay_image_masked (replayed page) which are dynamically\nallocated (palloc'd) before the recovery starts, however, they're not\nused unless verifyBackupPageConsistency() is called. And these\nvariables are palloc'd here for getting MAXALIGNed memory as opposed\nto static char arrays. Since verifyBackupPageConsistency() gets called\nconditionally only when the WAL record has the XLR_CHECK_CONSISTENCY\nflag set, it's a waste of memory for these two page variables.\n\nI propose to statically allocate these two pages using PGAlignedBlock\nstructure lazily in verifyBackupPageConsistency() to not waste dynamic\nmemory worth 2*BLCKSZ bytes. I'm attaching a small patch herewith.\n\nThoughts?\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 9 Jan 2023 20:00:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Lazy allocation of pages required for verifying FPI consistency" }, { "msg_contents": "At Mon, 9 Jan 2023 20:00:00 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> I propose to statically allocate these two pages using PGAlignedBlock\n> structure lazily in verifyBackupPageConsistency() to not waste dynamic\n> memory worth 2*BLCKSZ bytes. I'm attaching a small patch herewith.\n> \n> Thoughts?\n\nIMHO, it's a bit scaring to me to push down the execution stack by\nthat large size. 
I tend to choose the (current) possible memory\nwasting only on startup process than digging stack deeply.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 12 Jan 2023 17:29:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Lazy allocation of pages required for verifying FPI consistency" }, { "msg_contents": "On Thu, Jan 12, 2023 at 4:29 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 9 Jan 2023 20:00:00 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > I propose to statically allocate these two pages using PGAlignedBlock\n> > structure lazily in verifyBackupPageConsistency() to not waste dynamic\n> > memory worth 2*BLCKSZ bytes. I'm attaching a small patch herewith.\n> >\n> > Thoughts?\n>\n> IMHO, it's a bit scaring to me to push down the execution stack by\n> that large size. I tend to choose the (current) possible memory\n> wasting only on startup process than digging stack deeply.\n\n+1\n\n\n", "msg_date": "Thu, 12 Jan 2023 16:37:38 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Lazy allocation of pages required for verifying FPI consistency" }, { "msg_contents": "On Thu, Jan 12, 2023 at 1:59 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Mon, 9 Jan 2023 20:00:00 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > I propose to statically allocate these two pages using PGAlignedBlock\n> > structure lazily in verifyBackupPageConsistency() to not waste dynamic\n> > memory worth 2*BLCKSZ bytes. I'm attaching a small patch herewith.\n> >\n> > Thoughts?\n>\n> IMHO, it's a bit scaring to me to push down the execution stack by\n> that large size. 
I tend to choose the (current) possible memory\n> wasting only on startup process than digging stack deeply.\n\nOn the contrary, PGAlignedBlock is being used elsewhere in the code;\nsome of them are hot paths. verifyBackupPageConsistency() is not\nsomething that gets called always i.e. WAL consistency checks are done\nconditionally - when either one enables wal_consistency_checking for\nthe rmgr or the WAL record is flagged with\nXLR_CHECK_CONSISTENCY (core doesn't do, it's an external module, if\nany, do that).\n\nI really don't see much of a problem in allocating them statically and\npushing closer to where they're being used. If this really concerns,\nat the least, the dynamic allocation needs to be pushed to\nverifyBackupPageConsistency() IMO with if (first_time) { allocate two\nblocks with palloc} and use them. This at least saves some memory on\nthe heap for most of the servers out there.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Jan 2023 15:02:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Lazy allocation of pages required for verifying FPI consistency" }, { "msg_contents": "On Thu, Jan 12, 2023 at 04:37:38PM +0800, Julien Rouhaud wrote:\n> On Thu, Jan 12, 2023 at 4:29 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n>> IMHO, it's a bit scaring to me to push down the execution stack by\n>> that large size. I tend to choose the (current) possible memory\n>> wasting only on startup process than digging stack deeply.\n> \n> +1\n\nIndeed. 
I agree to leave that be.\n--\nMichael", "msg_date": "Fri, 13 Jan 2023 14:20:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Lazy allocation of pages required for verifying FPI consistency" }, { "msg_contents": "At Thu, 12 Jan 2023 15:02:25 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On the contrary, PGAlignedBlock is being used elsewhere in the code;\n\nI noticed it and had the same feeling, and thought that they don't\njustify to do the same at other places.\n\n> some of them are hot paths. verifyBackupPageConsistency() is not\n> something that gets called always i.e. WAL consistency checks are done\n> conditionally - when either one enables wal_consistency_checking for\n> the rmgr or the WAL record is flagged with\n> XLR_CHECK_CONSISTENCY (core doesn't do, it's an external module, if\n> any, do that).\n\nRight. So we could allocate them at the first use as below, but...\n\n> I really don't see much of a problem in allocating them statically and\n> pushing closer to where they're being used. If this really concerns,\n> at the least, the dynamic allocation needs to be pushed to\n> verifyBackupPageConsistency() IMO with if (first_time) { allocate two\n> blocks with palloc} and use them. This at least saves some memory on\n> the heap for most of the servers out there.\n\nYeah, we could do that. But as I mentioned before, that happens only\non startup thus it can be said that that's not worth bothering. 
On\nthe other hand I don't think it's great to waste 16kB * max_backends\nmemory especially when it is clearly recognized and easily avoidable.\n\nI guess the reason for the code is more or less that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 16 Jan 2023 10:52:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Lazy allocation of pages required for verifying FPI consistency" }, { "msg_contents": "On Mon, Jan 16, 2023 at 10:52:43AM +0900, Kyotaro Horiguchi wrote:\n> Yeah, we could do that. But as I mentioned before, that happens only\n> on startup thus it can be said that that's not worth bothering. On\n> the other hand I don't think it's great to waste 16kB * max_backends\n> memory especially when it is clearly recognized and easily avoidable.\n\nMemory's cheap, but basically nobody would use these except\ndevelopers..\n\n> I guess the reason for the code is more or less that.\n\nThe original discussion spreads across these threads:\nhttps://www.postgresql.org/message-id/CAB7nPqR0jzhF%3DU4AXLm%2BcmaE4J-HkUzbaRXtg%2B6ieERTqr%3Dpcg%40mail.gmail.com\nhttps://www.postgresql.org/message-id/CAGz5QC%2B_CNcDJkkmDyPg2zJ_R8AtEg1KyYADbU6B673RaTySAg%40mail.gmail.com\n\nThere was a specific point about using static buffers from me, though\nthese would not have been aligned as of the lack of PGAlignedBlock\nback in 2017 which is why palloc() was used. That should be around\nhere:\nhttps://www.postgresql.org/message-id/CAB7nPqR=OcojLCP=1Ho6Zo312CKzUZU8d4aJO+VvpUYV-waU_Q@mail.gmail.com\n--\nMichael", "msg_date": "Mon, 16 Jan 2023 11:45:00 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Lazy allocation of pages required for verifying FPI consistency" } ]
[ { "msg_contents": "When you include one role in another, you can specify three options:\nADMIN, INHERIT (added in e3ce2de0) and SET (3d14e171).\n\nFor example.\n\nCREATE ROLE alice LOGIN;\n\nGRANT pg_read_all_settings TO alice WITH ADMIN TRUE, INHERIT TRUE, SET TRUE;\nGRANT pg_stat_scan_tables TO alice WITH ADMIN FALSE, INHERIT FALSE, SET \nFALSE;\nGRANT pg_read_all_stats TO alice WITH ADMIN FALSE, INHERIT TRUE, SET FALSE;\n\nFor information about the options, you need to look in the pg_auth_members:\n\nSELECT roleid::regrole, admin_option, inherit_option, set_option\nFROM pg_auth_members\nWHERE member = 'alice'::regrole;\n         roleid        | admin_option | inherit_option | set_option\n----------------------+--------------+----------------+------------\n  pg_read_all_settings | t            | t              | t\n  pg_stat_scan_tables  | f            | f              | f\n  pg_read_all_stats    | f            | t              | f\n(3 rows)\n\nI think it would be useful to be able to get this information with a \npsql command\nlike \\du (and \\dg). 
With proposed patch the \\du command still only lists\nthe roles of which alice is a member:\n\n\\du alice\n                                      List of roles\n  Role name | Attributes |                          Member of\n-----------+------------+--------------------------------------------------------------\n  alice     |            | \n{pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables}\n\nBut the \\du+ command adds information about the selected ADMIN, INHERIT\nand SET options:\n\n\\du+ alice\n                                     List of roles\n  Role name | Attributes |                   Member of                   \n| Description\n-----------+------------+-----------------------------------------------+-------------\n  alice     |            | pg_read_all_settings WITH ADMIN, INHERIT, SET+|\n            |            | pg_read_all_stats WITH INHERIT               +|\n            |            | pg_stat_scan_tables                           |\n\nOne more change. The roles in the \"Member of\" column are sorted for both\n\\du+ and \\du for consistent output.\n\nAny comments are welcome.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Mon, 9 Jan 2023 19:09:19 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Added the patch to the open commitfest:\nhttps://commitfest.postgresql.org/42/4116/\n\nFeel free to reject if it's not interesting.\n\n-- \nPavel Luzanov", "msg_date": "Tue, 10 Jan 2023 22:18:16 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Mon, Jan 9, 2023 at 9:09 AM Pavel Luzanov 
<p.luzanov@postgrespro.ru>\nwrote:\n\n> When you include one role in another, you can specify three options:\n> ADMIN, INHERIT (added in e3ce2de0) and SET (3d14e171).\n>\n> For example.\n>\n> CREATE ROLE alice LOGIN;\n>\n> GRANT pg_read_all_settings TO alice WITH ADMIN TRUE, INHERIT TRUE, SET\n> TRUE;\n> GRANT pg_stat_scan_tables TO alice WITH ADMIN FALSE, INHERIT FALSE, SET\n> FALSE;\n> GRANT pg_read_all_stats TO alice WITH ADMIN FALSE, INHERIT TRUE, SET FALSE;\n>\n> For information about the options, you need to look in the pg_auth_members:\n>\n> SELECT roleid::regrole, admin_option, inherit_option, set_option\n> FROM pg_auth_members\n> WHERE member = 'alice'::regrole;\n> roleid | admin_option | inherit_option | set_option\n> ----------------------+--------------+----------------+------------\n> pg_read_all_settings | t | t | t\n> pg_stat_scan_tables | f | f | f\n> pg_read_all_stats | f | t | f\n> (3 rows)\n>\n> I think it would be useful to be able to get this information with a\n> psql command\n> like \\du (and \\dg). With proposed patch the \\du command still only lists\n> the roles of which alice is a member:\n>\n> \\du alice\n> List of roles\n> Role name | Attributes | Member of\n>\n> -----------+------------+--------------------------------------------------------------\n> alice | |\n> {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables}\n>\n> But the \\du+ command adds information about the selected ADMIN, INHERIT\n> and SET options:\n>\n> \\du+ alice\n> List of roles\n> Role name | Attributes | Member of\n> | Description\n>\n> -----------+------------+-----------------------------------------------+-------------\n> alice | | pg_read_all_settings WITH ADMIN, INHERIT, SET+|\n> | | pg_read_all_stats WITH INHERIT +|\n> | | pg_stat_scan_tables |\n>\n> One more change. 
The roles in the \"Member of\" column are sorted for both\n> \\du+ and \\du for consistent output.\n>\n> Any comments are welcome.\n>\n>\nYeah, I noticed the lack too, then went a bit too far afield with trying to\ncompose a graph of the roles. I'm still working on that but at this point\nit probably won't be something I try to get committed to psql. Something\nmore limited like this does need to be included.\n\nOne thing I did was name the situation where none of the grants are true -\nEMPTY. So: pg_stat_scan_tables WITH EMPTY.\n\nI'm not too keen on the idea of converting the existing array into a\nnewline separated string. I would try hard to make the modification here\npurely additional. If users really want to build up queries on their own\nthey should be using the system catalog. So concise human readability\nshould be the goal here. Keeping those two things in mind I would add a\nnew text[] column to the views with the following possible values in each\ncell the meaning of which should be self-evident or this probably isn't a\ngood approach...\n\nais\nai\nas\na\nis\ni\ns\nempty\n\nThat said, I do find the newline based output to be quite useful in the\ngraph query I'm writing and so wouldn't be disappointed if we changed over\nto that. I'd probably stick with abbreviations though. Another thing I\ndid with the graph was have both \"member\" and \"memberof\" columns in the\noutput. 
In short, every grant row in pg_auth_members appears twice, once\nin each column, so the role being granted membership and the role into\nwhich membership is granted both have visibility when you filter on them.\nFor the role graph I took this idea and extended out to an entire chain of\nroles (and also broke out user and group separately) but I think doing the\ndirect-grant only here would still be a big improvement.\n\npostgres=# \\dgS+ pg_read_all_settings\n List of roles\n Role name | Attributes | Member of | Members | Description\n----------------------+--------------+-----------+-------------\n pg_read_all_settings | Cannot login | {} | { pg_monitor } |\n\nDavid J.\n", "msg_date": "Tue, 24 Jan 2023 10:16:02 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 24.01.2023 20:16, David G. Johnston wrote:\n> Yeah, I noticed the lack too, then went a bit too far afield with \n> trying to compose a graph of the roles.  I'm still working on that but \n> at this point it probably won't be something I try to get committed to \n> psql.  Something more limited like this does need to be included.\n\nGlad to hear that you're working on it.\n\n> I'm not too keen on the idea of converting the existing array into a \n> newline separated string.  
I would try hard to make the modification \n> here purely additional.\n\nI agree with all of your arguments. A couple of months I tried to find \nan acceptable variant in the background.\nBut apparently tried not very hard ))\n\nIn the end, the variant proposed in the patch seemed to me worthy to \nshow and\nstart a discussion. But I'm not sure that this is the best choice.\n\n> Another thing I did with the graph was have both \"member\" and \n> \"memberof\" columns in the output.  In short, every grant row in \n> pg_auth_members appears twice, once in each column, so the role being \n> granted membership and the role into which membership is granted both \n> have visibility when you filter on them.  For the role graph I took \n> this idea and extended out to an entire chain of roles (and also broke \n> out user and group separately) but I think doing the direct-grant only \n> here would still be a big improvement.\n\nIt will be interesting to see the result.\n\n-- \nPavel Luzanov", "msg_date": "Thu, 26 Jan 2023 16:53:38 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Thanks a lot for the improvement, and it will definitely help provide \nmore very useful information.\n\nI noticed the document psql-ref.sgml has been updated for both `du+` and \n`dg+`, but only `du` and `\\du+` are covered in regression test. 
Is that \nbecause `dg+` is treated exactly the same as `du+` from testing point of \nview?\n\nThe reason I am asking this question is that I notice that `pg_monitor` \nalso has the detailed information, so not sure if more test cases required.\n\npostgres=# \\duS+\nList of roles\n           Role name          | Attributes                         \n|                   Member of                   | Description\n-----------------------------+------------------------------------------------------------+-----------------------------------------------+-------------\n  alice |                                                            | \npg_read_all_settings WITH ADMIN, INHERIT, SET |\n  pg_checkpoint               | Cannot login \n|                                               |\n  pg_database_owner           | Cannot login \n|                                               |\n  pg_execute_server_program   | Cannot login \n|                                               |\n  pg_maintain                 | Cannot login \n|                                               |\n  pg_monitor                  | Cannot \nlogin                                               | \npg_read_all_settings WITH INHERIT, SET       +|\n|                                                            | \npg_read_all_stats WITH INHERIT, SET          +|\n|                                                            | \npg_stat_scan_tables WITH INHERIT, SET         |\n\nBest regards,\n\nDavid\n\nOn 2023-01-09 8:09 a.m., Pavel Luzanov wrote:\n> When you include one role in another, you can specify three options:\n> ADMIN, INHERIT (added in e3ce2de0) and SET (3d14e171).\n>\n> For example.\n>\n> CREATE ROLE alice LOGIN;\n>\n> GRANT pg_read_all_settings TO alice WITH ADMIN TRUE, INHERIT TRUE, SET \n> TRUE;\n> GRANT pg_stat_scan_tables TO alice WITH ADMIN FALSE, INHERIT FALSE, \n> SET FALSE;\n> GRANT pg_read_all_stats TO alice WITH ADMIN FALSE, INHERIT TRUE, SET \n> FALSE;\n>\n> For information about the 
options, you need to look in the \n> pg_auth_members:\n>\n> SELECT roleid::regrole, admin_option, inherit_option, set_option\n> FROM pg_auth_members\n> WHERE member = 'alice'::regrole;\n>         roleid        | admin_option | inherit_option | set_option\n> ----------------------+--------------+----------------+------------\n>  pg_read_all_settings | t            | t              | t\n>  pg_stat_scan_tables  | f            | f              | f\n>  pg_read_all_stats    | f            | t              | f\n> (3 rows)\n>\n> I think it would be useful to be able to get this information with a \n> psql command\n> like \\du (and \\dg). With proposed patch the \\du command still only lists\n> the roles of which alice is a member:\n>\n> \\du alice\n>                                      List of roles\n>  Role name | Attributes |                          Member of\n> -----------+------------+-------------------------------------------------------------- \n>\n>  alice     |            | \n> {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables}\n>\n> But the \\du+ command adds information about the selected ADMIN, INHERIT\n> and SET options:\n>\n> \\du+ alice\n>                                     List of roles\n>  Role name | Attributes |                   Member \n> of                   | Description\n> -----------+------------+-----------------------------------------------+------------- \n>\n>  alice     |            | pg_read_all_settings WITH ADMIN, INHERIT, SET+|\n>            |            | pg_read_all_stats WITH INHERIT               +|\n>            |            | pg_stat_scan_tables                           |\n>\n> One more change. 
The roles in the \"Member of\" column are sorted for both\n> \\du+ and \\du for consistent output.\n>\n> Any comments are welcome.\n>\n\n\n", "msg_date": "Fri, 10 Feb 2023 13:08:35 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Fri, Feb 10, 2023 at 2:08 PM David Zhang <david.zhang@highgo.ca> wrote:\n\n>\n> I noticed the document psql-ref.sgml has been updated for both `du+` and\n> `dg+`, but only `du` and `\\du+` are covered in regression test. Is that\n> because `dg+` is treated exactly the same as `du+` from testing point of\n> view?\n>\n\nYes.\n\n>\n> The reason I am asking this question is that I notice that `pg_monitor`\n> also has the detailed information, so not sure if more test cases required.\n>\n\nOf course it does. Why does that bother you? And what does that have to\ndo with the previous paragraph?\n\nDavid J.", "msg_date": "Fri, 10 Feb 2023 15:27:14 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 2023-02-10 2:27 p.m., David G. 
Johnston wrote:\n> On Fri, Feb 10, 2023 at 2:08 PM David Zhang <david.zhang@highgo.ca> wrote:\n>\n>\n> I noticed the document psql-ref.sgml has been updated for both\n> `du+` and\n> `dg+`, but only `du` and `\\du+` are covered in regression test. Is\n> that\n> because `dg+` is treated exactly the same as `du+` from testing\n> point of\n> view?\n>\n>\n> Yes.\n>\n>\n> The reason I am asking this question is that I notice that\n> `pg_monitor`\n> also has the detailed information, so not sure if more test cases\n> required.\n>\n>\n> Of course it does.  Why does that bother you?  And what does that have \n> to do with the previous paragraph?\n\nThere is a default built-in role `pg_monitor` and the behavior changed \nafter the patch. If `\\dg+` and `\\du+` is treated as the same, and `make \ncheck` all pass, then I assume there is no test case to verify the \noutput of `duS+`. My point is should we consider add a test case?\n\nBefore patch the output for `pg_monitor`,\n\npostgres=# \\duS+\nList of roles\n           Role name          | Attributes                         | \nMember of                           | Description\n-----------------------------+------------------------------------------------------------+--------------------------------------------------------------+-------------\n  alice |                                                            | \n{pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables} |\n  pg_checkpoint               | Cannot \nlogin                                               | \n{}                                                           |\n  pg_database_owner           | Cannot \nlogin                                               | \n{}                                                           |\n  pg_execute_server_program   | Cannot \nlogin                                               | \n{}                                                           |\n  pg_maintain                 | Cannot \nlogin                             
                  | \n{}                                                           |\n  pg_monitor                  | Cannot \nlogin                                               | \n{pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables} |\n  pg_read_all_data            | Cannot \nlogin                                               | \n{}                                                           |\n  pg_read_all_settings        | Cannot \nlogin                                               | \n{}                                                           |\n  pg_read_all_stats           | Cannot \nlogin                                               | \n{}                                                           |\n  pg_read_server_files        | Cannot \nlogin                                               | \n{}                                                           |\n  pg_signal_backend           | Cannot \nlogin                                               | \n{}                                                           |\n  pg_stat_scan_tables         | Cannot \nlogin                                               | \n{}                                                           |\n  pg_use_reserved_connections | Cannot \nlogin                                               | \n{}                                                           |\n  pg_write_all_data           | Cannot \nlogin                                               | \n{}                                                           |\n  pg_write_server_files       | Cannot \nlogin                                               | \n{}                                                           |\n  ubuntu                      | Superuser, Create role, Create DB, \nReplication, Bypass RLS | \n{}                                                           |\n\nAfter patch the output for `pg_monitor`,\n\npostgres=# \\duS+\nList of roles\n           Role name          | Attributes                     
    \n|                   Member of                   | Description\n-----------------------------+------------------------------------------------------------+-----------------------------------------------+-------------\n  alice |                                                            | \npg_read_all_settings WITH ADMIN, INHERIT, SET+|\n|                                                            | \npg_read_all_stats WITH INHERIT               +|\n|                                                            | \npg_stat_scan_tables                           |\n  pg_checkpoint               | Cannot login \n|                                               |\n  pg_database_owner           | Cannot login \n|                                               |\n  pg_execute_server_program   | Cannot login \n|                                               |\n  pg_maintain                 | Cannot login \n|                                               |\n  pg_monitor                  | Cannot \nlogin                                               | \npg_read_all_settings WITH INHERIT, SET       +|\n|                                                            | \npg_read_all_stats WITH INHERIT, SET          +|\n|                                                            | \npg_stat_scan_tables WITH INHERIT, SET         |\n  pg_read_all_data            | Cannot login \n|                                               |\n  pg_read_all_settings        | Cannot login \n|                                               |\n  pg_read_all_stats           | Cannot login \n|                                               |\n  pg_read_server_files        | Cannot login \n|                                               |\n  pg_signal_backend           | Cannot login \n|                                               |\n  pg_stat_scan_tables         | Cannot login \n|                                               |\n  pg_use_reserved_connections | Cannot login \n|                         
                      |\n  pg_write_all_data           | Cannot login \n|                                               |\n  pg_write_server_files       | Cannot login \n|                                               |\n  ubuntu                      | Superuser, Create role, Create DB, \nReplication, Bypass RLS |                                               |\n\n\nBest regards,\n\nDavid\n\n>\n> David J.\n\n\n\n\n\nOn 2023-02-10 2:27 p.m., David G.\n Johnston wrote:\n\n\n\n\n\nOn Fri, Feb\n 10, 2023 at 2:08 PM David Zhang <david.zhang@highgo.ca>\n wrote:\n\n\n\n\n I noticed the document psql-ref.sgml has been updated for\n both `du+` and \n `dg+`, but only `du` and `\\du+` are covered in regression\n test. Is that \n because `dg+` is treated exactly the same as `du+` from\n testing point of \n view?\n\n\n\nYes.\n\n\n The reason I am asking this question is that I notice that\n `pg_monitor` \n also has the detailed information, so not sure if more test\n cases required.\n\n\n\nOf course it\n does.  Why does that bother you?  And what does that have to\n do with the previous paragraph?\n\n\n\nThere is a default built-in role `pg_monitor` and the behavior\n changed after the patch. If `\\dg+` and `\\du+` is treated as the\n same, and `make check` all pass, then I assume there is no test\n case to verify the output of `duS+`. 
My point is should we\n consider add a test case?\n\nBefore patch the output for `pg_monitor`,\npostgres=# \\duS+\n                                                                            \n List of roles\n           Role name          |                        \n Attributes                         |                         \n Member of                           | Description \n-----------------------------+------------------------------------------------------------+--------------------------------------------------------------+-------------\n  alice                      \n |                                                            |\n {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables} | \n  pg_checkpoint               | Cannot\n login                                               |\n {}                                                           | \n  pg_database_owner           | Cannot\n login                                               |\n {}                                                           | \n  pg_execute_server_program   | Cannot\n login                                               |\n {}                                                           | \n  pg_maintain                 | Cannot\n login                                               |\n {}                                                           | \n  pg_monitor                  | Cannot\n login                                               |\n {pg_read_all_settings,pg_read_all_stats,pg_stat_scan_tables} | \n  pg_read_all_data            | Cannot\n login                                               |\n {}                                                           | \n  pg_read_all_settings        | Cannot\n login                                               |\n {}                                                           | \n  pg_read_all_stats           | Cannot\n login                                               |\n {}                                                     
      | \n  pg_read_server_files        | Cannot\n login                                               |\n {}                                                           | \n  pg_signal_backend           | Cannot\n login                                               |\n {}                                                           | \n  pg_stat_scan_tables         | Cannot\n login                                               |\n {}                                                           | \n  pg_use_reserved_connections | Cannot\n login                                               |\n {}                                                           | \n  pg_write_all_data           | Cannot\n login                                               |\n {}                                                           | \n  pg_write_server_files       | Cannot\n login                                               |\n {}                                                           | \n  ubuntu                      | Superuser, Create role, Create\n DB, Replication, Bypass RLS |\n {}                                                           | \n\n\nAfter patch the output for `pg_monitor`,\npostgres=# \\duS+\n                                                                    \n List of roles\n           Role name          |                        \n Attributes                         |                   Member\n of                   | Description \n-----------------------------+------------------------------------------------------------+-----------------------------------------------+-------------\n  alice                      \n |                                                            |\n pg_read_all_settings WITH ADMIN, INHERIT, SET+| \n                             \n |                                                            |\n pg_read_all_stats WITH INHERIT               +| \n                             \n |                                                            |\n 
pg_stat_scan_tables                           | \n  pg_checkpoint               | Cannot\n login                                              \n |                                               | \n  pg_database_owner           | Cannot\n login                                              \n |                                               | \n  pg_execute_server_program   | Cannot\n login                                              \n |                                               | \n  pg_maintain                 | Cannot\n login                                              \n |                                               | \n  pg_monitor                  | Cannot\n login                                               |\n pg_read_all_settings WITH INHERIT, SET       +| \n                             \n |                                                            |\n pg_read_all_stats WITH INHERIT, SET          +| \n                             \n |                                                            |\n pg_stat_scan_tables WITH INHERIT, SET         | \n  pg_read_all_data            | Cannot\n login                                              \n |                                               | \n  pg_read_all_settings        | Cannot\n login                                              \n |                                               | \n  pg_read_all_stats           | Cannot\n login                                              \n |                                               | \n  pg_read_server_files        | Cannot\n login                                              \n |                                               | \n  pg_signal_backend           | Cannot\n login                                              \n |                                               | \n  pg_stat_scan_tables         | Cannot\n login                                              \n |                                               | \n  pg_use_reserved_connections | 
Cannot\n login                                              \n |                                               | \n  pg_write_all_data           | Cannot\n login                                              \n |                                               | \n  pg_write_server_files       | Cannot\n login                                              \n |                                               | \n  ubuntu                      | Superuser, Create role, Create\n DB, Replication, Bypass RLS\n |                                               | \n\n\n\nBest regards,\nDavid\n\n\n\n\n\n\nDavid J.", "msg_date": "Wed, 15 Feb 2023 13:31:05 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Wed, Feb 15, 2023 at 2:31 PM David Zhang <david.zhang@highgo.ca> wrote:\n\n> There is a default built-in role `pg_monitor` and the behavior changed\n> after the patch. If `\\dg+` and `\\du+` is treated as the same, and `make\n> check` all pass, then I assume there is no test case to verify the output\n> of `duS+`. My point is should we consider add a test case?\n>\n\nI mean, either you accept the change in how this meta-command presents its\ninformation or you don't. I don't see how a test case is particularly\nbeneficial. Or, at least the pg_monitor role is not special in this\nregard. Alice changed too and you don't seem to be including it in your\ncomplaint.\n\nDavid J.\n\nOn Wed, Feb 15, 2023 at 2:31 PM David Zhang <david.zhang@highgo.ca> wrote:\n\nThere is a default built-in role `pg_monitor` and the behavior\n changed after the patch. If `\\dg+` and `\\du+` is treated as the\n same, and `make check` all pass, then I assume there is no test\n case to verify the output of `duS+`. My point is should we\n consider add a test case? I mean, either you accept the change in how this meta-command presents its information or you don't.  
I don't see how a test case is particularly beneficial.  Or, at least the pg_monitor role is not special in this regard.  Alice changed too and you don't seem to be including it in your complaint.David J.", "msg_date": "Wed, 15 Feb 2023 14:37:52 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 2023-02-15 1:37 p.m., David G. Johnston wrote:\n\n> On Wed, Feb 15, 2023 at 2:31 PM David Zhang <david.zhang@highgo.ca> wrote:\n>\n> There is a default built-in role `pg_monitor` and the behavior\n> changed after the patch. If `\\dg+` and `\\du+` is treated as the\n> same, and `make check` all pass, then I assume there is no test\n> case to verify the output of `duS+`. My point is should we\n> consider add a test case?\n>\n> I mean, either you accept the change in how this meta-command presents \n> its information or you don't.  I don't see how a test case is \n> particularly beneficial.  Or, at least the pg_monitor role is not \n> special in this regard.  Alice changed too and you don't seem to be \n> including it in your complaint.\nGood improvement, +1.\n\n\n\n\n\nOn 2023-02-15 1:37 p.m., David G. Johnston wrote:\n\n\n\n\n\nOn Wed, Feb\n 15, 2023 at 2:31 PM David Zhang <david.zhang@highgo.ca>\n wrote:\n\n\n\n\n\nThere is a default built-in role `pg_monitor` and the\n behavior changed after the patch. If `\\dg+` and `\\du+`\n is treated as the same, and `make check` all pass, then\n I assume there is no test case to verify the output of\n `duS+`. My point is should we consider add a test case?\n\n\n\n \n\nI mean,\n either you accept the change in how this meta-command\n presents its information or you don't.  I don't see how a\n test case is particularly beneficial.  Or, at least the\n pg_monitor role is not special in this regard.  
Alice\n changed too and you don't seem to be including it in your\n complaint.\n\n\n\n\n Good improvement, +1.", "msg_date": "Wed, 15 Feb 2023 14:52:20 -0800", "msg_from": "David Zhang <david.zhang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 16.02.2023 00:37, David G. Johnston wrote:\n> I mean, either you accept the change in how this meta-command presents \n> its information or you don't.\n\nYes, that's the main issue of this patch.\n\nOn the one hand, it would be nice to see the membership options with the \npsql command.\n\nOn the other hand, I don't have an exact understanding of how best to do \nit. That's why I proposed a variant for discussion. It is quite possible \nthat if there is no consensus, it would be better to leave it as is, and \nget information by queries to the system catalog.\n\n-----\nPavel Luzanov\n\n\n\n\n\n\nOn 16.02.2023 00:37, David G.\n Johnston wrote:\n\n\n\n\n\nI mean, either you accept the\n change in how this meta-command presents its information\n or you don't.\n\n\n\n\nYes, that's the main issue of this patch.\nOn the one hand, it would be nice to see the membership\n options with the psql command.\nOn the other hand, I don't have an exact understanding of how\n best to do it. 
That's why I proposed a variant for discussion.\n It is quite possible that if there is no consensus, it would be\n better to leave it as is, and get information by queries to the\n system catalog.\n\n\n-----\nPavel Luzanov", "msg_date": "Thu, 16 Feb 2023 21:03:29 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Hello,\n> On the one hand, it would be nice to see the membership options with \n> the psql command.\n\nAfter playing with cf5eb37c and e5b8a4c0 I think something must be made \nwith \\du command.\n\npostgres@demo(16.0)=# CREATE ROLE admin LOGIN CREATEROLE;\nCREATE ROLE\npostgres@demo(16.0)=# \\c - admin\nYou are now connected to database \"demo\" as user \"admin\".\nadmin@demo(16.0)=> SET createrole_self_grant = 'SET, INHERIT';\nSET\nadmin@demo(16.0)=> CREATE ROLE bob LOGIN;\nCREATE ROLE\nadmin@demo(16.0)=> \\du\n\n                                    List of roles\n  Role name | Attributes                         | Member of\n-----------+------------------------------------------------------------+-----------\n  admin     | Create role                                                \n| {bob,bob}\n  bob |                                                            | {}\n  postgres  | Superuser, Create role, Create DB, Replication, Bypass RLS \n| {}\n\nWe see two bob roles in the 'Member of' column.Strange? 
But this is correct.\n\nadmin@demo(16.0)=> select roleid::regrole, member::regrole, * from \npg_auth_members where roleid = 'bob'::regrole;\n  roleid | member |  oid  | roleid | member | grantor | admin_option | \ninherit_option | set_option\n--------+--------+-------+--------+--------+---------+--------------+----------------+------------\n  bob    | admin  | 16713 |  16712 |  16711 |      10 | t            | \nf              | f\n  bob    | admin  | 16714 |  16712 |  16711 |   16711 | f            | \nt              | t\n(2 rows)\n\nFirst 'grant bob to admin' command issued immediately after creating \nrole bob by superuser(grantor=10). Second command issues by admin role \nand set membership options SET and INHERIT.\n\nIf we don't ready to display membership options with \\du+ may be at \nleast we must group records in 'Member of' column for \\du command?\n\n-----\nPavel Luzanov\n\n\n\n\n\n\nHello,\n\nOn\n the one hand, it would be nice to see the membership options\n with the psql command.\nAfter playing with cf5eb37c and e5b8a4c0 I think something\n must be made with \\du command.\npostgres@demo(16.0)=# CREATE ROLE admin LOGIN CREATEROLE;\nCREATE ROLE\npostgres@demo(16.0)=# \\c - admin\nYou are now connected to database \"demo\" as user \"admin\".\nadmin@demo(16.0)=> SET createrole_self_grant = 'SET,\n INHERIT';\nSET\nadmin@demo(16.0)=> CREATE ROLE bob LOGIN;\nCREATE ROLE\nadmin@demo(16.0)=> \\du\n\n                                   List of roles\n Role name |                        \n Attributes                         | Member of \n-----------+------------------------------------------------------------+-----------\n admin     | Create\n role                                                | {bob,bob}\n bob      \n |                                                            | {}\n postgres  | Superuser, Create role, Create DB,\n Replication, Bypass RLS | {}\n\nWe see two bob roles in the 'Member of' column. Strange? 
But this is correct.\n\nadmin@demo(16.0)=> select roleid::regrole,\n member::regrole, * from pg_auth_members where roleid =\n 'bob'::regrole;\n roleid | member |  oid  | roleid | member | grantor |\n admin_option | inherit_option | set_option \n--------+--------+-------+--------+--------+---------+--------------+----------------+------------\n bob    | admin  | 16713 |  16712 |  16711 |      10 |\n t            | f              | f\n bob    | admin  | 16714 |  16712 |  16711 |   16711 |\n f            | t              | t\n(2 rows)\nFirst 'grant bob to admin' command issued immediately after\n creating role bob by superuser(grantor=10). Second command\n issues by admin role and set membership options SET and INHERIT.\nIf we don't ready to display membership options with \\du+ may be\n at least we must group records in 'Member of' column for \\du\n command?\n\n\n-----\nPavel Luzanov", "msg_date": "Fri, 17 Feb 2023 14:02:01 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Fri, Feb 17, 2023 at 4:02 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n\n> List of roles\n> Role name | Attributes |\n> Member of\n>\n> -----------+------------------------------------------------------------+-----------\n> admin | Create role |\n> {bob,bob}\n> bob | |\n> {}\n> postgres | Superuser, Create role, Create DB, Replication, Bypass RLS |\n> {}\n>\n> First 'grant bob to admin' command issued immediately after creating role\n> bob by superuser(grantor=10). Second command issues by admin role and set\n> membership options SET and INHERIT.\n> If we don't ready to display membership options with \\du+ may be at least\n> we must group records in 'Member of' column for \\du command?\n>\n\nI agree that these views should GROUP BY roleid and use bool_or(*_option)\nto produce their result. 
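For illustration only, a minimal sketch of such a grouped query (this assumes the v16 pg_auth_members columns admin_option, inherit_option and set_option, and is not necessarily the query psql would actually build):

```sql
-- One row per (member, role) pair; multiple grants of the same
-- membership are collapsed and their options OR-ed together.
SELECT m.member::regrole         AS role_name,
       m.roleid::regrole         AS member_of,
       bool_or(m.admin_option)   AS admin,
       bool_or(m.inherit_option) AS inherit,
       bool_or(m.set_option)     AS set_role
FROM pg_auth_members m
GROUP BY m.member, m.roleid
ORDER BY 1, 2;
```

With such grouping the duplicated {bob,bob} entry shown earlier in the thread would collapse to a single row whose option flags reflect both grants.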
Their purpose is to communicate the current\neffective state to the user, not facilitate full inspection of the\nconfiguration, possibly to aid in issuing GRANT and REVOKE commands.\n\nOne thing I found, and I plan to bring this up independently once I've\ncollected my thoughts, is that pg_has_role() uses the terminology \"USAGE\"\nand \"MEMBER\" for \"INHERIT\" and \"SET\" respectively.\n\nIt's annoying that \"member\" has been overloaded here. And the choice of\nUSAGE just seems arbitrary (though I haven't researched it) given the\nrelated syntax.\n\nhttps://www.postgresql.org/docs/15/functions-info.html\n\nOn Fri, Feb 17, 2023 at 4:02 AM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n                                   List of roles\n Role name |                        \n Attributes                         | Member of \n-----------+------------------------------------------------------------+-----------\n admin     | Create\n role                                                | {bob,bob}\n bob      \n |                                                            | {}\n postgres  | Superuser, Create role, Create DB,\n Replication, Bypass RLS | {}\nFirst 'grant bob to admin' command issued immediately after\n creating role bob by superuser(grantor=10). Second command\n issues by admin role and set membership options SET and INHERIT.\nIf we don't ready to display membership options with \\du+ may be\n at least we must group records in 'Member of' column for \\du\n command?I agree that these views should GROUP BY roleid and use bool_or(*_option) to produce their result.  
Their purpose is to communicate the current effective state to the user, not facilitate full inspection of the configuration, possibly to aid in issuing GRANT and REVOKE commands.One thing I found, and I plan to bring this up independently once I've collected my thoughts, is that pg_has_role() uses the terminology \"USAGE\" and \"MEMBER\" for \"INHERIT\" and \"SET\" respectively.It's annoying that \"member\" has been overloaded here.  And the choice of USAGE just seems arbitrary (though I haven't researched it) given the related syntax.https://www.postgresql.org/docs/15/functions-info.html", "msg_date": "Fri, 17 Feb 2023 09:53:35 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 17.02.2023 19:53, David G. Johnston wrote:\n> On Fri, Feb 17, 2023 at 4:02 AM Pavel Luzanov \n> <p.luzanov@postgrespro.ru> wrote:\n>\n>                                    List of roles\n>  Role name | Attributes                         | Member of\n> -----------+------------------------------------------------------------+-----------\n>  admin     | Create\n> role                                                | {bob,bob}\n>  bob | | {}\n>  postgres  | Superuser, Create role, Create DB, Replication,\n> Bypass RLS | {}\n>\n> First 'grant bob to admin' command issued immediately after\n> creating role bob by superuser(grantor=10). Second command issues\n> by admin role and set membership options SET and INHERIT.\n>\n> If we don't ready to display membership options with \\du+ may be\n> at least we must group records in 'Member of' column for \\du command?\n>\n>\n> I agree that these views should GROUP BY roleid and use \n> bool_or(*_option) to produce their result.\n\nOk, I'll try in the next few days. But what presentation format to use?\n\n1. bob(admin_option=t inherit_option=t set_option=f) -- it seems very long\n2. 
bob(ai) -- short, but will it be clear?\n3. something else?\n\n> Their purpose is to communicate the current effective state to the \n> user, not facilitate full inspection of the configuration, possibly to \n> aid in issuing GRANT and REVOKE commands.\n\nThis can help in issuing GRANT command, but not REVOKE. Revoking a \nrole's membership is now very similar to revoking privileges. Only the \nrole that granted membership can revoke that membership. So for REVOKE \nyou need to know who granted membership, but this information will not \nbe available after grouping.\n\n> One thing I found, and I plan to bring this up independently once I've \n> collected my thoughts, is that pg_has_role() uses the terminology \n> \"USAGE\" and \"MEMBER\" for \"INHERIT\" and \"SET\" respectively.\n>\n> It's annoying that \"member\" has been overloaded here.  And the choice \n> of USAGE just seems arbitrary (though I haven't researched it) given \n> the related syntax.\n>\n> https://www.postgresql.org/docs/15/functions-info.html\n>\n\nI didn't even know this function existed. But I see that it was changed \nin 3d14e171 with updated documentation:\nhttps://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-ACCESS\nMaybe that's enough.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 17.02.2023 19:53, David G. 
Johnston wrote:\n\n\n\n\n\nOn Fri, Feb 17, 2023 at 4:02 AM\n Pavel Luzanov <p.luzanov@postgrespro.ru>\n wrote:\n\n\n\n\n                                   List of roles\n  Role name |                        \n Attributes                         | Member of \n -----------+------------------------------------------------------------+-----------\n  admin     | Create\n role                                                |\n {bob,bob}\n  bob      \n |                                                           \n | {}\n  postgres  | Superuser, Create role, Create DB,\n Replication, Bypass RLS | {}\n\n\nFirst 'grant bob to admin' command issued\n immediately after creating role bob by\n superuser(grantor=10). Second command issues by admin\n role and set membership options SET and INHERIT.\nIf we don't ready to display membership options with\n \\du+ may be at least we must group records in 'Member\n of' column for \\du command?\n\n\n\nI agree that these views should\n GROUP BY roleid and use bool_or(*_option) to produce their\n result.  \n\n\n\n\nOk, I'll try in the next few days. But what presentation\n format to use?\n\n1. bob(admin_option=t inherit_option=t set_option=f) -- it\n seems very long\n2. bob(ai) -- short, but will it be clear?\n3. something else?\n\n\n\n\n\nTheir purpose is to communicate\n the current effective state to the user, not facilitate\n full inspection of the configuration, possibly to aid in\n issuing GRANT and REVOKE commands.\n\n\n\n\nThis can help in issuing GRANT command, but not REVOKE.\n Revoking a role's membership is now very similar to revoking\n privileges. Only the role that granted membership can revoke that\n membership. 
So for REVOKE you need to know who granted membership,\n but this information will not be available after grouping.\n\n\n\n\n\nOne thing I found, and I plan\n to bring this up independently once I've collected my\n thoughts, is that pg_has_role() uses the terminology\n \"USAGE\" and \"MEMBER\" for \"INHERIT\" and \"SET\" respectively.\n\n\nIt's annoying that \"member\" has\n been overloaded here.  And the choice of USAGE just seems\n arbitrary (though I haven't researched it) given the\n related syntax.\n\n\nhttps://www.postgresql.org/docs/15/functions-info.html\n\n\n\n\n\n\n\nI didn't even know this function existed. But I see that it\n was changed in 3d14e171 with updated documentation:\nhttps://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-ACCESS\nMaybe that's enough.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Wed, 22 Feb 2023 00:14:34 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Tue, Feb 21, 2023 at 2:14 PM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n\n> On 17.02.2023 19:53, David G. Johnston wrote:\n>\n> On Fri, Feb 17, 2023 at 4:02 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\n> wrote:\n>\n>> List of roles\n>> Role name | Attributes |\n>> Member of\n>>\n>> -----------+------------------------------------------------------------+-----------\n>> admin | Create role |\n>> {bob,bob}\n>> bob | |\n>> {}\n>> postgres | Superuser, Create role, Create DB, Replication, Bypass RLS |\n>> {}\n>>\n>> First 'grant bob to admin' command issued immediately after creating role\n>> bob by superuser(grantor=10). 
Second command issues by admin role and set\n>> membership options SET and INHERIT.\n>> If we don't ready to display membership options with \\du+ may be at least\n>> we must group records in 'Member of' column for \\du command?\n>>\n>\n> I agree that these views should GROUP BY roleid and use bool_or(*_option)\n> to produce their result.\n>\n>\n> Ok, I'll try in the next few days. But what presentation format to use?\n>\n>\nThis is the format I've gone for (more-or-less) in my RoleGraph view (I'll\nbe sharing it publicly in the near future).\n\nbob from grantor (a, s, i) \\n\nadam from postgres (a, s, i) \\n\nemily from postgres (empty)\nI don't think first-letter mnemonics will be an issue - you need to learn\nthe syntax anyway. And it is already what we do for object grants besides.\n\nBased upon prior comments going for something like the following is\nundesirable: bob=asi/grantor\n\nSo I converted the \"/\" into \"from\" and stuck the permissions on the end\ninstead of in the middle (makes reading the \"from\" fragment cleaner).\n\nTo be clear, this is going away from grouping but trades verbosity and\ndeviation from what is done today for better information. If we are going\nto break this I suppose we might as well break it thoroughly.\n\n\n>\n> I didn't even know this function existed. But I see that it was changed in\n> 3d14e171 with updated documentation:\n>\n> https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-ACCESS\n> Maybe that's enough.\n>\n>\nI think that should probably have ADMIN as one of the options as well.\nAlso curious what it reports for an empty membership.\n\nDavid J.", "msg_date": "Tue, 21 Feb 2023 14:34:27 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 22.02.2023 00:34, David G. Johnston wrote:\n> This is the format I've gone for (more-or-less) in my RoleGraph view \n> (I'll be sharing it publicly in the near future).\n>\n> bob from grantor (a, s, i) \\n\n> adam from postgres (a, s, i) \\n\n> emily from postgres (empty)\n\nI think this is a good compromise.\n\n> Based upon prior comments going for something like the following is \n> undesirable: bob=asi/grantor\n\nAgree. Membership options are not an ACL (although they have \nsimilarities). Therefore, showing them as an ACL-like column would be \nconfusing.\n\nSo, please find attached the second version of the patch. It implements \nthe suggested display format and a small refactoring of the existing code \nfor the \\du command.\nAs a non-native writer, I have doubts about the documentation part.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Mon, 27 Feb 2023 23:14:42 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Next version (v3) addresses complaints from cfbot. 
Changed only tests.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Wed, 1 Mar 2023 13:55:13 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Hello,\n\nOn 22.02.2023 00:34, David G. Johnston wrote:\n> I didn't even know this function existed. But I see that it was \n> changed in 3d14e171 with updated documentation:\n> https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-ACCESS\n> Maybe that's enough.\n>\n>\n> I think that should probably have ADMIN as one of the options as well. \n> Also curious what it reports for an empty membership.\n\nI've been experimenting for a few days and I have to admit that this is \na very difficult and not obvious topic.\nI'll try to summarize what I think.\n\n1.\nAbout the ADMIN value for pg_has_role.\nThe implementation of an ADMIN value will be different from USAGE and SET.\nTo be true, the USAGE value requires the full chain of memberships to have \nthe INHERIT option.\nSimilarly with SET: the full chain of memberships must have the SET option.\nBut for ADMIN, only the last member in the chain must have the ADMIN option, and \nall previous members\nmust have the INHERIT option (to administer directly) or the SET option (to switch to the \nrole that is last in the chain).\nTherefore, it is not obvious to me that the function needs the ADMIN value.\n\n2.\nThe pg_has_role function description starts with: Does user have privilege \nfor role?\n     - This is not exact: the function works not only with users, but with \nNOLOGIN roles too.\n     - Term \"privilege\": this term is used for ACL columns, so such usage may \nbe confusing,\n       especially after adding INHERIT and SET in addition to the ADMIN option.\n\n3.\nIt is possible to grant membership with all three options turned off:\n     grant a to b with admin false, inherit false, set false;\nBut such membership is completely useless (if I didn't miss 
something).\nMaybe such grants should be prohibited. At least this could be documented \nin the GRANT command.\n\n4.\nSince v16 it is possible to grant membership from one role to another \nseveral times with different grantors.\nAnd only the grantor can revoke that membership.\n     - This is not documented anywhere.\n     - The current behavior of the \\du command with duplicated roles in \"Member \nof\" column is strongly confusing.\n       This is one of the goals of the discussed patch.\n\nI plan to write about this to pgsql-docs in addition to this topic.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Fri, 3 Mar 2023 14:01:59 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Fri, Mar 3, 2023 at 4:01 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n\n> Hello,\n>\n> On 22.02.2023 00:34, David G. Johnston wrote:\n>\n> I didn't even know this function existed. 
But I see that it was changed in\n> 3d14e171 with updated documentation:\n>\n> https://www.postgresql.org/docs/devel/functions-info.html#FUNCTIONS-INFO-ACCESS\n> Maybe that's enough.\n>\n>\n> I think that should probably have ADMIN as one of the options as well.\n> Also curious what it reports for an empty membership.\n>\n>\n> I've been experimenting for a few days and I want to admit that this is a\n> very difficult and not obvious topic.\n> I'll try to summarize what I think.\n>\n> 1.\n> About ADMIN value for pg_has_role.\n> Implementation of ADMIN value will be different from USAGE and SET.\n> To be True, USAGE value requires the full chain of memberships to have\n> INHERIT option.\n> Similar with SET: the full chain of memberships must have SET option.\n> But for ADMIN, only last member in the chain must have ADMIN option and\n> all previous members\n> must have INHERIT (to administer directly) or SET option (to switch to\n> role, last in the chain).\n> Therefore, it is not obvious to me that the function needs the ADMIN value.\n>\n\nOr you can SET to some role that then has an unbroken INHERIT chain to the\nadministered role.\n\nADMIN basically implies SET/USAGE but it doesn't work the other way around.\n\nI'd be fine with \"pg_can_admin_role\" being a newly created function that\nprovides this true/false answer but it seems indisputable that today there\nis no core-provided means to answer the question \"can one role get ADMIN\nrights on another role\". Modifying \\du to show this seems out-of-scope but\nthe pg_has_role function already provides that question for INHERIT and SET\nso it is at least plausible to extend it to include ADMIN, even if the\nphrase \"has role\" seems a bit of a misnomer. I do cover this aspect with\nthe Role Graph pseudo-extension but given the presence and ease-of-use of a\nboolean-returning function this seems like a natural addition. 
We've also\nsurvived quite long without it - this isn't a new concept in v16, just a\nbit refined.\n\n\n>\n> 2.\n> pg_has_role function description starts with: Does user have privilege for\n> role?\n> - This is not exact: function works not only with users, but with\n> NOLOGIN roles too.\n> - Term \"privilege\": this term used for ACL columns, such usage may be\n> confusing,\n> especially after adding INHERIT and SET in addition to ADMIN option.\n>\n\nYes, it missed the whole \"there are only roles now\" memo. I don't have an\nissue with using privilege here though - you have to use the GRANT command\nwhich \"defines access privileges\". Otherwise \"membership option\" or maybe\njust \"option\" would need to be explored.\n\n\n>\n> 3.\n> It is possible to grant membership with all three options turned off:\n> grant a to b with admin false, inherit false, set false;\n> But such membership is completely useless (if i didn't miss something).\n> May be such grants must be prohibited. At least this may be documented in\n> the GRANT command.\n>\n\nI have no issue with prohibiting the \"empty membership\" if someone wants to\ncode that up.\n\n\n> 4.\n> Since v16 it is possible to grant membership from one role to another\n> several times with different grantors.\n> And only grantor can revoke membership.\n> - This is not documented anywhere.\n>\n\nYeah, a pass over the GRANTED BY actual operation versus documentation\nseems warranted.\n\n\n> - Current behavior of \\du command with duplicated roles in \"Member of\"\n> column strongly confusing.\n> This is one of the goals of the discussion patch.\n>\n\nThis indeed needs to be fixed, one way (include grantor) or the other\n(du-duplicate), with the current proposal of including grantor getting my\nvote.\n\n\n>\n> I think to write about this to pgsql-docs additionally to this topic.\n>\n\nI wouldn't bother starting yet another thread in this area right now, this\none can absorb some related changes as well as the subject line 
item.\n\nDavid J.", "msg_date": "Fri, 3 Mar 2023 09:21:11 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 03.03.2023 19:21, David G. Johnston wrote:\n> I'd be fine with \"pg_can_admin_role\" being a newly created function \n> that provides this true/false answer but it seems indisputable that \n> today there is no core-provided means to answer the question \"can one \n> role get ADMIN rights on another role\".  Modifying \\du to show this \n> seems out-of-scope but the pg_has_role function already provides that \n> question for INHERIT and SET so it is at least plausible to extend it \n> to include ADMIN, even if the phrase \"has role\" seems a bit of a \n> misnomer.  I do cover this aspect with the Role Graph pseudo-extension \n> but given the presence and ease-of-use of a boolean-returning function \n> this seems like a natural addition.  We've also survived quite long \n> without it - this isn't a new concept in v16, just a bit refined.\n\nI must admit that I am slowly coming to the same conclusions that you \nhave already outlined in previous messages.\n\nIndeed, adding ADMIN to pg_has_role looks logical. The function will \nshow whether one role can manage another directly or indirectly (via SET \nROLE).\nAdding ADMIN will lead to the question of naming other values.
It is \nmore reasonable to have INHERIT instead of USAGE.\nAnd it is not very clear whether (except for backward compatibility) a \nseparate MEMBER value is needed at all.\n\n> I wouldn't bother starting yet another thread in this area right now, \n> this one can absorb some related changes as well as the subject line item.\n\nI agree.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Mon, 6 Mar 2023 10:43:22 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Mon, Mar 6, 2023 at 12:43 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n\n> Indeed, adding ADMIN to pg_has_role looks logical. The function will show\n> whether one role can manage another directly or indirectly (via SET ROLE).\n>\n\nFWIW I've finally gotten to publishing my beta version of the Role Graph\nfor PostgreSQL pseudo-extension I'd been talking about:\n\nhttps://github.com/polobo/RoleGraphForPostgreSQL\n\nThe administration column basically determines all this via a recursive\nCTE. I'm pondering how to incorporate some of this core material into it\nnow for both cross-validation purposes and ease-of-use.\n\nI handle EMPTY explicitly in the Role Graph but as I noted somewhere in my\ncomments, it really shouldn't be possible to leave the database in that\nstate. Do we need to bug Robert on this directly or do you plan to have a\ngo at it?\n\nAdding ADMIN will lead to the question of naming other values. It is more\n> reasonable to have INHERIT instead of USAGE.\n>\nAnd it is not very clear whether (except for backward compatibility) a\n> separate MEMBER value is needed at all.\n>\n\nThere is an appeal to breaking things thoroughly here and removing both\nMEMBER and USAGE terms while adding ADMIN, SET, INHERIT.\n\nHowever, I am against that.
Most everyday usage of this would only likely\ncare about SET and INHERIT even going forward - administration of roles is\ndistinct from how those roles generally behave at runtime. Breaking the\nlater because we improved upon the former doesn't seem defensible. Thus,\nwhile adding ADMIN makes sense, keeping MEMBER (a.k.a., SET) and USAGE\n(a.k.a., INHERIT) is my suggested way forward.\n\nI'll be looking over your v3 patch sometime this week, if not today.\n\nDavid J.", "msg_date": "Tue, 7 Mar 2023 14:02:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Tue, Mar 7, 2023 at 2:02 PM David G. Johnston <david.g.johnston@gmail.com>\nwrote:\n\n>\n> I'll be looking over your v3 patch sometime this week, if not today.\n>\n>\nMoving the goal posts for this meta-command to >= 9.5 seems like it should\nbe done as a separate patch and thread. The documentation presently states\nwe are targeting 9.2 and newer.\n\nMy suggestion for the docs is below. I find saying \"additional\ninformation is shown...currently this adds the comment\". Repeating that\n\"+\" means (show more) everywhere seems excessive, just state what those\n\"more\" things are. I consider \\dFp and \\dl to be good examples in this\nregard.\n\nI also think that \"Wall of text\" doesn't serve us well. See \\dP for\npermission to use paragraphs.\n\nI didn't modify \\du to match; keeping those in sync (as opposed to having\n\\du just say \"see \\dg\") seems acceptable.\n\nYou had the direction of membership wrong in your copy: \"For each\nmembership in the role\" describes the reverse of \"Member of\" which is what\nthe column is.
The actual format template is constructed properly.\n\n--- a/doc/src/sgml/ref/psql-ref.sgml\n+++ b/doc/src/sgml/ref/psql-ref.sgml\n@@ -1727,15 +1727,18 @@ INSERT INTO tbl1 VALUES ($1, $2) \\bind 'first\nvalue' 'second value' \\g\n <literal>S</literal> modifier to include system roles.\n If <replaceable class=\"parameter\">pattern</replaceable> is\nspecified,\n only those roles whose names match the pattern are listed.\n- For each membership in the role, the membership options and\n- the role that granted the membership are displayed.\n- Оne-letter abbreviations are used for membership options:\n- <literal>a</literal> &mdash; admin option, <literal>i</literal>\n&mdash; inherit option,\n- <literal>s</literal> &mdash; set option and\n<literal>empty</literal> if no one is set.\n- See <link linkend=\"sql-grant\"><command>GRANT</command></link>\ncommand for their meaning.\n- If the form <literal>\\dg+</literal> is used, additional information\n- is shown about each role; currently this adds the comment for each\n- role.\n+ </para>\n+ <para>\n+ Shown within each row, in newline-separated format, are the\nmemberships granted to\n+ the role. The presentation includes both the name of the grantor\n+ as well as the membership permissions (in an abbreviated format:\n+ <literal>a</literal> for admin option, <literal>i</literal> for\ninherit option,\n+ <literal>s</literal> for set option.) 
The word\n<literal>empty</literal> is printed in\n+ the case that none of those permissions are granted.\n+ See the <link\nlinkend=\"sql-grant\"><command>GRANT</command></link>\ncommand for their meaning.\n+ </para>\n+ <para>\n+ If the form <literal>\\dg+</literal> is used the comment attached\nto the role is shown.\n </para>\n </listitem>\n </varlistentry>\n\nI would suggest tweaking the test output to include regress_du_admin and\nalso to make regress_du_admin a CREATEROLE role with LOGIN.\n\nI'll need to update the Role Graph View to add the spaces and swap the\norder of the \"s\" and \"i\" symbols.\n\nDavid J.", "msg_date": "Tue, 7 Mar 2023 19:31:42 -0700", "msg_from": "\"David G.
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 08.03.2023 05:31, David G. Johnston wrote:\n> Moving the goal posts for this meta-command to >= 9.5 seems like it \n> should be done as a separate patch and thread.  The documentation \n> presently states we are targeting 9.2 and newer.\n\nI missed the comment at the beginning of the file about version 9.2. I \nwill return the version check for rolbypassrls.\n\n> My suggestion for the docs is below.\n\n> +        <para>\n> +        Shown within each row, in newline-separated format, are the \n> memberships granted to\n> +        the role.  The presentation includes both the name of the grantor\n> +        as well as the membership permissions (in an abbreviated format:\n> +        <literal>a</literal> for admin option, <literal>i</literal> \n> for inherit option,\n> +        <literal>s</literal> for set option.) The word \n> <literal>empty</literal> is printed in\n> +        the case that none of those permissions are granted.\n> +        See the <link \n> linkend=\"sql-grant\"><command>GRANT</command></link> command for their \n> meaning.\n> +        </para>\n> +        <para>\n> +        If the form <literal>\\dg+</literal> is used the comment \n> attached to the role is shown.\n>          </para>\n\nThanks. I will replace the description with this one.\n\n> I would suggest tweaking the test output to include regress_du_admin \n> and also to make regress_du_admin a CREATEROLE role with LOGIN.\n\nOk.\n\nThank you for review. I will definitely work on the new version, but \nunfortunately and with a high probability it will happen after March 20.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 08.03.2023 05:31, David G. Johnston wrote:\n\n\n\n\n\nMoving the goal posts for this\n meta-command to >= 9.5 seems like it should be done as\n a separate patch and thread.  
The documentation presently\n states we are targeting 9.2 and newer.\n\n\n\n\nI missed the comment at the beginning of the file about\n version 9.2. I will return the version check for rolbypassrls.\n\n\n\n\n\nMy suggestion for the docs is\n below.\n\n\n\n\n\n\n\n\n+        <para>\n+        Shown within each row, in\n newline-separated format, are the memberships granted to\n+        the role.  The presentation includes both\n the name of the grantor\n+        as well as the membership permissions (in\n an abbreviated format:\n+        <literal>a</literal> for admin\n option, <literal>i</literal> for inherit\n option,\n+        <literal>s</literal> for set\n option.) The word <literal>empty</literal> is\n printed in\n+        the case that none of those permissions\n are granted.\n+        See the <link\n linkend=\"sql-grant\"><command>GRANT</command></link>\n command for their meaning.\n+        </para>\n+        <para>\n+        If the form\n <literal>\\dg+</literal> is used the comment\n attached to the role is shown.\n         </para>\n\n\n\n\n\nThanks. I will replace the description with this one.\n\n\n\n\n\nI would suggest tweaking the\n test output to include regress_du_admin and also to make\n regress_du_admin a CREATEROLE role with LOGIN.\n\n\n\n\nOk.\n\nThank you for review. I will definitely work on the new\n version, but unfortunately and with a high probability it will\n happen after March 20.\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Fri, 10 Mar 2023 15:06:04 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 08.03.2023 00:02, David G. Johnston wrote:\n>\n> FWIW I've finally gotten to publishing my beta version of the Role \n> Graph for PostgreSQL pseudo-extension I'd been talking about:\n>\n> https://github.com/polobo/RoleGraphForPostgreSQL\n\nGreat. 
So far I've looked very briefly, but it's interesting.\n\n> I handle EMPTY explicitly in the Role Graph but as I noted somewhere \n> in my comments, it really shouldn't be possible to leave the database \n> in that state.  Do we need to bug Robert on this directly or do you \n> plan to have a go at it?\n\nI don't plan to do that. Right now I don't have enough time and \nexperience. This requires an experienced developer.\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com\n\n\n\n\n\n\nOn 08.03.2023 00:02, David G. Johnston wrote:\n\n\n\n\n\n\n\n\nFWIW I've finally gotten to\n publishing my beta version of the Role Graph for\n PostgreSQL pseudo-extension I'd been talking about:\n\n\nhttps://github.com/polobo/RoleGraphForPostgreSQL\n\n\n\n\n\nGreat. So far I've looked very briefly, but it's\n interesting.\n\n\n\n\n\nI handle EMPTY explicitly in\n the Role Graph but as I noted somewhere in my comments, it\n really shouldn't be possible to leave the database in that\n state.  Do we need to bug Robert on this directly or do\n you plan to have a go at it?\n\n\n\n\nI don't plan to do that. Right now I don't have enough time\n and experience. This requires an experienced developer.\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Fri, 10 Mar 2023 15:18:26 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 10.03.2023 15:06, Pavel Luzanov wrote:\n> I missed the comment at the beginning of the file about version 9.2. I \n> will return the version check for rolbypassrls.\n\n\n>> +        <para>\n>> +        Shown within each row, in newline-separated format, are the \n>> memberships granted to\n>> +        the role.  
The presentation includes both the name of the \n>> grantor\n>> +        as well as the membership permissions (in an abbreviated format:\n>> +        <literal>a</literal> for admin option, <literal>i</literal> \n>> for inherit option,\n>> +        <literal>s</literal> for set option.) The word \n>> <literal>empty</literal> is printed in\n>> +        the case that none of those permissions are granted.\n>> +        See the <link \n>> linkend=\"sql-grant\"><command>GRANT</command></link> command for their \n>> meaning.\n>> +        </para>\n>> +        <para>\n>> +        If the form <literal>\\dg+</literal> is used the comment \n>> attached to the role is shown.\n>>          </para>\n>\n> Thanks. I will replace the description with this one.\n\n>\n>> I would suggest tweaking the test output to include regress_du_admin \n>> and also to make regress_du_admin a CREATEROLE role with LOGIN.\n>\n> Ok.\n\nPlease review the attached version 4 with the changes discussed.\n\n-----\nPavel Luzanov", "msg_date": "Mon, 20 Mar 2023 11:49:55 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": ">>> I would suggest tweaking the test output to include regress_du_admin ...\n\nI found (with a help of cfbot) difficulty with this. The problem is the bootstrap superuser name (oid=10).\nThis name depends on the OS username. In my case it's pal, but in most cases it's postgres or something else.\nAnd the output of \\du regress_du_admin can't be predicted:\n\n\\du regress_du_admin\n List of roles\n Role name | Attributes | Member of\n------------------+-------------+-------------------------------------\n regress_du_admin | Create role | regress_du_role0 from pal (a, i, s)+\n | | regress_du_role1 from pal (a, i, s)+\n | | regress_du_role2 from pal (a, i, s)\n\n\nSo, I decided not to include regress_du_admin in the test output.\n\nPlease, see version 5 attached. 
Only tests changed.\n\n-----\nPavel Luzanov", "msg_date": "Tue, 21 Mar 2023 06:37:22 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "In the previous version, I didn't notice (unlike cfbot) the compiler \nwarning. Fixed in version 6.\n\n-----\nPavel Luzanov", "msg_date": "Wed, 22 Mar 2023 21:11:16 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Wed, Mar 22, 2023 at 11:11 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n\n> In the previous version, I didn't notice (unlike cfbot) the compiler\n> warning. Fixed in version 6.\n>\n>\nI've marked this Ready for Committer.\n\nMy opinion is that this is a necessary modification due to the\nalready-committed changes to the membership grant implementation and so\nonly needs to be accepted prior to v16 going live, not feature freeze.\n\nI've added Robert to this thread as the committer of said changes.\n\nDavid J.\n\nOn Wed, Mar 22, 2023 at 11:11 AM Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:In the previous version, I didn't notice (unlike cfbot) the compiler \nwarning. Fixed in version 6.I've marked this Ready for Committer.My opinion is that this is a necessary modification due to the already-committed changes to the membership grant implementation and so only needs to be accepted prior to v16 going live, not feature freeze.I've added Robert to this thread as the committer of said changes.David J.", "msg_date": "Mon, 3 Apr 2023 15:06:02 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> I've marked this Ready for Committer.\n\nHmm ... 
not sure I like the proposed output. The 'a', 'i', 's'\nannotations are short but they don't have much else to recommend them.\nOn the other hand, there's nearby precedent for single-letter\nabbreviations in ACL displays. Nobody particularly likes those,\nthough. Also, if we're modeling this on ACLs then the display\ncould be further shortened to \"(ais)\" or the like.\n\nAlso, the patch is ignoring i18n issues. I suppose if we stick with\nsaid single-letter abbreviations we'd not translate them, but the\nconstruction \"rolename from rolename\" ought to be translatable IMO.\nPerhaps it'd be enough to allow replacement of \"from\", but I wonder\nif the phrase order would need to be different in some languages?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Apr 2023 12:13:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Tue, Apr 4, 2023 at 9:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > I've marked this Ready for Committer.\n>\n> Hmm ... not sure I like the proposed output. The 'a', 'i', 's'\n> annotations are short but they don't have much else to recommend them.\n> On the other hand, there's nearby precedent for single-letter\n> abbreviations in ACL displays. Nobody particularly likes those,\n> though. Also, if we're modeling this on ACLs then the display\n> could be further shortened to \"(ais)\" or the like.\n>\n\nI am on board with removing the comma and space between the specifiers. My\nparticular issue with the condensed form is readability, especially the\nlowercase \"i\". We aren't so desperate for horizontal space here that\ncompaction seems particularly desirable.\n\n>\n> Also, the patch is ignoring i18n issues.\n\n\nFair point.\n\n> I suppose if we stick with\n> said single-letter abbreviations we'd not translate them,\n\n\nCorrect. 
I don't see this being a huge issue - the abbreviations are the\nfirst letter of the various option \"keywords\" specified in the syntax.\n\n\n> but the\n> construction \"rolename from rolename\" ought to be translatable IMO.\n> Perhaps it'd be enough to allow replacement of \"from\", but I wonder\n> if the phrase order would need to be different in some languages?\n>\n>\nLeveraging position and some optional symbols for readability, and sticking\nwith the premise that abbreviations down to the first letter of the\nrelevant syntax keyword is OK:\n\nrolename [g: grantor_role] (ais)\n\nI don't have any ideas regarding i18n concerns besides avoiding them by not\nusing words...but I'd much prefer \"from\" and just hope the equivalent in\nother languages is just as understandable.\n\nI'd rather have the above than go and fully try to emulate ACL presentation\njust to avoid i18n issues.\n\nDavid J.", "msg_date": "Tue, 4 Apr 2023 09:38:04 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Tue, Apr 4, 2023 at 12:13 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm ... not sure I like the proposed output. The 'a', 'i', 's'\n> annotations are short but they don't have much else to recommend them.\n\nYeah, I don't like that, either.\n\nI'm not sure what the right thing to do is here. It's a problem to\nhave new information in the catalogs that you can't view via\n\\d<whatever>. But displaying that information as a string of\ncharacters that will be gibberish to anyone but an expert doesn't\nnecessarily seem like it really solves the problem. However, if we\nspell out the words, then it gets bulky. 
But maybe bulky is better\nthan incomprehensible.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Apr 2023 12:39:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm not sure what the right thing to do is here. It's a problem to\n> have new information in the catalogs that you can't view via\n> \\d<whatever>. But displaying that information as a string of\n> characters that will be gibberish to anyone but an expert doesn't\n> necessarily seem like it really solves the problem. However, if we\n> spell out the words, then it gets bulky. But maybe bulky is better\n> than incomprehensible.\n\nThe existing precedent in \\du definitely leans towards \"bulky\":\n\nregression=# \\du\n List of roles\n Role name | Attributes | Member of \n-----------+------------------------------------------------------------+-----------\n alice | Cannot login | {bob}\n bob | Cannot login | {}\n postgres | Superuser, Create role, Create DB, Replication, Bypass RLS | {}\n\nIt seems pretty inconsistent to me to treat the role attributes this\nway and then economize in the adjacent column.\n\nAnother advantage to spelling out the SQL keywords is that it removes\nthe question of whether we should translate them.\n\nI wonder if, while we're here, we should apply the idea of\njoining-with-newlines-not-commas to the attributes column too.\nThat's another source of inconsistency in the proposed display.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Apr 2023 13:12:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Tue, Apr 4, 2023 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wonder if, while we're here, we should apply the idea of\n> 
joining-with-newlines-not-commas to the attributes column too.\n> That's another source of inconsistency in the proposed display.\n\nThat would make the column narrower, which might be good, because it\nseems to me that listing the memberships could require quite a lot of\nspace, both vertical and horizontal.\n\nThere can be any number of memberships, and each of those memberships\nhas a grantor and three flag bits (INHERIT, SET, ADMIN). If some user\nwith a long username has been granted membership with all three of\nthose flags by a grantor who also has a long username, and if we show\nall that information, we're going to use up a lot of horizontal space.\nAnd if there are many such grants, also a lot of vertical space.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Apr 2023 13:29:10 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Apr 4, 2023 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wonder if, while we're here, we should apply the idea of\n>> joining-with-newlines-not-commas to the attributes column too.\n\n> That would make the column narrower, which might be good, because it\n> seems to me that listing the memberships could require quite a lot of\n> space, both vertical and horizontal.\n\nRight, that's what I was thinking.\n\n> There can be any number of memberships, and each of those memberships\n> has a grantor and three flag bits (INHERIT, SET, ADMIN). 
If some user\n> with a long username has been granted membership with all three of\n> those flags by a grantor who also has a long username, and if we show\n> all that information, we're going to use up a lot of horizontal space.\n> And if there are many such grants, also a lot of vertical space.\n\nYup --- and if you were incautious enough to not exclude the bootstrap\nsuperuser, then you'll also have a very wide Attributes column. We\ncan buy back some of that by joining the attributes with newlines.\nAt some point people are going to have to resort to \\x mode for this\ndisplay, but we should do what we can to put that off as long as we're\nnot sacrificing readability.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 04 Apr 2023 13:37:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Tue, Apr 4, 2023 at 10:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Apr 4, 2023 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I wonder if, while we're here, we should apply the idea of\n> >> joining-with-newlines-not-commas to the attributes column too.\n>\n> > That would make the column narrower, which might be good, because it\n> > seems to me that listing the memberships could require quite a lot of\n> > space, both vertical and horizontal.\n>\n> Right, that's what I was thinking.\n>\n>\nSo, by way of example:\n\nregress_du_role1 | cannot login | regress_du_role0 granted by\nregress_du_admin with admin, inherit, set | Description for regress_du_role1\n\n~140 character width with description\n\nNo translations, all words are either identical to syntax or identifiers.\n\nI'm on board with newlines in the attributes field.\n\nThe specific member of column changes are:\n\nfrom -> granted by\n( ) -> \"with\"\nais -> admin, inherit, set\n\nI'm good with any or all of those selections, 
either as-is or in the more\nverbose form.\n\nDavid J.", "msg_date": "Tue, 4 Apr 2023 12:02:25 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Tue, Apr 4, 2023 at 3:02 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> So, by way of example:\n>\n> regress_du_role1 | cannot login | regress_du_role0 granted by regress_du_admin with admin, inherit, set | Description for regress_du_role1\n>\n> ~140 character width with description\n\nThat seems wider than necessary. 
Why not have the third column be\nsomething like regress_du_role0 by regress_du_admin (admin, inherit,\nset)?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 4 Apr 2023 16:00:43 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 04.04.2023 23:00, Robert Haas wrote:\n> On Tue, Apr 4, 2023 at 3:02 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n>> So, by way of example:\n>>\n>> regress_du_role1 | cannot login | regress_du_role0 granted by regress_du_admin with admin, inherit, set | Description for regress_du_role1\n>>\n>>\n>> That seems wider than necessary. Why not have the third column be\n>> something like regress_du_role0 by regress_du_admin (admin, inherit,\n>> set)?\n\n'granted by' can be left without translation, but just 'by' required \ntranslation, as I think.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Tue, 4 Apr 2023 23:42:04 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 04.04.2023 22:02, David G. 
Johnston wrote:\n> On Tue, Apr 4, 2023 at 10:37 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Tue, Apr 4, 2023 at 1:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I wonder if, while we're here, we should apply the idea of\n> >> joining-with-newlines-not-commas to the attributes column too.\n>\n> > That would make the column narrower, which might be good, because it\n> > seems to me that listing the memberships could require quite a\n> lot of\n> > space, both vertical and horizontal.\n>\n> Right, that's what I was thinking.\n>\n>\n> So, by way of example:\n>\n> regress_du_role1 | cannot login | regress_du_role0 granted by \n> regress_du_admin with admin, inherit, set | Description for \n> regress_du_role1\n\nPerhaps more closely to syntax?\n\nregress_du_role0 with admin, inherit, set granted by regress_du_admin\n\ninstead of\n\nregress_du_role0 granted by regress_du_admin with admin, inherit, set\n\n\n> No translations, all words are either identical to syntax or identifiers.\n>\n> I'm on board with newlines in the attributes field.\n\n+1\n\n> The specific member of column changes are:\n>\n> from -> granted by\n> ( ) -> \"with\"\n> ais -> admin, inherit, set\n>\n> I'm good with any or all of those selections, either as-is or in the \n> more verbose form.\n\n From yesterday's discussion, I think two things are important:\n- it is advisable to avoid translation,\n- there is no sense in the abbreviation (a,i,s), if there are full names \nin the 'attributes' column.\n\nSo I agree with such changes and plan to implement them.\n\nAnd one more question. (I think it's better to have it explicitly \nrejected than to keep silent.)\n\nWhat if this long output will be available only for \\du+, and for \\du \njust show distinct (without duplicates)\nroles in the current array format? 
For those, who don't care about these \nnew membership options, nothing will change.\nThose, who need details will use the + modifier.\n?\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Wed, 5 Apr 2023 10:42:19 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n> What if this long output will be available only for \\du+, and for \\du \n> just show distinct (without duplicates)\n> roles in the current array format? For those, who don't care about these \n> new membership options, nothing will change.\n> Those, who need details will use the + modifier.\n> ?\n\nI kind of like that. Would we change to newlines in the Attributes\nfield in both \\du and \\du+? (I'm +1 for that, but maybe others aren't.)\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 05 Apr 2023 09:58:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Wed, Apr 5, 2023 at 6:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n> > What if this long output will be available only for \\du+, and for \\du\n> > just show distinct (without duplicates)\n> > roles in the current array format? For those, who don't care about these\n> > new membership options, nothing will change.\n> > Those, who need details will use the + modifier.\n> > ?\n>\n> I kind of like that.  Would we change to newlines in the Attributes\n> field in both \\du and \\du+?  
(I'm +1 for that, but maybe others aren't.)\n>\n>\nIf we don't change the \\du \"Member of\" column display (aside from removing\nduplicates) I'm disinclined to change the Attributes column.\n\nI too am partial to only exposing this detail on the extended (+) display.\n\nDavid J.", "msg_date": "Wed, 5 Apr 2023 07:24:16 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "After playing with the \\du command, I found that we can't avoid translation.\n\nAll attributes are translatable. 
Also, two of nine attributes shows in \nnew line separated format (connection limit and password valid until).\n\n$ LANGUAGE=fr psql -c \"ALTER ROLE postgres CONNECTION LIMIT 3 VALID UNTIL 'infinity'\" -c '\\du'\nALTER ROLE\n                                    Liste des rôles\n Nom du rôle |                                    Attributs                                    | Membre de\n-------------+---------------------------------------------------------------------------------+-----------\n postgres    | Superutilisateur, Créer un rôle, Créer une base, Réplication, Contournement RLS+| {}\n             | 3 connexions                                                                   +|\n             | Mot de passe valide jusqu'à infinity                                            |\n\n\nSo I decided to keep the format suggested by David, but without \nabbreviations and only for extended mode.\n\n$ psql -c '\\duS+'\n                                                          List of roles\n          Role name          |          Attributes           |                     Member of                     | Description\n-----------------------------+-------------------------------+---------------------------------------------------+-------------\n pg_checkpoint               | Cannot login                  |                                                   |\n pg_create_subscription      | Cannot login                  |                                                   |\n pg_database_owner           | Cannot login                  |                                                   |\n pg_execute_server_program   | Cannot login                  |                                                   |\n pg_maintain                 | Cannot login                  |                                                   |\n pg_monitor                  | Cannot login                  | pg_read_all_settings from postgres (inherit, set)+|\n                             |                               | pg_read_all_stats from postgres (inherit, set)   +|\n                             |                               | pg_stat_scan_tables from postgres (inherit, set)  |\n pg_read_all_data            | Cannot login                  |                                                   |\n pg_read_all_settings        | Cannot login                  |                                                   |\n pg_read_all_stats           | Cannot login                  |                                                   |\n pg_read_server_files        | Cannot login                  |                                                   |\n pg_signal_backend           | Cannot login                  |                                                   |\n pg_stat_scan_tables         | Cannot login                  |                                                   |\n pg_use_reserved_connections | Cannot login                  |                                                   |\n pg_write_all_data           | Cannot login                  |                                                   |\n pg_write_server_files       | Cannot login                  |                                                   |\n postgres                    | Superuser                    +|                                                   |\n                             | Create role                  +|                                                   |\n                             | Create DB                    +|                                                   |\n                             | Replication                  +|                                                   |\n                             | Bypass RLS                   +|                                                   |\n                             | 3 connections                +|                                                   |\n                             | Password valid until infinity |                                                   |\n\n\nPlease look at new version. I understand that this is a compromise choice.\nI am ready to change it if a better solution is offered.\n\nP.S. 
If no objections I plan to add this patch to Open Items for v16\nhttps://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n\nOn 05.04.2023 17:24, David G. Johnston wrote:\n> On Wed, Apr 5, 2023 at 6:58 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n> > What if this long output will be available only for \\du+, and\n> for \\du\n> > just show distinct (without duplicates)\n> > roles in the current array format? For those, who don't care\n> about these\n> > new membership options, nothing will change.\n> > Those, who need details will use the + modifier.\n> > ?\n>\n> I kind of like that.  Would we change to newlines in the Attributes\n> field in both \\du and \\du+?  (I'm +1 for that, but maybe others\n> aren't.)\n>\n>\n> If we don't change the \\du \"Member of\" column display (aside from \n> removing duplicates) I'm disinclined to change the Attributes column.\n>\n> I too am partial to only exposing this detail on the extended (+) display.\n>\n> David J.\n>\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Thu, 13 Apr 2023 15:44:20 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Sorry for joining in late..\n\nAt Thu, 13 Apr 2023 15:44:20 +0300, Pavel Luzanov <p.luzanov@postgrespro.ru> wrote in \n> After playing with the \\du command, I found that we can't avoid\n> translation.\n> All attributes are translatable. Also, two of nine attributes shows in\n> new line separated format (connection limit and password valid until).\n\nGoing a bit off-topic here, but I'd like the \"infinity\" to be\ntranslatable...\n\nAs David-G appears to express concern in upthread, I don't think a\ndirect Japanese translation from \"rolename from rolename\" works well,\nas the \"from\" needs accompanying verb. 
I, as a Japanese speaker, I\nprefer a more non-sentence-like notation, similar to David's\nsuggestion but with slight differences:\n\n\"pg_read_all_stats (grantor: postgres, inherit, set)\"\n\nThis is easily translated into Japanese.\n\n\"pg_read_all_stats (付与者: postgres、継承、設定)\"\n\nCome to think of this, I recalled a past discussion in which we\nconcluded that translating punctuation marks appearing between a\nvariable number of items within list expressions should be avoided...\n\nThus, I'd like to propose to use an ACL-like notation, which doesn't\nneed translation.\n\n..| Mamber of |\n..| pg_read_server_files=ais/horiguti,pg_execute_server_program=/postgres | \n\nIf we'd like, but not likely, we might want to provide a parallel\nfunction to aclexplode for this notation.\n\n=# select memberofexplode('pg_read_server_files=ais/horiguti,pg_execute_server_program=/postgres');\n privilege | grantor | admin | inherit | set\n---------------------------+----------+-------+---------+-------\npg_read_server_files | horiguti | true | true | true\npg_execute_server_programs | postgres | false | false | false\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 14 Apr 2023 16:28:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 14.04.2023 10:28, Kyotaro Horiguchi wrote:\n> As David-G appears to express concern in upthread, I don't think a\n> direct Japanese translation from \"rolename from rolename\" works well,\n> as the \"from\" needs accompanying verb. 
I, as a Japanese speaker, I\n> prefer a more non-sentence-like notation, similar to David's\n> suggestion but with slight differences:\n>\n> \"pg_read_all_stats (grantor: postgres, inherit, set)\"\n\nIn this form, it confuses me that 'postgres' and 'inherit, set' look \nlike a common list.\n\n> Come to think of this, I recalled a past discussion in which we\n> concluded that translating punctuation marks appearing between a\n> variable number of items within list expressions should be avoided...\n>\n> Thus, I'd like to propose to use an ACL-like notation, which doesn't\n> need translation.\n>\n> ..| Mamber of |\n> ..| pg_read_server_files=ais/horiguti,pg_execute_server_program=/postgres |\n\nIt's very tempting to do so. But I don't like this approach. Showing \nmembership options as an ACL-like column will be confusing.\nEven in your example, my first reaction is that \npg_execute_server_program is available to public.\n(As for the general patterns, we can also consider combining three \noptions into one column, like pg_class.reloptions.)\n\nSo, yet another way to discuss:\n\npg_read_all_stats(inherit,set/horiguti)\npg_execute_server_program(empty/postgres)\n\n\nOne more point. Grants without any option probably should be prohibited \nas useless. But this is for a new thread.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Sat, 15 Apr 2023 16:16:26 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 4/13/23 8:44 AM, Pavel Luzanov wrote:\r\n\r\n> P.S. If no objections I plan to add this patch to Open Items for v16\r\n> https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\r\n\r\n[RMT hat]\r\n\r\nI don't see why this is an open item as this feature was not committed \r\nfor v16. 
Open items typically revolve around committed features.\r\n\r\nUnless someone makes a convincing argument otherwise, I'll remove this \r\nfrom the Open Items list[1] tomorrow.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items", "msg_date": "Wed, 3 May 2023 12:00:05 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Wed, May 3, 2023 at 9:00 AM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> On 4/13/23 8:44 AM, Pavel Luzanov wrote:\n>\n> > P.S. If no objections I plan to add this patch to Open Items for v16\n> > https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n>\n> [RMT hat]\n>\n> I don't see why this is an open item as this feature was not committed\n> for v16. Open items typically revolve around committed features.\n>\n> Unless someone makes a convincing argument otherwise, I'll remove this\n> from the Open Items list[1] tomorrow.\n>\n> Thanks,\n>\n> Jonathan\n>\n> [1] https://wiki.postgresql.org/wiki/PostgreSQL_16_Open_Items\n\n\nThe argument is that updating the psql \\d views to show the newly added\noptions is something that the patch to add those options should have done\nbefore being committed. Or, at worse, we should decide now that we don't\nwant to do so and spare people the effort of trying to get this committed\nlater.\n\nDavid J.", "msg_date": "Wed, 3 May 2023 09:13:57 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Wed, May 3, 2023 at 9:00 AM Jonathan S. Katz <jkatz@postgresql.org>\n> wrote:\n>> I don't see why this is an open item as this feature was not committed\n>> for v16. Open items typically revolve around committed features.\n\n> The argument is that updating the psql \\d views to show the newly added\n> options is something that the patch to add those options should have done\n> before being committed.\n\nYeah, if there is not any convenient way to see that info in psql\nthen that seems like a missing part of the feature.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 03 May 2023 12:25:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 5/3/23 12:25 PM, Tom Lane wrote:\r\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\r\n>> On Wed, May 3, 2023 at 9:00 AM Jonathan S. Katz <jkatz@postgresql.org>\r\n>> wrote:\r\n>>> I don't see why this is an open item as this feature was not committed\r\n>>> for v16. 
Open items typically revolve around committed features.\r\n> \r\n>> The argument is that updating the psql \\d views to show the newly added\r\n>> options is something that the patch to add those options should have done\r\n>> before being committed.\r\n> \r\n> Yeah, if there is not any convenient way to see that info in psql\r\n> then that seems like a missing part of the feature.\r\n\r\n[RMT hat]\r\n\r\nOK -- I was rereading the thread again to see if I could glean that \r\ninsight. There was a comment buried in the thread with David's opinion \r\non that front, but no one had +1'd that.\r\n\r\nHowever, if this is for feature completeness, I'll withdraw the closing \r\nof the open item, but would strongly suggest we complete it in time for \r\nBeta 1.\r\n\r\n[Personal hat]\r\n\r\nLooking at Pavel's latest patch, I do find the output easy to \r\nunderstand, though do we need to explicitly list \"empty\" if there are no \r\nmembership permissions?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 3 May 2023 12:30:34 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Wed, May 3, 2023 at 9:30 AM Jonathan S. Katz <jkatz@postgresql.org>\nwrote:\n\n> [Personal hat]\n>\n> Looking at Pavel's latest patch, I do find the output easy to\n> understand, though do we need to explicitly list \"empty\" if there are no\n> membership permissions?\n>\n>\nYes. I dislike having the equivalent of null embedded within the output\nhere. I would rather label it for what it is. 
As a membership without any\nattributes has no real purpose I don't see how the choice matters and at\nleast empty both stands out visually and can be grepped.\n\nThe SQL language uses the words \"by\" and \"from\" in its syntax; I'm against\navoiding them in our presentation here without a clearly superior\nalternative that doesn't require a majority of people to have to translate\nthe symbol \" / \" back into the word \" by \" in order to read the output.\n\nBut if it is really a blocker then maybe we should produce 3 separate\nnewline separated columns, one for the member of role, one for the list of\nattributes, and one with the grantor. The column headers can be translated\nmore easily as single nouns. The readability quite probably would end up\nbeing equivalent (maybe even better) in tabular form instead of sentence\nform.\n\nDavid J.\n", "msg_date": "Fri, 5 May 2023 09:51:58 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 05.05.2023 19:51, David G. Johnston wrote:\n> But if it is really a blocker then maybe we should produce 3 separate \n> newline separated columns, one for the member of role, one for the \n> list of attributes, and one with the grantor.  The column headers can \n> be translated more easily as single nouns.  The readability quite \n> probably would end up being equivalent (maybe even better) in tabular \n> form instead of sentence form.\n\nJust to visualize this approach. Below are the output for the tabular \nform and the sentence form from last patch version (sql script attached):\n\nTabular form\n\n     rolname      |     memberof     |       options       |     grantor      \n------------------+------------------+---------------------+------------------\n postgres         |                  |                     | \n regress_du_admin | regress_du_role0+| admin, inherit, set+| postgres        +\n                  | regress_du_role1+| admin, inherit, set+| postgres        +\n                  | regress_du_role2 | admin, inherit, set | postgres\n regress_du_role0 |                  |                     | \n regress_du_role1 | regress_du_role0+| admin, inherit, set+| regress_du_admin+\n                  | regress_du_role0+| inherit            +| regress_du_role1+\n                  | regress_du_role0 | set                 | regress_du_role2\n regress_du_role2 | regress_du_role0+| admin              +| regress_du_admin+\n                  | regress_du_role0+| inherit, set       +| regress_du_role1+\n                  | regress_du_role0+| empty              +| regress_du_role2+\n                  | regress_du_role1 | admin, set          | regress_du_admin\n(5 rows)\n\nSentence form from patch v7\n\n     rolname      |                           memberof                           \n------------------+--------------------------------------------------------------\n postgres         | \n regress_du_admin | regress_du_role0 from postgres (admin, inherit, set)        +\n                  | regress_du_role1 from postgres (admin, inherit, set)        +\n                  | regress_du_role2 from postgres (admin, inherit, set)\n regress_du_role0 | \n regress_du_role1 | regress_du_role0 from regress_du_admin (admin, inherit, set)+\n                  | regress_du_role0 from regress_du_role1 (inherit)            +\n                  | regress_du_role0 from regress_du_role2 (set)\n regress_du_role2 | regress_du_role0 from regress_du_admin (admin)              +\n                  | regress_du_role0 from regress_du_role1 (inherit, set)       +\n                  | regress_du_role0 from regress_du_role2 (empty)              +\n                  | regress_du_role1 from regress_du_admin (admin, set)\n(5 rows)\n\nThe tabular form solves the latest patch translation problems mentioned by Kyotaro.\nBut it requires mapping elements between 3 array-like columns.\n\nTo move forward, needs more opinions?\n\n-----\nPavel Luzanov", "msg_date": "Sun, 7 May 2023 22:14:41 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 5/7/23 3:14 PM, Pavel Luzanov wrote:\r\n> On 05.05.2023 19:51, David G. Johnston wrote:\r\n>> But if it is really a blocker then maybe we should produce 3 separate \r\n>> newline separated columns, one for the member of role, one for the \r\n>> list of attributes, and one with the grantor.  The column headers can \r\n>> be translated more easily as single nouns.  
The readability quite \r\n>> probably would end up being equivalent (maybe even better) in tabular \r\n>> form instead of sentence form.\r\n> \r\n> Just to visualize this approach. Below are the output for the tabular \r\n> form and the sentence form from last patch version (sql script attached):\r\n> \r\n> Tabular form\r\n> \r\n>      rolname      |     memberof     |       options       |     grantor      \r\n> ------------------+------------------+---------------------+------------------\r\n>  postgres         |                  |                     | \r\n>  regress_du_admin | regress_du_role0+| admin, inherit, set+| postgres        +\r\n>                   | regress_du_role1+| admin, inherit, set+| postgres        +\r\n>                   | regress_du_role2 | admin, inherit, set | postgres\r\n>  regress_du_role0 |                  |                     | \r\n>  regress_du_role1 | regress_du_role0+| admin, inherit, set+| regress_du_admin+\r\n>                   | regress_du_role0+| inherit            +| regress_du_role1+\r\n>                   | regress_du_role0 | set                 | regress_du_role2\r\n>  regress_du_role2 | regress_du_role0+| admin              +| regress_du_admin+\r\n>                   | regress_du_role0+| inherit, set       +| regress_du_role1+\r\n>                   | regress_du_role0+| empty              +| regress_du_role2+\r\n>                   | regress_du_role1 | admin, set          | regress_du_admin\r\n> (5 rows)\r\n> \r\n> Sentence form from patch v7\r\n> \r\n>      rolname      |                           memberof                           \r\n> ------------------+--------------------------------------------------------------\r\n>  postgres         | \r\n>  regress_du_admin | regress_du_role0 from postgres (admin, inherit, set)        +\r\n>                   | regress_du_role1 from postgres (admin, inherit, set)        +\r\n>                   | regress_du_role2 from postgres (admin, inherit, set)\r\n>  regress_du_role0 | \r\n>  regress_du_role1 | regress_du_role0 from regress_du_admin (admin, inherit, set)+\r\n>                   | regress_du_role0 from regress_du_role1 (inherit)            +\r\n>                   | regress_du_role0 from regress_du_role2 (set)\r\n>  regress_du_role2 | regress_du_role0 from regress_du_admin (admin)              +\r\n>                   | regress_du_role0 from regress_du_role1 (inherit, set)       +\r\n>                   | regress_du_role0 from regress_du_role2 (empty)              +\r\n>                   | regress_du_role1 from regress_du_admin (admin, set)\r\n> (5 rows)\r\n> \r\n> The tabular form solves the latest patch translation problems mentioned by Kyotaro.\r\n> But it requires mapping elements between 3 array-like columns.\r\n> \r\n> To move forward, needs more opinions?\r\n\r\n[RMT Hat]\r\n\r\nNudging this along, as it's an open item. It'd be good to get this \r\nresolved before Beta 1, but that may be tough at this point.\r\n\r\n[Personal hat]\r\n\r\nI'm probably not the target user for this feature, so I'm not sure how \r\nmuch you should weigh my opinion (e.g. I still don't agree with \r\nexplicitly showing \"empty\", but as mentioned, I'm not the target user).\r\n\r\nThat said, from a readability standpoint, it was easier for me to follow \r\nthe tabular form vs. the sentence form.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Wed, 17 May 2023 22:42:40 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 18.05.2023 05:42, Jonathan S. Katz wrote:\n\n> That said, from a readability standpoint, it was easier for me to \n> follow the tabular form vs. the sentence form.\n\nMay be possible to reach an agreement on the sentence form. 
Similar descriptions used\nfor referential constraints in the \d command:\n\ncreate table t1 (id int primary key);\ncreate table t2 (id int references t1(id));\n\d t2\n                 Table \"public.t2\"\n Column |  Type   | Collation | Nullable | Default \n--------+---------+-----------+----------+---------\n id     | integer |           |          | \nForeign-key constraints:\n    \"t2_id_fkey\" FOREIGN KEY (id) REFERENCES t1(id)\n\nAs for tabular form it looks more natural to have a separate psql command\nfor pg_auth_members system catalog. Something based on this query:\n\nSELECT r.rolname role, m.rolname member,\n       admin_option admin, inherit_option inherit, set_option set,\n       g.rolname grantor\nFROM pg_catalog.pg_auth_members pam\n     JOIN pg_catalog.pg_roles r ON (pam.roleid = r.oid)\n     JOIN pg_catalog.pg_roles m ON (pam.member = m.oid)\n     JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid)\nWHERE r.rolname !~ '^pg_'\nORDER BY role, member, grantor;\n       role       |      member      | admin | inherit | set |     grantor      \n------------------+------------------+-------+---------+-----+------------------\n regress_du_role0 | regress_du_admin | t     | t       | t   | postgres\n regress_du_role0 | regress_du_role1 | t     | t       | t   | regress_du_admin\n regress_du_role0 | regress_du_role1 | f     | t       | f   | regress_du_role1\n regress_du_role0 | regress_du_role1 | f     | f       | t   | regress_du_role2\n regress_du_role0 | regress_du_role2 | t     | f       | f   | regress_du_admin\n regress_du_role0 | regress_du_role2 | f     | t       | t   | regress_du_role1\n regress_du_role0 | regress_du_role2 | f     | f       | f   | regress_du_role2\n regress_du_role1 | regress_du_admin | t     | t       | t   | postgres\n regress_du_role1 | regress_du_role2 | t     | f       | t   | regress_du_admin\n regress_du_role2 | regress_du_admin | t     | t       | t   | postgres\n(10 rows)\n\nBut is it worth inventing a new psql command for this?\n\n-----\nPavel Luzanov", "msg_date": "Thu, 18 May 2023 16:07:22 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Robert - can you please comment on what you are willing to commit in order\nto close out your open item here. My take is that the design for this, the\ntabular form a couple of emails ago (copied here), is ready-to-commit, just\nneeding the actual (trivial) code changes to be made to accomplish it.\n\nTabular form\n\n     rolname      |     memberof     |       options       |     grantor      \n------------------+------------------+---------------------+------------------\n postgres         |                  |                     | \n regress_du_admin | regress_du_role0+| admin, inherit, set+| postgres        +\n                  | regress_du_role1+| admin, inherit, set+| postgres        +\n                  | regress_du_role2 | admin, inherit, set | postgres\n regress_du_role0 |                  |                     | \n regress_du_role1 | regress_du_role0+| admin, inherit, set+| regress_du_admin+\n                  | regress_du_role0+| inherit            +| regress_du_role1+\n                  | regress_du_role0 | set                 | regress_du_role2\n regress_du_role2 | regress_du_role0+| admin              +| regress_du_admin+\n                  | regress_du_role0+| inherit, set       +| regress_du_role1+\n                  | regress_du_role0+| empty              +| regress_du_role2+\n                  | regress_du_role1 | admin, set          | regress_du_admin\n(5 rows)\n\n\nOn Thu, May 18, 2023 at 6:07 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n\n> On 18.05.2023 05:42, Jonathan S. Katz wrote:\n>\n> That said, from a readability standpoint, it was easier for me to follow\n> the tabular form vs. the sentence form.\n>\n> May be possible to reach a agreement on the sentence form. 
Similar descriptions used\n> for referential constraints in the \d command:\n>\n> I think we should consider the tabular form with translatable headers to\nbe the acceptable choice here. I don't see enough value in the sentence\nform to make it worth trying to overcome the i18n objection there.\n\n> As for tabular form it looks more natural to have a separate psql command\n> for pg_auth_members system catalog. Something based on this query:But is it worth inventing a new psql command for this?\n>\n>\nIMO, no. I'd much rather read \"admin, inherit, set\" than deal with\ntrue/false in columns. I think the newlines are better compared to\nrepetition of the rolname as well.\n\nI'm also strongly in favor of explicitly writing out the word \"empty\"\ninstead of leaving the column blank for the case that no options are marked\ntrue. But it isn't a show-stopper for me.\n\nDavid J.\n", "msg_date": "Thu, 15 Jun 2023 11:47:56 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 6/15/23 2:47 PM, David G. Johnston wrote:\r\n> Robert - can you please comment on what you are willing to commit in \r\n> order to close out your open item here.  
My take is that the design for \r\n> this, the tabular form a couple of emails ago (copied here), is \r\n> ready-to-commit, just needing the actual (trivial) code changes to be \r\n> made to accomplish it.\r\n> \r\n> Tabular form\r\n> \r\n>      rolname      |     memberof     |       options       |     grantor      \r\n> ------------------+------------------+---------------------+------------------\r\n>  postgres         |                  |                     | \r\n>  regress_du_admin | regress_du_role0+| admin, inherit, set+| postgres        +\r\n>                   | regress_du_role1+| admin, inherit, set+| postgres        +\r\n>                   | regress_du_role2 | admin, inherit, set | postgres\r\n>  regress_du_role0 |                  |                     | \r\n>  regress_du_role1 | regress_du_role0+| admin, inherit, set+| regress_du_admin+\r\n>                   | regress_du_role0+| inherit            +| regress_du_role1+\r\n>                   | regress_du_role0 | set                 | regress_du_role2\r\n>  regress_du_role2 | regress_du_role0+| admin              +| regress_du_admin+\r\n>                   | regress_du_role0+| inherit, set       +| regress_du_role1+\r\n>                   | regress_du_role0+| empty              +| regress_du_role2+\r\n>                   | regress_du_role1 | admin, set          | regress_du_admin\r\n> (5 rows)\r\n> \r\n\r\n[RMT hat]\r\n\r\nCan we resolve this before Beta 2?[1] The RMT originally advised to try \r\nto resolve before Beta 1[2], and this seems to be lingering.\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] \r\nhttps://www.postgresql.org/message-id/460ae02a-3123-16a3-f2d7-ccd79778819b%40postgresql.org\r\n[2] \r\nhttps://www.postgresql.org/message-id/d61db38b-29d9-81cc-55b3-8a5c704bb969%40postgresql.org", "msg_date": "Mon, 19 Jun 2023 09:31:14 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "\"Jonathan S. 
Katz\" <jkatz@postgresql.org> writes:\n> On 6/15/23 2:47 PM, David G. Johnston wrote:\n>> Robert - can you please comment on what you are willing to commit in \n>> order to close out your open item here.  My take is that the design for \n>> this, the tabular form a couple of emails ago (copied here), is \n>> ready-to-commit, just needing the actual (trivial) code changes to be \n>> made to accomplish it.\n\n> Can we resolve this before Beta 2?[1] The RMT originally advised to try \n> to resolve before Beta 1[2], and this seems to be lingering.\n\nAt this point I kinda doubt that we can get this done before beta2\neither, but I'll put in my two cents anyway:\n\n* I agree that the \"tabular\" format looks nicer and has fewer i18n\nissues than the other proposals.\n\n* Personally I could do without the \"empty\" business, but that seems\nunnecessary in the tabular format; an empty column will serve fine.\n\n* I also agree with Pavel's comment that we'd be better off taking\nthis out of \\du altogether and inventing a separate \\d command.\nMaybe \"\\drg\" for \"display role grants\"?\n\n* Parenthetically, the \"Attributes\" column of \\du is a complete\ndisaster, lacking not only conceptual but even notational consistency.\n(Who decided that some items belonged on their own line and others\nnot?) I suppose it's way too late to redesign that for v16. But\nI think we'd have more of a free hand to clean that up if we weren't\ntrying to shoehorn role grants into the same display.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 22 Jun 2023 20:08:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Thu, Jun 22, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Jonathan S. Katz\" <jkatz@postgresql.org> writes:\n> > On 6/15/23 2:47 PM, David G. 
Johnston wrote:\n> >> Robert - can you please comment on what you are willing to commit in\n> >> order to close out your open item here. My take is that the design for\n> >> this, the tabular form a couple of emails ago (copied here), is\n> >> ready-to-commit, just needing the actual (trivial) code changes to be\n> >> made to accomplish it.\n>\n> > Can we resolve this before Beta 2?[1] The RMT originally advised to try\n> > to resolve before Beta 1[2], and this seems to be lingering.\n>\n> At this point I kinda doubt that we can get this done before beta2\n> either, but I'll put in my two cents anyway:\n>\n> * I agree that the \"tabular\" format looks nicer and has fewer i18n\n> issues than the other proposals.\n>\n\nAs you are on board with a separate command please clarify whether you mean\nthe tabular format but still with newlines, one row per grantee, or the\ntable with one row per grantor-grantee pair.\n\nI still like using newlines here even in the separate meta-command.\n\n>\n> * Personally I could do without the \"empty\" business, but that seems\n> unnecessary in the tabular format; an empty column will serve fine.\n>\n\nI disagree, but not strongly.\n\nI kinda expected you to be on the side of \"why are we discussing a\nsituation that should just be prohibited\" though.\n\n\n> * I also agree with Pavel's comment that we'd be better off taking\n> this out of \\du altogether and inventing a separate \\d command.\n> Maybe \"\\drg\" for \"display role grants\"?\n>\n\nJust to be clear, the open item fix proposal is to remove the presently\nbroken (due to it showing duplicates without any context) \"member of\" array\nin \\du and make a simple table report output in \\drg instead.\n\nI'm good with \\drg as a new meta-command.\n\n\n> * Parenthetically, the \"Attributes\" column of \\du is a complete\n> disaster\n>\n>\nI hadn't thought about this in detail but did get the same impression.\n\nDavid J.\n", "msg_date": "Fri, 23 Jun 2023 08:52:34 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thu, Jun 22, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * I agree that the \"tabular\" format looks nicer and has fewer i18n\n>> issues than the other proposals.\n\n> As you are on board with a separate command please clarify whether you mean\n> the tabular format but still with newlines, one row per grantee, or the\n> table with one row per grantor-grantee pair.\n\nI'd lean towards a straight table with a row per grantee/grantor.\nI tend to think that faking table layout with some newlines is\na poor idea. I'm not dead set on that approach though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Jun 2023 12:16:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 6/23/23 11:52 AM, David G. Johnston wrote:\r\n> On Thu, Jun 22, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us \r\n> <mailto:tgl@sss.pgh.pa.us>> wrote:\r\n> \r\n> \"Jonathan S. Katz\" <jkatz@postgresql.org\r\n> <mailto:jkatz@postgresql.org>> writes:\r\n> > On 6/15/23 2:47 PM, David G. Johnston wrote:\r\n> >> Robert - can you please comment on what you are willing to\r\n> commit in\r\n> >> order to close out your open item here.  
My take is that the\r\n> design for\r\n> >> this, the tabular form a couple of emails ago (copied here), is\r\n> >> ready-to-commit, just needing the actual (trivial) code changes\r\n> to be\r\n> >> made to accomplish it.\r\n> \r\n> > Can we resolve this before Beta 2?[1] The RMT originally advised\r\n> to try\r\n> > to resolve before Beta 1[2], and this seems to be lingering.\r\n> \r\n> At this point I kinda doubt that we can get this done before beta2\r\n> either, but I'll put in my two cents anyway:\r\n\r\n[RMT Hat]\r\n\r\nWell, the probability of completing this before the beta 2 freeze is \r\neffectively zero now. This is a bit disappointing as there was ample \r\ntime since the first RMT nudge on the issue. But let's move forward and \r\nresolve it before Beta 3.\r\n\r\n> * I agree that the \"tabular\" format looks nicer and has fewer i18n\r\n> issues than the other proposals.\r\n> \r\n> As you are on board with a separate command please clarify whether you \r\n> mean the tabular format but still with newlines, one row per grantee, or \r\n> the table with one row per grantor-grantee pair.\r\n> \r\n> I still like using newlines here even in the separate meta-command.\r\n\r\n(I'll save for the downthread comment).\r\n\r\n> \r\n> * Personally I could do without the \"empty\" business, but that seems\r\n> unnecessary in the tabular format; an empty column will serve fine.\r\n> \r\n> \r\n> I disagree, but not strongly.\r\n> \r\n> I kinda expected you to be on the side of \"why are we discussing a \r\n> situation that should just be prohibited\" though.\r\n\r\n[Personal hat]\r\n\r\nI'm still not a fan of \"empty\" but perhaps the formatting around the \r\n\"separate command\" will help drive a conclusion on this.\r\n\r\n> \r\n> * I also agree with Pavel's comment that we'd be better off taking\r\n> this out of \\du altogether and inventing a separate \\d command.\r\n> Maybe \"\\drg\" for \"display role grants\"?\r\n> \r\n> Just to be clear, the open item fix 
proposal is to remove the presently \r\n> broken (due to it showing duplicates without any context) \"member of\" \r\n> array in \\du and make a simple table report output in \\drg instead.\r\n> \r\n> I'm good with \\drg as a new meta-command.\r\n\r\n[Personal hat]\r\n\r\n+1 for a new command. The proposal above seems reasonable.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Fri, 23 Jun 2023 17:18:59 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 6/23/23 12:16 PM, Tom Lane wrote:\r\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\r\n>> On Thu, Jun 22, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\r\n>>> * I agree that the \"tabular\" format looks nicer and has fewer i18n\r\n>>> issues than the other proposals.\r\n> \r\n>> As you are on board with a separate command please clarify whether you mean\r\n>> the tabular format but still with newlines, one row per grantee, or the\r\n>> table with one row per grantor-grantee pair.\r\n> \r\n> I'd lean towards a straight table with a row per grantee/grantor.\r\n> I tend to think that faking table layout with some newlines is\r\n> a poor idea. I'm not dead set on that approach though.\r\n\r\n[Personal hat]\r\n\r\nGenerally, I find the tabular view w/o newlines is easier to read, and \r\nmakes it simpler to join to other data (though that may not be \r\napplicable here).\r\n\r\nAgain, I'm not the target user of this feature (until I need to use it), \r\nso my opinion comes with a few grains of salt.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Fri, 23 Jun 2023 17:20:22 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "\"David G. 
Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thu, Jun 22, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> * Personally I could do without the \"empty\" business, but that seems\n>> unnecessary in the tabular format; an empty column will serve fine.\n\n> I disagree, but not strongly.\n\n> I kinda expected you to be on the side of \"why are we discussing a\n> situation that should just be prohibited\" though.\n\nI haven't formed an opinion yet on whether it should be prohibited.\nBut even if we do that going forward, won't psql need to deal with\nsuch cases when examining old servers?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 23 Jun 2023 20:12:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Fri, Jun 23, 2023 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> > On Thu, Jun 22, 2023 at 5:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> * Personally I could do without the \"empty\" business, but that seems\n> >> unnecessary in the tabular format; an empty column will serve fine.\n>\n> > I disagree, but not strongly.\n>\n> > I kinda expected you to be on the side of \"why are we discussing a\n> > situation that should just be prohibited\" though.\n>\n> I haven't formed an opinion yet on whether it should be prohibited.\n> But even if we do that going forward, won't psql need to deal with\n> such cases when examining old servers?\n>\n>\nI haven't given enough thought to that. 
My first reaction is that using\nblank for old servers would be desirable and then, if allowed in v16+\nserver, \"empty\" for those.\n\nThat said, the entire grantor premise that motivated this doesn't exist on\nthose servers so maybe \\drg just shouldn't work against pre-v16 servers -\nand we keep the existing \\du query as-is for those as well while removing\nthe \"member of\" column when \\du is executed against a v16+ server.\n\nDavid J.", "msg_date": "Fri, 23 Jun 2023 18:28:26 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Thank you for all valuable comments. 
I can now continue working on the \npatch.\nHere's what I plan to do in the next version.\n\nChanges for \\du & \\dg commands\n* showing distinct roles in the \"Member of\" column\n* explicit order for list of roles\n* no changes for extended mode (\\du+)\n\nNew meta-command \\drg\n* showing info from pg_auth_members based on a query:\n\nSELECT r.rolname role, m.rolname member,\n        pg_catalog.concat_ws(', ',\n            CASE WHEN pam.admin_option THEN 'ADMIN' END,\n            CASE WHEN pam.inherit_option THEN 'INHERIT' END,\n            CASE WHEN pam.set_option THEN 'SET' END\n        ) AS options,\n        g.rolname grantor\nFROM pg_catalog.pg_auth_members pam\n      JOIN pg_catalog.pg_roles r ON (pam.roleid = r.oid)\n      JOIN pg_catalog.pg_roles m ON (pam.member = m.oid)\n      JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid)\nWHERE r.rolname !~ '^pg_'\nORDER BY role, member, grantor;\n\n       role       |      member      |       options       |     grantor\n------------------+------------------+---------------------+------------------\n regress_du_role0 | regress_du_admin | ADMIN, INHERIT, SET | postgres\n regress_du_role0 | regress_du_role1 | ADMIN, INHERIT, SET | regress_du_admin\n regress_du_role0 | regress_du_role1 | INHERIT             | regress_du_role1\n regress_du_role0 | regress_du_role1 | SET                 | regress_du_role2\n regress_du_role0 | regress_du_role2 | ADMIN               | regress_du_admin\n regress_du_role0 | regress_du_role2 | INHERIT, SET        | regress_du_role1\n regress_du_role0 | regress_du_role2 |                     | regress_du_role2\n regress_du_role1 | regress_du_admin | ADMIN, INHERIT, SET | postgres\n regress_du_role1 | regress_du_role2 | ADMIN, SET          | regress_du_admin\n regress_du_role2 | regress_du_admin | ADMIN, INHERIT, SET | postgres\n(10 rows)\n\nNotes\n* The name of the new command. 
It's a good name, if not for the history.\nThere are two commands showing the same information about roles: \\du and \n\\dr.\nThe addition of \\drg may be misinterpreted: if there is \\drg, then there \nis also \\dug.\nMaybe it's time to think about deprecating of the \\du command and leave \nonly \\dg in the next versions?\n\n* 'empty'. I suggest thinking about forbidding the situation with empty \noptions.\nIf we prohibit them, the issue will be resolved automatically.\n\n* The new meta-command will also make sense for versions <16.\nThe ADMIN OPTION is available in all supported versions.\n\n* The new meta-command will not show all roles. It will only show the \nroles included in other roles.\nTo show all roles you need to add an outer join between pg_roles and \npg_auth_members.\nBut all columns except \"role\" will be left blank. Is it worth doing this?\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Sat, 24 Jun 2023 18:11:31 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Sat, Jun 24, 2023 at 8:11 AM Pavel Luzanov <p.luzanov@postgrespro.ru>\nwrote:\n\n> Notes\n> * The name of the new command. It's a good name, if not for the history.\n> There are two commands showing the same information about roles: \\du and\n> \\dr.\n> The addition of \\drg may be misinterpreted: if there is \\drg, then there\n> is also \\dug.\n> Maybe it's time to think about deprecating of the \\du command and leave\n> only \\dg in the next versions?\n>\n\nI would add \\dr as the new official command to complement adding \\drg and\ndeprecate both \\du and \\dg. Though actual removal and de-documenting\ndoesn't seem like a good idea. 
But if we ever did assign something\nnon-role to \\dr it would be very confusing.\n\n\n\n> * The new meta-command will also make sense for versions <16.\n> The ADMIN OPTION is available in all supported versions.\n>\n\nDoesn't every role pre-16 gain SET permission? We can also deduce whether\nthe grant provides INHERIT based upon the attribute of the role in question.\n\n\n> * The new meta-command will not show all roles. It will only show the\n> roles included in other roles.\n> To show all roles you need to add an outer join between pg_roles and\n> pg_auth_members.\n> But all columns except \"role\" will be left blank. Is it worth doing this?\n>\n>\nI'm inclined to want this. I would be good when specifying a role to\nfilter upon that all rows that do exist matching that filter end up in the\noutput regardless if they are standalone or not.\n\nDavid J.", "msg_date": "Sat, 24 Jun 2023 08:57:23 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 24.06.2023 18:57, David G. Johnston wrote:\n> On Sat, Jun 24, 2023 at 8:11 AM Pavel Luzanov \n> <p.luzanov@postgrespro.ru> wrote:\n>\n> There are two commands showing the same information about roles:\n> \\du and\n> \\dr.\n>\n>\n> I would add \\dr as the new official command to complement adding \\drg \n> and deprecate both \\du and \\dg.  Though actual removal and \n> de-documenting doesn't seem like a good idea. But if we ever did \n> assign something non-role to \\dr it would be very confusing.\n\nIt's my mistake and inattention. I was thinking about '\\du' and '\\dg', \nand wrote about '\\du' and '\\dr'.\nI agree that \\dr and \\drg the best names.\nSo, now concentrating on implementing \\drg.\n\n> * The new meta-command will also make sense for versions <16.\n> The ADMIN OPTION is available in all supported versions.\n>\n>\n> Doesn't every role pre-16 gain SET permission?  We can also deduce \n> whether the grant provides INHERIT based upon the attribute of the \n> role in question.\n\nIndeed! I will do so.\n\n>\n>\n> * The new meta-command will not show all roles. It will only show the\n> roles included in other roles.\n> To show all roles you need to add an outer join between pg_roles and\n> pg_auth_members.\n> But all columns except \"role\" will be left blank. 
Is it worth\n> doing this?\n>\n>\n> I'm inclined to want this.  I would be good when specifying a role to \n> filter upon that all rows that do exist matching that filter end up in \n> the output regardless if they are standalone or not.\n\nOk\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Sun, 25 Jun 2023 17:44:51 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Please find attached new patch version.\nIt implements \\drg command and hides duplicates in \\du & \\dg commands.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Mon, 26 Jun 2023 22:29:46 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n> Please find attached new patch version.\n> It implements \\drg command and hides duplicates in \\du & \\dg commands.\n\nI took a quick look through this, and have some minor suggestions:\n\n1. I was thinking in terms of dropping the \"Member of\" column entirely\nin \\du and \\dg. It doesn't tell you enough, and the output of those\ncommands is often too wide already.\n\n2. You have describeRoleGrants() set up to localize \"ADMIN\", \"INHERIT\",\nand \"SET\". Since those are SQL keywords, our usual practice is to not\nlocalize them; this'd simplify the code.\n\n3. Not sure about use of LEFT JOIN in the query. That will mean we\nget a row out even for roles that have no grants, which seems like\nclutter. 
The LEFT JOINs to r and g are fine, but I suggest changing\nthe first join to a plain join.\n\nBeyond those nits, I think this is a good approach and we should\nadopt it (including in v16).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 08 Jul 2023 13:07:21 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 08.07.2023 20:07, Tom Lane wrote:\n> 1. I was thinking in terms of dropping the \"Member of\" column entirely\n> in \\du and \\dg. It doesn't tell you enough, and the output of those\n> commands is often too wide already.\n\nI understood it that way that the dropping \"Member of\" column will be \ndone as part of another work for v17. [1]\nBut I'm ready to do it now.\n\n> 2. You have describeRoleGrants() set up to localize \"ADMIN\", \"INHERIT\",\n> and \"SET\". Since those are SQL keywords, our usual practice is to not\n> localize them; this'd simplify the code.\n\nThe reason is that \\du has translatable all attributes of the role, \nincluding NOINHERIT.\nI decided to make a new command the same way.\nBut I'm ready to leave them untranslatable, if no objections.\n\n> 3. Not sure about use of LEFT JOIN in the query. That will mean we\n> get a row out even for roles that have no grants, which seems like\n> clutter. The LEFT JOINs to r and g are fine, but I suggest changing\n> the first join to a plain join.\n\nIt was David's suggestion:\n\nOn 24.06.2023 18:57, David G. Johnston wrote:\n> On Sat, Jun 24, 2023 at 8:11 AM Pavel Luzanov \n> <p.luzanov@postgrespro.ru> wrote:\n>\n> * The new meta-command will not show all roles. It will only show the\n> roles included in other roles.\n> To show all roles you need to add an outer join between pg_roles and\n> pg_auth_members.\n> But all columns except \"role\" will be left blank. Is it worth\n> doing this?\n>\n>\n> I'm inclined to want this.  
I would be good when specifying a role to \n> filter upon that all rows that do exist matching that filter end up in \n> the output regardless if they are standalone or not.\n\nPersonally, I tend to think that left join is not needed. No memberships \n- nothing shown.\n\nSo, I accepted all three suggestions. I will wait for other opinions and\nplan to implement discussed changes within a few days.\n\n1. https://www.postgresql.org/message-id/4133242.1687481416%40sss.pgh.pa.us\n\n-- \nPavel Luzanov\nPostgres Professional:https://postgrespro.com", "msg_date": "Sun, 9 Jul 2023 13:56:44 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 09.07.2023 13:56, Pavel Luzanov wrote:\n> On 08.07.2023 20:07, Tom Lane wrote:\n>> 1. I was thinking in terms of dropping the \"Member of\" column entirely\n>> in \\du and \\dg. It doesn't tell you enough, and the output of those\n>> commands is often too wide already.\n>\n>> 2. You have describeRoleGrants() set up to localize \"ADMIN\", \"INHERIT\",\n>> and \"SET\". Since those are SQL keywords, our usual practice is to not\n>> localize them; this'd simplify the code.\n>\n>\n>> 3. Not sure about use of LEFT JOIN in the query. That will mean we\n>> get a row out even for roles that have no grants, which seems like\n>> clutter. The LEFT JOINs to r and g are fine, but I suggest changing\n>> the first join to a plain join.\n>\n> So, I accepted all three suggestions. I will wait for other opinions and\n> plan to implement discussed changes within a few days.\n\nPlease review the updated version with suggested changes.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Wed, 12 Jul 2023 13:21:42 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 08.07.2023 20:07, Tom Lane wrote\n> 3. Not sure about use of LEFT JOIN in the query. That will mean we\n> get a row out even for roles that have no grants, which seems like\n> clutter. The LEFT JOINs to r and g are fine, but I suggest changing\n> the first join to a plain join.\n\nHm.\nCan you explain why LEFT JOIN to r and g are fine after removing LEFT \nJOIN to pam?\nWhy not to change all three left joins to plain join?\n\nThe query for v16+ now looks like:\n\nSELECT m.rolname AS \"Role name\", r.rolname AS \"Member of\",\n   pg_catalog.concat_ws(', ',\n     CASE WHEN pam.admin_option THEN 'ADMIN' END,\n     CASE WHEN pam.inherit_option THEN 'INHERIT' END,\n     CASE WHEN pam.set_option THEN 'SET' END\n   ) AS \"Options\",\n   g.rolname AS \"Grantor\"\nFROM pg_catalog.pg_roles m\n      JOIN pg_catalog.pg_auth_members pam 
I will wait for other opinions and\n> plan to implement discussed changes within a few days.\n\nPlease review the updated version with suggested changes.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com", "msg_date": "Wed, 12 Jul 2023 13:21:42 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 08.07.2023 20:07, Tom Lane wrote\n> 3. Not sure about use of LEFT JOIN in the query. That will mean we\n> get a row out even for roles that have no grants, which seems like\n> clutter. The LEFT JOINs to r and g are fine, but I suggest changing\n> the first join to a plain join.\n\nHm.\nCan you explain why LEFT JOIN to r and g are fine after removing LEFT \nJOIN to pam?\nWhy not to change all three left joins to plain join?\n\nThe query for v16+ now looks like:\n\nSELECT m.rolname AS \"Role name\", r.rolname AS \"Member of\",\n   pg_catalog.concat_ws(', ',\n     CASE WHEN pam.admin_option THEN 'ADMIN' END,\n     CASE WHEN pam.inherit_option THEN 'INHERIT' END,\n     CASE WHEN pam.set_option THEN 'SET' END\n   ) AS \"Options\",\n   g.rolname AS \"Grantor\"\nFROM pg_catalog.pg_roles m\n      JOIN pg_catalog.pg_auth_members pam ON (pam.member = m.oid)\n      LEFT JOIN pg_catalog.pg_roles r ON (pam.roleid = r.oid)\n      LEFT JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid)\nWHERE m.rolname !~ '^pg_'\nORDER BY 1, 2, 4;\n\n\nAnd for versions <16 I forget to simplify expression for 'Options' \ncolumn after removing LEFT JOIN on pam:\n\nSELECT m.rolname AS \"Role name\", r.rolname AS \"Member of\",\n   pg_catalog.concat_ws(', ',\n     CASE WHEN pam.admin_option THEN 'ADMIN' END,\n     CASE WHEN pam.roleid IS NOT NULL AND m.rolinherit THEN 'INHERIT' END,\n     CASE WHEN pam.roleid IS NOT NULL THEN 'SET' END\n   ) AS \"Options\",\n   g.rolname AS \"Grantor\"\nFROM pg_catalog.pg_roles m\n      JOIN pg_catalog.pg_auth_members pam 
ON (pam.member = m.oid)\n      LEFT JOIN pg_catalog.pg_roles r ON (pam.roleid = r.oid)\n      LEFT JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid)\nWHERE m.rolname !~ '^pg_'\nORDER BY 1, 2, 4;\n\nI plan to replace it to:\n\n   pg_catalog.concat_ws(', ',\n     CASE WHEN pam.admin_option THEN 'ADMIN' END,\n     CASE WHEN m.rolinherit THEN 'INHERIT' END,\n     'SET'\n   ) AS \"Options\",\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 11:26:07 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n> On 08.07.2023 20:07, Tom Lane wrote\n>> 3. Not sure about use of LEFT JOIN in the query. That will mean we\n>> get a row out even for roles that have no grants, which seems like\n>> clutter. The LEFT JOINs to r and g are fine, but I suggest changing\n>> the first join to a plain join.\n\n> Can you explain why LEFT JOIN to r and g are fine after removing LEFT \n> JOIN to pam?\n\nThe idea with that, IMO, is to do something at least minimally sane\nif there's a bogus role OID in pg_auth_members. With plain joins,\nthe output row would disappear and you'd have no clue that anything\nis wrong. With left joins, you get a row with a null column and\nthere's reason to investigate why.\n\nSince such a case should not happen in normal use, I don't think it\ncounts for discussions about compactness of output. 
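To make the trade-off concrete, here is an illustrative sketch (not from any posted patch; the catalog damage and the stripped-down column list are hypothetical) of how the two join styles treat a pg_auth_members row whose grantor OID no longer exists in pg_roles:

```sql
-- Plain join on the grantor: a row with a dangling grantor OID
-- silently disappears from the output, and nothing looks wrong.
SELECT m.rolname AS member, g.rolname AS grantor
FROM pg_catalog.pg_auth_members pam
     JOIN pg_catalog.pg_roles m ON (pam.member = m.oid)
     JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid);

-- LEFT JOIN on the grantor: the same row survives with a NULL
-- "grantor", giving the DBA a visible reason to investigate.
SELECT m.rolname AS member, g.rolname AS grantor
FROM pg_catalog.pg_auth_members pam
     JOIN pg_catalog.pg_roles m ON (pam.member = m.oid)
     LEFT JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid);
```

That is, the plain join hides broken catalog contents while the LEFT JOIN surfaces them, which is the "minimally sane" behavior being argued for above.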
However, this\nis also an argument for using a plain not left join between pg_roles\nand pg_auth_members: if we do it as per the earlier patch, then\nnulls in the output are common and wouldn't draw your attention.\n(Indeed, I think broken and not-broken pg_auth_members contents\nwould be indistinguishable.)\n\n> I plan to replace it to:\n\n>   pg_catalog.concat_ws(', ',\n>     CASE WHEN pam.admin_option THEN 'ADMIN' END,\n>     CASE WHEN m.rolinherit THEN 'INHERIT' END,\n>     'SET'\n>   ) AS \"Options\",\n\nThat does not seem right. Is it impossible for pam.set_option\nto be false? Even if it is, should this code assume that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Jul 2023 11:01:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Thu, Jul 13, 2023 at 8:01 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> > I plan to replace it to:\n>\n> > pg_catalog.concat_ws(', ',\n> > CASE WHEN pam.admin_option THEN 'ADMIN' END,\n> > CASE WHEN m.rolinherit THEN 'INHERIT' END,\n> > 'SET'\n> > ) AS \"Options\",\n>\n> That does not seem right. Is it impossible for pam.set_option\n> to be false? Even if it is, should this code assume that?\n>\n>\nThat replacement is for version 15 and earlier where pam.set_option doesn't\nexist at all and the presence of a row here means that set has been granted.\n\nDavid J.", "msg_date": "Thu, 13 Jul 2023 08:40:35 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 13.07.2023 18:01, Tom Lane wrote:\n> The idea with that, IMO, is to do something at least minimally sane\n> if there's a bogus role OID in pg_auth_members. With plain joins,\n> the output row would disappear and you'd have no clue that anything\n> is wrong. With left joins, you get a row with a null column and\n> there's reason to investigate why.\n>\n> Since such a case should not happen in normal use, I don't think it\n> counts for discussions about compactness of output. However, this\n> is also an argument for using a plain not left join between pg_roles\n> and pg_auth_members: if we do it as per the earlier patch, then\n> nulls in the output are common and wouldn't draw your attention.\n> (Indeed, I think broken and not-broken pg_auth_members contents\n> would be indistinguishable.)\n\nIt reminds me about defensive programming practices.\nThat's great, thanks for explanation.\n\n> That does not seem right. Is it impossible for pam.set_option\n> to be false? 
Even if it is, should this code assume that?\n\nFor versions before 16, including one role to another automatically\ngives possibility to issue SET ROLE.\n\nIMO, the only question is whether it is correct to show IMPLICIT and\nSET options in versions where they are not actually present\nin pg_auth_members, but can be determined.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Thu, 13 Jul 2023 18:59:31 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Pavel Luzanov <p.luzanov@postgrespro.ru> writes:\n> On 13.07.2023 18:01, Tom Lane wrote:\n>> That does not seem right. Is it impossible for pam.set_option\n>> to be false? Even if it is, should this code assume that?\n\n> For versions before 16, including one role to another automatically\n> gives possibility to issue SET ROLE.\n\nRight, -ENOCAFFEINE.\n\n> IMO, the only question is whether it is correct to show IMPLICIT and\n> SET options in versions where they are not actually present\n> in pg_auth_members, but can be determined.\n\nHmm, that's definitely a judgment call. You could argue that it's\nuseful and consistent, but also that it's confusing to somebody\nwho's not familiar with the new terminology. 
On balance I'd lean\nto showing them, but I won't fight hard for that viewpoint.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 13 Jul 2023 12:32:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 13.07.2023 11:26, Pavel Luzanov wrote:\n> And for versions <16 I forget to simplify expression for 'Options' \n> column after removing LEFT JOIN on pam:\n>\n> SELECT m.rolname AS \"Role name\", r.rolname AS \"Member of\",\n>   pg_catalog.concat_ws(', ',\n>     CASE WHEN pam.admin_option THEN 'ADMIN' END,\n>     CASE WHEN pam.roleid IS NOT NULL AND m.rolinherit THEN 'INHERIT' END,\n>     CASE WHEN pam.roleid IS NOT NULL THEN 'SET' END\n>   ) AS \"Options\",\n>   g.rolname AS \"Grantor\"\n> FROM pg_catalog.pg_roles m\n>      JOIN pg_catalog.pg_auth_members pam ON (pam.member = m.oid)\n>      LEFT JOIN pg_catalog.pg_roles r ON (pam.roleid = r.oid)\n>      LEFT JOIN pg_catalog.pg_roles g ON (pam.grantor = g.oid)\n> WHERE m.rolname !~ '^pg_'\n> ORDER BY 1, 2, 4;\n>\n> I plan to replace it to:\n>\n>   pg_catalog.concat_ws(', ',\n>     CASE WHEN pam.admin_option THEN 'ADMIN' END,\n>     CASE WHEN m.rolinherit THEN 'INHERIT' END,\n>     'SET'\n>   ) AS \"Options\",\n>\n\nThe new version contains only this change.\n\n-- \n-----\nPavel Luzanov", "msg_date": "Fri, 14 Jul 2023 00:23:28 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "I tried this out. It looks good to me, and I like it.
Not translating\nthe labels seems correct to me.\n\n+1 for backpatching to 16, given that it's a psql-only change that\npertains to a backend change that was done in the 16 timeframe.\n\nRegarding the controversy of showing SET for previous versions, I think\nit's clearer if it's shown, because ultimately what the user really\nwants to know is if the role can be SET to; they don't want to have to\nlearn from memory in which version they can SET because the column is\nempty and in which version they have to look for the label.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Small aircraft do not crash frequently ... usually only once!\"\n (ponder, http://thedailywtf.com/)\n\n\n", "msg_date": "Wed, 19 Jul 2023 12:13:28 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I tried this out. It looks good to me, and I like it. Not translating\n> the labels seems correct to me.\n> +1 for backpatching to 16, given that it's a psql-only change that\n> pertains to a backend change that was done in the 16 timeframe.\n\nAgreed. In the interests of moving things along, I'll take point\non getting this committed.\n\n> Regarding the controversy of showing SET for previous versions, I think\n> it's clearer if it's shown, because ultimately what the user really\n> wants to know is if the role can be SET to; they don't want to have to\n> learn from memory in which version they can SET because the column is\n> empty and in which version they have to look for the label.\n\nSeems reasonable.
I'll go with that interpretation unless there's\npretty quick pushback.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jul 2023 11:39:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> +1 for backpatching to 16, given that it's a psql-only change that\n>> pertains to a backend change that was done in the 16 timeframe.\n\n> Agreed. In the interests of moving things along, I'll take point\n> on getting this committed.\n\nAnd done, with some minor editorialization. I'll go mark the\nopen item as closed.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 19 Jul 2023 12:47:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On Wed, Jul 19, 2023 at 9:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I wrote:\n> > Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> >> +1 for backpatching to 16, given that it's a psql-only change that\n> >> pertains to a backend change that was done in the 16 timeframe.\n>\n> > Agreed. In the interests of moving things along, I'll take point\n> > on getting this committed.\n>\n> And done, with some minor editorialization. I'll go mark the\n> open item as closed.\n>\n>\nThank You!\n\nDavid J.\n\nOn Wed, Jul 19, 2023 at 9:47 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:I wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> +1 for backpatching to 16, given that it's a psql-only change that\n>> pertains to a backend change that was done in the 16 timeframe.\n\n> Agreed.  In the interests of moving things along, I'll take point\n> on getting this committed.\n\nAnd done, with some minor editorialization.
I'll go mark the\nopen item as closed.Thank You!David J.", "msg_date": "Wed, 19 Jul 2023 09:59:45 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 19.07.2023 19:47, Tom Lane wrote:\n> And done, with some minor editorialization.\n\nThanks to everyone who participated in the work.\nSpecial thanks to David for moving forward this patch for a long time, \nand to Tom for taking commit responsibilities.\n\n-- \nPavel Luzanov\nPostgres Professional: https://postgrespro.com\n\n\n\n", "msg_date": "Wed, 19 Jul 2023 20:44:36 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" }, { "msg_contents": "On 7/19/23 1:44 PM, Pavel Luzanov wrote:\r\n> On 19.07.2023 19:47, Tom Lane wrote:\r\n>> And done, with some minor editorialization.\r\n> \r\n> Thanks to everyone who participated in the work.\r\n> Special thanks to David for moving forward this patch for a long time, \r\n> and to Tom for taking commit responsibilities.\r\n\r\n[RMT]\r\n\r\n+1; thanks to everyone for seeing this through!\r\n\r\nJonathan", "msg_date": "Thu, 20 Jul 2023 10:02:37 -0400", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: psql: Add role's membership options to the \\du+ command" } ]
[ { "msg_contents": "Doc: add XML ID attributes to <sectN> and <varlistentry> tags.\n\nThis doesn't have any external effect at the moment, but it\nwill allow adding useful link-discoverability features later.\n\nBrar Piening, reviewed by Karl Pinc.\n\nDiscussion: https://postgr.es/m/CAB8KJ=jpuQU9QJe4+RgWENrK5g9jhoysMw2nvTN_esoOU0=a_w@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/78ee60ed84bb3a1cf0b6bd9a715dcbcf252a90f5\n\nModified Files\n--------------\ndoc/src/sgml/amcheck.sgml | 8 +-\ndoc/src/sgml/arch-dev.sgml | 6 +-\ndoc/src/sgml/auth-delay.sgml | 4 +-\ndoc/src/sgml/auto-explain.sgml | 32 +--\ndoc/src/sgml/basebackup-to-shell.sgml | 4 +-\ndoc/src/sgml/basic-archive.sgml | 6 +-\ndoc/src/sgml/bloom.sgml | 10 +-\ndoc/src/sgml/btree-gin.sgml | 4 +-\ndoc/src/sgml/btree-gist.sgml | 4 +-\ndoc/src/sgml/charset.sgml | 48 ++--\ndoc/src/sgml/citext.sgml | 10 +-\ndoc/src/sgml/config.sgml | 18 +-\ndoc/src/sgml/contrib-spi.sgml | 8 +-\ndoc/src/sgml/cube.sgml | 12 +-\ndoc/src/sgml/datatype.sgml | 30 +-\ndoc/src/sgml/ddl.sgml | 58 ++--\ndoc/src/sgml/dict-int.sgml | 4 +-\ndoc/src/sgml/dict-xsyn.sgml | 4 +-\ndoc/src/sgml/docguide.sgml | 60 ++--\ndoc/src/sgml/earthdistance.sgml | 4 +-\ndoc/src/sgml/ecpg.sgml | 516 +++++++++++++++++-----------------\ndoc/src/sgml/extend.sgml | 88 +++---\ndoc/src/sgml/features.sgml | 8 +-\ndoc/src/sgml/func.sgml | 24 +-\ndoc/src/sgml/fuzzystrmatch.sgml | 8 +-\ndoc/src/sgml/geqo.sgml | 2 +-\ndoc/src/sgml/history.sgml | 2 +-\ndoc/src/sgml/hstore.sgml | 16 +-\ndoc/src/sgml/install-windows.sgml | 10 +-\ndoc/src/sgml/installation.sgml | 274 +++++++++---------\ndoc/src/sgml/intagg.sgml | 4 +-\ndoc/src/sgml/intarray.sgml | 10 +-\ndoc/src/sgml/isn.sgml | 12 +-\ndoc/src/sgml/jit.sgml | 2 +-\ndoc/src/sgml/json.sgml | 2 +-\ndoc/src/sgml/libpq.sgml | 4 +-\ndoc/src/sgml/lo.sgml | 8 +-\ndoc/src/sgml/logicaldecoding.sgml | 8 +-\ndoc/src/sgml/ltree.sgml | 12 +-\ndoc/src/sgml/nls.sgml | 8 +-
\ndoc/src/sgml/oldsnapshot.sgml | 2 +-\ndoc/src/sgml/pageinspect.sgml | 14 +-\ndoc/src/sgml/perform.sgml | 10 +-\ndoc/src/sgml/pgbuffercache.sgml | 8 +-\ndoc/src/sgml/pgcrypto.sgml | 74 ++---\ndoc/src/sgml/pgfreespacemap.sgml | 6 +-\ndoc/src/sgml/pgprewarm.sgml | 6 +-\ndoc/src/sgml/pgrowlocks.sgml | 6 +-\ndoc/src/sgml/pgstatstatements.sgml | 12 +-\ndoc/src/sgml/pgstattuple.sgml | 4 +-\ndoc/src/sgml/pgsurgery.sgml | 4 +-\ndoc/src/sgml/pgtrgm.sgml | 14 +-\ndoc/src/sgml/pgvisibility.sgml | 4 +-\ndoc/src/sgml/pgwalinspect.sgml | 14 +-\ndoc/src/sgml/plpgsql.sgml | 92 +++---\ndoc/src/sgml/plpython.sgml | 12 +-\ndoc/src/sgml/postgres-fdw.sgml | 40 +--\ndoc/src/sgml/problems.sgml | 6 +-\ndoc/src/sgml/protocol.sgml | 14 +-\ndoc/src/sgml/queries.sgml | 2 +-\ndoc/src/sgml/ref/alter_role.sgml | 26 +-\ndoc/src/sgml/ref/alter_table.sgml | 126 ++++-----\ndoc/src/sgml/ref/commit.sgml | 2 +-\ndoc/src/sgml/ref/create_database.sgml | 30 +-\ndoc/src/sgml/ref/create_table.sgml | 90 +++---\ndoc/src/sgml/ref/initdb.sgml | 56 ++--\ndoc/src/sgml/ref/pgbench.sgml | 116 ++++----\ndoc/src/sgml/ref/psql-ref.sgml | 424 ++++++++++++++--------\ndoc/src/sgml/ref/rollback.sgml | 2 +-\ndoc/src/sgml/regress.sgml | 26 +-\ndoc/src/sgml/rowtypes.sgml | 4 +-\ndoc/src/sgml/rules.sgml | 8 +-\ndoc/src/sgml/runtime.sgml | 2 +-\ndoc/src/sgml/seg.sgml | 12 +-\ndoc/src/sgml/sepgsql.sgml | 12 +-\ndoc/src/sgml/sources.sgml | 40 +--\ndoc/src/sgml/sslinfo.sgml | 4 +-\ndoc/src/sgml/tablefunc.sgml | 14 +-\ndoc/src/sgml/tsm-system-rows.sgml | 2 +-\ndoc/src/sgml/tsm-system-time.sgml | 2 +-\ndoc/src/sgml/unaccent.sgml | 6 +-\ndoc/src/sgml/uuid-ossp.sgml | 6 +-\ndoc/src/sgml/xfunc.sgml | 12 +-\ndoc/src/sgml/xml2.sgml | 14 +-\ndoc/src/sgml/xoper.sgml | 12 +-\n85 files changed, 1372 insertions(+), 1372 deletions(-)", "msg_date": "Mon, 09 Jan 2023 20:08:30 +0000", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgsql: Doc: add XML ID attributes to <sectN> and <varlistentry> tags."
}, { "msg_contents": "On 09.01.23 21:08, Tom Lane wrote:\n> Doc: add XML ID attributes to <sectN> and <varlistentry> tags.\n\nAny reason the new ids in create_database.sgml deviate from the normal \nnaming schemes used everywhere else? Is it to preserve the existing \ncreate-database-strategy? Maybe we should rename that one and make the \nnew ones consistent?", "msg_date": "Wed, 11 Jan 2023 23:10:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Doc: add XML ID attributes to <sectN> and <varlistentry>\n tags." }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 09.01.23 21:08, Tom Lane wrote:\n>> Doc: add XML ID attributes to <sectN> and <varlistentry> tags.\n\n> Any reason the new ids in create_database.sgml deviate from the normal \n> naming schemes used everywhere else? Is it to preserve the existing \n> create-database-strategy? Maybe we should rename that one and make the \n> new ones consistent?\n\nYou'd have to ask Brar that, I didn't question his choices too much.\n\nI have no objection to changing things as you suggest.
I'm hesitant to\nrename very many pre-existing IDs for fear of breaking peoples' bookmarks,\nbut changing create-database-strategy doesn't seem like a big deal.\n\nThat reminds me that I was going to suggest fixing the few existing\nvariances from the \"use '-' not '_'\" policy:\n\n$ grep 'id=\"[a-zA-Z0-9-]*_' *sgml ref/*sgml\nconfig.sgml: <varlistentry id=\"guc-plan-cache_mode\" xreflabel=\"plan_cache_mode\">\nlibpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_OK\">\nlibpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_REJECT\">\nlibpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_NO_RESPONSE\">\nlibpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_NO_ATTEMPT\">\npgbuffercache.sgml: <table id=\"pgbuffercache_summary-columns\">\nref/pg_checksums.sgml: <refsect1 id=\"r1-app-pg_checksums-1\">\n\nAs you say, this isn't required by the toolchain any longer, but it\nseems like a good idea to have consistent tag spelling. I'm particularly\nannoyed by guc-plan-cache_mode, which isn't even consistent with itself\nlet alone every other guc-XXX tag.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 18:05:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Doc: add XML ID attributes to <sectN> and <varlistentry>\n tags." }, { "msg_contents": "On 12.01.2023 at 00:05, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n>> Any reason the new ids in create_database.sgml deviate from the normal\n>> naming schemes used everywhere else? Is it to preserve the existing\n>> create-database-strategy?
Maybe we should rename that one and make the\n>> new ones consistent?\n\nI don't remember every single choice I made but the general goal was to\nat least stay consistent within a varlist/[sub]section/file/chapter and\n*never* change pre-existing ids since somebody may already be pointing\nto them.\n\nAfter all it is just an identifier that is supposed to be unique and\nshould not hurt our aesthetic feelings too much.\n\nThe consistency is mostly because we tend to like it and maybe also to\navoid collisions when making up new ids but I doubt that anybody will\never try to remember an id or infer one from knowledge about the thing\nit should be pointing at. I consider it a pretty opaque string that is\nmeant for copy-paste from a browser to some editing window.\n\nIt is all in our head and as a matter of fact we could be using UUIDs as\nIds and save us from any further consistency issues. It's just that they\nlook so ugly.\n\n> You'd have to ask Brar that, I didn't question his choices too much.\n>\n> I have no objection to changing things as you suggest. I'm hesitant to\n> rename very many pre-existing IDs for fear of breaking peoples' bookmarks,\n> but changing create-database-strategy doesn't seem like a big deal.\n\nPersonally I'd only do this for ids that haven't been \"released\" as\nofficial documentation (even as \"devel\" since the new things tend to\nattract more discussions and probably linking). I very much consider\nURLs as UI and go long ways to keep them consistent (HTTP 3xx is a\nfriend of mine) as you never know who might be pointing at them from\nwhere and making them a moving target defeats their purpose and probably\nhurt more than some inconsistency.\n\nRegards,\n\nBrar\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 06:34:44 +0100", "msg_from": "Brar Piening <brar@gmx.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Doc: add XML ID attributes to <sectN> and <varlistentry>\n tags."
}, { "msg_contents": "On 12.01.23 00:05, Tom Lane wrote:\n> That reminds me that I was going to suggest fixing the few existing\n> variances from the \"use '-' not '_'\" policy:\n> \n> $ grep 'id=\"[a-zA-Z0-9-]*_' *sgml ref/*sgml\n> config.sgml: <varlistentry id=\"guc-plan-cache_mode\" xreflabel=\"plan_cache_mode\">\n\nshould be fixed\n\n> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_OK\">\n> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_REJECT\">\n> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_NO_RESPONSE\">\n> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_NO_ATTEMPT\">\n\nI think we can leave these. They are internally consistent.\n\n> pgbuffercache.sgml: <table id=\"pgbuffercache_summary-columns\">\n\nshould be fixed\n\n> ref/pg_checksums.sgml: <refsect1 id=\"r1-app-pg_checksums-1\">\n\npretty bogus\n\n\n\n", "msg_date": "Mon, 16 Jan 2023 12:12:10 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Doc: add XML ID attributes to <sectN> and <varlistentry>\n tags." }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 12.01.23 00:05, Tom Lane wrote:\n>> That reminds me that I was going to suggest fixing the few existing\n>> variances from the \"use '-' not '_'\" policy:\n>> \n>> $ grep 'id=\"[a-zA-Z0-9-]*_' *sgml ref/*sgml\n>> config.sgml: <varlistentry id=\"guc-plan-cache_mode\" xreflabel=\"plan_cache_mode\">\n\n> should be fixed\n\n>> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_OK\">\n>> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_REJECT\">\n>> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_NO_RESPONSE\">\n>> libpq.sgml: <varlistentry id=\"libpq-PQpingParams-PQPING_NO_ATTEMPT\">\n\n> I think we can leave these.
They are internally consistent.\n\n>> pgbuffercache.sgml: <table id=\"pgbuffercache_summary-columns\">\n\n> should be fixed\n\n>> ref/pg_checksums.sgml: <refsect1 id=\"r1-app-pg_checksums-1\">\n\n> pretty bogus\n\nOK, done like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Jan 2023 17:13:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgsql: Doc: add XML ID attributes to <sectN> and <varlistentry>\n tags." } ]
[ { "msg_contents": "Hi,\n\nA couple times when investigating data corruption issues, the last time just\nyesterday in [1], I needed to see the offsets affected by PRUNE and VACUUM\nrecords. As that's probably not just me, I think we should make that change\nin-tree.\n\nThe attached patch adds details to XLOG_HEAP2_PRUNE, XLOG_HEAP2_VACUUM,\nXLOG_HEAP2_FREEZE_PAGE.\n\nThe biggest issue I have with the patch is that it's very hard to figure out\nwhat punctuation to use where ;). The existing code is very inconsistent.\n\nI chose to include infomask[2] for the different freeze plans mainly because\nit looks odd to see different plans without a visible reason. But I'm not sure\nthat's the right choice.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/CANtu0ojby3eBdMXfs4QmS%2BK1avBc7NcRq_Ot5bnzrbwM%2BuQ55w%40mail.gmail.com", "msg_date": "Mon, 9 Jan 2023 13:58:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Jan 9, 2023 at 1:58 PM Andres Freund <andres@anarazel.de> wrote:\n> A couple times when investigating data corruption issues, the last time just\n> yesterday in [1], I needed to see the offsets affected by PRUNE and VACUUM\n> records. As that's probably not just me, I think we should make that change\n> in-tree.\n\nI remember how useful this was when we were investigating that early\nbug in 14, that turned out to be in parallel VACUUM. So I'm all in\nfavor of it.\n\n> The attached patch adds details to XLOG_HEAP2_PRUNE, XLOG_HEAP2_VACUUM,\n> XLOG_HEAP2_FREEZE_PAGE.\n\nI'm bound to end up doing the same in index access methods. Might make\nsense for the utility routines to live somewhere more centralized, at\nleast when code reuse is likely. Practically every index AM has WAL\nrecords that include a sorted page offset number array, just like\nthese ones.
It's a very standard thing, obviously.\n\n> I chose to include infomask[2] for the different freeze plans mainly because\n> it looks odd to see different plans without a visible reason. But I'm not sure\n> that's the right choice.\n\nI don't think that it is particularly necessary to do so in order for\nthe output to make sense -- pg_waldump is inherently a tool for\nexperts. What it comes down to for me is whether or not this\ninformation is sufficiently useful to display, and/or can be (or needs\nto be) controlled via some kind of verbosity knob.\n\nI think that it easily could be useful, and I also think that it\neasily could be a bit annoying. How hard would it be to invent a\ngeneral mechanism to control the verbosity of what we'll show for each\nWAL record?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 9 Jan 2023 19:59:42 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "Hi,\n\nOn 2023-01-09 19:59:42 -0800, Peter Geoghegan wrote:\n> On Mon, Jan 9, 2023 at 1:58 PM Andres Freund <andres@anarazel.de> wrote:\n> > A couple times when investigating data corruption issues, the last time just\n> > yesterday in [1], I needed to see the offsets affected by PRUNE and VACUUM\n> > records. As that's probably not just me, I think we should make that change\n> > in-tree.\n> \n> I remember how useful this was when we were investigating that early\n> bug in 14, that turned out to be in parallel VACUUM. So I'm all in\n> favor of it.\n\nCool.\n\n\n> > The attached patch adds details to XLOG_HEAP2_PRUNE, XLOG_HEAP2_VACUUM,\n> > XLOG_HEAP2_FREEZE_PAGE.\n> \n> I'm bound to end up doing the same in index access methods. Might make\n> sense for the utility routines to live somewhere more centralized, at\n> least when code reuse is likely. Practically every index AM has WAL\n> records that include a sorted page offset number array, just like\n> these ones.
It's a very standard thing, obviously.\n\nHm, there doesn't seem to be a great location for them today. I guess we could\nadd something like src/include/access/rmgrdesc_utils.h? And put the\nimplementation in src/backend/access/rmgrdesc/rmgrdesc_utils.c? I first was\nthinking of just rmgrdesc.[ch], but custom rmgrs added\nsrc/bin/pg_waldump/rmgrdesc.[ch] ...\n\n\n> > I chose to include infomask[2] for the different freeze plans mainly because\n> > it looks odd to see different plans without a visible reason. But I'm not sure\n> > that's the right choice.\n> \n> I don't think that it is particularly necessary to do so in order for\n> the output to make sense -- pg_waldump is inherently a tool for\n> experts. What it comes down to for me is whether or not this\n> information is sufficiently useful to display, and/or can be (or needs\n> to be) controlled via some kind of verbosity knob.\n\nIt seemed useful enough to me, but I likely also stare more at this stuff than\nmost. Compared to the list of offsets it's not that much content.\n\n\n> How hard would it be to invent a general mechanism to control the verbosity\n> of what we'll show for each WAL record?\n\nNontrivial, I'm afraid. We don't pass any relevant parameters to rm_desc:\n\tvoid\t\t(*rm_desc) (StringInfo buf, XLogReaderState *record);\n\nso we'd need to patch all of them. That might be worth doing at some point,\nbut I don't want to tackle it right now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Jan 2023 11:34:57 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Jan 10, 2023 at 11:35 AM Andres Freund <andres@anarazel.de> wrote:\n> Nontrivial, I'm afraid. We don't pass any relevant parameters to rm_desc:\n> void (*rm_desc) (StringInfo buf, XLogReaderState *record);\n>\n> so we'd need to patch all of them.
That might be worth doing at some point,\n> but I don't want to tackle it right now.\n\nOkay. Let's just get the basics in soon, then.\n\nI would like to have a similar capability for index access methods,\nbut mostly just for investigating performance. Whenever we've really\nneeded something like this for debugging it seems to have been a\nheapam thing, just because there's a lot more that can go wrong with\npruning, which is spread across many different places.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 11 Jan 2023 14:53:54 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 14:53:54 -0800, Peter Geoghegan wrote:\n> On Tue, Jan 10, 2023 at 11:35 AM Andres Freund <andres@anarazel.de> wrote:\n> > Nontrivial, I'm afraid. We don't pass any relevant parameters to rm_desc:\n> > void (*rm_desc) (StringInfo buf, XLogReaderState *record);\n> >\n> > so we'd need to patch all of them. That might be worth doing at some point,\n> > but I don't want to tackle it right now.\n> \n> Okay. Let's just get the basics in soon, then.\n\n> I would like to have a similar capability for index access methods,\n> but mostly just for investigating performance. Whenever we've really\n> needed something like this for debugging it seems to have been a\n> heapam thing, just because there's a lot more that can go wrong with\n> pruning, which is spread across many different places.\n\nWhat are your thoughts about the place for the helper functions?
You're ok\nwith rmgrdesc_utils.[ch]?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 15:00:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Wed, Jan 11, 2023 at 3:00 PM Andres Freund <andres@anarazel.de> wrote:\n> What are your thoughts about the place for the helper functions? You're ok\n> with rmgrdesc_utils.[ch]?\n\nYeah, that seems okay.\n\nWe may well need to put more stuff in that file. We're overdue a big\noverhaul of the rmgr output, so that everybody uses the same format\nfor everything. We made some progress on that for 16 already, by\nstandardizing on the name snapshotConflictHorizon, but a lot of\nannoying inconsistencies still remain. Like the punctuation issue you\nmentioned.\n\nIdeally we'd be able to make the output more easy to manipulate via\nthe SQL interface from pg_walinspect, or perhaps via scripting. That\nwould require some rules that are imposed top-down, so that consumers\nof the data can make certain general assumptions. But that's fairly\nnatural. It's not like there is just inherently a great deal of\ndiversity that we need to be considered. For example, the WAL records\nused by each individual index access method are all very similar. In\nfact the most important index AM WAL records used by each index AM\n(e.g. insert, delete, vacuum) have virtually the same format as each\nother already.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 11 Jan 2023 15:11:32 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Wed, Jan 11, 2023 at 3:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Jan 11, 2023 at 3:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > What are your thoughts about the place for the helper functions?
You're ok\n> > with rmgrdesc_utils.[ch]?\n>\n> Yeah, that seems okay.\n\nBTW, while playing around with this patch today, I noticed that it\nwon't display the number of elements in each offset array directly.\nPerhaps it's worth including that, too?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 16 Jan 2023 19:09:16 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "Hi,\n\nI have taken a stab at doing some of the tasks listed in this email.\n\nI have made the new files rmgr_utils.c/h.\n\nI have come up with a standard format that I like for the output and\nused it in all the heap record types.\n\nExamples below:\n\nsnapshotConflictHorizon: 2184, nplans: 2, plans [ { xmax: 0, infomask:\n2816, infomask2: 2, ntuples: 5, offsets: [ 10, 11, 12, 18, 71 ] }, {\nxmax: 0, infomask: 11008, infomask2: 2, ntuples: 2, offsets: [ 72, 73\n] } ]\n\nsnapshotConflictHorizon: 2199, nredirected: 4, ndead: 0, nunused: 4,\nredirected: [ 1->38, 2->39, 3->40, 4->41 ], dead: [], unused: [ 24,\n25, 26, 27, 37 ]\n\nI started documenting it in the rmgr_utils.h header file in a comment,\nhowever it may be worth a README?\n\nI haven't polished this description of the format (or added examples,\netc) or used it in the btree-related
The signature is based somewhat off of qsort_r()\nand allows the user to pass a function with the the desired format of\nthe elements.\n\nOn a semi-unrelated note, I think it might be nice to have a comment in\nheapam_xlog.h about what the infobits fields actually are and why they\nexist -- e.g. we only need a subset of infomask[2] bits in these\nrecords.\nI put a random comment in the code where I think it should go.\nI will delete it later, of course.\n\nOn Mon, Jan 9, 2023 at 11:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Jan 9, 2023 at 1:58 PM Andres Freund <andres@anarazel.de> wrote:\n> > The attached patch adds details to XLOG_HEAP2_PRUNE, XLOG_HEAP2_VACUUM,\n> > XLOG_HEAP2_FREEZE_PAGE.\n>\n> I'm bound to end up doing the same in index access methods. Might make\n> sense for the utility routines to live somewhere more centralized, at\n> least when code reuse is likely. Practically every index AM has WAL\n> records that include a sorted page offset number array, just like\n> these ones. It's a very standard thing, obviously.\n\nI plan to add these if the format and API I suggested seems like the\nright direction.\n\nOn Tue, Jan 10, 2023 at 2:35 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> > > I chose to include infomask[2] for the different freeze plans mainly because\n> > > it looks odd to see different plans without a visible reason. But I'm not sure\n> > > that's the right choice.\n> >\n> > I don't think that it is particularly necessary to do so in order for\n> > the output to make sense -- pg_waldump is inherently a tool for\n> > experts. What it comes down to for me is whether or not this\n> > information is sufficiently useful to display, and/or can be (or needs\n> > to be) controlled via some kind of verbosity knob.\n>\n> It seemed useful enough to me, but I likely also stare more at this stuff than\n> most. Compared to the list of offsets it's not that much content.\n>\n\nPersonally, I like having the infomasks for the freeze plans. 
If we\nsomeday have a more structured input to rmgr_desc, we could then easily\nhave them in their own column and use functions like\nheap_tuple_infomask_flags() on them.\n\n> > How hard would it be to invent a general mechanism to control the verbosity\n> > of what we'll show for each WAL record?\n>\n> Nontrivial, I'm afraid. We don't pass any relevant parameters to rm_desc:\n> void (*rm_desc) (StringInfo buf, XLogReaderState *record);\n>\n> so we'd need to patch all of them. That might be worth doing at some point,\n> but I don't want to tackle it right now.\n\nIn terms of a more structured format, it seems like it would make the\nmost sense to pass a JSON or composite datatype structure to rm_desc\ninstead of that StringInfo.\n\nI would also like to see functions like XLogRecGetBlockRefInfo() pass\nsomething more useful than a stringinfo buffer so that we could easily\nextract out the relfilenode in pgwalinspect.\n\nOn Mon, Jan 16, 2023 at 10:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Wed, Jan 11, 2023 at 3:11 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Wed, Jan 11, 2023 at 3:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > > What are your thoughts about the place for the helper functions? 
You're ok\n> > > with rmgrdesc_utils.[ch]?\n> >\n> > Yeah, that seems okay.\n>\n> BTW, while playing around with this patch today, I noticed that it\n> won't display the number of elements in each offset array directly.\n> Perhaps it's worth including that, too?\n\nI believe I have addressed this in the attached patch.\n\n- Melanie", "msg_date": "Fri, 27 Jan 2023 12:24:07 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Jan 27, 2023 at 12:24 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I believe I have addressed this in the attached patch.\n\nI'm not sure what's best in terms of formatting details but I\ndefinitely like the idea of making pg_waldump show more details. I'd\neven like to have a way to extract the tuple data, when it's\noperations on tuples and we have those tuples in the payload. That'd\nbe a lot more verbose than what you are doing here, though, and I'm\nnot saying you should go do it right now or anything like that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 15:02:04 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Jan 27, 2023 at 9:24 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I have taken a stab at doing some of the tasks listed in this email.\n\nCool.\n\n> I have made the new files rmgr_utils.c/h.\n>\n> I have come up with a standard format that I like for the output and\n> used it in all the heap record types.\n>\n> Examples below:\n\nThat seems like a reasonable approach.\n\n> I started documenting it in the rmgr_utils.h header file in a comment,\n> however it may be worth a README?\n>\n> I haven't polished this description of the format (or added examples,\n> etc) or used it in the btree-related 
functions because I assume the\n> format and helper function API will need more discussion.\n\nI think that standardization is good, but ISTM that we need clarity on\nwhat the scope is -- what is *not* being standardized? It may or may\nnot be useful to call the end result an API. Or it may not make sense\nto do so in the first committed version, even though we may ultimately\nend up as something that deserves to be called an API. The obligation\nto not break tools that are scraping the output in whatever way seems\nkind of onerous right now -- just not having any gratuitous\ninconsistencies (e.g., fixing totally inconsistent punctuation, making\nthe names for fields across WAL records consistent when they serve\nexactly the same purpose) would be a big improvement.\n\nAs I mentioned in passing already, I actually don't think that the\nB-Tree WAL records are all that special, as far as this stuff goes.\nFor example, the DELETE Btree record type is very similar to heapam's\nPRUNE record type, and practically identical to Btree's VACUUM record\ntype. All of these record types use the same basic conventions, like a\nsnapshotConflictHorizon field for recovery conflicts (which is\ngenerated in a very similar way during original execution, and\nprocessed in precisely the same way during REDO), and arrays of page\noffset numbers sorted in ascending order.\n\nThere are some remaining details where things from an index AM WAL\nrecord aren't directly analogous (or pretty much identical) to some\nother heapam WAL records, such as the way that the DELETE Btree record\ntype deals with deleting a subset of TIDs from a posting list index\ntuple (generated by B-Tree deduplication). But even these exceptions\ndon't require all that much discussion. 
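As an aside, that shared convention -- a sorted array of page offset numbers -- is exactly what makes a single qsort_r()-style formatter workable across rmgrs. A standalone sketch of the callback idea discussed upthread; plain snprintf() stands in for the backend's StringInfo here, and every name is illustrative rather than taken from the patch:

```c
#include <stdint.h>
#include <stdio.h>

/* Callback that knows how to print one array element into buf. */
typedef void (*elem_desc_fn) (char *buf, size_t bufsz, const void *elem);

/*
 * Generic array formatter: the caller supplies the element size and a
 * per-element callback, qsort_r()-style.
 */
static void
array_desc(char *out, size_t outsz, const void *array,
           size_t elem_size, int count, elem_desc_fn elem_desc)
{
    size_t      used;

    used = snprintf(out, outsz, "nelems: %d, offsets: [", count);
    for (int i = 0; i < count; i++)
    {
        char        elembuf[32];

        elem_desc(elembuf, sizeof(elembuf),
                  (const char *) array + (size_t) i * elem_size);
        used += snprintf(out + used, outsz - used, "%s%s",
                         i > 0 ? ", " : "", elembuf);
    }
    snprintf(out + used, outsz - used, "]");
}

/* Callback for the common case: an array of uint16 page offset numbers. */
static void
offset_elem_desc(char *buf, size_t bufsz, const void *elem)
{
    snprintf(buf, bufsz, "%u", (unsigned) *(const uint16_t *) elem);
}
```

Each record type then only needs a small callback per element type (offsets, freeze plans, posting-list offsets), which is how heap, nbtree, and hash output could stay consistent without copy-paste.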
You could either choose to\nonly display the array of deleted index tuple page offset numbers, as\nwell as the similar array of \"updated\" index tuple page offset numbers\nfrom xl_btree_delete, in which case you just display two arrays of\npage offset numbers, in the same standard way. You may or may not want\nto also show each individual xl_btree_update entry -- doing so would\nbe kinda like showing the details of individual freeze plans, except\nthat you'd probably display something very similar to the page offset\nnumber display here too (even though these aren't page offset numbers,\nthey're 0-based offsets into the posting list's item pointer data\narray).\n\nBTW, there is also a tendency for non-btree index AM WAL records to be\nfairly similar or even near-identical to the B-Tree WAL records. While\nHash indexes are very different to B-Tree indexes at a high level, it\nis nevertheless the case that xl_hash_vacuum_one_page is directly\nbased on xl_btree_delete/xl_btree_vacuum, and that xl_hash_insert is\ndirectly based on xl_btree_insert. There are some other WAL record\ntypes that are completely different across hash and B-Tree, which is a\nreflection of the fact that the index grows using a totally different\napproach in each AM -- but that doesn't seem like something that\nthrows up any roadblocks for you (these can all be displayed as simple\nstructs anyway).\n\nSpeaking with my B-Tree hat on, I'd just be happy to be able to see\nboth of the page offset number arrays (the deleted + updated offset\nnumber arrays from xl_btree_delete/xl_btree_vacuum), without also\nbeing able to see output for each individual xl_btree_update\nitem-pointer-array-offset arrays -- just seeing that much is already a\nhuge improvement.
That's why I'm a bit hesitant to use the term API\njust yet, because an obligation to be consistent in whatever way seems\nlike it might block incremental progress.\n\n> Perhaps there should also be example output of the offset arrays in\n> pgwalinspect docs?\n\nThat would definitely make sense.\n\n> I've changed the array format helper functions that Andres added to be a\n> single function with an additional layer of indirection so that any\n> record with an array can use it regardless of type and format of the\n> individual elements. The signature is based somewhat off of qsort_r()\n> and allows the user to pass a function with the the desired format of\n> the elements.\n\nThat's handy.\n\n> Personally, I like having the infomasks for the freeze plans. If we\n> someday have a more structured input to rmgr_desc, we could then easily\n> have them in their own column and use functions like\n> heap_tuple_infomask_flags() on them.\n\nI agree, in general, though long term the best approach is one that\nhas a configurable level of verbosity, with some kind of roughly\nuniform definition of verbosity (kinda like DEBUG1 - DEBUG5, though\nprobably with only 2 or 3 distinct levels).\n\nObviously what you're doing here will lead to a significant increase\nin the verbosity of the output for affected WAL records. I don't feel\ntoo bad about that, though. It's really an existing problem, and one\nthat should be fixed either way. You kind of have to deal with this\nalready, by having a good psql pager, since record types such as\nCOMMIT_PREPARED, INVALIDATIONS, and RUNNING_XACTS are already very\nverbose in roughly the same way. 
You only need to have one of these\nrecord types output by a function like pg_get_wal_records_info() to\nget absurdly wide output -- it hardly matters that most individual WAL\nrecord types have terse output at that point.\n\n> > > How hard would it be to invent a general mechanism to control the verbosity\n> > > of what we'll show for each WAL record?\n> >\n> > Nontrivial, I'm afraid. We don't pass any relevant parameters to rm_desc:\n> > void (*rm_desc) (StringInfo buf, XLogReaderState *record);\n> >\n> > so we'd need to patch all of them. That might be worth doing at some point,\n> > but I don't want to tackle it right now.\n>\n> In terms of a more structured format, it seems like it would make the\n> most sense to pass a JSON or composite datatype structure to rm_desc\n> instead of that StringInfo.\n>\n> I would also like to see functions like XLogRecGetBlockRefInfo() pass\n> something more useful than a stringinfo buffer so that we could easily\n> extract out the relfilenode in pgwalinspect.\n\nThat does seem particularly important. It's a pain to do this from\nSQL. In general I'm okay with focussing on pg_walinspect over\npg_waldump, since it'll become more important over time. 
Obviously\npg_waldump needs to still work, but I think it's okay to care less\nabout pg_waldump usability.\n\n> > BTW, while playing around with this patch today, I noticed that it\n> > won't display the number of elements in each offset array directly.\n> > Perhaps it's worth including that, too?\n>\n> I believe I have addressed this in the attached patch.\n\nThanks for taking care of that.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 31 Jan 2023 13:52:14 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Jan 31, 2023 at 1:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I would also like to see functions like XLogRecGetBlockRefInfo() pass\n> > something more useful than a stringinfo buffer so that we could easily\n> > extract out the relfilenode in pgwalinspect.\n>\n> That does seem particularly important. It's a pain to do this from\n> SQL. In general I'm okay with focussing on pg_walinspect over\n> pg_waldump, since it'll become more important over time. Obviously\n> pg_waldump needs to still work, but I think it's okay to care less\n> about pg_waldump usability.\n\nI just realized why you mentioned XLogRecGetBlockRefInfo() -- it\nprobably shouldn't even be used by pg_walinspect at all (just by\npg_waldump). Using something like XLogRecGetBlockRefInfo() within\npg_walinspect misses out on the opportunity to output information in a\nmore descriptive tuple format, with real data types. It's not just the\nrelfilenode, either -- it's the block numbers themselves. And the fork\nnumber.\n\nIn other words, I suspect that this is out of scope for this patch,\nstrictly speaking. We simply shouldn't be using\nXLogRecGetBlockRefInfo() in pg_walinspect in the first place. Rather,\npg_walinspect should be calling some other function that ultimately\nallows the user to work with (say) an array of int8 from SQL for the\nblock numbers. 
There is no great reason not to, AFAICT, since this\ninformation is completely generic -- it's not like the rmgr-specific\noutput from GetRmgr(), where fine grained type information is just a\nnice-to-have, with usability issues of its own (on account of the\ndetails being record type specific).\n\nI've been managing this problem within my own custom pg_walinspect\nqueries by using my own custom ICU collation. I use ICU's natural sort\norder to order based on block_ref, or based on a substring()\nexpression that extracts something interesting from block_ref, such as\nrelfilenode. You can create a custom collation for this like so, per\nthe docs:\n\nCREATE COLLATION IF NOT EXISTS numeric (provider = icu, locale =\n'en-u-kn-true');\n\nObviously this hack of mine works, but hardly anybody else would be\nwilling to take the time to figure something like this out. Plus it's\nerror prone when it doesn't really have to be. And it suggests that\nthe block_ref field isn't record type generic -- that's sort of\nmisleading IMV.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 31 Jan 2023 14:47:35 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Jan 31, 2023 at 1:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Obviously what you're doing here will lead to a significant increase\n> in the verbosity of the output for affected WAL records. I don't feel\n> too bad about that, though. It's really an existing problem, and one\n> that should be fixed either way. You kind of have to deal with this\n> already, by having a good psql pager, since record types such as\n> COMMIT_PREPARED, INVALIDATIONS, and RUNNING_XACTS are already very\n> verbose in roughly the same way. 
You only need to have one of these\n> record types output by a function like pg_get_wal_records_info() to\n> get absurdly wide output -- it hardly matters that most individual WAL\n> record types have terse output at that point.\n\nActually the really wide output comes from COMMIT records. After I run\nthe regression tests, and execute some of my own custom pg_walinspect\nqueries, I see that some individual COMMIT records have a\nlength(description) of over 10,000 bytes/characters. There is even one\nparticular COMMIT record whose length(description) is about 46,000\nbytes/characters. So *ludicrously* verbose GetRmgr() strings are not\nuncommon today. The worst case (or even particularly bad cases) won't\nbe made any worse by this patch, because there are obviously limits on\nthe width of the arrays that it outputs details descriptions of, that\ndon't apply to these COMMIT records.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 31 Jan 2023 15:19:39 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Jan 31, 2023 at 6:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Actually the really wide output comes from COMMIT records. After I run\n> the regression tests, and execute some of my own custom pg_walinspect\n> queries, I see that some individual COMMIT records have a\n> length(description) of over 10,000 bytes/characters. There is even one\n> particular COMMIT record whose length(description) is about 46,000\n> bytes/characters. So *ludicrously* verbose GetRmgr() strings are not\n> uncommon today. 
The worst case (or even particularly bad cases) won't\n> be made any worse by this patch, because there are obviously limits on\n> the width of the arrays that it outputs details descriptions of, that\n> don't apply to these COMMIT records.\n\nIf we're dumping a lot of details out of each WAL record, we might\nwant to switch to a multi-line format of some kind. No one enjoys a\n460-character wide line, let alone 46000.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 08:20:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Wed, Feb 1, 2023 at 5:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> If we're dumping a lot of details out of each WAL record, we might\n> want to switch to a multi-line format of some kind. No one enjoys a\n> 460-character wide line, let alone 46000.\n\nI generally prefer it when I can use psql without using expanded table\nformat mode, and without having to use a pager. Of course that isn't\nalways possible, but it often is. I just don't think that that's going\nto become feasible with pg_walinspect queries any time soon, since it\nreally requires a comprehensive strategy to deal with the issue of\nverbosity.\n\nIt seems practically mandatory to use a pager when running\npg_walinspect queries in psql right now -- pspg is good for this. I\nreally can't use expanded table mode here, since it obscures the\nrelationship between adjoining records. 
I'm usually looking through\nrows/records in LSN order, and want to be able to easily compare the\nLSNs (or other details) of groups of adjoining records.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 1 Feb 2023 09:47:27 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Wed, Feb 1, 2023 at 12:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Wed, Feb 1, 2023 at 5:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > If we're dumping a lot of details out of each WAL record, we might\n> > want to switch to a multi-line format of some kind. No one enjoys a\n> > 460-character wide line, let alone 46000.\n>\n> I generally prefer it when I can use psql without using expanded table\n> format mode, and without having to use a pager. Of course that isn't\n> always possible, but it often is. I just don't think that that's going\n> to become feasible with pg_walinspect queries any time soon, since it\n> really requires a comprehensive strategy to deal with the issue of\n> verbosity.\n\nWell, if we're thinking of making the output a lot more verbose, it\nseems like we should at least do a bit of brainstorming about what\nthat strategy could be.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 1 Feb 2023 12:51:58 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Jan 31, 2023 at 5:48 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Jan 31, 2023 at 1:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > > I would also like to see functions like XLogRecGetBlockRefInfo() pass\n> > > something more useful than a stringinfo buffer so that we could easily\n> > > extract out the relfilenode in pgwalinspect.\n> >\n> > That does seem particularly important. It's a pain to do this from\n> > SQL. 
In general I'm okay with focussing on pg_walinspect over\n> > pg_waldump, since it'll become more important over time. Obviously\n> > pg_waldump needs to still work, but I think it's okay to care less\n> > about pg_waldump usability.\n>\n> I just realized why you mentioned XLogRecGetBlockRefInfo() -- it\n> probably shouldn't even be used by pg_walinspect at all (just by\n> pg_waldump). Using something like XLogRecGetBlockRefInfo() within\n> pg_walinspect misses out on the opportunity to output information in a\n> more descriptive tuple format, with real data types. It's not just the\n> relfilenode, either -- it's the block numbers themselves. And the fork\n> number.\n>\n> In other words, I suspect that this is out of scope for this patch,\n> strictly speaking. We simply shouldn't be using\n> XLogRecGetBlockRefInfo() in pg_walinspect in the first place. Rather,\n> pg_walinspect should be calling some other function that ultimately\n> allows the user to work with (say) an array of int8 from SQL for the\n> block numbers. 
There is no great reason not to, AFAICT, since this\n> information is completely generic -- it's not like the rmgr-specific\n> output from GetRmgr(), where fine grained type information is just a\n> nice-to-have, with usability issues of its own (on account of the\n> details being record type specific).\n\nSomething like the attached?\n\nstart_lsn | 0/19823390\nend_lsn | 0/19824360\nprev_lsn | 0/19821358\nxid | 1355\nresource_manager | Heap\nrecord_type | UPDATE\nrecord_length | 4021\nmain_data_length | 14\nfpi_length | 3948\ndescription | off 11 xmax 1355 flags 0x00 ; new off 109 xmax 0\nblock_ref |\n[0:1][0:8]={{0,1663,5,17033,0,442,460,4244,0},{1,1663,5,17033,0,0,0,0,0}}\n\nIt is a bit annoying not to have information about what each block_ref\nitem in the array represents (previously in the string), so maybe the\nformat in the attached shouldn't be a replacement for what is already\ndisplayed by pg_get_wal_records_info() and friends.\n\nIt could instead be a new function which returns information in this\nformat -- perhaps tuples with separate columns for each labeled block\nref field denormalized to repeat the wal record info for every block?\n\nThe one piece of information I didn't include in the new block_ref\ncolumns is the compression type (since it is a string). 
Since I used the\nforknum value instead of the forknum name, maybe it is defensible to\nalso provide a documented int value for the compression type and make\nthat an int too?\n\n- Melanie", "msg_date": "Wed, 1 Mar 2023 11:11:05 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On 01.03.23 17:11, Melanie Plageman wrote:\n> diff --git a/contrib/pg_walinspect/pg_walinspect--1.0.sql b/contrib/pg_walinspect/pg_walinspect--1.0.sql\n> index 08b3dd5556..eb8ff82dd8 100644\n> --- a/contrib/pg_walinspect/pg_walinspect--1.0.sql\n> +++ b/contrib/pg_walinspect/pg_walinspect--1.0.sql\n> @@ -17,7 +17,7 @@ CREATE FUNCTION pg_get_wal_record_info(IN in_lsn pg_lsn,\n> OUT main_data_length int4,\n> OUT fpi_length int4,\n> OUT description text,\n> - OUT block_ref text\n> + OUT block_ref int4[][]\n> )\n> AS 'MODULE_PATHNAME', 'pg_get_wal_record_info'\n> LANGUAGE C STRICT PARALLEL SAFE;\n\nA change like this would require a new extension version and an upgrade \nscript.\n\nI suppose it's ok to postpone that work while the actual meat of the \npatch is still being worked out, but I figured I'd mention it in case it \nwasn't considered yet.\n\n\n\n", "msg_date": "Thu, 2 Mar 2023 09:17:34 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "Thanks for the various perspectives and feedback.\n\nAttached v2 has additional info for xl_btree_vacuum and xl_btree_delete.\n\nI've quoted various emails by various senders below and replied.\n\nOn Fri, Jan 27, 2023 at 3:02 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jan 27, 2023 at 12:24 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I believe I have addressed this in the attached patch.\n>\n> I'm not sure what's best in terms of formatting details but I\n> 
definitely like the idea of making pg_waldump show more details. I'd\n> even like to have a way to extract the tuple data, when it's\n> operations on tuples and we have those tuples in the payload. That'd\n> be a lot more verbose than what you are doing here, though, and I'm\n> not saying you should go do it right now or anything like that.\n\nIf I'm not mistaken, this would be quite difficult without changing\nrm_desc to return some kind of self-describing data type.\n\nOn Tue, Jan 31, 2023 at 4:52 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Fri, Jan 27, 2023 at 9:24 AM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I started documenting it in the rmgr_utils.h header file in a comment,\n> > however it may be worth a README?\n> >\n> > I haven't polished this description of the format (or added examples,\n> > etc) or used it in the btree-related functions because I assume the\n> > format and helper function API will need more discussion.\n>\n> I think that standardization is good, but ISTM that we need clarity on\n> what the scope is -- what is *not* being standardized? It may or may\n> not be useful to call the end result an API. Or it may not make sense\n> to do so in the first committed version, even though we may ultimately\n> end up as something that deserves to be called an API. 
The obligation\n> to not break tools that are scraping the output in whatever way seems\n> kind of onerous right now -- just not having any gratuitous\n> inconsistencies (e.g., fixing totally inconsistent punctuation, making\n> the names for fields across WAL records consistent when they serve\n> exactly the same purpose) would be a big improvement.\n\nSo, we can scrap any README or big comment, but are there other changes\nto the code or structure you think would avoid it being seen as an\nAPI?\n\n> As I mentioned in passing already, I actually don't think that the\n> B-Tree WAL records are all that special, as far as this stuff goes.\n> For example, the DELETE Btree record type is very similar to heapam's\n> PRUNE record type, and practically identical to Btree's VACUUM record\n> type. All of these record types use the same basic conventions, like a\n> snapshotConflictHorizon field for recovery conflicts (which is\n> generated in a very similar way during original execution, and\n> processed in precisely the same way during REDO), and arrays of page\n> offset numbers sorted in ascending order.\n>\n> There are some remaining details where things from an index AM WAL\n> record aren't directly analogous (or pretty much identical) to some\n> other heapam WAL records, such as the way that the DELETE Btree record\n> type deals with deleting a subset of TIDs from a posting list index\n> tuple (generated by B-Tree deduplication). But even these exceptions\n> don't require all that much discussion. You could either choose to\n> only display the array of deleted index tuple page offset numbers, as\n> well as the similar array of \"updated\" index tuple page offset numbers\n> from xl_btree_delete, in which case you just display two arrays of\n> page offset numbers, in the same standard way. 
You may or may not want\n> to also show each individual xl_btree_update entry -- doing so would\n> be kinda like showing the details of individual freeze plans, except\n> that you'd probably display something very similar to the page offset\n> number display here too (even though these aren't page offset numbers,\n> they're 0-based offsets into the posting list's item pointer data\n> array).\n\nI have added detail to xl_btree_delete and xl_btree_vacuum. I have added\nthe updated/deleted target offset numbers and the updated tuples\nmetadata.\n\nI wondered if there was any reason to do xl_btree_dedup deduplication\nintervals.\n\n> BTW, there is also a tendency for non-btree index AM WAL records to be\n> fairly similar or even near-identical to the B-Tree WAL records. While\n> Hash indexes are very different to B-Tree indexes at a high level, it\n> is nevertheless the case that xl_hash_vacuum_one_page is directly\n> based on xl_btree_delete/xl_btree_vacuum, and that xl_hash_insert is\n> directly based on xl_btree_insert. 
There are some other WAL record\n> types that are completely different across hash and B-Tree, which is a\n> reflection of the fact that the index grows using a totally different\n> approach in each AM -- but that doesn't seem like something that\n> throws up any roadblocks for you (these can all be displayed as simple\n> structs anyway).\n\nI chose not to take on any other index types until I saw if this was viable.\n\n> > Perhaps there should also be example output of the offset arrays in\n> > pgwalinspect docs?\n>\n> That would definitely make sense.\n\nI wanted to include at least a minimal example for those following along\nwith this thread that would cause creation of one of the record types\nwhich I have enhanced, but I had a little trouble making a reliable\nexample.\n\nBelow is my strategy for getting a Heap PRUNE record with redirects, but\nit occasionally doesn't end up working and I wasn't sure why (I can do\nmore investigation if we think that having some kind of test for this is\nuseful).\n\nCREATE EXTENSION pg_walinspect;\nDROP TABLE IF EXISTS lsns;\nCREATE TABLE lsns(name TEXT, lsn pg_lsn);\n\nDROP TABLE IF EXISTS baz;\ncreate table baz(a int, b int) with (autovacuum_enabled=false);\ninsert into baz select i, i % 3 from generate_series(1,100)i;\n\nupdate baz set b = 0 where b = 1;\nupdate baz set b = 7 where b = 0;\nINSERT INTO lsns VALUES('start_lsn', (SELECT pg_current_wal_lsn()));\nvacuum baz;\nselect count(*) from baz;\nINSERT INTO lsns VALUES('end_lsn', (SELECT pg_current_wal_lsn()));\nSELECT * FROM pg_get_wal_records_info((select lsn from lsns where name\n= 'start_lsn'),\n (select lsn from lsns where name = 'end_lsn'))\n WHERE record_type LIKE 'PRUNE%' AND resource_manager = 'Heap2' LIMIT 1;\n\n> > Personally, I like having the infomasks for the freeze plans. 
If we\n> > someday have a more structured input to rmgr_desc, we could then easily\n> > have them in their own column and use functions like\n> > heap_tuple_infomask_flags() on them.\n>\n> I agree, in general, though long term the best approach is one that\n> has a configurable level of verbosity, with some kind of roughly\n> uniform definition of verbosity (kinda like DEBUG1 - DEBUG5, though\n> probably with only 2 or 3 distinct levels).\n\nGiven this comment and Robert's concern quoted below, I am wondering if\nthe consensus is that a lack of verbosity control is a dealbreaker for\nadding offsets or not.\n\nOn Wed, Feb 1, 2023 at 12:52 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Feb 1, 2023 at 12:47 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Wed, Feb 1, 2023 at 5:20 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > If we're dumping a lot of details out of each WAL record, we might\n> > > want to switch to a multi-line format of some kind. No one enjoys a\n> > > 460-character wide line, let alone 46000.\n> >\n> > I generally prefer it when I can use psql without using expanded table\n> > format mode, and without having to use a pager. Of course that isn't\n> > always possible, but it often is. I just don't think that that's going\n> > to become feasible with pg_walinspect queries any time soon, since it\n> > really requires a comprehensive strategy to deal with the issue of\n> > verbosity.\n>\n> Well, if we're thinking of making the output a lot more verbose, it\n> seems like we should at least do a bit of brainstorming about what\n> that strategy could be.\n\nIn terms of strategies for controlling output verbosity, it seems\ndifficult to do without changing the rmgrdesc function signature. Unless\nyou are thinking of trying to reparse the rmgrdesc string output on the\npg_walinspect/pg_waldump side?\n\nI think if there was a more structured output of rmgrdesc, then this\nwould also solve the verbosity level problem. 
Consumers could decide on\ntheir verbosity level -- in various pg_walinspect function outputs, that\nwould probably just be column selection. For pg_waldump, I imagine that\nsome kind of parameter or flag would work.\n\nUnless you are suggesting that we add a verbosity parameter to the\nrmgrdesc function API now?\n\nOn Thu, Mar 2, 2023 at 3:17 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> On 01.03.23 17:11, Melanie Plageman wrote:\n> > diff --git a/contrib/pg_walinspect/pg_walinspect--1.0.sql b/contrib/pg_walinspect/pg_walinspect--1.0.sql\n> > index 08b3dd5556..eb8ff82dd8 100644\n> > --- a/contrib/pg_walinspect/pg_walinspect--1.0.sql\n> > +++ b/contrib/pg_walinspect/pg_walinspect--1.0.sql\n> > @@ -17,7 +17,7 @@ CREATE FUNCTION pg_get_wal_record_info(IN in_lsn pg_lsn,\n> > OUT main_data_length int4,\n> > OUT fpi_length int4,\n> > OUT description text,\n> > - OUT block_ref text\n> > + OUT block_ref int4[][]\n> > )\n> > AS 'MODULE_PATHNAME', 'pg_get_wal_record_info'\n> > LANGUAGE C STRICT PARALLEL SAFE;\n>\n> A change like this would require a new extension version and an upgrade\n> script.\n>\n> I suppose it's ok to postpone that work while the actual meat of the\n> patch is still being worked out, but I figured I'd mention it in case it\n> wasn't considered yet.\n\nThanks for letting me know. 
This pg_walinspect patch ended up being\ndiscussed over in [1].\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/flat/CAAKRu_bORebdZmcV8V4cZBzU8M_C6tDDdbiPhCZ6i-iuSXW9TA%40mail.gmail.com", "msg_date": "Mon, 13 Mar 2023 19:00:59 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Mar 13, 2023 at 4:01 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Fri, Jan 27, 2023 at 3:02 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I'm not sure what's best in terms of formatting details but I\n> > definitely like the idea of making pg_waldump show more details.\n\n> If I'm not mistaken, this would be quite difficult without changing\n> rm_desc to return some kind of self-describing data type.\n\nI'd say that it would depend on how far you went with it. Basic\ninformation about the tuple wouldn't require any of that. I suggest\nleaving this part out for now, though.\n\n> So, we can scrap any README or big comment, but are there other changes\n> to the code or structure you think would avoid it being seen as an\n> API?\n\nI think that it would be good to try to build something that looks\nlike an API, while making zero promises about its stability -- at\nleast until further notice. Kind of like how there are no guarantees\nabout the stability of internal interfaces within the Linux kernel.\n\nThere is no reason to not take a firm position on some things now.\nThings like punctuation, and symbol names for generic cross-record\nsymbols like snapshotConflictHorizon. Many of the differences that\nexist now are wholly gratuitous -- just accidents. It would make sense\nto standardize-away these clearly unnecessary variations. And to\ndocument the new standard. I'd be surprised if anybody disagreed with\nme on this point.\n\n> I have added detail to xl_btree_delete and xl_btree_vacuum. 
I have added\n> the updated/deleted target offset numbers and the updated tuples\n> metadata.\n>\n> I wondered if there was any reason to do xl_btree_dedup deduplication\n> intervals.\n\nNo reason. It wouldn't be hard to cover xl_btree_dedup deduplication\nintervals -- each element is a page offset number, and a corresponding\ncount of index tuples to merge together in the REDO routine. That's\nslightly different to anything else, but not in a way that seems like\nit requires very much additional effort.\n\n> I wanted to include at least a minimal example for those following along\n> with this thread that would cause creation of one of the record types\n> which I have enhanced, but I had a little trouble making a reliable\n> example.\n>\n> Below is my strategy for getting a Heap PRUNE record with redirects, but\n> it occasionally doesn't end up working and I wasn't sure why (I can do\n> more investigation if we think that having some kind of test for this is\n> useful).\n\nI'm not sure, but offhand I think that there could be a number of\nannoying little implementation details that make it hard to come up\nwith a perfectly reliable test case. Perhaps try it while using VACUUM\nVERBOSE, with the proviso that we should only expect the revised\nexample workflow to show a redirect record as intended when the\nVERBOSE output confirms that VACUUM actually ran as expected, in\nwhatever way. For example, VACUUM can't have failed to acquire a\ncleanup lock on a heap page due to the current phase of the moon.\nVACUUM shouldn't have its \"removable cutoff\" held back by\nwho-knows-what when the test case is run, either.\n\nSome of the tests for VACUUM use a temp table, since they conveniently\ncannot have their \"removable cutoff\" held back -- not since commit\na7212be8. Of course, that strategy won't help you here.
Getting VACUUM\nto behave very predictably for testing purposes has proven tricky at\ntimes.\n\n> > I agree, in general, though long term the best approach is one that\n> > has a configurable level of verbosity, with some kind of roughly\n> > uniform definition of verbosity (kinda like DEBUG1 - DEBUG5, though\n> > probably with only 2 or 3 distinct levels).\n>\n> Given this comment and Robert's concern quoted below, I am wondering if\n> the consensus is that a lack of verbosity control is a dealbreaker for\n> adding offsets or not.\n\nThere are several different things that seem important to me\npersonally. These are in tension with each other, to a degree. These\nare:\n\n1. Like Andres, I'd really like to have some way of inspecting things\nlike heapam PRUNE, VACUUM, and FREEZE_PAGE records in significant\ndetail. These record types happen to be very important in general, and\nthe ability to see detailed information about the WAL record would\ndefinitely help with some debugging scenarios. I've really missed\nstuff like this while debugging serious issues under time pressure.\n\n2. To a lesser extent I would like to see similar detailed information\nfor nbtree's DELETE, VACUUM, and possibly DEDUPLICATE record types.\nThey might also come in handy for debugging, in about the same way.\n\n3. More manageable verbosity.\n\nI think that it would be okay to put off coming up with a solution to\n3, for now. 1 and 2 seem more important than 3.\n\n> I think if there was a more structured output of rmgrdesc, then this\n> would also solve the verbosity level problem. Consumers could decide on\n> their verbosity level -- in various pg_walinspect function outputs, that\n> would probably just be column selection.
For pg_waldump, I imagine that\n> some kind of parameter or flag would work.\n>\n> Unless you are suggesting that we add a verbosity parameter to the\n> rmgrdesc function API now?\n\nThe verbosity problem will get somewhat worse if we do just my items 1\nand 2, so it would be nice if we at least had a strategy in mind that\ndelivers on item 3/verbosity -- though the implementation can appear\nin a later release. Maybe something simple would work, like promising\nto output (say) 30 characters or less in terse mode, and making no\nsuch promise otherwise. Terse mode wouldn't just truncate the output\nof verbose mode -- it would never display information that could in\nprinciple exceed the 30 character allowance, even with records that\nhappen to fall under the limit.\n\nI can't feel too bad about putting this part off. A pager like pspg is\nalready table stakes when using pg_walinspect in any sort of serious\nway. As I said upthread, absurdly wide output is already reasonably\ncommon in most cases.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 13 Mar 2023 18:41:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Mar 13, 2023 at 6:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> There are several different things that seem important to me\n> personally. These are in tension with each other, to a degree. These\n> are:\n>\n> 1. Like Andres, I'd really like to have some way of inspecting things\n> like heapam PRUNE, VACUUM, and FREEZE_PAGE records in significant\n> detail. These record types happen to be very important in general, and\n> the ability to see detailed information about the WAL record would\n> definitely help with some debugging scenarios.
I've really missed\n> stuff like this while debugging serious issues under time pressure.\n\nOne problem that I often run into when performing analysis of VACUUM\nusing pg_walinspect is the issue of *who* pruned which heap page, for\nany given PRUNE record. Was it VACUUM/autovacuum, or was it\nopportunistic pruning? There is no way of knowing for sure right now.\nYou *cannot* rely on an xid of 0 as an indicator of a given PRUNE\nrecord coming from VACUUM; it could just have been an opportunistic\nprune operation that happened to take place when a SELECT query ran,\nbefore any XID was ever allocated.\n\nI think that we should do something like the attached, to completely\navoid this ambiguity. This patch adds a new XLOG_HEAP2 bit that's\nsimilar to XLOG_HEAP_INIT_PAGE -- XLOG_HEAP2_BYVACUUM. This allows all\nXLOG_HEAP2 record types to indicate that they took place during\nVACUUM, by XOR'ing the flag with the record type/info when\nXLogInsert() is called. For now this is only used by PRUNE records.\nTools like pg_walinspect will report a separate \"Heap2/PRUNE+BYVACUUM\"\nrecord_type, as well as the unadorned Heap2/PRUNE record_type, which\nwe'll now know must have been opportunistic pruning.\n\nThe approach of using a bit in the style of the heapam init bit makes\nsense to me, because the bit is available, and works in a way that is\nminimally invasive. Also, one can imagine needing to resolve a similar\nambiguity in the future, when (say) opportunistic freezing is added.\n\nI think that it makes sense to treat this within the scope of\nMelanie's ongoing work to improve the instrumentation of these records\n-- meaning that it's in scope for Postgres 16. Admittedly this is a\nslightly creative interpretation, so if others disagree then I won't\nargue. This is quite a small patch, though, which makes debugging\nsignificantly easier.
I think that there could be a great deal of\nutility in being able to easily \"pair up\" corresponding\n\"Heap2/PRUNE+BYVACUUM\" and \"Heap2/VACUUM\" records in debugging\nscenarios. I can imagine linking these to \"Heap2/FREEZE_PAGE\" and\n\"Heap2/VISIBLE\" records, too, since they're all closely related record\ntypes.\n\n--\nPeter Geoghegan", "msg_date": "Tue, 21 Mar 2023 15:37:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Mar 21, 2023 at 3:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> One problem that I often run into when performing analysis of VACUUM\n> using pg_walinspect is the issue of *who* pruned which heap page, for\n> any given PRUNE record. Was it VACUUM/autovacuum, or was it\n> opportunistic pruning? There is no way of knowing for sure right now.\n> You *cannot* rely on an xid of 0 as an indicator of a given PRUNE\n> record coming from VACUUM; it could just have been an opportunistic\n> prune operation that happened to take place when a SELECT query ran,\n> before any XID was ever allocated.\n\nIn case it's unclear how much of a problem this can be, here's an example:\n\nThe misc.sql regression test does a bulk update of the table \"onek\". A\nlittle later, one of the queries that appears under the section \"copy\"\nfrom the same file SELECTs from \"onek\". This produces a succession of\nopportunistic prune records that look exactly like what you'd expect from\na VACUUM when viewed through pg_walinspect (without this patch). Each\nPRUNE record has XID 0.
The records appear in ascending heap block\nnumber order, since there is a sequential scan involved (we go through\nheapgetpage() to get to heap_page_prune_opt(), where the query prunes\nopportunistically).\n\nAnother slightly surprising fact revealed by the patch is the ratio of\nopportunistic prunes (\"Heap2/PRUNE\") to prunes run during VACUUM\n(\"Heap2/PRUNE+BYVACUUM\") with the regression tests:\n\n│ resource_manager/record_type │ Heap2/PRUNE │\n│ count │ 4,521 │\n│ count_perc │ 0.220 │\n│ rec_size │ 412,442 │\n│ avg_rec_size │ 91 │\n│ rec_size_perc │ 0.194 │\n│ fpi_size │ 632,828 │\n│ fpi_size_perc │ 1.379 │\n│ combined_size │ 1,045,270 │\n│ combined_size_perc │ 0.404 │\n├─[ RECORD 61 ]────────────────┼─────────────────────────────┤\n│ resource_manager/record_type │ Heap2/PRUNE+BYVACUUM │\n│ count │ 2,784 │\n│ count_perc │ 0.135 │\n│ rec_size │ 467,057 │\n│ avg_rec_size │ 167 │\n│ rec_size_perc │ 0.219 │\n│ fpi_size │ 546,344 │\n│ fpi_size_perc │ 1.190 │\n│ combined_size │ 1,013,401 │\n│ combined_size_perc │ 0.391 │\n├─[ RECORD 62 ]────────────────┼─────────────────────────────┤\n│ resource_manager/record_type │ Heap2/VACUUM │\n│ count │ 3,463 │\n│ count_perc │ 0.168 │\n│ rec_size │ 610,038 │\n│ avg_rec_size │ 176 │\n│ rec_size_perc │ 0.286 │\n│ fpi_size │ 893,964 │\n│ fpi_size_perc │ 1.948 │\n│ combined_size │ 1,504,002 │\n│ combined_size_perc │ 0.581 │\n├─[ RECORD 63 ]────────────────┼─────────────────────────────┤\n│ resource_manager/record_type │ Heap2/VISIBLE │\n│ count │ 7,293 │\n│ count_perc │ 0.354 │\n│ rec_size │ 431,382 │\n│ avg_rec_size │ 59 │\n│ rec_size_perc │ 0.202 │\n│ fpi_size │ 1,794,048 │\n│ fpi_size_perc │ 3.909 │\n│ combined_size │ 2,225,430 │\n│ combined_size_perc │ 0.859 │\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 21 Mar 2023 18:31:09 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Mar 13, 2023 at 9:41 PM 
Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Mar 13, 2023 at 4:01 PM Melanie Plageman <melanieplageman@gmail.com> wrote:\n>\n> > I have added detail to xl_btree_delete and xl_btree_vacuum. I have added\n> > the updated/deleted target offset numbers and the updated tuples\n> > metadata.\n> >\n> > I wondered if there was any reason to do xl_btree_dedup deduplication\n> > intervals.\n>\n> No reason. It wouldn't be hard to cover xl_btree_dedup deduplication\n> intervals -- each element is a page offset number, and a corresponding\n> count of index tuples to merge together in the REDO routine. That's\n> slightly different to anything else, but not in a way that seems like\n> it requires very much additional effort.\n\nI went to add dedup records and noticed that since the actual\nBTDedupInterval struct is what is put in the xlog, I would need access\nto that type from nbtdesc.c, however, including nbtree.h doesn't seem to\nwork because it includes files that cannot be included in frontend code.\n\nI, of course, could make some local struct in nbtdesc.c which has an\nOffsetNumber and a uint16, since the BTDedupInterval is pretty\nstraightforward, but that seems a bit annoying.\nI'm probably missing something obvious, but is there a better way to do\nthis?\n\nOn another note, I've thought about how to include some example output\nin docs, and, for example we could modify the example output in the\npgwalinspect docs which includes a PRUNE record already for\npg_get_wal_record_info() docs.
We'd probably just want to keep it short.\n\n- Melanie", "msg_date": "Mon, 27 Mar 2023 17:29:08 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Mar 27, 2023 at 2:29 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I went to add dedup records and noticed that since the actual\n> BTDedupInterval struct is what is put in the xlog, I would need access\n> to that type from nbtdesc.c, however, including nbtree.h doesn't seem to\n> work because it includes files that cannot be included in frontend code.\n\nI suppose that the BTDedupInterval struct could have just as easily\ngone in nbtxlog.h, next to xl_btree_dedup. There might have been a\nmoment where I thought about doing it that way, but I guess I found it\nslightly preferable to use that symbol name (BTDedupInterval) rather\nthan (say) xl_btree_dedup_interval in places like the nearby\nBTDedupStateData struct.\n\nActually, I suppose that it's hard to make that alternative work, at\nleast without\nincluding nbtxlog.h in nbtree.h. Which sounds wrong.\n\n> I, of course, could make some local struct in nbtdesc.c which has an\n> OffsetNumber and a uint16, since the BTDedupInterval is pretty\n> straightforward, but that seems a bit annoying.\n> I'm probably missing something obvious, but is there a better way to do\n> this?\n\nIt was probably just one of those cases where I settled on the\narrangement that looked least odd overall. Not a particularly\nprincipled approach. But the approach that I'm going to take once more\nhere. ;-)\n\nAll of the available alternatives are annoying in roughly the same\nway, though perhaps to varying degrees.
All except one: I'm okay with\njust not adding coverage for deduplication records, for the time being\n-- just seeing the number of intervals alone is relatively informative\nwith deduplication records, unlike (say) nbtree delete records. I'm\nalso okay with having coverage for dedup records if you feel it's\nworth having. Your call.\n\nIf we're going to have coverage for deduplication records then it\nseems to me that we have to have a struct in nbtxlog.h for your code\nto work off of. It also seems likely that we'll want to use that same\nstruct within nbtxlog.c. What's less clear is what that means for the\nBTDedupInterval struct. I don't think that we should include nbtxlog.h\nin nbtree.h, nor should we do the converse.\n\nI guess maybe two identical structs would be okay. BTDedupInterval,\nand xl_btree_dedup_interval, with the former still used in nbtdedup.c,\nand the latter used through a pointer at the point that nbtxlog.c\nreads a dedup record. Then maybe at a sizeof() static assert beside\nthe existing btree_xlog_dedup() assertions that check that the dedup\nstate interval array matches the array taken from the WAL record.\nThat's still a bit weird, but I find it preferable to any alternative\nthat I can think of.\n\n> On another note, I've thought about how to include some example output\n> in docs, and, for example we could modify the example output in the\n> pgwalinspect docs which includes a PRUNE record already for\n> pg_get_wal_record_info() docs. We'd probably just want to keep it short.\n\nYeah. Perhaps a PRUNE record for one of the system catalogs whose\nrelfilenode is relatively recognizable. Say pg_class. It probably\ndoesn't matter that much, but there is perhaps some small value in\npicking an example that is relatively easy to recreate later on (or to\napproximately recreate).
I'm certainly not insisting on that, though.\n\n--\nPeter Geoghegan", "msg_date": "Mon, 27 Mar 2023 15:27:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "Attached v3 is cleaned up and includes a pg_walinspect docs update as\nwell as some edited comments in rmgr_utils.c\n\nOn Mon, Mar 27, 2023 at 6:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Mar 27, 2023 at 2:29 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > I went to add dedup records and noticed that since the actual\n> > BTDedupInterval struct is what is put in the xlog, I would need access\n> > to that type from nbtdesc.c, however, including nbtree.h doesn't seem to\n> > work because it includes files that cannot be included in frontend code.\n>\n> I suppose that the BTDedupInterval struct could have just as easily\n> gone in nbtxlog.h, next to xl_btree_dedup. There might have been a\n> moment where I thought about doing it that way, but I guess I found it\n> slightly preferable to use that symbol name (BTDedupInterval) rather\n> than (say) xl_btree_dedup_interval in places like the nearby\n> BTDedupStateData struct.\n>\n> Actually, I suppose that it's hard to make that alternative work, at\n> least without\n> including nbtxlog.h in nbtree.h. Which sounds wrong.\n>\n> > I, of course, could make some local struct in nbtdesc.c which has an\n> > OffsetNumber and a uint16, since the BTDedupInterval is pretty\n> > straightforward, but that seems a bit annoying.\n> > I'm probably missing something obvious, but is there a better way to do\n> > this?\n>\n> It was probably just one of those cases where I settled on the\n> arrangement that looked least odd overall. Not a particularly\n> principled approach. But the approach that I'm going to take once more\n> here.
;-)\n>\n> All of the available alternatives are annoying in roughly the same\n> way, though perhaps to varying degrees. All except one: I'm okay with\n> just not adding coverage for deduplication records, for the time being\n> -- just seeing the number of intervals alone is relatively informative\n> with deduplication records, unlike (say) nbtree delete records. I'm\n> also okay with having coverage for dedup records if you feel it's\n> worth having. Your call.\n>\n> If we're going to have coverage for deduplication records then it\n> seems to me that we have to have a struct in nbtxlog.h for your code\n> to work off of. It also seems likely that we'll want to use that same\n> struct within nbtxlog.c. What's less clear is what that means for the\n> BTDedupInterval struct. I don't think that we should include nbtxlog.h\n> in nbtree.h, nor should we do the converse.\n>\n> I guess maybe two identical structs would be okay. BTDedupInterval,\n> and xl_btree_dedup_interval, with the former still used in nbtdedup.c,\n> and the latter used through a pointer at the point that nbtxlog.c\n> reads a dedup record. Then maybe at a sizeof() static assert beside\n> the existing btree_xlog_dedup() assertions that check that the dedup\n> state interval array matches the array taken from the WAL record.\n> That's still a bit weird, but I find it preferable to any alternative\n> that I can think of.\n\nI've omitted enhancements for the dedup record type for now.\n\n> > On another note, I've thought about how to include some example output\n> > in docs, and, for example we could modify the example output in the\n> > pgwalinspect docs which includes a PRUNE record already for\n> > pg_get_wal_record_info() docs. We'd probably just want to keep it short.\n>\n> Yeah. Perhaps a PRUNE record for one of the system catalogs whose\n> relfilenode is relatively recognizable. Say pg_class.
It probably\n> doesn't matter that much, but there is perhaps some small value in\n> picking an example that is relatively easy to recreate later on (or to\n> approximately recreate). I'm certainly not insisting on that, though.\n\nI've added such an example to pg_walinspect docs.\n\nOn Tue, Mar 21, 2023 at 6:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Mar 13, 2023 at 6:41 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > There are several different things that seem important to me\n> > personally. These are in tension with each other, to a degree. These\n> > are:\n> >\n> > 1. Like Andres, I'd really like to have some way of inspecting things\n> > like heapam PRUNE, VACUUM, and FREEZE_PAGE records in significant\n> > detail. These record types happen to be very important in general, and\n> > the ability to see detailed information about the WAL record would\n> > definitely help with some debugging scenarios. I've really missed\n> > stuff like this while debugging serious issues under time pressure.\n>\n> One problem that I often run into when performing analysis of VACUUM\n> using pg_walinspect is the issue of *who* pruned which heap page, for\n> any given PRUNE record. Was it VACUUM/autovacuum, or was it\n> opportunistic pruning? There is no way of knowing for sure right now.\n> You *cannot* rely on an xid of 0 as an indicator of a given PRUNE\n> record coming from VACUUM; it could just have been an opportunistic\n> prune operation that happened to take place when a SELECT query ran,\n> before any XID was ever allocated.\n>\n> I think that we should do something like the attached, to completely\n> avoid this ambiguity. This patch adds a new XLOG_HEAP2 bit that's\n> similar to XLOG_HEAP_INIT_PAGE -- XLOG_HEAP2_BYVACUUM. This allows all\n> XLOG_HEAP2 record types to indicate that they took place during\n> VACUUM, by XOR'ing the flag with the record type/info when\n> XLogInsert() is called.
For now this is only used by PRUNE records.\n> Tools like pg_walinspect will report a separate \"Heap2/PRUNE+BYVACUUM\"\n> record_type, as well as the unadorned Heap2/PRUNE record_type, which\n> we'll now know must have been opportunistic pruning.\n>\n> The approach of using a bit in the style of the heapam init bit makes\n> sense to me, because the bit is available, and works in a way that is\n> minimally invasive. Also, one can imagine needing to resolve a similar\n> ambiguity in the future, when (say) opportunistic freezing is added.\n>\n> I think that it makes sense to treat this within the scope of\n> Melanie's ongoing work to improve the instrumentation of these records\n> -- meaning that it's in scope for Postgres 16. Admittedly this is a\n> slightly creative interpretation, so if others disagree then I won't\n> argue. This is quite a small patch, though, which makes debugging\n> significantly easier. I think that there could be a great deal of\n> utility in being able to easily \"pair up\" corresponding\n> \"Heap2/PRUNE+BYVACUUM\" and \"Heap2/VACUUM\" records in debugging\n> scenarios. I can imagine linking these to \"Heap2/FREEZE_PAGE\" and\n> \"Heap2/VISIBLE\" records, too, since they're all closely related record\n> types.\n\nI really like this idea and would find it useful. I reviewed the patch\nand tried it out and it worked for me and code looked fine as well.\n\nI didn't include it in the attached patchset because I don't feel\nconfident enough in my own understanding of any potential implications\nof splitting up these record types to definitively endorse it.
But, if\nsomeone else felt comfortable with it, I would like to see it in the\ntree.\n\n- Melanie", "msg_date": "Fri, 7 Apr 2023 16:33:08 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Apr 7, 2023 at 1:33 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Attached v3 is cleaned up and includes a pg_walinspect docs update as\n> well as some edited comments in rmgr_utils.c\n\nAttached v4 has some small tweaks on your v3. Mostly just whitespace\ntweaks. Two slightly notable tweaks:\n\n* I changed the approach to globbing in the Makefile, rather than use\nyour original overwide formulation for the new rmgrdesc_utils.c file.\n\nWhat do you think of this approach?\n\n* Removed use of the restrict keyword.\n\nWhile \"restrict\" is C99, I'm not completely sure that it's totally\nsupported by Postgres. I'm a bit surprised that you opted to use it in\nthis particular patch.\n\nI meant to ask you about this earlier...why use restrict in this patch?\n\n> I've added such an example to pg_walinspect docs.\n\nThere already was a PRUNE example, though -- for the\npg_get_wal_record_info function (singular, not to be confused with\npg_get_wal_records_info).\n\nv4 makes the example a VACUUM record, which replaces the previous\npg_get_wal_record_info PRUNE example -- that needed to be updated\nanyway. This approach has the advantage of not being too verbose,\nwhile still showing some of this kind of detail.\n\nThis has the advantage of allowing pg_get_wal_records_info's example\nto continue to be an example that lacks a block reference (and so has\na NULL block_ref). This is a useful contrast against the new\npg_get_wal_block_info function.\n\n> I really like this idea and would find it useful.
I reviewed the patch\n> and tried it out and it worked for me and code looked fine as well.\n>\n> I didn't include it in the attached patchset because I don't feel\n> confident enough in my own understanding of any potential implications\n> of splitting up these record types to definitively endorse it. But, if\n> someone else felt comfortable with it, I would like to see it in the\n> tree.\n\nI'm not going to move on it now for 16, given the lack of feedback about it.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 7 Apr 2023 14:43:24 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Apr 7, 2023 at 5:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Apr 7, 2023 at 1:33 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > Attached v3 is cleaned up and includes a pg_walinspect docs update as\n> > well as some edited comments in rmgr_utils.c\n>\n> Attached v4 has some small tweaks on your v3. Mostly just whitespace\n> tweaks. Two slightly notable tweaks:\n>\n> * I changed the approach to globbing in the Makefile, rather than use\n> your original overwide formulation for the new rmgrdesc_utils.c file.\n>\n> What do you think of this approach?\n\nSeems fine.\n\n> * Removed use of the restrict keyword.\n>\n> While \"restrict\" is C99, I'm not completely sure that it's totally\n> supported by Postgres.
I'm a bit surprised that you opted to use it in\n> this particular patch.\n>\n> I meant to ask you about this earlier...why use restrict in this patch?\n\n\nSo, I think the signature I meant to have was:\n\nvoid\narray_desc(StringInfo buf, void *array, size_t elem_size, int count,\n void (*elem_desc) (StringInfo buf, const void *elem, void *data),\n void *data)\n\nBasically I wanted to indicate that elem was not and should not be\nmodified and data can be modified but that they should not be the same\nelement or overlap at all.\n\n> > I've added such an example to pg_walinspect docs.\n>\n> There already was a PRUNE example, though -- for the\n> pg_get_wal_record_info function (singular, not to be confused with\n> pg_get_wal_records_info).\n>\n> v4 makes the example a VACUUM record, which replaces the previous\n> pg_get_wal_record_info PRUNE example -- that needed to be updated\n> anyway. This approach has the advantage of not being too verbose,\n> while still showing some of this kind of detail.\n>\n> This has the advantage of allowing pg_get_wal_records_info's example\n> to continue to be an example that lacks a block reference (and so has\n> a NULL block_ref).
This is a useful contrast against the new\n> pg_get_wal_block_info function.\n\nLGTM\n\n- Melanie", "msg_date": "Fri, 7 Apr 2023 19:01:29 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Apr 7, 2023 at 4:01 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> LGTM\n\nPushed, thanks.\n\n-- \nPeter Geoghegan", "msg_date": "Fri, 7 Apr 2023 16:09:16 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Apr 7, 2023 at 7:09 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Apr 7, 2023 at 4:01 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > LGTM\n>\n> Pushed, thanks.\n\nIt's come to my attention that I forgot to include the btree patch earlier.\n\nPFA", "msg_date": "Fri, 7 Apr 2023 19:21:22 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Apr 7, 2023 at 4:21 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> It's come to my attention that I forgot to include the btree patch earlier.\n\nPushed that one too.\n\nAlso removed the use of the \"restrict\" keyword here.\n\nThanks\n--\nPeter Geoghegan", "msg_date": "Fri, 7 Apr 2023 16:46:56 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Fri, Apr 7, 2023 at 4:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Pushed that one too.\n\nI noticed that the nbtree VACUUM and DELETE record types have their\nupdate/xl_btree_update arrays output incorrectly. We cannot use the\ngeneric array_desc() approach with xl_btree_update elements, because\nthey're variable-width elements.
The problem is that array_desc() only deals\nwith fixed-width elements.\n\nI also changed some of the details around whitespace in arrays in the\nfixup patch (though I didn't do the same with objects). It doesn't\nseem useful to use so much whitespace for long arrays of integers\n(really page offset numbers). And I brought a few nbtree desc routines\nthat still used \";\" characters as punctuation in line with the new\nconvention.\n\nFinally, the patch revises the guidelines written for rmgr desc\nroutine authors. I don't think that we need to describe how to handle\noutputting whitespace in detail. It'll be quite natural for other\nrmgrs to use existing facilities such as array_desc() themselves,\nwhich makes whitespace type inconsistencies unlikely. I've tried to\nmake the limits of the guidelines clear. The main goal is to avoid\ngratuitous inconsistencies, and to provide a standard way of doing\nthings that many different rmgrs are likely to want to do, again and\nagain. But individual rmgrs still have a certain amount of discretion,\nwhich seems like a good thing to me (the alternative requires that we\nfix at least a couple of things in nbtdesc.c and in heapdesc.c, which\ndoesn't seem useful to me).\n\n--\nPeter Geoghegan", "msg_date": "Sun, 9 Apr 2023 17:12:30 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Sun, Apr 9, 2023 at 5:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I noticed that the nbtree VACUUM and DELETE record types have their\n> update/xl_btree_update arrays output incorrectly. We cannot use the\n> generic array_desc() approach with xl_btree_update elements, because\n> they're variable-width elements.
The problem is that array_desc() only deals\n> with fixed-width elements.\n\nI pushed this fix just now, though without the updates to the\nguidelines (or only minimal updates).\n\nA remaining problem with arrays appears in \"infobits\" fields for\nrecord types such as LOCK. Here's an example of the problem:\n\noff: 34, xid: 3, flags: 0x00, infobits: [, LOCK_ONLY, EXCL_LOCK ]\n\nClearly the punctuation from the array is malformed.\n\nA second issue (related to the first) is the name of the key itself,\n\"infobits\". While \"infobits\" actually seems fine in this particular\nexample, I don't think that we want to do the same for record types\nsuch as HEAP_UPDATE, since such records require that the description\nshow information about flags whose underlying field in the WAL record\nstruct is actually called \"old_infobits_set\". I think that we should\nbe outputting \"old_infobits: [ ... ] \" in the description of\nHEAP_UPDATE records, which isn't the case right now.\n\nA third issue is present in the nearby handling of xl_heap_truncate\nstatus flags. It's the same basic array punctuation issue again, so\narguably this is the same issue as the first one.\n\nAttached patch fixes all of these issues, and overhauls the guidelines\nin the way originally proposed by the nbtree fix patch (since I didn't\nkeep that part of the nbtree patch when I pushed it today).\n\nNote that the patch makes many individual (say) HOT_UPDATE records\nhave descriptions that look like this:\n\n... old_infobits: [], ...\n\nThis differs from HEAD, where the output is totally suppressed because\nthere are no flag bits to show.
I think that this behavior is more\nlogical and consistent overall.\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 10 Apr 2023 12:18:27 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Sun, Apr 9, 2023 at 8:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Apr 7, 2023 at 4:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Pushed that one too.\n>\n> I noticed that the nbtree VACUUM and DELETE record types have their\n> update/xl_btree_update arrays output incorrectly. We cannot use the\n> generic array_desc() approach with xl_btree_update elements, because\n> they're variable-width elements. The problem is that array_desc() only deals\n> with fixed-width elements.\n\nYou are right. I'm sorry for the rather egregious oversight.\n\nI took a look at the first patch even though you've pushed the bugfix\npart. Any reason you didn't use array_desc() for the inner array (of\n\"ptids\")? I find that following the pattern of using array_desc (when it\nis correct, of course!) helps me to quickly identify: \"okay, this is an\narray of x\" without having to stare at the loop too much.\n\nI will say that the prefix of p in \"ptid\" makes it sound like pointer to\na tid, which I don't believe is what you meant.\n\n> I also changed some of the details around whitespace in arrays in the\n> fixup patch (though I didn't do the same with objects). It doesn't\n> seem useful to use so much whitespace for long arrays of integers\n> (really page offset numbers). And I brought a few nbtree desc routines\n> that still used \";\" characters as punctuation in line with the new\n> convention.\n\nCool.\n\n> Finally, the patch revises the guidelines written for rmgr desc\n> routine authors. I don't think that we need to describe how to handle\n> outputting whitespace in detail. 
> It'll be quite natural for other\n> rmgrs to use existing facilities such as array_desc() themselves,\n> which makes whitespace type inconsistencies unlikely. I've tried to\n> make the limits of the guidelines clear. The main goal is to avoid\n> gratuitous inconsistencies, and to provide a standard way of doing\n> things that many different rmgrs are likely to want to do, again and\n> again. But individual rmgrs still have a certain amount of discretion,\n> which seems like a good thing to me (the alternative requires that we\n> fix at least a couple of things in nbtdesc.c and in heapdesc.c, which\n> doesn't seem useful to me).\n\nI like the new guidelines you proposed (in the patch).\nThey are well-written and clear.\n\n\nOn Mon, Apr 10, 2023 at 3:18 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Apr 9, 2023 at 5:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I noticed that the nbtree VACUUM and DELETE record types have their\n> > update/xl_btree_update arrays output incorrectly. We cannot use the\n> > generic array_desc() approach with xl_btree_update elements, because\n> > they're variable-width elements. The problem is that array_desc() only deals\n> > with fixed-width elements.\n>\n> I pushed this fix just now, though without the updates to the\n> guidelines (or only minimal updates).\n>\n> A remaining problem with arrays appears in \"infobits\" fields for\n> record types such as LOCK. Here's an example of the problem:\n>\n> off: 34, xid: 3, flags: 0x00, infobits: [, LOCK_ONLY, EXCL_LOCK ]\n>\n> Clearly the punctuation from the array is malformed.\n\nSo, I did do this on purpose -- because I didn't want to have to do the\ngymnastics to determine which flag was hit first (though it looks like I\nmistakenly omitted the comma prepending IS_MULTI -- that was not\nintentional).\nI recognized that the output doesn't look nice, but I hadn't exactly\nthought of it as malformed.
Perhaps you are right.\n\nI will say that I am still not a fan of the \"if (first) else\" logic in\nyour attached patch.\n\nI've put my suggestion for how to do it instead inline with the code\ndiff below for clarity.\n\ndiff --git a/src/backend/access/rmgrdesc/heapdesc.c\nb/src/backend/access/rmgrdesc/heapdesc.c\nindex 3bd083875..a64d14c2c 100644\n--- a/src/backend/access/rmgrdesc/heapdesc.c\n+++ b/src/backend/access/rmgrdesc/heapdesc.c\n@@ -18,29 +18,75 @@\n #include \"access/rmgrdesc_utils.h\"\n\n static void\n-out_infobits(StringInfo buf, uint8 infobits)\n+infobits_desc(StringInfo buf, uint8 infobits, const char *keyname)\n...\n if (infobits & XLHL_KEYS_UPDATED)\n- appendStringInfoString(buf, \", KEYS_UPDATED\");\n+ {\n+ if (first)\n+ appendStringInfoString(buf, \"KEYS_UPDATED\");\n+ else\n+ appendStringInfoString(buf, \", KEYS_UPDATED\");\n+ first = false;\n+ }\n\nHow about we have the flags use a trailing comma and space and then\noverwrite the last one with something like this:\n\n if (infobits & XLHL_KEYS_UPDATED)\n appendStringInfoString(buf, \"KEYS_UPDATED, \");\n buf->data[buf->len -= strlen(\", \")] = '\\0';\n\n\n@@ -230,7 +271,9 @@ heap2_desc(StringInfo buf, XLogReaderState *record)\n OffsetNumber *offsets;\n\nI don't prefer this to what I had, which is also correct, right?\n\n plans = (xl_heap_freeze_plan *)\nXLogRecGetBlockData(record, 0, NULL);\n- offsets = (OffsetNumber *) &plans[xlrec->nplans];\n+ offsets = (OffsetNumber *) ((char *) plans +\n+ (xlrec->nplans *\n+ sizeof(xl_heap_freeze_plan)));\n appendStringInfoString(buf, \", plans:\");\n array_desc(buf, plans, sizeof(xl_heap_freeze_plan), xlrec->nplans,\n &plan_elem_desc, &offsets);\n\n> A second issue (related to the first) is the name of the key itself,\n> \"infobits\".
While \"infobits\" actually seems fine in this particular\n> example, I don't think that we want to do the same for record types\n> such as HEAP_UPDATE, since such records require that the description\n> show information about flags whose underlying field in the WAL record\n> struct is actually called \"old_infobits_set\". I think that we should\n> be outputting \"old_infobits: [ ... ] \" in the description of\n> HEAP_UPDATE records, which isn't the case right now.\n\n--- a/src/backend/access/rmgrdesc/heapdesc.c\n+++ b/src/backend/access/rmgrdesc/heapdesc.c\n@@ -18,29 +18,75 @@\n #include \"access/rmgrdesc_utils.h\"\n\n static void\n-out_infobits(StringInfo buf, uint8 infobits)\n+infobits_desc(StringInfo buf, uint8 infobits, const char *keyname)\n\nI like the keyname parameter.\n\n> Note that the patch makes many individual (say) HOT_UPDATE records\n> have descriptions that look like this:\n>\n> ... old_infobits: [], ...\n>\n> This differs from HEAD, where the output is totally suppressed because\n> there are no flag bits to show. I think that this behavior is more\n> logical and consistent overall.\n\nYea, I think it is better to include things and show that they are empty\nthen omit them. I find it more clear.\n\n- Melanie\n\n\n", "msg_date": "Mon, 10 Apr 2023 18:03:56 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Apr 10, 2023 at 3:04 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> I took a look at the first patch even though you've pushed the bugfix\n> part. Any reason you didn't use array_desc() for the inner array (of\n> \"ptids\")? I find that following the pattern of using array_desc (when it\n> is correct, of course!) helps me to quickly identify: \"okay, this is an\n> array of x\" without having to stare at the loop too much.\n\nIt was fairly arbitrary. 
I was thinking \"we can't use array_desc for\nthis\", which wasn't 100% true, but seemed close enough. It helped that\nthis allowed me to remove uint16_elem_desc(), which likely wouldn't\nhave been reused later on.\n\n> I will say that the prefix of p in \"ptid\" makes it sound like pointer to\n> a tid, which I don't believe is what you meant.\n\nI was thinking of the symbol name \"ptid\" from\n_bt_delitems_delete_check() (it even appears in code comments). I\nintended \"posting list TID\". But \"pointer to a TID\" actually kinda\nworks too, since these are offsets into a posting list (a simple\nItemPointerData array) for those TIDs that we're in the process of\nremoving/deleted from the tuple.\n\n> I like the new guidelines you proposed (in the patch).\n> They are well-written and clear.\n\nThanks. The guidelines might well become stricter in the future. Right\nnow I'd be happy if everybody could at least be in rough agreement\nabout the most basic things.\n\n> I recognized that the output doesn't look nice, but I hadn't exactly\n> thought of it as malformed. Perhaps you are right.\n\nIt does seem like an annoying thing to have to handle if you actually\nwant to parse the array. 
It requires a different approach to every\nother array, which seems bad.\n\n> I will say and I am still not a fan of the \"if (first) else\" logic in\n> your attached patch.\n\nI agree that my approach there was pretty ugly.\n\n> How about we have the flags use a trailing comma and space and then\n> overwrite the last one with something this:\n>\n> if (infobits & XLHL_KEYS_UPDATED)\n> appendStringInfoString(buf, \"KEYS_UPDATED, \");\n> buf->data[buf->len -= strlen(\", \")] = '\\0';\n\nI'll try something like that instead.\n\n> - offsets = (OffsetNumber *) &plans[xlrec->nplans];\n> + offsets = (OffsetNumber *) ((char *) plans +\n> + (xlrec->nplans *\n> + sizeof(xl_heap_freeze_plan)));\n> appendStringInfoString(buf, \", plans:\");\n> array_desc(buf, plans, sizeof(xl_heap_freeze_plan), xlrec->nplans,\n> &plan_elem_desc, &offsets);\n\nI thought that it made sense to match the FREEZE_PAGE REDO routine.\n\nAnother fairly arbitrary change, to be honest.\n\n> > Note that the patch makes many individual (say) HOT_UPDATE records\n> > have descriptions that look like this:\n> >\n> > ... old_infobits: [], ...\n> >\n> > This differs from HEAD, where the output is totally suppressed because\n> > there are no flag bits to show. I think that this behavior is more\n> > logical and consistent overall.\n>\n> Yea, I think it is better to include things and show that they are empty\n> then omit them. I find it more clear.\n\nRight. It makes sense for something like this, because generally\nspeaking the structures aren't nested in any real sense. They're also\nvery static -- WAL records have a fixed structure. 
So it's unlikely\nthat anybody is going to try to parse the description before knowing\nwhich particular WAL record type (or perhaps types, plural) are\ninvolved.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 Apr 2023 16:31:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Apr 10, 2023 at 04:31:44PM -0700, Peter Geoghegan wrote:\n> On Mon, Apr 10, 2023 at 3:04 PM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > \n> > I will say that the prefix of p in \"ptid\" makes it sound like pointer to\n> > a tid, which I don't believe is what you meant.\n> \n> I was thinking of the symbol name \"ptid\" from\n> _bt_delitems_delete_check() (it even appears in code comments). I\n> intended \"posting list TID\". But \"pointer to a TID\" actually kinda\n> works too, since these are offsets into a posting list (a simple\n> ItemPointerData array) for those TIDs that we're in the process of\n> removing/deleted from the tuple.\n\nIf you keep the name, I'd explain it briefly in a comment above the code\nthen -- for those of us who spend less time with btrees. It is a tool\nthat will be often used by developers, so it is not unreasonable to\nassume they may read the code if they are confused.\n\n- Melanie\n\n\n", "msg_date": "Mon, 10 Apr 2023 20:23:15 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Apr 10, 2023 at 5:23 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> If you keep the name, I'd explain it briefly in a comment above the code\n> then -- for those of us who spend less time with btrees. 
It is a tool\n> that will be often used by developers, so it is not unreasonable to\n> assume they may read the code if they are confused.\n\nOkay, I'll do something about that shortly.\n\nAttached v2 deals with the \"trailing comma and space in flags array\"\nheap desc issue using an approach that's along the same lines as your\nsuggested approach. What do you think?\n\n-- \nPeter Geoghegan", "msg_date": "Mon, 10 Apr 2023 17:39:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "Hi,\n\nstatic void\ninfobits_desc(StringInfo buf, uint8 infobits, const char *keyname)\n{\n appendStringInfo(buf, \"%s: [\", keyname);\n\nWhy can we assume that there will be no space at the end here?\n\nI know we need to be able to avoid doing the comma overwriting if no\nflags were set. In general, we expect record description elements to\nprepend themselves with commas and spaces, but these infobits, for\nexample, use a trailing comma and space. If someone adds a description\nelement with a trailing space, they will trip this assert. We should at\nleast add a comment explaining this assertion so someone knows what to\ndo if they trip it.\n\nOtherwise, we can return early if no flags are set. 
That will probably\nmake for slightly messier code since we would still have to construct\nthe empty list.\n\n Assert(buf->data[buf->len - 1] != ' ');\n\n if (infobits & XLHL_XMAX_IS_MULTI)\n appendStringInfoString(buf, \"IS_MULTI, \");\n if (infobits & XLHL_XMAX_LOCK_ONLY)\n appendStringInfoString(buf, \"LOCK_ONLY, \");\n if (infobits & XLHL_XMAX_EXCL_LOCK)\n appendStringInfoString(buf, \"EXCL_LOCK, \");\n if (infobits & XLHL_XMAX_KEYSHR_LOCK)\n appendStringInfoString(buf, \"KEYSHR_LOCK, \");\n if (infobits & XLHL_KEYS_UPDATED)\n appendStringInfoString(buf, \"KEYS_UPDATED, \");\n\n if (buf->data[buf->len - 1] == ' ')\n {\n /* Truncate-away final unneeded \", \" */\n Assert(buf->data[buf->len - 2] == ',');\n buf->len -= 2;\n buf->data[buf->len] = '\\0';\n }\n\n appendStringInfoString(buf, \"]\");\n}\n\nAlso you didn't add the same assert to truncate_flags_desc().\n\nstatic void\ntruncate_flags_desc(StringInfo buf, uint8 flags)\n{\n appendStringInfoString(buf, \"flags: [\");\n\n if (flags & XLH_TRUNCATE_CASCADE)\n appendStringInfoString(buf, \"CASCADE, \");\n if (flags & XLH_TRUNCATE_RESTART_SEQS)\n appendStringInfoString(buf, \"RESTART_SEQS, \");\n\n if (buf->data[buf->len - 1] == ' ')\n {\n /* Truncate-away final unneeded \", \" */\n Assert(buf->data[buf->len - 2] == ',');\n buf->len -= 2;\n buf->data[buf->len] = '\\0';\n }\n\n appendStringInfoString(buf, \"]\");\n}\n\nNot the fault of this patch, but I also noticed that heap UPDATE and\nHOT_UPDATE records have xmax twice and don't differentiate between new\nand old. 
I think that was probably a mistake.\n\ndescription | off: 119, xmax: 1105, flags: 0x00, old_infobits:\n[], new off: 100, xmax 0\n\n else if (info == XLOG_HEAP_UPDATE)\n {\n xl_heap_update *xlrec = (xl_heap_update *) rec;\n\n appendStringInfo(buf, \"off: %u, xmax: %u, flags: 0x%02X, \",\n xlrec->old_offnum,\n xlrec->old_xmax,\n xlrec->flags);\n infobits_desc(buf, xlrec->old_infobits_set, \"old_infobits\");\n appendStringInfo(buf, \", new off: %u, xmax %u\",\n xlrec->new_offnum,\n xlrec->new_xmax);\n }\n else if (info == XLOG_HEAP_HOT_UPDATE)\n {\n xl_heap_update *xlrec = (xl_heap_update *) rec;\n\n appendStringInfo(buf, \"off: %u, xmax: %u, flags: 0x%02X, \",\n xlrec->old_offnum,\n xlrec->old_xmax,\n xlrec->flags);\n infobits_desc(buf, xlrec->old_infobits_set, \"old_infobits\");\n appendStringInfo(buf, \", new off: %u, xmax: %u\",\n xlrec->new_offnum,\n xlrec->new_xmax);\n }\n\nAlso not the fault of this patch, but looking at the output while using\nthis, I realized truncate record type has a stringified version of its\nflags while other record types, like update, don't. Do you think this\nmakes sense? Perhaps not something we can change now, though...\n\ndescription | off: 1, xmax: 1183, flags: 0x00, old_infobits: [],\nnew off: 119, xmax 0\n\nAlso not the fault of this patch, but I noticed that leaftopparent is\noften InvalidBlockNumber--which shows up as 4294967295. I wonder if\nanyone would be confused by this. Maybe devs know that this value is\nInvalidBlockNumber. 
In the future, perhaps we should interpolate the\nstring \"InvalidBlockNumber\"?\n\ndescription | left: 436, right: 389, level: 0, safexid: 0:1091,\nleafleft: 436, leafright: 389, leaftopparent: 4294967295\n\n- Melanie\n\n\n", "msg_date": "Tue, 11 Apr 2023 10:39:47 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Apr 11, 2023 at 7:40 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> static void\n> infobits_desc(StringInfo buf, uint8 infobits, const char *keyname)\n> {\n> appendStringInfo(buf, \"%s: [\", keyname);\n>\n> Why can we assume that there will be no space at the end here?\n\nI don't think that anybody is going to try that, but if they do then\nthe assertion will fail reliably.\n\n> I know we need to be able to avoid doing the comma overwriting if no\n> flags were set. In general, we expect record description elements to\n> prepend themselves with commas and spaces, but these infobits, for\n> example, use a trailing comma and space. If someone adds a description\n> element with a trailing space, they will trip this assert. We should at\n> least add a comment explaining this assertion so someone knows what to\n> do if they trip it.\n\nThe invariant here is roughly: caller's keyname argument cannot have\ntrailing spaces or punctuation characters. It looks like it would be\ninconvenient to write a precise assertion for that, but it doesn't\nfeel particularly necessary, given that this is just a static helper\nfunction.\n\n> Otherwise, we can return early if no flags are set. 
> That will probably\n> make for slightly messier code since we would still have to construct\n> the empty list.\n\nI prefer to keep this as simple as possible for now.\n\n> Also you didn't add the same assert to truncate_flags_desc().\n\nThat's because truncate_flags_desc doesn't have a \"keyname\" argument.\nThough it does have an assertion at the end that is almost equivalent:\nthe \"Assert(buf->data[buf->len - 2] == ',')\" assertion (a matching\nassertion appears at the end of infobits_desc).\n\n> Not the fault of this patch, but I also noticed that heap UPDATE and\n> HOT_UPDATE records have xmax twice and don't differentiate between new\n> and old. I think that was probably a mistake.\n>\n> description | off: 119, xmax: 1105, flags: 0x00, old_infobits:\n> [], new off: 100, xmax 0\n\nThat doesn't seem great to me either. I don't like this ambiguity,\nbecause it seems like it makes the description hard to parse in a way\nthat flies in the face of what we're trying to do here, in general.\nSo it seems like it might be worth fixing now, in the scope of this\npatch.\n\n> Also not the fault of this patch, but looking at the output while using\n> this, I realized truncate record type has a stringified version of its\n> flags while other record types, like update, don't.
You can see the same thing with HEAP_UPDATE and\nHEAP_HOT_UPDATE, which have stringified constants for infomask bits,\nbut not for the xl_heap_update.flags status bits.\n\nI don't see any principled reason why such an inconsistency should\nexist -- and we're talking about a pretty glaring inconsistency here.\nOn the other hand I don't think that we're obligated to do anything\nabout it for 16.\n\n> Also not the fault of this patch, but I noticed that leaftopparent is\n> often InvalidBlockNumber--which shows up as 4294967295. I wonder if\n> anyone would be confused by this. Maybe devs know that this value is\n> InvalidBlockNumber. In the future, perhaps we should interpolate the\n> string \"InvalidBlockNumber\"?\n>\n> description | left: 436, right: 389, level: 0, safexid: 0:1091,\n> leafleft: 436, leafright: 389, leaftopparent: 4294967295\n\nIn my personal opinion (this is a totally subjective question), the\ncurrent approach here is okay because (on its own) \"leaftopparent:\n4294967295\" isn't any more or less meaningful than \"leaftopparent:\nInvalidBlockNumber\". It's not as if the REDO routine actually relies\non the value ever being InvalidBlockNumber at all (except in an\nassertion).\n\nPlus it's easier to parse as-is. That's what swings it for me, in fact\n(as with the \"two xmax fields in update records\" question).\n\nThis is the kind of question that tends to lead to bikeshedding. The\nguidelines should avoid taking a firm position on these more\nsubjective questions, where we must make a subjective trade-off.\nEspecially a trade-off around how faithfully we represent the physical\nWAL record versus readability (whatever \"readability\" means). I\npondered a similar trade-off in comments added to delvacuum_desc. 
That\ncontributed to my feeling that this is best left up to individual rmgr\ndesc routines.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 Apr 2023 10:34:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Apr 11, 2023 at 10:34 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > description | off: 119, xmax: 1105, flags: 0x00, old_infobits:\n> > [], new off: 100, xmax 0\n>\n> That doesn't seem great to me either. I don't like this ambiguity,\n> because it seems like it makes the description hard to parse in a way\n> that flies in the face of what we're trying to do here, in general.\n> So it seems like it might be worth fixing now, in the scope of this\n> patch.\n\nAttached revision deals with this by spelling out the names in full\n(e.g., \"old_xmax\" and \"new_xmax\"). It also reorders the output fields\nto match the order from the physical UPDATE, HOT_UPDATE, and LOCK WAL\nrecord types, on the theory that those should match the physical\nrecord (unless there is a good reason not to, which doesn't apply\nhere). I also removed some inconsistencies between\nxl_heap_lock_updated and xl_heap_lock, since they're very similar\nrecord types.\n\nThe revision also adds an extra sentence to the guidelines, since this\nseems like something that we're entitled to take a relatively firm\nposition on. Finally, it also adds a comment about the rules for\ninfobits_desc callers in header comments for the function, per your\nconcern about that.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 11 Apr 2023 11:48:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Apr 11, 2023 at 11:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached revision deals with this by spelling out the names in full\n> (e.g., \"old_xmax\" and \"new_xmax\"). 
> It also reorders the output fields\n> to match the order from the physical UPDATE, HOT_UPDATE, and LOCK WAL\n> record types, on the theory that those should match the physical\n> record (unless there is a good reason not to, which doesn't apply\n> here).\n\nI just noticed that we don't even show xmax in the case of DELETE\nrecords. Perhaps the original assumption is that it must match the\nrecord's own XID, but that's not true after the MultiXact enhancements\nfor foreign key locking added to 9.3 (and in any case there is no\nreason at all to show xmax in UPDATE but not in DELETE).\n\nAttached revision v4 fixes this, making DELETE, UPDATE, HOT_UPDATE,\nLOCK, and LOCK_UPDATED record types consistent with each other in\nterms of the key names output by the heap desc routine. The field\norder also needed a couple of tweaks for struct consistency (and\ncross-record consistency) for v4.\n\n-- \nPeter Geoghegan", "msg_date": "Tue, 11 Apr 2023 12:22:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Apr 11, 2023 at 1:35 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Apr 11, 2023 at 7:40 AM Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > Not the fault of this patch, but I also noticed that heap UPDATE and\n> > HOT_UPDATE records have xmax twice and don't differentiate between new\n> > and old. I think that was probably a mistake.\n> >\n> > description | off: 119, xmax: 1105, flags: 0x00, old_infobits:\n> > [], new off: 100, xmax 0\n>\n> That doesn't seem great to me either.
> I don't like this ambiguity,\n> because it seems like it makes the description hard to parse in a way\n> that flies in the face of what we're trying to do here, in general.\n> So it seems like it might be worth fixing now, in the scope of this\n> patch.\n\nAgreed.\n\nOn Tue, Apr 11, 2023 at 3:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Apr 11, 2023 at 11:48 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Attached revision deals with this by spelling out the names in full\n> > (e.g., \"old_xmax\" and \"new_xmax\"). It also reorders the output fields\n> > to match the order from the physical UPDATE, HOT_UPDATE, and LOCK WAL\n> > record types, on the theory that those should match the physical\n> > record (unless there is a good reason not to, which doesn't apply\n> > here).\n>\n> I just noticed that we don't even show xmax in the case of DELETE\n> records. Perhaps the original assumption is that it must match the\n> record's own XID, but that's not true after the MultiXact enhancements\n> for foreign key locking added to 9.3 (and in any case there is no\n> reason at all to show xmax in UPDATE but not in DELETE).\n>\n> Attached revision v4 fixes this, making DELETE, UPDATE, HOT_UPDATE,\n> LOCK, and LOCK_UPDATED record types consistent with each other in\n> terms of the key names output by the heap desc routine. The field\n> order also needed a couple of tweaks for struct consistency (and\n> cross-record consistency) for v4.\n\nCode in v4 all seems fine to me.\nI like the update guidelines comment.\n\nI agree it would be nice for xl_heap_lock->locking_xid to be renamed\nxmax for clarity. I would suggest that if you don't intend to put it\nin a separate commit, you mention it explicitly in the final commit\nmessage.
Its motivation isn't immediately obvious to the reader.\n\n- Melanie\n\n\n", "msg_date": "Tue, 11 Apr 2023 17:29:40 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Apr 11, 2023 at 2:29 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> > That doesn't seem great to me either. I don't like this ambiguity,\n> > because it seems like it makes the description hard to parse in a way\n> > that flies in the face of what we're trying to do here, in general.\n> > So it seems like it might be worth fixing now, in the scope of this\n> > patch.\n>\n> Agreed.\n\nGreat -- pushed a fix for this just now, which included that change.\n\n> I agree it would be nice for xl_heap_lock->locking_xid to be renamed\n> xmax for clarity. I would suggest that if you don't intend to put it\n> in a separate commit, you mention it explicitly in the final commit\n> message. Its motivation isn't immediately obvious to the reader.\n\nWhat I ended up doing is making that part of a bug fix for a minor\nbuglet I noticed in passing -- it became part of the \"Fix xl_heap_lock\nWAL record field's data type\" commit from a bit earlier on.\n\nThanks for your help with the follow-up work. Seems like we're done\nwith this now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 11 Apr 2023 15:29:18 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On 12/04/2023 01:29, Peter Geoghegan wrote:\n> Thanks for your help with the follow-up work. 
> Seems like we're done\n> with this now.\n\nThis is still listed in the July commitfest; is there some work remaining?\n\nI'm late to the party, but regarding commit c03c2eae0a, which added the \nguidelines for writing formatting desc functions:\n\nYou moved the comment from rmgrdesc_utils.c into rmgrdesc_utils.h, but I \ndon't think that was a good idea. Our usual convention is to have the \nfunction comment in the .c file, not at the declaration in the header \nfile. When I want to know what a function does, I jump to the .c file, \nand might miss the comment in the header entirely.\n\nLet's add a src/backend/access/rmgrdesc/README file.
We don't currently\n> have any explanation anywhere why the rmgr desc functions are in a\n> separate directory. The README would be a good place to explain that,\n> and to have the formatting guidelines. See attached.\n\nI agree that it's better this way, though.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 10 Jul 2023 22:29:17 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Jul 10, 2023 at 10:29 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Let's add a src/backend/access/rmgrdesc/README file. We don't currently\n> > have any explanation anywhere why the rmgr desc functions are in a\n> > separate directory. The README would be a good place to explain that,\n> > and to have the formatting guidelines. See attached.\n>\n> I agree that it's better this way, though.\n\nDid you forget to follow up here?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 25 Jul 2023 16:06:26 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Mon, Jul 10, 2023 at 3:44 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I'm late to the party, but regarding commit c03c2eae0a, which added the\n> guidelines for writing formatting desc functions:\n>\n> You moved the comment from rmgrdesc_utils.c into rmgrdesc_utils.h, but I\n> don't think that was a good idea. Our usual convention is to have the\n> function comment in the .c file, not at the declaration in the header\n> file. When I want to know what a function does, I jump to the .c file,\n> and might miss the comment in the header entirely.\n>\n> Let's add a src/backend/access/rmgrdesc/README file. We don't currently\n> have any explanation anywhere why the rmgr desc functions are in a\n> separate directory. The README would be a good place to explain that,\n> and to have the formatting guidelines. 
See attached.\n\ndiff --git a/src/backend/access/rmgrdesc/README\nb/src/backend/access/rmgrdesc/README\nnew file mode 100644\nindex 0000000000..abe84b9f11\n--- /dev/null\n+++ b/src/backend/access/rmgrdesc/README\n@@ -0,0 +1,60 @@\n+src/backend/access/rmgrdesc/README\n+\n+WAL resource manager description functions\n+==========================================\n+\n+For debugging purposes, there is a \"description function\", or rmgrdesc\n+function, for each WAL resource manager. The rmgrdesc function parses the WAL\n+record and prints the contents of the WAL record in a somewhat human-readable\n+format.\n+\n+The rmgrdesc functions for all resource managers are gathered in this\n+directory, because they are also used in the stand-alone pg_waldump program.\n\n\"standalone\" seems the more common spelling of this adjective in the\ncodebase today.\n\n+They could potentially be used by out-of-tree debugging tools too, although\n+the the functions or the output format should not be considered a stable API.\n\nYou have an extra \"the\".\n\nI might phrase the last bit as \"neither the description functions nor\nthe output format should be considered part of a stable API\"\n\n+Guidelines for rmgrdesc output format\n+=====================================\n\nI noticed you used === for both headings and wondered if it was\nintentional. Other READMEs I looked at in src/backend/access tend to\nhave a single heading underlined with ==== and then subsequent\nheadings are underlined with -----. 
I could see an argument either way\nhere, but I just thought I would bring it up in case it was not a\nconscious choice.\n\nOtherwise, LGTM.\n\n- Melanie\n\n\n", "msg_date": "Mon, 4 Sep 2023 16:02:51 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On 04/09/2023 23:02, Melanie Plageman wrote:\n> I might phrase the last bit as \"neither the description functions nor\n> the output format should be considered part of a stable API\"\n> \n> +Guidelines for rmgrdesc output format\n> +=====================================\n> \n> I noticed you used === for both headings and wondered if it was\n> intentional. Other READMEs I looked at in src/backend/access tend to\n> have a single heading underlined with ==== and then subsequent\n> headings are underlined with -----. I could see an argument either way\n> here, but I just thought I would bring it up in case it was not a\n> conscious choice.\n> \n> Otherwise, LGTM.\n\nMade these changes and committed. Thank you!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 2 Oct 2023 12:19:29 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Show various offset arrays for heap WAL records" }, { "msg_contents": "On Tue, Mar 21, 2023 at 3:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I think that we should do something like the attached, to completely\n> avoid this ambiguity. This patch adds a new XLOG_HEAP2 bit that's\n> similar to XLOG_HEAP_INIT_PAGE -- XLOG_HEAP2_BYVACUUM. This allows all\n> XLOG_HEAP2 record types to indicate that they took place during\n> VACUUM, by XOR'ing the flag with the record type/info when\n> XLogInsert() is called. 
For now this is only used by PRUNE records.\n> Tools like pg_walinspect will report a separate \"Heap2/PRUNE+BYVACUUM\"\n> record_type, as well as the unadorned Heap2/PRUNE record_type, which\n> we'll now know must have been opportunistic pruning.\n>\n> The approach of using a bit in the style of the heapam init bit makes\n> sense to me, because the bit is available, and works in a way that is\n> minimally invasive. Also, one can imagine needing to resolve a similar\n> ambiguity in the future, when (say) opportunistic freezing is added.\n\nStarting a new, dedicated thread to keep track of this in the CF app.\n\nThis patch bitrot. Attached is v2, rebased on top of HEAD.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 9 Dec 2023 13:48:45 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Recording whether Heap2/PRUNE records are from VACUUM or from\n opportunistic pruning (Was: Show various offset arrays for heap WAL records)" }, { "msg_contents": "On 09/12/2023 23:48, Peter Geoghegan wrote:\n> On Tue, Mar 21, 2023 at 3:37 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> I think that we should do something like the attached, to completely\n>> avoid this ambiguity. This patch adds a new XLOG_HEAP2 bit that's\n>> similar to XLOG_HEAP_INIT_PAGE -- XLOG_HEAP2_BYVACUUM. This allows all\n>> XLOG_HEAP2 record types to indicate that they took place during\n>> VACUUM, by XOR'ing the flag with the record type/info when\n>> XLogInsert() is called. For now this is only used by PRUNE records.\n>> Tools like pg_walinspect will report a separate \"Heap2/PRUNE+BYVACUUM\"\n>> record_type, as well as the unadorned Heap2/PRUNE record_type, which\n>> we'll now know must have been opportunistic pruning.\n>>\n>> The approach of using a bit in the style of the heapam init bit makes\n>> sense to me, because the bit is available, and works in a way that is\n>> minimally invasive. 
Also, one can imagine needing to resolve a similar\n>> ambiguity in the future, when (say) opportunistic freezing is added.\n> \n> Starting a new, dedicated thread to keep track of this in the CF app.\n> \n> This patch bitrot. Attached is v2, rebased on top of HEAD.\n\nI included changes like this in commit f83d709760 (\"Merge prune, freeze \nand vacuum WAL record formats\"). Marking this as Committed in the \ncommitfest.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Mon, 25 Mar 2024 15:04:54 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Recording whether Heap2/PRUNE records are from VACUUM or from\n opportunistic pruning (Was: Show various offset arrays for heap WAL records)" }, { "msg_contents": "On Mon, Mar 25, 2024 at 9:04 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I included changes like this in commit f83d709760 (\"Merge prune, freeze\n> and vacuum WAL record formats\"). Marking this as Committed in the\n> commitfest.\n\nThanks for making sure that that happened. I suspect that the amount\nof pruning performed opportunistically is sometimes much higher than\ngenerally assumed, so having a way of measuring that seems like it\nmight lead to valuable insights.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 25 Mar 2024 10:21:14 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Recording whether Heap2/PRUNE records are from VACUUM or from\n opportunistic pruning (Was: Show various offset arrays for heap WAL records)" } ]
[ { "msg_contents": "Hi all,\n\nThe problem mentioned in $subject has been discussed here:\nhttps://www.postgresql.org/message-id/DM5PR0501MB38800D9E4605BCA72DD35557CCE10@DM5PR0501MB3880.namprd05.prod.outlook.com\n\nThis issue has been fixed by 947789f, without a backpatch to v12 (as\nper 96cdeae) as the risk seemed rather limited seen from here, back\nwhen the problem was discussed. Unfortunately, I have seen customer\ndeployments on v12 and v13 playing with pg_database entries large\nenough that they would have toast entries and would be able to trigger\nthe problem fixed in v14 at the end of a vacuum.\n\nAny objections about getting 947789f applied to REL_13_STABLE and\nREL_12_STABLE and see this issue completely gone from all the versions\nsupported?\n\nThanks,\n--\nMichael", "msg_date": "Tue, 10 Jan 2023 16:43:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Avoiding \"wrong tuple length\" errors at the end of VACUUM on\n pg_database update (Backpatch of 947789f to v12 and v13)" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Any objections about getting 947789f applied to REL_13_STABLE and\n> REL_12_STABLE and see this issue completely gone from all the versions\n> supported?\n\nNo objections to back-patching the fix, but I wonder if we can find\nsome mechanical way to prevent this sort of error in future. It's\nsurely far from obvious that we need to apply heap_inplace_update\nto a raw tuple rather than a syscache entry.\n\nA partial fix perhaps could be to verify that the supplied tuple\nis the same length as what we see on-disk?
It's partial because\nit'd only trigger if there had actually been a toasted-field\nexpansion, so it'd most likely not catch such coding errors\nduring developer testing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 02:57:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Avoiding \"wrong tuple length\" errors at the end of VACUUM on\n pg_database update (Backpatch of 947789f to v12 and v13)" }, { "msg_contents": "On Tue, Jan 10, 2023 at 02:57:43AM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Any objections about getting 947789f applied to REL_13_STABLE and\n>> REL_12_STABLE and see this issue completely gone from all the versions\n>> supported?\n> \n> No objections to back-patching the fix...\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 09:54:31 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Avoiding \"wrong tuple length\" errors at the end of VACUUM on\n pg_database update (Backpatch of 947789f to v12 and v13)" }, { "msg_contents": "Hi,\n\nOn 2023-01-10 02:57:43 -0500, Tom Lane wrote:\n> No objections to back-patching the fix, but I wonder if we can find\n> some mechanical way to prevent this sort of error in future.\n\nWhat about a define that forces external toasting very aggressively for\ncatalog tables, iff they have a toast table? I suspect doing so for\nnon-catalog tables as well would trigger test changes. 
Running a buildfarm\nanimal with that would at least make issues like this much easier to discover.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Jan 2023 11:05:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Avoiding \"wrong tuple length\" errors at the end of VACUUM on\n pg_database update (Backpatch of 947789f to v12 and v13)" }, { "msg_contents": "On Tue, Jan 10, 2023 at 09:54:31AM -0800, Nathan Bossart wrote:\n> +1\n\nOkay, thanks. Done this part as of c0ee694 and 72b6098, then.\n--\nMichael", "msg_date": "Wed, 11 Jan 2023 15:27:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Avoiding \"wrong tuple length\" errors at the end of VACUUM on\n pg_database update (Backpatch of 947789f to v12 and v13)" }, { "msg_contents": "On Tue, Jan 10, 2023 at 11:05:04AM -0800, Andres Freund wrote:\n> What about a define that forces external toasting very aggressively for\n> catalog tables, iff they have a toast table? I suspect doing so for\n> non-catalog tables as well would trigger test changes. Running a buildfarm\n> animal with that would at least make issues like this much easier to discover.\n\nHmm. That could work. I guess that you mean to do something like\nthat in SearchSysCacheCopy() when we build the tuple copy. There is\nan access to the cacheId, meaning that we know the catalog\ninvolved. Still, we would need a lookup at its pg_class entry to\ncheck after a reltoastrelid, meaning an extra relation opened, which\nwould be fine under a specific #define, anyway..\n--\nMichael", "msg_date": "Wed, 11 Jan 2023 15:40:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: Avoiding \"wrong tuple length\" errors at the end of VACUUM on\n pg_database update (Backpatch of 947789f to v12 and v13)" } ]
[ { "msg_contents": "Hi hackers,\n\nWhile working on [1], I noticed that xl_hash_vacuum_one_page.ntuples is an int.\n\nUnless I'm missing something, It seems to me that it would make more sense to use an uint16 (like this is done for\ngistxlogDelete.ntodelete for example).\n\nPlease find attached a patch proposal to do so.\n\nWhile that does not currently change the struct size:\n\nNo patch:\n\n(gdb) ptype /o struct xl_hash_vacuum_one_page\n/* offset | size */ type = struct xl_hash_vacuum_one_page {\n/* 0 | 4 */ TransactionId snapshotConflictHorizon;\n/* 4 | 4 */ int ntuples;\n\n /* total size (bytes): 8 */\n }\nWith patch:\n\n(gdb) ptype /o struct xl_hash_vacuum_one_page\n/* offset | size */ type = struct xl_hash_vacuum_one_page {\n/* 0 | 4 */ TransactionId snapshotConflictHorizon;\n/* 4 | 2 */ uint16 ntuples;\n/* XXX 2-byte padding */\n\n /* total size (bytes): 8 */\n }\n\nIt could reduce it when adding new fields (like this is is done in [1]).\n\nWe would get:\n\nNo patch:\n\n(gdb) ptype /o struct xl_hash_vacuum_one_page\n/* offset | size */ type = struct xl_hash_vacuum_one_page {\n/* 0 | 4 */ TransactionId snapshotConflictHorizon;\n/* 4 | 4 */ int ntuples;\n/* 8 | 1 */ _Bool isCatalogRel;\n/* XXX 3-byte padding */\n\n /* total size (bytes): 12 */\n }\n\nWith patch:\n\n(gdb) ptype /o struct xl_hash_vacuum_one_page\n/* offset | size */ type = struct xl_hash_vacuum_one_page {\n/* 0 | 4 */ TransactionId snapshotConflictHorizon;\n/* 4 | 2 */ uint16 ntuples;\n/* 6 | 1 */ _Bool isCatalogRel;\n/* XXX 1-byte padding */\n\n /* total size (bytes): 8 */\n }\n\nMeans saving 4 bytes in that case.\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 10 Jan 2023 11:08:33 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Change xl_hash_vacuum_one_page.ntuples from int to 
uint16" }, { "msg_contents": "On Tue, Jan 10, 2023 at 11:08:33AM +0100, Drouvot, Bertrand wrote:\n> While working on [1], I noticed that xl_hash_vacuum_one_page.ntuples is an int.\n> \n> Unless I'm missing something, It seems to me that it would make more sense to use an uint16 (like this is done for\n> gistxlogDelete.ntodelete for example).\n\nI think that is correct. This value is determined by looping through\noffsets, which are uint16 as well. Should we also change the related\nvariables (e.g., ndeletable in _hash_vacuum_one_page()) to uint16?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 12:01:55 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "Hi,\n\nOn 1/20/23 9:01 PM, Nathan Bossart wrote:\n> On Tue, Jan 10, 2023 at 11:08:33AM +0100, Drouvot, Bertrand wrote:\n>> While working on [1], I noticed that xl_hash_vacuum_one_page.ntuples is an int.\n>>\n>> Unless I'm missing something, It seems to me that it would make more sense to use an uint16 (like this is done for\n>> gistxlogDelete.ntodelete for example).\n> \n> I think that is correct. This value is determined by looping through\n> offsets, which are uint16 as well. \n\nThanks for the review!\n\n> Should we also change the related\n> variables (e.g., ndeletable in _hash_vacuum_one_page()) to uint16?\n> \n\nYeah, I thought about it too. 
What I saw is that there is other places that would be good candidates for the same\nkind of changes (see the int ntodelete argument in gistXLogDelete() being assigned to gistxlogDelete.ntodelete (uint16) for example).\n\nSo, what do you think about:\n\n1) keep this patch as it is (to \"only\" address the struct field and avoid possible future \"useless\" padding size increase)\nand\n2) create a new patch (once this one is committed) to align the types for variables/arguments with the structs (related to XLOG records) fields when they are not?\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 21 Jan 2023 06:42:08 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "On Sat, Jan 21, 2023 at 06:42:08AM +0100, Drouvot, Bertrand wrote:\n> On 1/20/23 9:01 PM, Nathan Bossart wrote:\n>> Should we also change the related\n>> variables (e.g., ndeletable in _hash_vacuum_one_page()) to uint16?\n> \n> Yeah, I thought about it too. What I saw is that there is other places that would be good candidates for the same\n> kind of changes (see the int ntodelete argument in gistXLogDelete() being assigned to gistxlogDelete.ntodelete (uint16) for example).\n> \n> So, what do you think about:\n> \n> 1) keep this patch as it is (to \"only\" address the struct field and avoid possible future \"useless\" padding size increase)\n> and\n> 2) create a new patch (once this one is committed) to align the types for variables/arguments with the structs (related to XLOG records) fields when they are not?\n\nOkay. 
I've marked this one as ready-for-committer, then.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 14 Feb 2023 14:05:18 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "On Wed, Feb 15, 2023 at 3:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Sat, Jan 21, 2023 at 06:42:08AM +0100, Drouvot, Bertrand wrote:\n> > On 1/20/23 9:01 PM, Nathan Bossart wrote:\n> >> Should we also change the related\n> >> variables (e.g., ndeletable in _hash_vacuum_one_page()) to uint16?\n> >\n> > Yeah, I thought about it too. What I saw is that there is other places that would be good candidates for the same\n> > kind of changes (see the int ntodelete argument in gistXLogDelete() being assigned to gistxlogDelete.ntodelete (uint16) for example).\n> >\n> > So, what do you think about:\n> >\n> > 1) keep this patch as it is (to \"only\" address the struct field and avoid possible future \"useless\" padding size increase)\n> > and\n> > 2) create a new patch (once this one is committed) to align the types for variables/arguments with the structs (related to XLOG records) fields when they are not?\n>\n> Okay. I've marked this one as ready-for-committer, then.\n>\n\nLGTM. I think the padding space we are trying to save here can be used\nfor the patch [1], right? BTW, feel free to create the second patch\n(to align the types for variables/arguments) as that would be really\nhelpful after we commit this one.\n\nI think this would require XLOG_PAGE_MAGIC as it changes the WAL record.\n\nBTW, how about a commit message like:\nChange xl_hash_vacuum_one_page.ntuples from int to uint16.\n\nThis will create two bytes of padding space in xl_hash_vacuum_one_page\nwhich can be used for future patches. 
This makes the datatype of\nxl_hash_vacuum_one_page.ntuples same as gistxlogDelete.ntodelete which\nis advisable as both are used for the same purpose.\n\n[1] - https://www.postgresql.org/message-id/2d62f212-fce6-d639-b9eb-2a5bc4bec3b4%40gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 16 Feb 2023 16:30:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "Hi,\n\nOn 2/16/23 12:00 PM, Amit Kapila wrote:\n> On Wed, Feb 15, 2023 at 3:35 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>>\n>> On Sat, Jan 21, 2023 at 06:42:08AM +0100, Drouvot, Bertrand wrote:\n>>> On 1/20/23 9:01 PM, Nathan Bossart wrote:\n>>>> Should we also change the related\n>>>> variables (e.g., ndeletable in _hash_vacuum_one_page()) to uint16?\n>>>\n>>> Yeah, I thought about it too. What I saw is that there is other places that would be good candidates for the same\n>>> kind of changes (see the int ntodelete argument in gistXLogDelete() being assigned to gistxlogDelete.ntodelete (uint16) for example).\n>>>\n>>> So, what do you think about:\n>>>\n>>> 1) keep this patch as it is (to \"only\" address the struct field and avoid possible future \"useless\" padding size increase)\n>>> and\n>>> 2) create a new patch (once this one is committed) to align the types for variables/arguments with the structs (related to XLOG records) fields when they are not?\n>>\n>> Okay. I've marked this one as ready-for-committer, then.\n>>\n> \n> LGTM. 
\n\nThanks for looking at it!\n\n> I think the padding space we are trying to save here can be used\n> for the patch [1], right?\n\nYes exactly, without the current patch and adding isCatalogRel (from\nthe patch [1] you mentioned) we would get:\n\n(gdb) ptype /o struct xl_hash_vacuum_one_page\n/* offset | size */ type = struct xl_hash_vacuum_one_page {\n/* 0 | 4 */ TransactionId snapshotConflictHorizon;\n/* 4 | 4 */ int ntuples;\n/* 8 | 1 */ _Bool isCatalogRel;\n/* XXX 3-byte padding */\n\n\n /* total size (bytes): 12 */\n }\n\nWhile with the proposed patch:\n\n(gdb) ptype /o struct xl_hash_vacuum_one_page\n/* offset | size */ type = struct xl_hash_vacuum_one_page {\n/* 0 | 4 */ TransactionId snapshotConflictHorizon;\n/* 4 | 2 */ uint16 ntuples;\n/* 6 | 1 */ _Bool isCatalogRel;\n/* XXX 1-byte padding */\n\n\n /* total size (bytes): 8 */\n }\n\n> BTW, feel free to create the second patch\n> (to align the types for variables/arguments) as that would be really\n> helpful after we commit this one.\n> \n\nYes, will do.\n\n> I think this would require XLOG_PAGE_MAGIC as it changes the WAL record.\n> \n\nOh, I Was not aware about it, thanks! Will do in V2 (and in the logical\ndecoding on standby patch as it adds the new field mentioned above).\n\n> BTW, how about a commit message like:\n> Change xl_hash_vacuum_one_page.ntuples from int to uint16.\n> \n> This will create two bytes of padding space in xl_hash_vacuum_one_page\n> which can be used for future patches. 
This makes the datatype of\n> xl_hash_vacuum_one_page.ntuples same as gistxlogDelete.ntodelete which\n> is advisable as both are used for the same purpose.\n> \n\nLGTM, will add it to V2!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 Feb 2023 13:26:00 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "Hi,\n\nOn 2/16/23 1:26 PM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 2/16/23 12:00 PM, Amit Kapila wrote:\n>> I think this would require XLOG_PAGE_MAGIC as it changes the WAL record.\n>>\n> \n> Oh, I Was not aware about it, thanks! Will do in V2 (and in the logical\n> decoding on standby patch as it adds the new field mentioned above).\n> \n>> BTW, how about a commit message like:\n>> Change xl_hash_vacuum_one_page.ntuples from int to uint16.\n>>\n>> This will create two bytes of padding space in xl_hash_vacuum_one_page\n>> which can be used for future patches. 
This makes the datatype of\n>> xl_hash_vacuum_one_page.ntuples same as gistxlogDelete.ntodelete which\n>> is advisable as both are used for the same purpose.\n>>\n> \n> LGTM, will add it to V2!\n> \nPlease find V2 attached.\nThe commit message also mention the XLOG_PAGE_MAGIC bump.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 16 Feb 2023 16:09:45 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "On Thu, Feb 16, 2023 at 8:39 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> On 2/16/23 1:26 PM, Drouvot, Bertrand wrote:\n> > Hi,\n> >\n> > On 2/16/23 12:00 PM, Amit Kapila wrote:\n> >> I think this would require XLOG_PAGE_MAGIC as it changes the WAL record.\n> >>\n> >\n> > Oh, I Was not aware about it, thanks! Will do in V2 (and in the logical\n> > decoding on standby patch as it adds the new field mentioned above).\n> >\n> >> BTW, how about a commit message like:\n> >> Change xl_hash_vacuum_one_page.ntuples from int to uint16.\n> >>\n> >> This will create two bytes of padding space in xl_hash_vacuum_one_page\n> >> which can be used for future patches. This makes the datatype of\n> >> xl_hash_vacuum_one_page.ntuples same as gistxlogDelete.ntodelete which\n> >> is advisable as both are used for the same purpose.\n> >>\n> >\n> > LGTM, will add it to V2!\n> >\n> Please find V2 attached.\n> The commit message also mention the XLOG_PAGE_MAGIC bump.\n>\n\nThanks, I was not completely sure about whether we need to bump\nXLOG_PAGE_MAGIC for this patch as this makes the additional space just\nby changing the datatype of one of the members of the existing WAL\nrecord. We normally change it for the addition/removal of new fields\naka change in WAL record format, or addition of a new type of WAL\nrecord. 
Does anyone else have an opinion/suggestion on this matter?\n\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 17 Feb 2023 08:30:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "Hi\n\nOn 2023-02-17 08:30:09 +0530, Amit Kapila wrote:\n> Thanks, I was not completely sure about whether we need to bump\n> XLOG_PAGE_MAGIC for this patch as this makes the additional space just\n> by changing the datatype of one of the members of the existing WAL\n> record. We normally change it for the addition/removal of new fields\n> aka change in WAL record format, or addition of a new type of WAL\n> record. Does anyone else have an opinion/suggestion on this matter?\n\nI'd definitely change it - the width of a field still means you can't really\nparse the old WAL sensibly anymore.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 16 Feb 2023 20:13:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "Hi,\n\nOn 2/16/23 1:26 PM, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 2/16/23 12:00 PM, Amit Kapila wrote:\n> \n>> BTW, feel free to create the second patch\n>> (to align the types for variables/arguments) as that would be really\n>> helpful after we commit this one.\n\nPlease find attached a patch proposal to do so.\n\nIt looks like a Pandora's box as it produces\nthose cascading changes:\n\n _hash_vacuum_one_page\n index_compute_xid_horizon_for_tuples\n gistprunepage\n PageIndexMultiDelete\n gistXLogDelete\n PageIndexMultiDelete\n spgRedoVacuumRedirect\n vacuumRedirectAndPlaceholder\n spgPageIndexMultiDelete\n moveLeafs\n doPickSplit\n _bt_delitems_vacuum\n btvacuumpage\n _bt_delitems_delete\n _bt_delitems_delete_check\n hash_xlog_move_page_contents\n gistvacuumpage\n gistXLogUpdate\n gistplacetopage\n 
hashbucketcleanup\n\n\nWhich makes me:\n\n- wonder it is not too intrusive (we could reduce the scope and keep the\nPageIndexMultiDelete()'s nitems argument as an int for example).\n\n- worry if there is no side effects (like the one I'm mentioning as a comment\nin PageIndexMultiDelete()) even if it passes the CI tests.\n\n- wonder if we should not change MaxIndexTuplesPerPage from int to uint16 too (given\nthe fact that the maximum block size is 32 KB.\n\nI'm sharing this version but I still need to think about it and\nI'm curious about your thoughts too.\n \nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 17 Feb 2023 15:13:17 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "On Fri, Feb 17, 2023 at 9:43 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-02-17 08:30:09 +0530, Amit Kapila wrote:\n> > Thanks, I was not completely sure about whether we need to bump\n> > XLOG_PAGE_MAGIC for this patch as this makes the additional space just\n> > by changing the datatype of one of the members of the existing WAL\n> > record. We normally change it for the addition/removal of new fields\n> > aka change in WAL record format, or addition of a new type of WAL\n> > record. Does anyone else have an opinion/suggestion on this matter?\n>\n> I'd definitely change it - the width of a field still means you can't really\n> parse the old WAL sensibly anymore.\n>\n\nOkay, thanks for your input. 
I'll push this patch early next week.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 18 Feb 2023 09:07:59 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "On Fri, Feb 17, 2023 at 7:44 PM Drouvot, Bertrand\n<bertranddrouvot.pg@gmail.com> wrote:\n>\n> On 2/16/23 1:26 PM, Drouvot, Bertrand wrote:\n> > Hi,\n> >\n> > On 2/16/23 12:00 PM, Amit Kapila wrote:\n> >\n> >> BTW, feel free to create the second patch\n> >> (to align the types for variables/arguments) as that would be really\n> >> helpful after we commit this one.\n>\n\nPushed the first patch.\n\n> Please find attached a patch proposal to do so.\n>\n> It looks like a Pandora's box as it produces\n> those cascading changes:\n>\n> _hash_vacuum_one_page\n> index_compute_xid_horizon_for_tuples\n> gistprunepage\n> PageIndexMultiDelete\n> gistXLogDelete\n> PageIndexMultiDelete\n> spgRedoVacuumRedirect\n> vacuumRedirectAndPlaceholder\n> spgPageIndexMultiDelete\n> moveLeafs\n> doPickSplit\n> _bt_delitems_vacuum\n> btvacuumpage\n> _bt_delitems_delete\n> _bt_delitems_delete_check\n> hash_xlog_move_page_contents\n> gistvacuumpage\n> gistXLogUpdate\n> gistplacetopage\n> hashbucketcleanup\n>\n>\n> Which makes me:\n>\n> - wonder it is not too intrusive (we could reduce the scope and keep the\n> PageIndexMultiDelete()'s nitems argument as an int for example).\n>\n> - worry if there is no side effects (like the one I'm mentioning as a comment\n> in PageIndexMultiDelete()) even if it passes the CI tests.\n>\n> - wonder if we should not change MaxIndexTuplesPerPage from int to uint16 too (given\n> the fact that the maximum block size is 32 KB.\n>\n> I'm sharing this version but I still need to think about it and\n> I'm curious about your thoughts too.\n>\n\n@@ -591,11 +591,11 @@ hash_xlog_move_page_contents(XLogReaderState *record)\n\n if (len > 0)\n {\n- OffsetNumber *unused;\n- 
OffsetNumber *unend;\n+ uint16 *unused;\n+ uint16 *unend;\n\n- unused = (OffsetNumber *) ptr;\n- unend = (OffsetNumber *) ((char *) ptr + len);\n+ unused = (uint16 *) ptr;\n+ unend = (uint16 *) ((char *) ptr + len);\n\nIt doesn't seem useful to me to make such changes. About other changes\nin the second patch, it is not clear whether there is much value\naddition by those even though I don't see anything wrong with them.\nSo, let's see if Nathan or others see the value in the proposed patch\nor any subset of these changes.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 27 Feb 2023 10:57:16 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" }, { "msg_contents": "Hi,\n\nOn 2/27/23 6:27 AM, Amit Kapila wrote:\n> On Fri, Feb 17, 2023 at 7:44 PM Drouvot, Bertrand\n> <bertranddrouvot.pg@gmail.com> wrote:\n>>\n>> On 2/16/23 1:26 PM, Drouvot, Bertrand wrote:\n>>> Hi,\n>>>\n>>> On 2/16/23 12:00 PM, Amit Kapila wrote:\n>>>\n>>>> BTW, feel free to create the second patch\n>>>> (to align the types for variables/arguments) as that would be really\n>>>> helpful after we commit this one.\n>>\n> \n> Pushed the first patch.\n\nThanks!\n\n> \n>> Please find attached a patch proposal to do so.\n>>\n>> It looks like a Pandora's box as it produces\n>> those cascading changes:\n>>\n>> _hash_vacuum_one_page\n>> index_compute_xid_horizon_for_tuples\n>> gistprunepage\n>> PageIndexMultiDelete\n>> gistXLogDelete\n>> PageIndexMultiDelete\n>> spgRedoVacuumRedirect\n>> vacuumRedirectAndPlaceholder\n>> spgPageIndexMultiDelete\n>> moveLeafs\n>> doPickSplit\n>> _bt_delitems_vacuum\n>> btvacuumpage\n>> _bt_delitems_delete\n>> _bt_delitems_delete_check\n>> hash_xlog_move_page_contents\n>> gistvacuumpage\n>> gistXLogUpdate\n>> gistplacetopage\n>> hashbucketcleanup\n>>\n>>\n>> Which makes me:\n>>\n>> - wonder it is not too intrusive (we could reduce the scope and keep the\n>> 
PageIndexMultiDelete()'s nitems argument as an int for example).\n>>\n>> - worry if there is no side effects (like the one I'm mentioning as a comment\n>> in PageIndexMultiDelete()) even if it passes the CI tests.\n>>\n>> - wonder if we should not change MaxIndexTuplesPerPage from int to uint16 too (given\n>> the fact that the maximum block size is 32 KB.\n>>\n>> I'm sharing this version but I still need to think about it and\n>> I'm curious about your thoughts too.\n>>\n> \n> @@ -591,11 +591,11 @@ hash_xlog_move_page_contents(XLogReaderState *record)\n> \n> if (len > 0)\n> {\n> - OffsetNumber *unused;\n> - OffsetNumber *unend;\n> + uint16 *unused;\n> + uint16 *unend;\n> \n> - unused = (OffsetNumber *) ptr;\n> - unend = (OffsetNumber *) ((char *) ptr + len);\n> + unused = (uint16 *) ptr;\n> + unend = (uint16 *) ((char *) ptr + len);\n> \n> It doesn't seem useful to me to make such changes.\n\nYeah, the OffsetNumber is currently defined as uint16, but I wonder if it's\nnot better that those matches the functions arguments types they are linked to (should OffsetNumber\nor the functions arguments types change).\n\n> About other changes\n> in the second patch, it is not clear whether there is much value\n> addition by those even though I don't see anything wrong with them.\n> So, let's see if Nathan or others see the value in the proposed patch\n> or any subset of these changes.\n> \n\n+1.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 27 Feb 2023 09:36:50 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Change xl_hash_vacuum_one_page.ntuples from int to uint16" } ]
[ { "msg_contents": "While reviewing [1], I visited other places where sorting is needed, and\nhave some findings.\n\nIn add_paths_with_pathkeys_for_rel, we do not try incremental sort atop\nof the epq_path, which I think we can do. I'm not sure how useful this\nis in real world since the epq_path is used only for EPQ checks, but it\nseems doing that doesn't cost too much.\n\nIn create_ordered_paths, we are trying to sort the cheapest partial path\nand incremental sort on any partial paths with presorted keys, and then\nuse Gather Merge. If the cheapest partial path is not completely sorted\nbut happens to have presorted keys, we would create a full sort path and\nan incremental sort path on it. I think this is not what we want. We\nare supposed to only create an incremental sort path if there are\npresorted keys.\n\nIn gather_grouping_paths, we have the same issue. In addition, for the\nincremental sort paths created atop partial paths, we neglect to\ncalculate 'total_groups' before we use it in create_gather_merge_path.\n\n[1]\nhttps://www.postgresql.org/message-id/flat/CAApHDvo8Lz2H%3D42urBbfP65LTcEUOh288MT7DsG2_EWtW1AXHQ%40mail.gmail.com\n\nThanks\nRichard", "msg_date": "Tue, 10 Jan 2023 19:05:40 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Some revises in adding sorting path" }, { "msg_contents": "I looked at the three patches and have some thoughts:\n\n0001:\n\nDoes the newly added test have to be this complex? I think it might\nbe better to just come up with the most simple test you can that uses\nan incremental sort. I really can't think why the test requires a FOR\nUPDATE, to test incremental sort, for example. The danger with making\na test more complex than it needs to be is that it frequently gets\nbroken by unrelated changes. 
The complexity makes it harder for\npeople to understand the test's intentions and that increases the risk\nthat the test eventually does not test what it was originally meant to\ntest as the underlying code changes and the expected output is\nupdated.\n\n0002:\n\nI think the following existing comment does not seem to be true any longer:\n\n> However, there's one more\n> * possibility: it may make sense to sort the cheapest partial path\n> * according to the required output order and then use Gather Merge.\n\nYou've removed the comment that talks about trying Incremental Sort ->\nGather Merge paths yet the code is still doing that, the two are just\nmore consolidated now, so perhaps you need to come up with a new\ncomment to explain what we're trying to achieve.\n\n> * already (no need to deal with paths which have presorted\n> * keys when incremental sort is disabled unless it's the\n> * cheapest input path).\n\nI think \"input path\" should be \"partial path\". (I maybe didn't get\nthat right in all places in 4a29eabd1).\n\n0003:\n\nLooking at gather_grouping_paths(), I see it calls\ngenerate_useful_gather_paths() which generates a bunch of Gather Merge\npaths after sorting the cheapest path and incrementally sorting any\npartially sorted paths. Then back in gather_grouping_paths(), we go\nand create Gather Merge paths again, but this time according to the\ngroup_pathkeys instead of the query_pathkeys. I know you're not really\nchanging anything here, but as I'm struggling to understand why\nexactly we're creating two sets of Gather Merge paths, it makes it a\nbit scary to consider changing anything in this area. I've not really\nfound any comments that can explain to me sufficiently well enough so\nthat I understand why this needs to be done. 
Do you know?\n\nDavid\n\n\n", "msg_date": "Tue, 14 Feb 2023 15:53:40 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "Hi Richard,\n\nOn Tue, Jan 10, 2023 at 8:06 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> In add_paths_with_pathkeys_for_rel, we do not try incremental sort atop\n> of the epq_path, which I think we can do. I'm not sure how useful this\n> is in real world since the epq_path is used only for EPQ checks, but it\n> seems doing that doesn't cost too much.\n\nI'm not sure this is a good idea, because the epq_path will return at\nmost one tuple in an EPQ recheck.\n\nThe reason why an extra Sort node is injected into the epq_path is\nonly label it with the correct sort order to use it as an input for\nthe EPQ merge-join path of a higher-level foreign join, so shouldn't\nwe keep this step as much as simple and save cycles even a little?\n\nSorry for being late to the party.\n\nBest regards,\nEtsuro Fujita\n\n\n", "msg_date": "Thu, 16 Feb 2023 20:49:57 +0900", "msg_from": "Etsuro Fujita <etsuro.fujita@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Thu, Feb 16, 2023 at 7:50 PM Etsuro Fujita <etsuro.fujita@gmail.com>\nwrote:\n\n> I'm not sure this is a good idea, because the epq_path will return at\n> most one tuple in an EPQ recheck.\n>\n> The reason why an extra Sort node is injected into the epq_path is\n> only label it with the correct sort order to use it as an input for\n> the EPQ merge-join path of a higher-level foreign join, so shouldn't\n> we keep this step as much as simple and save cycles even a little?\n\n\nAgreed. Thanks for the explanation. I also wondered whether it's\nworthwhile to do the change here. 
I'll remove the 0001 patch.\n\nThanks\nRichard", "msg_date": "Tue, 21 Feb 2023 16:55:24 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Tue, Feb 14, 2023 at 10:53 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I looked at the three patches and have some thoughts:\n\n\nThanks for reviewing!\n\n\n> 0001:\n>\n> Does the newly added test have to be this complex? I think it might\n> be better to just come up with the most simple test you can that uses\n> an incremental sort. I really can't think why the test requires a FOR\n> UPDATE, to test incremental sort, for example. The danger with making\n> a test more complex than it needs to be is that it frequently gets\n> broken by unrelated changes. The complexity makes it harder for\n> people to understand the test's intentions and that increases the risk\n> that the test eventually does not test what it was originally meant to\n> test as the underlying code changes and the expected output is\n> updated.\n\n\nThat makes sense. I agree that we should always use the minimal query\nfor test. For this patch, as pointed by Etsuro, it may not be a good\nidea for the change. 
So I'll remove 0001.\n\n\n> 0002:\n>\n> I think the following existing comment does not seem to be true any longer:\n>\n> > However, there's one more\n> > * possibility: it may make sense to sort the cheapest partial path\n> > * according to the required output order and then use Gather Merge.\n>\n> You've removed the comment that talks about trying Incremental Sort ->\n> Gather Merge paths yet the code is still doing that, the two are just\n> more consolidated now, so perhaps you need to come up with a new\n> comment to explain what we're trying to achieve.\n\n\nYes, you are right. How about the comment below?\n\n- * possibility: it may make sense to sort the cheapest partial path\n- * according to the required output order and then use Gather Merge.\n+ * possibility: it may make sense to sort the cheapest partial path or\n+ * incrementally sort any partial path that is partially sorted according\n+ * to the required output order and then use Gather Merge.\n\nLooking at the codes now I have some concern that what we do in\ncreate_ordered_paths for partial paths may have already been done in\ngenerate_useful_gather_paths, especially when query_pathkeys is equal to\nsort_pathkeys. Not sure if this is a problem. And the comment there\nmentions generate_gather_paths(), but ISTM we should mention what\ngenerate_useful_gather_paths has done.\n\n\n> 0003:\n>\n> Looking at gather_grouping_paths(), I see it calls\n> generate_useful_gather_paths() which generates a bunch of Gather Merge\n> paths after sorting the cheapest path and incrementally sorting any\n> partially sorted paths. Then back in gather_grouping_paths(), we go\n> and create Gather Merge paths again, but this time according to the\n> group_pathkeys instead of the query_pathkeys. I know you're not really\n> changing anything here, but as I'm struggling to understand why\n> exactly we're creating two sets of Gather Merge paths, it makes it a\n> bit scary to consider changing anything in this area. 
I've not really\n> found any comments that can explain to me sufficiently well enough so\n> that I understand why this needs to be done. Do you know?\n\n\nI'm also not sure why gather_grouping_paths creates Gather Merge paths\naccording to group_pathkeys after what generate_useful_gather_paths has\ndone. There is comment that mentions this but seems more explanation is\nneeded.\n\n* generate_useful_gather_paths does most of the work, but we also consider a\n* special case: we could try sorting the data by the group_pathkeys and then\n* applying Gather Merge.\n\nIt seems that if there is available group_pathkeys, we will set\nquery_pathkeys to group_pathkeys because we want the result sorted for\ngrouping. In this case gather_grouping_paths may just generate\nduplicate Gather Merge paths because generate_useful_gather_paths has\ngenerated Gather Merge paths according to query_pathkeys. I tried to\nreduce the code of gather_grouping_paths to just call\ngenerate_useful_gather_paths and found no diffs in regression tests.\n\nThanks\nRichard\n\nOn Tue, Feb 14, 2023 at 10:53 AM David Rowley <dgrowleyml@gmail.com> wrote:I looked at the three patches and have some thoughts: Thanks for reviewing! \n0001:\n\nDoes the newly added test have to be this complex?  I think it might\nbe better to just come up with the most simple test you can that uses\nan incremental sort. I really can't think why the test requires a FOR\nUPDATE, to test incremental sort, for example. The danger with making\na test more complex than it needs to be is that it frequently gets\nbroken by unrelated changes.  The complexity makes it harder for\npeople to understand the test's intentions and that increases the risk\nthat the test eventually does not test what it was originally meant to\ntest as the underlying code changes and the expected output is\nupdated. That makes sense.  I agree that we should always use the minimal queryfor test.  
For this patch, as pointed by Etsuro, it may not be a goodidea for the change.  So I'll remove 0001. \n0002:\n\nI think the following existing comment does not seem to be true any longer:\n\n> However, there's one more\n> * possibility: it may make sense to sort the cheapest partial path\n> * according to the required output order and then use Gather Merge.\n\nYou've removed the comment that talks about trying Incremental Sort ->\nGather Merge paths yet the code is still doing that, the two are just\nmore consolidated now, so perhaps you need to come up with a new\ncomment to explain what we're trying to achieve. Yes, you are right.  How about the comment below?- * possibility: it may make sense to sort the cheapest partial path- * according to the required output order and then use Gather Merge.+ * possibility: it may make sense to sort the cheapest partial path or+ * incrementally sort any partial path that is partially sorted according+ * to the required output order and then use Gather Merge.Looking at the codes now I have some concern that what we do increate_ordered_paths for partial paths may have already been done ingenerate_useful_gather_paths, especially when query_pathkeys is equal tosort_pathkeys.  Not sure if this is a problem.  And the comment therementions generate_gather_paths(), but ISTM we should mention whatgenerate_useful_gather_paths has done. \n0003:\n\nLooking at gather_grouping_paths(), I see it calls\ngenerate_useful_gather_paths() which generates a bunch of Gather Merge\npaths after sorting the cheapest path and incrementally sorting any\npartially sorted paths.  Then back in gather_grouping_paths(), we go\nand create Gather Merge paths again, but this time according to the\ngroup_pathkeys instead of the query_pathkeys. I know you're not really\nchanging anything here, but as I'm struggling to understand why\nexactly we're creating two sets of Gather Merge paths, it makes it a\nbit scary to consider changing anything in this area. 
I've not really\nfound any comments that can explain to me sufficiently well enough so\nthat I understand why this needs to be done.  Do you know? I'm also not sure why gather_grouping_paths creates Gather Merge pathsaccording to group_pathkeys after what generate_useful_gather_paths hasdone.  There is comment that mentions this but seems more explanation isneeded.* generate_useful_gather_paths does most of the work, but we also consider a* special case: we could try sorting the data by the group_pathkeys and then* applying Gather Merge.It seems that if there is available group_pathkeys, we will setquery_pathkeys to group_pathkeys because we want the result sorted forgrouping.  In this case gather_grouping_paths may just generateduplicate Gather Merge paths because generate_useful_gather_paths hasgenerated Gather Merge paths according to query_pathkeys.  I tried toreduce the code of gather_grouping_paths to just callgenerate_useful_gather_paths and found no diffs in regression tests.ThanksRichard", "msg_date": "Tue, 21 Feb 2023 17:02:44 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Tue, 21 Feb 2023 at 22:02, Richard Guo <guofenglinux@gmail.com> wrote:\n> Looking at the codes now I have some concern that what we do in\n> create_ordered_paths for partial paths may have already been done in\n> generate_useful_gather_paths, especially when query_pathkeys is equal to\n> sort_pathkeys. Not sure if this is a problem. And the comment there\n> mentions generate_gather_paths(), but ISTM we should mention what\n> generate_useful_gather_paths has done.\n\nI think you need to write some tests for this. 
I've managed to come up\nwith something to get that code to perform a Sort, but I've not\nmanaged to get it to perform an incremental sort.\n\ncreate or replace function parallel_safe_volatile(a int) returns int\nas $$ begin return a; end; $$ parallel safe volatile language plpgsql;\ncreate table abc(a int, b int, c int);\ninsert into abc select x,y,z from generate_Series(1,100)x,\ngenerate_Series(1,100)y, generate_Series(1,100)z;\nset parallel_tuple_cost=0;\n\nWithout making those parallel paths, we get:\n\npostgres=# explain select * from abc where a=1 order by\na,b,parallel_safe_volatile(c);\n QUERY PLAN\n--------------------------------------------------------------------------------\n Sort (cost=13391.49..13417.49 rows=10400 width=16)\n Sort Key: b, (parallel_safe_volatile(c))\n -> Gather (cost=1000.00..12697.58 rows=10400 width=16)\n Workers Planned: 2\n -> Parallel Seq Scan on abc (cost=0.00..11697.58 rows=4333 width=16)\n Filter: (a = 1)\n(6 rows)\n\nbut with, the plan is:\n\npostgres=# explain select * from abc where a=1 order by\na,b,parallel_safe_volatile(c);\n QUERY PLAN\n--------------------------------------------------------------------------------\n Gather Merge (cost=12959.35..13060.52 rows=8666 width=16)\n Workers Planned: 2\n -> Sort (cost=11959.32..11970.15 rows=4333 width=16)\n Sort Key: b, (parallel_safe_volatile(c))\n -> Parallel Seq Scan on abc (cost=0.00..11697.58 rows=4333 width=16)\n Filter: (a = 1)\n(6 rows)\n\nI added the parallel safe and volatile function so that\nget_useful_pathkeys_for_relation() wouldn't include all of the\nquery_pathkeys.\n\nIf you write some tests for this code, it will be useful to prove that\nit actually does something, and also that it does not break again in\nthe future. I don't really want to just blindly copy the pattern used\nin 3c6fc5820 for creating incremental sort paths if it's not useful\nhere. 
It would be good to see tests that make an Incremental Sort path\nusing the code you're changing.\n\nSame for the 0003 patch.\n\nI'll mark this as waiting on author while you work on that.\n\nDavid\n\n\n", "msg_date": "Wed, 29 Mar 2023 08:59:48 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "> On 28 Mar 2023, at 21:59, David Rowley <dgrowleyml@gmail.com> wrote:\n\n> I'll mark this as waiting on author while you work on that.\n\nRichard: have you had a chance to incorporate the tests proposed by David in\nthis thread into your patch?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 10 Jul 2023 11:37:47 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Wed, Mar 29, 2023 at 4:00 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> If you write some tests for this code, it will be useful to prove that\n> it actually does something, and also that it does not break again in\n> the future. I don't really want to just blindly copy the pattern used\n> in 3c6fc5820 for creating incremental sort paths if it's not useful\n> here. It would be good to see tests that make an Incremental Sort path\n> using the code you're changing.\n\n\nThanks for the suggestion. 
I've managed to come up with a query that\ngets the codes being changed here to perform an incremental sort.\n\nset min_parallel_index_scan_size to 0;\nset enable_seqscan to off;\n\nWithout making those parallel paths:\n\nexplain (costs off)\nselect * from tenk1 where four = 2 order by four, hundred,\nparallel_safe_volatile(thousand);\n QUERY PLAN\n--------------------------------------------------------------\n Incremental Sort\n Sort Key: hundred, (parallel_safe_volatile(thousand))\n Presorted Key: hundred\n -> Gather Merge\n Workers Planned: 3\n -> Parallel Index Scan using tenk1_hundred on tenk1\n Filter: (four = 2)\n(7 rows)\n\nand with those parallel paths:\n\nexplain (costs off)\nselect * from tenk1 where four = 2 order by four, hundred,\nparallel_safe_volatile(thousand);\n QUERY PLAN\n---------------------------------------------------------------\n Gather Merge\n Workers Planned: 3\n -> Incremental Sort\n Sort Key: hundred, (parallel_safe_volatile(thousand))\n Presorted Key: hundred\n -> Parallel Index Scan using tenk1_hundred on tenk1\n Filter: (four = 2)\n(7 rows)\n\nI've added two tests for the code changes in create_ordered_paths in the\nnew patch.\n\n\n> Same for the 0003 patch.\n\n\nFor the code changes in gather_grouping_paths, I've managed to come up\nwith a query that makes an explicit Sort atop cheapest partial path.\n\nexplain (costs off)\nselect count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n QUERY PLAN\n--------------------------------------------------------------------\n Finalize GroupAggregate\n Group Key: twenty, (parallel_safe_volatile(two))\n -> Gather Merge\n Workers Planned: 4\n -> Sort\n Sort Key: twenty, (parallel_safe_volatile(two))\n -> Partial HashAggregate\n Group Key: twenty, parallel_safe_volatile(two)\n -> Parallel Seq Scan on tenk1\n(9 rows)\n\nWithout this logic the plan would look like:\n\nexplain (costs off)\nselect count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n QUERY 
PLAN\n--------------------------------------------------------------------\n Finalize GroupAggregate\n Group Key: twenty, (parallel_safe_volatile(two))\n -> Sort\n Sort Key: twenty, (parallel_safe_volatile(two))\n -> Gather\n Workers Planned: 4\n -> Partial HashAggregate\n Group Key: twenty, parallel_safe_volatile(two)\n -> Parallel Seq Scan on tenk1\n(9 rows)\n\nThis test is also added in the new patch.\n\nBut I did not find a query that makes an incremental sort in this case.\nAfter trying for a while it seems to me that we do not need to consider\nincremental sort in this case, because for a partial path of a grouped\nor partially grouped relation, it is either unordered (HashAggregate or\nAppend), or it has been ordered by the group_pathkeys (GroupAggregate).\nIt seems there is no case that we'd have a partial path that is\npartially sorted.\n\nSo update the patches to v2.\n\nThanks\nRichard", "msg_date": "Mon, 17 Jul 2023 16:55:23 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Mon, Jul 10, 2023 at 5:37 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 28 Mar 2023, at 21:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> > I'll mark this as waiting on author while you work on that.\n>\n> Richard: have you had a chance to incorporate the tests proposed by David\n> in\n> this thread into your patch?\n\n\nI just updated the patches according to David's reviews. So I'll change\nit back to needs review.\n\nThanks\nRichard\n\nOn Mon, Jul 10, 2023 at 5:37 PM Daniel Gustafsson <daniel@yesql.se> wrote:> On 28 Mar 2023, at 21:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> I'll mark this as waiting on author while you work on that.\n\nRichard: have you had a chance to incorporate the tests proposed by David in\nthis thread into your patch?I just updated the patches according to David's reviews.  
So I'll changeit back to needs review.ThanksRichard", "msg_date": "Mon, 17 Jul 2023 17:13:05 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Thu, Dec 28, 2023 at 4:00 PM Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Wed, Mar 29, 2023 at 4:00 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> If you write some tests for this code, it will be useful to prove that\n>> it actually does something, and also that it does not break again in\n>> the future. I don't really want to just blindly copy the pattern used\n>> in 3c6fc5820 for creating incremental sort paths if it's not useful\n>> here. It would be good to see tests that make an Incremental Sort path\n>> using the code you're changing.\n>\n>\n> Thanks for the suggestion. I've managed to come up with a query that\n> gets the codes being changed here to perform an incremental sort.\n>\n> set min_parallel_index_scan_size to 0;\n> set enable_seqscan to off;\n>\n> Without making those parallel paths:\n>\n> explain (costs off)\n> select * from tenk1 where four = 2 order by four, hundred, parallel_safe_volatile(thousand);\n> QUERY PLAN\n> --------------------------------------------------------------\n> Incremental Sort\n> Sort Key: hundred, (parallel_safe_volatile(thousand))\n> Presorted Key: hundred\n> -> Gather Merge\n> Workers Planned: 3\n> -> Parallel Index Scan using tenk1_hundred on tenk1\n> Filter: (four = 2)\n> (7 rows)\n>\n> and with those parallel paths:\n>\n> explain (costs off)\n> select * from tenk1 where four = 2 order by four, hundred, parallel_safe_volatile(thousand);\n> QUERY PLAN\n> ---------------------------------------------------------------\n> Gather Merge\n> Workers Planned: 3\n> -> Incremental Sort\n> Sort Key: hundred, (parallel_safe_volatile(thousand))\n> Presorted Key: hundred\n> -> Parallel Index Scan using tenk1_hundred on tenk1\n> Filter: (four = 2)\n> (7 
rows)\n>\n> I've added two tests for the code changes in create_ordered_paths in the\n> new patch.\n>\n>>\n>> Same for the 0003 patch.\n>\n>\n> For the code changes in gather_grouping_paths, I've managed to come up\n> with a query that makes an explicit Sort atop cheapest partial path.\n>\n> explain (costs off)\n> select count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Finalize GroupAggregate\n> Group Key: twenty, (parallel_safe_volatile(two))\n> -> Gather Merge\n> Workers Planned: 4\n> -> Sort\n> Sort Key: twenty, (parallel_safe_volatile(two))\n> -> Partial HashAggregate\n> Group Key: twenty, parallel_safe_volatile(two)\n> -> Parallel Seq Scan on tenk1\n> (9 rows)\n>\n> Without this logic the plan would look like:\n>\n> explain (costs off)\n> select count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Finalize GroupAggregate\n> Group Key: twenty, (parallel_safe_volatile(two))\n> -> Sort\n> Sort Key: twenty, (parallel_safe_volatile(two))\n> -> Gather\n> Workers Planned: 4\n> -> Partial HashAggregate\n> Group Key: twenty, parallel_safe_volatile(two)\n> -> Parallel Seq Scan on tenk1\n> (9 rows)\n>\n> This test is also added in the new patch.\n>\n> But I did not find a query that makes an incremental sort in this case.\n> After trying for a while it seems to me that we do not need to consider\n> incremental sort in this case, because for a partial path of a grouped\n> or partially grouped relation, it is either unordered (HashAggregate or\n> Append), or it has been ordered by the group_pathkeys (GroupAggregate).\n> It seems there is no case that we'd have a partial path that is\n> partially sorted.\n>\nI reviewed the Patch and it looks fine to me.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Thu, 28 Dec 2023 16:01:55 +0530", "msg_from": "Shubham 
Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Thu, Dec 28, 2023 at 4:01 PM Shubham Khanna\n<khannashubham1197@gmail.com> wrote:\n>\n> On Thu, Dec 28, 2023 at 4:00 PM Richard Guo <guofenglinux@gmail.com> wrote:\n> >\n> >\n> > On Wed, Mar 29, 2023 at 4:00 AM David Rowley <dgrowleyml@gmail.com> wrote:\n> >>\n> >> If you write some tests for this code, it will be useful to prove that\n> >> it actually does something, and also that it does not break again in\n> >> the future. I don't really want to just blindly copy the pattern used\n> >> in 3c6fc5820 for creating incremental sort paths if it's not useful\n> >> here. It would be good to see tests that make an Incremental Sort path\n> >> using the code you're changing.\n> >\n> >\n> > Thanks for the suggestion. I've managed to come up with a query that\n> > gets the codes being changed here to perform an incremental sort.\n> >\n> > set min_parallel_index_scan_size to 0;\n> > set enable_seqscan to off;\n> >\n> > Without making those parallel paths:\n> >\n> > explain (costs off)\n> > select * from tenk1 where four = 2 order by four, hundred, parallel_safe_volatile(thousand);\n> > QUERY PLAN\n> > --------------------------------------------------------------\n> > Incremental Sort\n> > Sort Key: hundred, (parallel_safe_volatile(thousand))\n> > Presorted Key: hundred\n> > -> Gather Merge\n> > Workers Planned: 3\n> > -> Parallel Index Scan using tenk1_hundred on tenk1\n> > Filter: (four = 2)\n> > (7 rows)\n> >\n> > and with those parallel paths:\n> >\n> > explain (costs off)\n> > select * from tenk1 where four = 2 order by four, hundred, parallel_safe_volatile(thousand);\n> > QUERY PLAN\n> > ---------------------------------------------------------------\n> > Gather Merge\n> > Workers Planned: 3\n> > -> Incremental Sort\n> > Sort Key: hundred, (parallel_safe_volatile(thousand))\n> > Presorted Key: hundred\n> > -> Parallel Index Scan 
using tenk1_hundred on tenk1\n> > Filter: (four = 2)\n> > (7 rows)\n> >\n> > I've added two tests for the code changes in create_ordered_paths in the\n> > new patch.\n> >\n> >>\n> >> Same for the 0003 patch.\n> >\n> >\n> > For the code changes in gather_grouping_paths, I've managed to come up\n> > with a query that makes an explicit Sort atop cheapest partial path.\n> >\n> > explain (costs off)\n> > select count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n> > QUERY PLAN\n> > --------------------------------------------------------------------\n> > Finalize GroupAggregate\n> > Group Key: twenty, (parallel_safe_volatile(two))\n> > -> Gather Merge\n> > Workers Planned: 4\n> > -> Sort\n> > Sort Key: twenty, (parallel_safe_volatile(two))\n> > -> Partial HashAggregate\n> > Group Key: twenty, parallel_safe_volatile(two)\n> > -> Parallel Seq Scan on tenk1\n> > (9 rows)\n> >\n> > Without this logic the plan would look like:\n> >\n> > explain (costs off)\n> > select count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n> > QUERY PLAN\n> > --------------------------------------------------------------------\n> > Finalize GroupAggregate\n> > Group Key: twenty, (parallel_safe_volatile(two))\n> > -> Sort\n> > Sort Key: twenty, (parallel_safe_volatile(two))\n> > -> Gather\n> > Workers Planned: 4\n> > -> Partial HashAggregate\n> > Group Key: twenty, parallel_safe_volatile(two)\n> > -> Parallel Seq Scan on tenk1\n> > (9 rows)\n> >\n> > This test is also added in the new patch.\n> >\n> > But I did not find a query that makes an incremental sort in this case.\n> > After trying for a while it seems to me that we do not need to consider\n> > incremental sort in this case, because for a partial path of a grouped\n> > or partially grouped relation, it is either unordered (HashAggregate or\n> > Append), or it has been ordered by the group_pathkeys (GroupAggregate).\n> > It seems there is no case that we'd have a partial path that is\n> > partially sorted.\n> 
>\nJust for clarity; I am not familiar with the code. And for the review,\nI ran 'make check' and 'make check-world' and all the test cases\npassed successfully.\n\nThanks and Regards,\nShubham Khanna.\n\n\n", "msg_date": "Thu, 28 Dec 2023 16:37:34 +0530", "msg_from": "Shubham Khanna <khannashubham1197@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Mon, 17 Jul 2023 at 14:25, Richard Guo <guofenglinux@gmail.com> wrote:\n>\n>\n> On Wed, Mar 29, 2023 at 4:00 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>>\n>> If you write some tests for this code, it will be useful to prove that\n>> it actually does something, and also that it does not break again in\n>> the future. I don't really want to just blindly copy the pattern used\n>> in 3c6fc5820 for creating incremental sort paths if it's not useful\n>> here. It would be good to see tests that make an Incremental Sort path\n>> using the code you're changing.\n>\n>\n> Thanks for the suggestion. 
I've managed to come up with a query that\n> gets the codes being changed here to perform an incremental sort.\n>\n> set min_parallel_index_scan_size to 0;\n> set enable_seqscan to off;\n>\n> Without making those parallel paths:\n>\n> explain (costs off)\n> select * from tenk1 where four = 2 order by four, hundred, parallel_safe_volatile(thousand);\n> QUERY PLAN\n> --------------------------------------------------------------\n> Incremental Sort\n> Sort Key: hundred, (parallel_safe_volatile(thousand))\n> Presorted Key: hundred\n> -> Gather Merge\n> Workers Planned: 3\n> -> Parallel Index Scan using tenk1_hundred on tenk1\n> Filter: (four = 2)\n> (7 rows)\n>\n> and with those parallel paths:\n>\n> explain (costs off)\n> select * from tenk1 where four = 2 order by four, hundred, parallel_safe_volatile(thousand);\n> QUERY PLAN\n> ---------------------------------------------------------------\n> Gather Merge\n> Workers Planned: 3\n> -> Incremental Sort\n> Sort Key: hundred, (parallel_safe_volatile(thousand))\n> Presorted Key: hundred\n> -> Parallel Index Scan using tenk1_hundred on tenk1\n> Filter: (four = 2)\n> (7 rows)\n>\n> I've added two tests for the code changes in create_ordered_paths in the\n> new patch.\n>\n>>\n>> Same for the 0003 patch.\n>\n>\n> For the code changes in gather_grouping_paths, I've managed to come up\n> with a query that makes an explicit Sort atop cheapest partial path.\n>\n> explain (costs off)\n> select count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Finalize GroupAggregate\n> Group Key: twenty, (parallel_safe_volatile(two))\n> -> Gather Merge\n> Workers Planned: 4\n> -> Sort\n> Sort Key: twenty, (parallel_safe_volatile(two))\n> -> Partial HashAggregate\n> Group Key: twenty, parallel_safe_volatile(two)\n> -> Parallel Seq Scan on tenk1\n> (9 rows)\n>\n> Without this logic the plan would look like:\n>\n> explain (costs off)\n> select 
count(*) from tenk1 group by twenty, parallel_safe_volatile(two);\n> QUERY PLAN\n> --------------------------------------------------------------------\n> Finalize GroupAggregate\n> Group Key: twenty, (parallel_safe_volatile(two))\n> -> Sort\n> Sort Key: twenty, (parallel_safe_volatile(two))\n> -> Gather\n> Workers Planned: 4\n> -> Partial HashAggregate\n> Group Key: twenty, parallel_safe_volatile(two)\n> -> Parallel Seq Scan on tenk1\n> (9 rows)\n>\n> This test is also added in the new patch.\n>\n> But I did not find a query that makes an incremental sort in this case.\n> After trying for a while it seems to me that we do not need to consider\n> incremental sort in this case, because for a partial path of a grouped\n> or partially grouped relation, it is either unordered (HashAggregate or\n> Append), or it has been ordered by the group_pathkeys (GroupAggregate).\n> It seems there is no case that we'd have a partial path that is\n> partially sorted.\n>\n> So update the patches to v2.\n\nCFBot shows that the patch does not apply anymore as in [1]:\n=== Applying patches on top of PostgreSQL commit ID\nf2bf8fb04886e3ea82e7f7f86696ac78e06b7e60 ===\n...\n=== applying patch\n./v2-0002-Revise-how-we-sort-partial-paths-in-gather_grouping_paths.patch\npatching file src/backend/optimizer/plan/planner.c\nHunk #1 succeeded at 7289 (offset -91 lines).\nHunk #2 FAILED at 7411.\n1 out of 2 hunks FAILED -- saving rejects to file\nsrc/backend/optimizer/plan/planner.c.rej\n\nPlease post an updated version for the same.\n\n[1] - http://cfbot.cputube.org/patch_46_4119.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sat, 27 Jan 2024 08:33:01 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Mon, Jul 17, 2023 at 4:55 PM Richard Guo <guofenglinux@gmail.com> wrote:\n\n> But I did not find a query that makes an incremental sort in this case.\n> After trying for a while it seems to me 
that we do not need to consider\n> incremental sort in this case, because for a partial path of a grouped\n> or partially grouped relation, it is either unordered (HashAggregate or\n> Append), or it has been ordered by the group_pathkeys (GroupAggregate).\n> It seems there is no case that we'd have a partial path that is\n> partially sorted.\n>\n\nSince now we'd try to reorder the group by keys (see 0452b461bc), it is\npossible that we have a partial path that is partially sorted. So this\nconclusion is not true any more. For instance,\n\ncreate table t (a int, b int, c int, d int);\ninsert into t select i%10, i%10, i%10, i%10 from\ngenerate_series(1,1000000)i;\ncreate index on t (a, b);\nanalyze t;\n\nset enable_hashagg to off;\nset enable_seqscan to off;\n\nexplain (costs off)\nselect count(*) from t group by a, c, b, parallel_safe_volatile(d);\n QUERY PLAN\n--------------------------------------------------------------------------\n Finalize GroupAggregate\n Group Key: a, c, b, (parallel_safe_volatile(d))\n -> Gather Merge\n Workers Planned: 2\n -> Incremental Sort\n Sort Key: a, c, b, (parallel_safe_volatile(d))\n Presorted Key: a\n -> Partial GroupAggregate\n Group Key: a, b, c, (parallel_safe_volatile(d))\n -> Incremental Sort\n Sort Key: a, b, c, (parallel_safe_volatile(d))\n Presorted Key: a, b\n -> Parallel Index Scan using t_a_b_idx on t\n(13 rows)\n\nIf we do not consider incremental sort on partial paths in\ngather_grouping_paths(), we'd get a plan that looks like:\n\nexplain (costs off)\nselect count(*) from t group by a, c, b, parallel_safe_volatile(d);\n QUERY PLAN\n--------------------------------------------------------------------------------\n Finalize GroupAggregate\n Group Key: a, c, b, (parallel_safe_volatile(d))\n -> Incremental Sort\n Sort Key: a, c, b, (parallel_safe_volatile(d))\n Presorted Key: a, c, b\n -> Gather Merge\n Workers Planned: 2\n -> Incremental Sort\n Sort Key: a, c, b\n Presorted Key: a\n -> Partial GroupAggregate\n Group 
Key: a, b, c, (parallel_safe_volatile(d))\n -> Incremental Sort\n Sort Key: a, b, c,\n(parallel_safe_volatile(d))\n Presorted Key: a, b\n -> Parallel Index Scan using t_a_b_idx on\nt\n(16 rows)\n\nSo in the v3 patch I've brought back the logic that considers\nincremental sort on partial paths in gather_grouping_paths(). However,\nI failed to compose a test case for this scenario without having to\ngenerate a huge table. So in the v3 patch I did not include a test case\nfor this aspect.\n\nThanks\nRichard", "msg_date": "Mon, 29 Jan 2024 17:39:06 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Mon, 29 Jan 2024 at 22:39, Richard Guo <guofenglinux@gmail.com> wrote:\n> So in the v3 patch I've brought back the logic that considers\n> incremental sort on partial paths in gather_grouping_paths(). However,\n> I failed to compose a test case for this scenario without having to\n> generate a huge table. So in the v3 patch I did not include a test case\n> for this aspect.\n\nCan you share the test with the huge table?\n\nI tried and failed as, if I'm not mistaken, you're talking about a\nparallel aggregate query with an incremental sort between the Partial\nAggregate node and the Finalize Group Aggregate node. If the partial\naggregate was a Group Aggregate, then it would already be correctly\nsorted. We don't need a more strict sort ordering to perform the\nFinalize Group Aggregate, the results must already be sorted by at\nleast the GROUP BY clause. If the partial aggregate had opted to Hash\nAggregate, then there'd be no presorted keys, so we could only get a\nfull sort. I can't see any way to get an incremental sort between the\n2 aggregate phases.\n\nWhat am I missing?\n\nI also tried reverting your changes to planner.c to see if your new\ntests would fail. They all passed. 
So it looks like none of these\ntests are testing anything new.\n\nDavid\n\n\n", "msg_date": "Wed, 31 Jan 2024 00:00:05 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" }, { "msg_contents": "On Tue, Jan 30, 2024 at 7:00 PM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Mon, 29 Jan 2024 at 22:39, Richard Guo <guofenglinux@gmail.com> wrote:\n> > So in the v3 patch I've brought back the logic that considers\n> > incremental sort on partial paths in gather_grouping_paths(). However,\n> > I failed to compose a test case for this scenario without having to\n> > generate a huge table. So in the v3 patch I did not include a test case\n> > for this aspect.\n>\n> Can you share the test with the huge table?\n\n\nThe test had been shown in upthread [1]. Pasting it here:\n\ncreate table t (a int, b int, c int, d int);\ninsert into t select i%10, i%10, i%10, i%10 from\ngenerate_series(1,1000000)i;\ncreate index on t (a, b);\nanalyze t;\n\nset enable_hashagg to off;\nset enable_seqscan to off;\n\nexplain (costs off)\nselect count(*) from t group by a, c, b, parallel_safe_volatile(d);\n QUERY PLAN\n--------------------------------------------------------------------------\n Finalize GroupAggregate\n Group Key: a, c, b, (parallel_safe_volatile(d))\n -> Gather Merge\n Workers Planned: 2\n -> Incremental Sort\n Sort Key: a, c, b, (parallel_safe_volatile(d))\n Presorted Key: a\n -> Partial GroupAggregate\n Group Key: a, b, c, (parallel_safe_volatile(d))\n -> Incremental Sort\n Sort Key: a, b, c, (parallel_safe_volatile(d))\n Presorted Key: a, b\n -> Parallel Index Scan using t_a_b_idx on t\n(13 rows)\n\n\n> I tried and failed as, if I'm not mistaken, you're talking about a\n> parallel aggregate query with an incremental sort between the Partial\n> Aggregate node and the Finalize Group Aggregate node. 
If the partial\n> aggregate was a Group Aggregate, then it would already be correctly\n> sorted. We don't need a more strict sort ordering to perform the\n> Finalize Group Aggregate, the results must already be sorted by at\n> least the GROUP BY clause. If the partial aggregate had opted to Hash\n> Aggregate, then there'd be no presorted keys, so we could only get a\n> full sort. I can't see any way to get an incremental sort between the\n> 2 aggregate phases.\n>\n> What am I missing?\n\n\nThis was true before 0452b461bc, and I reached the same conclusion in\n[2]. Quote it here:\n\n\"\nBut I did not find a query that makes an incremental sort in this case.\nAfter trying for a while it seems to me that we do not need to consider\nincremental sort in this case, because for a partial path of a grouped\nor partially grouped relation, it is either unordered (HashAggregate or\nAppend), or it has been ordered by the group_pathkeys (GroupAggregate).\nIt seems there is no case that we'd have a partial path that is\npartially sorted.\n\"\n\nBut if we've reordered the group by keys to match the input path's\npathkeys, we might have a partial GroupAggregate that is partially\nsorted. See the plan above.\n\n\n> I also tried reverting your changes to planner.c to see if your new\n> tests would fail. They all passed. So it looks like none of these\n> tests are testing anything new.\n\n\nThis patchset does not aim to introduce anything new; it simply\nrefactors the existing code. The newly added tests are used to show\nthat the code that is touched here is not redundant, but rather\nessential for generating certain paths. 
I remember the tests were added\nper your comment in [3].\n\n[1]\nhttps://www.postgresql.org/message-id/CAMbWs4_iDwMAf5mp%2BG-tXq-gYzvR6koSHvNUqBPK4pt7%2B11tJw%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CAMbWs497h5jVCVwNDb%2BBX31Z_K8iBaPQKOcsTvpFQ7kF18a2%2Bg%40mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/CAApHDvo%2BFagxVSGmvt-LUrhLZQ0KViiLvX8dPaG3ZzWLNd-Zpg%40mail.gmail.com\n\nThanks\nRichard\n\n", "msg_date": "Tue, 30 Jan 2024 19:44:07 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some revises in adding sorting path" },
{ "msg_contents": "On Wed, 31 Jan 2024 at 00:44, Richard Guo <guofenglinux@gmail.com> wrote:\n> This patchset does not aim to introduce anything new; it simply\n> refactors the existing code. The newly added tests are used to show\n> that the code that is touched here is not redundant, but rather\n> essential for generating certain paths. I remember the tests were added\n> per your comment in [3].\n>\n> [3] https://www.postgresql.org/message-id/CAApHDvo%2BFagxVSGmvt-LUrhLZQ0KViiLvX8dPaG3ZzWLNd-Zpg%40mail.gmail.com\n\nOK. I've pushed the patched based on it being a simplification of the\npartial path generation.\n\nDavid\n\n\n", "msg_date": "Wed, 31 Jan 2024 10:13:07 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Some revises in adding sorting path" },
{ "msg_contents": "On Wed, Jan 31, 2024 at 5:13 AM David Rowley <dgrowleyml@gmail.com> wrote:\n\n> On Wed, 31 Jan 2024 at 00:44, Richard Guo <guofenglinux@gmail.com> wrote:\n> > This patchset does not aim to introduce anything new; it simply\n> > refactors the existing code. The newly added tests are used to show\n> > that the code that is touched here is not redundant, but rather\n> > essential for generating certain paths. I remember the tests were added\n> > per your comment in [3].\n> >\n> > [3]\n> https://www.postgresql.org/message-id/CAApHDvo%2BFagxVSGmvt-LUrhLZQ0KViiLvX8dPaG3ZzWLNd-Zpg%40mail.gmail.com\n>\n> OK. I've pushed the patched based on it being a simplification of the\n> partial path generation.\n\n\nThanks for pushing it!\n\nThanks\nRichard\n\n", "msg_date": "Wed, 31 Jan 2024 15:11:16 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Some revises in adding sorting path" } ]
[ { "msg_contents": "Hi,\n\nI propose using windows VMs instead of \ncontainers, the patch is \nattached. Currently, windows containers are used on the CI, but these \ncontainer images are needs to get pulled on every CI run, also they are \nslow to run.\n\nThese VM images are created in the same way how container images are \ncreated [1].\n\nThe comparison between VMs and containers are (based on d952373a98 and \nwith same numbers of CPU and memory):\n\nScheduling step:\n\n\n\tVS 2019\n\tMinGW64\nVM [2]\n\t00:17m\n\t00:16m\nContainer [3]\n\t03:51m \t04:28m\n\nExecution step:\n\n\n\tVS 2019\n\tMinGW64\nVM [2]\n\t12:16m\n\t07.55m\nContainer [3]\n\t26:02m \t16:34m\n\nThere is more than 2x speed gain when VMs are used.\n\n[1] \nhttps://github.com/anarazel/pg-vm-images/blob/main/packer/windows.pkr.hcl\n[2] https://cirrus-ci.com/build/4720774045499392\n[3] https://cirrus-ci.com/build/5468256027279360\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 10 Jan 2023 15:20:18 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "Use windows VMs instead of windows containers on the CI" },
{ "msg_contents": "Hi,\n\nTables didn't seem nice on web interface. Re-sending with correct \nformatting.\n\nScheduling step:\n\n| VS 2019 | MinGW64\n--------------------------------------------------------------\nVM | 00:17m | 00:16m\n--------------------------------------------------------------\nContainer | 03:51m | 04:28m\nExecution step:\n\n| VS 2019 | MinGW64\n--------------------------------------------------------------\nVM | 12:16m| 07:55m\n--------------------------------------------------------------\nContainer | 26:02m | 16:34m\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 10 Jan 2023 15:37:17 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" },
{ "msg_contents": "Hi,\n\nIt didn't work again. Sending numbers until I figure out how to solve this.\n\nScheduling Step:\n\nVM + VS 2019: 00.17m\nContainer + VS 2019: 03.51m\n\nVM + MinGW64: 00.16m\nContainer + MinGW64: 04.28m\n\n\nExecution step:\n\nVM + VS 2019: 12.16m\nContainer + VS 2019: 26.02m\n\nVM + MinGW64: 07.55m\nContainer + MinGW64: 16.34m\n\n\nSorry for the multiple mails.\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Tue, 10 Jan 2023 15:51:42 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" },
{ "msg_contents": "On Tue, Jan 10, 2023 at 03:20:18PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> I propose using windows VMs instead of containers, the patch is attached.\n> Currently, windows containers are used on the CI, but these container images\n> are needs to get pulled on every CI run, also they are slow to run.\n\n+1\n\n> There is more than 2x speed gain when VMs are used.\n\nOne consideration is that if windows runs twice as fast, we'll suddenly\nstart using twice as many resources at cirrus/google/amazon - the\nwindows task has been throttling everything else. Not sure if we should\nto do anything beyond the limits that cfbot already uses.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 10 Jan 2023 09:22:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" },
{ "msg_contents": "Hi,\n\nOn 2023-01-10 09:22:12 -0600, Justin Pryzby wrote:\n> > There is more than 2x speed gain when VMs are used.\n> \n> One consideration is that if windows runs twice as fast, we'll suddenly\n> start using twice as many resources at cirrus/google/amazon - the\n> windows task has been throttling everything else. Not sure if we should\n> to do anything beyond the limits that cfbot already uses.\n\nI'm not sure we would. 
cfbot has a time based limit for how often it tries to\nrebuild entries, and I think we were just about keeping up with that. In which\ncase we shouldn't, on average, schedule more jobs than we currently\ndo. Although peak \"job throughput\" would be higher.\n\nThomas?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Jan 2023 11:20:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" }, { "msg_contents": "On Wed, Jan 11, 2023 at 8:20 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-01-10 09:22:12 -0600, Justin Pryzby wrote:\n> > > There is more than 2x speed gain when VMs are used.\n> >\n> > One consideration is that if windows runs twice as fast, we'll suddenly\n> > start using twice as many resources at cirrus/google/amazon - the\n> > windows task has been throttling everything else. Not sure if we should\n> > to do anything beyond the limits that cfbot already uses.\n>\n> I'm not sure we would. cfbot has a time based limit for how often it tries to\n> rebuild entries, and I think we were just about keeping up with that. In which\n> case we shouldn't, on average, schedule more jobs than we currently\n> do. Although peak \"job throughput\" would be higher.\n>\n> Thomas?\n\nIt currently tries to re-test each patch every 24 hours, but doesn't\nachieve that. It looks like it's currently re-testing every ~30\nhours. Justin's right, we'll consume more non-Windows resources if\nWindows speeds up, but not 2x, more like 1.25x when cfbot's own\nthrottling kicks in. 
Or I could change the cycle target to 36 or 48\nhours, to spread the work out more.\n\nBack-of-a-napkin maths:\n\n * there are currently 240 entries in a testable status\n * it takes ~0.5 hours to test (because that's the slow Windows time)\n * therefore it takes ~120 hours to test them all\n * but we can do 4 at a time, so that's ~30 hours to get through them\nall and start again\n * that matches what we see:\n\ncfbot=> select created - lag(created) over (order by created) from\nbranch where submission_id = 4068;\n ?column?\n-----------------------\n\n 1 day 06:30:00.265047\n 1 day 05:43:59.978949\n 1 day 04:13:59.754048\n 1 day 05:28:59.811916\n 1 day 07:00:00.651655\n(6 rows)\n\nIf, with this change, we can test in only ~0.25 hours, then we'll only\nneed 60 hours of Cirrus time to test them all. With a target of\nre-testing every 24 hours, it should now only have to run ~2.5 jobs at\nall times. Having free slots would be kind to Cirrus, and also lower\nthe latency when a new patch is posted (which currently has to wait\nfor a free slot before it can begin). 
Great news.\n\n\n", "msg_date": "Wed, 11 Jan 2023 13:12:46 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" }, { "msg_contents": "On Tue, Jan 10, 2023 at 03:20:18PM +0300, Nazir Bilal Yavuz wrote:\n> Hi,\n> \n> I propose using windows VMs instead of containers, the patch is attached.\n> Currently, windows containers are used on the CI, but these container images\n> are needs to get pulled on every CI run, also they are slow to run.\n\n> @@ -589,8 +591,10 @@ task:\n> # otherwise it'll be sorted before other tasks\n> depends_on: SanityCheck\n> \n> - windows_container:\n> - image: $CONTAINER_REPO/windows_ci_mingw64:latest\n> + compute_engine_instance:\n> + image_project: $IMAGE_PROJECT\n> + image: family/pg-ci-windows-ci-mingw64\n> + platform: windows\n> cpu: $CPUS\n> memory: 4G\n\nIt looks like MinGW currently doesn't have the necessary perl modules:\n\n[19:58:46.356] Message: Can't locate IPC/Run.pm in @INC (you may need to install the IPC::Run module) (@INC contains: C:/msys64/ucrt64/lib/perl5/site_perl/5.32.1 C:/msys64/ucrt64/lib/perl5/site_perl/5.32.1 C:/msys64/ucrt64/lib/perl5/site_perl C:/msys64/ucrt64/lib/perl5/vendor_perl C:/msys64/ucrt64/lib/perl5/core_perl) at config/check_modules.pl line 11.\n[19:58:46.356] BEGIN failed--compilation aborted at config/check_modules.pl line 11.\n[19:58:46.356] meson.build:1337: WARNING: Additional Perl modules are required to run TAP tests.\n\nThat could be caused by a transient failure combined with bad error\nhandling - if there's an error while building the image, it shouldn't be\nuploaded.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 11 Jan 2023 17:21:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 17:21:21 -0600, Justin Pryzby wrote:\n> On Tue, Jan 10, 2023 at 
03:20:18PM +0300, Nazir Bilal Yavuz wrote:\n> > Hi,\n> > \n> > I propose using windows VMs instead of containers, the patch is attached.\n> > Currently, windows containers are used on the CI, but these container images\n> > are needs to get pulled on every CI run, also they are slow to run.\n> \n> > @@ -589,8 +591,10 @@ task:\n> > # otherwise it'll be sorted before other tasks\n> > depends_on: SanityCheck\n> > \n> > - windows_container:\n> > - image: $CONTAINER_REPO/windows_ci_mingw64:latest\n> > + compute_engine_instance:\n> > + image_project: $IMAGE_PROJECT\n> > + image: family/pg-ci-windows-ci-mingw64\n> > + platform: windows\n> > cpu: $CPUS\n> > memory: 4G\n> \n> It looks like MinGW currently doesn't have the necessary perl modules:\n> \n> [19:58:46.356] Message: Can't locate IPC/Run.pm in @INC (you may need to install the IPC::Run module) (@INC contains: C:/msys64/ucrt64/lib/perl5/site_perl/5.32.1 C:/msys64/ucrt64/lib/perl5/site_perl/5.32.1 C:/msys64/ucrt64/lib/perl5/site_perl C:/msys64/ucrt64/lib/perl5/vendor_perl C:/msys64/ucrt64/lib/perl5/core_perl) at config/check_modules.pl line 11.\n> [19:58:46.356] BEGIN failed--compilation aborted at config/check_modules.pl line 11.\n> [19:58:46.356] meson.build:1337: WARNING: Additional Perl modules are required to run TAP tests.\n> \n> That could be caused by a transient failure combined with bad error\n> handling - if there's an error while building the image, it shouldn't be\n> uploaded.\n\nYea, there's a problem where packer on windows doesn't seem to abort after a\npowershell script error out. The reason isn't yet quiete clear. 
I think Bilal\nis working on a workaround.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:30:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" }, { "msg_contents": "Hi,\n\n\nOn 1/12/2023 3:30 AM, Andres Freund wrote:\n> Yea, there's a problem where packer on windows doesn't seem to abort after a\n> powershell script error out. The reason isn't yet quiete clear. I think Bilal\n> is working on a workaround.\n\n\nThat should be fixed now. Also, adding a patch for PG15. There were \nconflicts while applying current patch to the REL_15_STABLE branch.\n\n\nRegards,\nNazir Bilal Yavuz\nMicrosoft", "msg_date": "Thu, 2 Feb 2023 17:47:37 +0300", "msg_from": "Nazir Bilal Yavuz <byavuz81@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" }, { "msg_contents": "Hi,\n\nOn 2023-02-02 17:47:37 +0300, Nazir Bilal Yavuz wrote:\n> On 1/12/2023 3:30 AM, Andres Freund wrote:\n> > Yea, there's a problem where packer on windows doesn't seem to abort after a\n> > powershell script error out. The reason isn't yet quiete clear. I think Bilal\n> > is working on a workaround.\n> \n> \n> That should be fixed now. Also, adding a patch for PG15. There were\n> conflicts while applying current patch to the REL_15_STABLE branch.\n\nAnd pushed! I think an improvement in CI times of this degree is pretty\nawesome.\n\n\nUnfortunately I also noticed that the tap tests on mingw don't run\nanymore, due to IPC::Run not being available. But it's independent of\nthis change. I don't know when that broke. 
Could you check it out?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 2 Feb 2023 21:56:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" }, { "msg_contents": "On Fri, Feb 3, 2023 at 6:57 PM Andres Freund <andres@anarazel.de> wrote:\n> And pushed! I think an improvement in CI times of this degree is pretty\n> awesome.\n\n+1\n\nA lot of CI compute time is saved. The Cirrus account[1] was\npreviously hitting the 4 job limit all day long, and now it's often\nrunning 1 or 2 jobs when I look, and it has space capacity to start a\nnew job immediately if someone posts a new patch. I'll monitor it\nover the next few days but it looks great.\n\n[1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\n\n", "msg_date": "Fri, 3 Feb 2023 22:56:24 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" }, { "msg_contents": "On Fri, Feb 3, 2023 at 3:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Fri, Feb 3, 2023 at 6:57 PM Andres Freund <andres@anarazel.de> wrote:\n> > And pushed! I think an improvement in CI times of this degree is pretty\n> > awesome.\n>\n> +1\n>\n> A lot of CI compute time is saved. The Cirrus account[1] was\n> previously hitting the 4 job limit all day long, and now it's often\n> running 1 or 2 jobs when I look, and it has space capacity to start a\n> new job immediately if someone posts a new patch. I'll monitor it\n> over the next few days but it looks great.\n>\n> [1] https://cirrus-ci.com/github/postgresql-cfbot/postgresql\n\nOh, wow! This commit drastically improved testing time on Windows. It\nwas Windows tests that were always behind in my github repo's CI, now\nI can see it got much faster. 
Thanks for working on this.\n\nWindows tests were taking around 27min\n(https://cirrus-ci.com/build/6060448332644352) before the patch, but\nit came down to 13min (https://cirrus-ci.com/build/6528980879147008)\nafter the patch - Yay! 2X improvement :).\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 4 Feb 2023 17:56:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Use windows VMs instead of windows containers on the CI" } ]
[ { "msg_contents": "The following documentation comment has been logged on the website:\n\nPage: https://www.postgresql.org/docs/15/index.html\nDescription:\n\nhttps://www.postgresql.org/docs/devel/storage-toast.html - This is the\ndevelopment version.\r\n\r\n> PLAIN prevents either compression or out-of-line storage; furthermore it\ndisables use of single-byte headers for varlena types. This is the only\npossible strategy for columns of non-TOAST-able data types.\r\n\r\nHowever, it does allow \"single byte\" headers. How to verify this?\r\n\r\nCREATE EXTENSION pageinspect;\r\nCREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\r\nINSERT INTO test VALUES (repeat('A',10));\r\n\r\nNow peek into the page with pageinspect functions\r\n\r\nSELECT left(encode(t_data, 'hex'), 40) FROM\nheap_page_items(get_raw_page('test', 0));\r\n\r\nThis returned value of \"1741414141414141414141\".\r\nHere the first byte 0x17 = 0001 0111 in binary.\r\nLength + 1 is stored in the length bits (1-7). So Len = 0001011-1 = (11-1)\n[base-10] = 10 [base-10]\r\nwhich exactly matches the expected length. Further the data \"41\" repeated 10\ntimes also indicates character A (65 or 0x41 in ASCII) repeated 10 times.\r\n\r\nSo....This does **not** disable 1-B header. That sentence should be removed\nfrom the documentation unless this is a bug.", "msg_date": "Tue, 10 Jan 2023 15:53:10 +0000", "msg_from": "PG Doc comments form <noreply@postgresql.org>", "msg_from_op": true, "msg_subject": "The documentation for storage type 'plain' actually allows single\n byte header" }, { "msg_contents": "On Tue, 2023-01-10 at 15:53 +0000, PG Doc comments form wrote:\n> https://www.postgresql.org/docs/devel/storage-toast.html - This is the\n> development version.\n> \n> > PLAIN prevents either compression or out-of-line storage; furthermore it\n> > disables use of single-byte headers for varlena types.
This is the only\n> > possible strategy for columns of non-TOAST-able data types.\n> \n> However, it does allow \"single byte\" headers. How to verify this?\n> \n> CREATE EXTENSION pageinspect;\n> CREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\n> INSERT INTO test VALUES (repeat('A',10));\n> \n> Now peek into the page with pageinspect functions\n> \n> SELECT left(encode(t_data, 'hex'), 40) FROM\n> heap_page_items(get_raw_page('test', 0));\n> \n> This returned value of \"1741414141414141414141\".\n> Here the first byte 0x17 = 0001 0111 in binary.\n> Length + 1 is stored in the length bits (1-7). So Len = 0001011-1 = (11-1)\n> [base-10] = 10 [base-10]\n> which exactly matches the expected length. Further the data \"41\" repeated 10\n> times also indicates character A (65 or 0x41 in ASCII) repeated 10 times.\n> \n> So....This does **not** disable 1-B header. That sentence should be removed\n> from the documentation unless this is a bug.\n\nI think that the documentation is wrong. The attached patch removes the\noffending half-sentence.\n\nYours,\nLaurenz Albe", "msg_date": "Thu, 12 Jan 2023 15:43:57 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Tue, 2023-01-10 at 15:53 +0000, PG Doc comments form wrote:\n>>> PLAIN prevents either compression or out-of-line storage; furthermore it\n>>> disables use of single-byte headers for varlena types. This is the only\n>>> possible strategy for columns of non-TOAST-able data types.\n\n>> However, it does allow \"single byte\" headers.
How to verify this?\n>> CREATE EXTENSION pageinspect;\n>> CREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\n>> INSERT INTO test VALUES (repeat('A',10));\n>> \n>> Now peek into the page with pageinspect functions\n>> \n>> SELECT left(encode(t_data, 'hex'), 40) FROM\n>> heap_page_items(get_raw_page('test', 0));\n>> \n>> This returned value of \"1741414141414141414141\".\n\n> I think that the documentation is wrong. The attached patch removes the\n> offending half-sentence.\n\nThe documentation is correct, what is broken is the code. I'm not\nsure when we broke it, but what I see in tracing through the INSERT\nis that we are forming the tuple using a tupdesc with the wrong\nvalue of attstorage. It looks like the tupdesc belongs to the\nvirtual slot representing the output of the INSERT statement,\nwhich is not identical to the target relation's tupdesc.\n\n(The virtual slot's tupdesc is probably reverse-engineered from\njust the data types of the columns, so it'll have whatever is the\ndefault attstorage for the data type. It's blind luck that this\nattstorage value isn't used for anything more consequential,\nlike TOAST decisions.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 16:40:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows single\n byte header" }, { "msg_contents": "Hi,\n\nOn 2023-01-15 16:40:27 -0500, Tom Lane wrote:\n> The documentation is correct, what is broken is the code. I'm not\n> sure when we broke it\n\nLooks to be an old issue, predating the slot type stuff. It reproduces at\nleast as far back as 10.\n\nI've not thought through this fully. But after a first look, this might be\nhard to fix without incurring a lot of overhead / complexity. We check whether\nprojection is needed between nodes with tlist_matches_tupdesc() - targetlists\ndon't know about storage.
And we decide whether we need to project in\nnodeModifyTuple solely based on\n\n\t/* Extract non-junk columns of the subplan's result tlist. */\n\tforeach(l, subplan->targetlist)\n\t{\n\t\tTargetEntry *tle = (TargetEntry *) lfirst(l);\n\n\t\tif (!tle->resjunk)\n\t\t\tinsertTargetList = lappend(insertTargetList, tle);\n\t\telse\n\t\t\tneed_projection = true;\n\t}\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 15 Jan 2023 15:03:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-15 16:40:27 -0500, Tom Lane wrote:\n>> The documentation is correct, what is broken is the code. I'm not\n>> sure when we broke it\n\n> I've not thought through this fully. But after a first look, this might be\n> hard to fix without incuring a lot of overhead / complexity.\n\nIt appeared to me that it was failing at this step in\nExecGetInsertNewTuple:\n\n    if (relinfo->ri_newTupleSlot->tts_ops != planSlot->tts_ops)\n    {\n        ExecCopySlot(relinfo->ri_newTupleSlot, planSlot);\n        return relinfo->ri_newTupleSlot;\n    }\n\nri_newTupleSlot has the tupdesc we want, planSlot is a virtual slot\nthat has the bogus tupdesc, and for some reason heap_form_tuple is\ngetting called with planSlot's tupdesc not ri_newTupleSlot's.
I'm\nnot quite sure if this is just a thinko somewhere or there's a\ndeficiency in the design of the slot APIs.\n\nThe UPDATE path seems to work fine, btw.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 18:08:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows single\n byte header" }, { "msg_contents": "Hi,\n\nOn 2023-01-15 18:08:21 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-01-15 16:40:27 -0500, Tom Lane wrote:\n> >> The documentation is correct, what is broken is the code. I'm not\n> >> sure when we broke it\n>\n> > I've not thought through this fully. But after a first look, this might be\n> > hard to fix without incuring a lot of overhead / complexity.\n>\n> It appeared to me that it was failing at this step in\n> ExecGetInsertNewTuple:\n>\n>     if (relinfo->ri_newTupleSlot->tts_ops != planSlot->tts_ops)\n>     {\n>         ExecCopySlot(relinfo->ri_newTupleSlot, planSlot);\n>         return relinfo->ri_newTupleSlot;\n>     }\n>\n> ri_newTupleSlot has the tupdesc we want, planSlot is a virtual slot\n> that has the bogus tupdesc, and for some reason heap_form_tuple is\n> getting called with planSlot's tupdesc not ri_newTupleSlot's.\n\nThe way we copy a slot into a heap slot is to materialize the source slot and\ncopy the heap tuple into target slot. Which is also what happened before the\nslot type abstraction (hence the problem also existing before that was\nintroduced).\n\n\n> I'm not quite sure if this is just a thinko somewhere or there's a\n> deficiency in the design of the slot APIs.\n\nI think it's fairly fundamental that copying between two slots assumes a\ncompatible tupdescs.\n\nI think the problem is more in the determination whether we need to project,\nor not (i.e. ExecInitInsertProjection()).
But we can't really make a good\ndecision, because we just determine the types of \"incoming\" tuples based on\ntargetlists, which don't contain information about the storage type.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 15 Jan 2023 15:19:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-15 18:08:21 -0500, Tom Lane wrote:\n>> ri_newTupleSlot has the tupdesc we want, planSlot is a virtual slot\n>> that has the bogus tupdesc, and for some reason heap_form_tuple is\n>> getting called with planSlot's tupdesc not ri_newTupleSlot's.\n\n> The way we copy a slot into a heap slot is to materialize the source slot and\n> copy the heap tuple into target slot. Which is also what happened before the\n> slot type abstraction (hence the problem also existing before that was\n> introduced).\n\nHmm. For the case of virtual->physical slot, that doesn't sound\nterribly efficient.\n\n> I think it's fairly fundamental that copying between two slots assumes a\n> compatible tupdescs.\n\nWe could possibly make some effort to inject the desired attstorage\nproperties into the planSlot's tupdesc.
Not sure where would be a\ngood place.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 18:41:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows single\n byte header" }, { "msg_contents": "Hi,\n\nOn 2023-01-15 18:41:22 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-01-15 18:08:21 -0500, Tom Lane wrote:\n> >> ri_newTupleSlot has the tupdesc we want, planSlot is a virtual slot\n> >> that has the bogus tupdesc, and for some reason heap_form_tuple is\n> >> getting called with planSlot's tupdesc not ri_newTupleSlot's.\n>\n> > The way we copy a slot into a heap slot is to materialize the source slot and\n> > copy the heap tuple into target slot. Which is also what happened before the\n> > slot type abstraction (hence the problem also existing before that was\n> > introduced).\n>\n> Hmm. For the case of virtual->physical slot, that doesn't sound\n> terribly efficient.\n\nIt's ok, I think. For virtual->heap we form the tuple in the context of the\ndestination heap slot. I don't think we could avoid creating a HeapTuple. I\nguess we could try to avoid needing to deform the heap tuple again in the\ntarget slot, but I'm not sure that's worth the complexity (we'd need to\nreadjust by-reference datums to point into the heap tuple). It might be worth\nadding a version of ExecCopySlot() that explicitly does that, I think it could\nbe useful for some executor nodes that know that columns will be accessed\nimmediately after.\n\n\n> > I think it's fairly fundamental that copying between two slots assumes a\n> > compatible tupdescs.\n>\n> We could possibly make some effort to inject the desired attstorage\n> properties into the planSlot's tupdesc. Not sure where would be a\n> good place.\n\nI'm not sure that'd get us very far.
Consider the case of\nINSERT INTO table_using_plain SELECT * FROM table_using_extended;\n\nIn that case we just deal with heap tuples coming in, without a need to\nproject, without a need to copy from one slot to another.\n\n\nI don't see how we can fix this mess entirely without tracking the storage\ntype a lot more widely. Most importantly in targetlists, as we use the\ntargetlists to compute the tupledescs of executor nodes, which then influence\nwhere we build projections.\n\n\nGiven that altering a column to PLAIN doesn't rewrite the table, we already\nhave to be prepared to receive short or compressed varlenas, even after\nsetting STORAGE to PLAIN.\n\nI think we should consider just reformulating the \"furthermore it disables use\nof single-byte headers for varlena types\" portion to say that short varlenas\nare disabled for non-toastable datatypes. I don't see much point in investing\na lot of complexity making this a hard restriction. Afaict the only point in\nchanging to PLAIN is to disallow external storage and compression, which it\nachieves even when using short varlenas.\n\nThe compression bit is a bit worse, I guess. We probably have the same problem\nwith EXTERNAL, which supposedly doesn't allow compression - but I don't think\nwe have code ensuring that we decompress in-line datums. It'll end up\nhappening if there's other columns that get newly compressed or stored\nexternally, but not guaranteed.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 15 Jan 2023 16:49:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "Hi,\n\nOn 2023-01-15 16:49:01 -0800, Andres Freund wrote:\n> I don't see how we can fix this mess entirely without tracking the storage\n> type a lot more widely.
Most importantly in targetlists, as we use the\n> targetlists to compute the tupledescs of executor nodes, which then influence\n> where we build projections.\n> \n> \n> Given that altering a column to PLAIN doesn't rewrite the table, we already\n> have to be prepared to receive short or compressed varlenas, even after\n> setting STORAGE to PLAIN.\n> \n> I think we should consider just reformulating the \"furthermore it disables use\n> of single-byte headers for varlena types\" portion to say that short varlenas\n> are disabled for non-toastable datatypes. I don't see much point in investing\n> a lot of complexity making this a hard restriction. Afaict the only point in\n> changing to PLAIN is to disallow external storage and compression, which it\n> achieves eved when using short varlenas.\n> \n> The compression bit is a bit worse, I guess. We probably have the same problem\n> with EXTERNAL, which supposedly doesn't allow compression - but I don't think\n> we have code ensuring that we decompress in-line datums. It'll end up\n> happening if there's other columns that get newly compressed or stored\n> externally, but not guaranteed.\n\nOne way we could deal with it would be to force the tuple to be processed by\nheap_toast_insert_or_update() when there's a difference between typstorage and\nattstorage. I think to make that cheap enough to determine, we'd have to cache\nthat information in the relcache.
I haven't thought it through, but I suspect\nit'd be problematic to add a pg_type lookup to RelationBuildTupleDesc(),\nleading to building that information on demand later.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 15 Jan 2023 17:08:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "On Sun, 2023-01-15 at 16:40 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Tue, 2023-01-10 at 15:53 +0000, PG Doc comments form wrote:\n> > > > PLAIN prevents either compression or out-of-line storage; furthermore it\n> > > > disables use of single-byte headers for varlena types. This is the only\n> > > > possible strategy for columns of non-TOAST-able data types.\n> \n> > > However, it does allow \"single byte\" headers. How to verify this?\n> > > CREATE EXTENSION pageinspect;\n> > > CREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\n> > > INSERT INTO test VALUES (repeat('A',10));\n> > > \n> > > Now peek into the page with pageinspect functions\n> > > \n> > > SELECT left(encode(t_data, 'hex'), 40) FROM\n> > > heap_page_items(get_raw_page('test', 0));\n> > > \n> > > This returned value of \"1741414141414141414141\".\n\n> > I think that the documentation is wrong. The attached patch removes the\n> > offending half-sentence.\n\n> The documentation is correct, what is broken is the code.\n\nI see. But what is the reason for that anyway?
Why not allow short varlena\nheaders if TOAST storage is set to PLAIN?\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Mon, 16 Jan 2023 14:07:48 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Sun, 2023-01-15 at 16:40 -0500, Tom Lane wrote:\n>> The documentation is correct, what is broken is the code.\n\n> I see. But what is the reason for that anyway? Why not allow short varlena\n> headers if TOAST storage is set to PLAIN?\n\nThe original motivation for that whole mechanism was to protect data\ntypes for which the C functions haven't been upgraded to support\nnon-traditional varlena headers. So I was worried that this behavior\nwould somehow break those cases (which still exist, eg oidvector and\nint2vector). However, the thing that actually marks such a datatype\nis that pg_type.typstorage is PLAIN, and as far as I can find we do\nstill honor that case in full. If that's the case then every tupdesc\nwe ever create for such a column will say PLAIN, so there's no\nopportunity for the wrong thing to happen.\n\nSo maybe it's okay to move the goalposts and acknowledge that setting\nattstorage to PLAIN isn't a complete block on applying toast-related\ntransformations. I wonder though whether short-header is the only\ncase that can slide through. In particular, for \"INSERT ... SELECT\nFROM othertable\", I suspect it's possible for a compressed-in-line\ndatum to slide through without decompression. (We certainly must\nfix out-of-line datums, but that doesn't necessarily mean we undo\ncompression.)
So I'm not convinced that the proposed wording is\nfully correct yet.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Jan 2023 11:50:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows single\n byte header" }, { "msg_contents": "On Mon, 2023-01-16 at 11:50 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Sun, 2023-01-15 at 16:40 -0500, Tom Lane wrote:\n> > > The documentation is correct, what is broken is the code.\n> \n> > I see.  But what is the reason for that anyway?  Why not allow short varlena\n> > headers if TOAST storage is set to PLAIN?\n> \n> The original motivation for that whole mechanism was to protect data\n> types for which the C functions haven't been upgraded to support\n> non-traditional varlena headers.  So I was worried that this behavior\n> would somehow break those cases (which still exist, eg oidvector and\n> int2vector).  However, the thing that actually marks such a datatype\n> is that pg_type.typstorage is PLAIN, and as far as I can find we do\n> still honor that case in full.  If that's the case then every tupdesc\n> we ever create for such a column will say PLAIN, so there's no\n> opportunity for the wrong thing to happen.\n> \n> So maybe it's okay to move the goalposts and acknowledge that setting\n> attstorage to PLAIN isn't a complete block on applying toast-related\n> transformations.  I wonder though whether short-header is the only\n> case that can slide through.  In particular, for \"INSERT ... SELECT\n> FROM othertable\", I suspect it's possible for a compressed-in-line\n> datum to slide through without decompression.  (We certainly must\n> fix out-of-line datums, but that doesn't necessarily mean we undo\n> compression.)
So I'm not convinced that the proposed wording is\n> fully correct yet.\n\nI see, thanks for the explanation.\n\nSince the only storage format I have ever had use for are EXTENDED\nand EXTERNAL, it is not very important for me if PLAIN supports short\nheaders or not. Since single-byte headers are part of the TOAST\nmechanism (and documented as such), it makes sense to disable them\nin PLAIN. Then the documentation could describe PLAIN as\n\"skip all TOAST processing\".\n\nSo we should probably go with the simplest fix that restores\nconsistency.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Tue, 17 Jan 2023 09:05:34 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "On Thu, Jan 12, 2023 at 03:43:57PM +0100, Laurenz Albe wrote:\n> On Tue, 2023-01-10 at 15:53 +0000, PG Doc comments form wrote:\n> > https://www.postgresql.org/docs/devel/storage-toast.html - This is the\n> > development version.\n> > \n> > > PLAIN prevents either compression or out-of-line storage; furthermore it\n> > > disables use of single-byte headers for varlena types. This is the only\n> > > possible strategy for columns of non-TOAST-able data types.\n> > \n> > However, it does allow \"single byte\" headers. How to verify this?\n> > \n> > CREATE EXTENSION pageinspect;\n> > CREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\n> > INSERT INTO test VALUES (repeat('A',10));\n> > \n> > Now peek into the page with pageinspect functions\n> > \n> > SELECT left(encode(t_data, 'hex'), 40) FROM\n> > heap_page_items(get_raw_page('test', 0));\n> > \n> > This returned value of \"1741414141414141414141\".\n> > Here the first byte 0x17 = 0001 0111 in binary.\n> > Length + 1 is stored in the length bits (1-7). So Len = 0001011-1 = (11-1)\n> > [base-10] = 10 [base-10]\n> > which exactly matches the expected length.
Further the data \"41\" repeated 10\n> > times also indicates character A (65 or 0x41 in ASCII) repeated 10 times.\n> > \n> > So....This does **not** disable 1-B header. That sentence should be removed\n> > from the documentation unless this is a bug.\n> \n> I think that the documentation is wrong. The attached patch removes the\n> offending half-sentence.\n> \n> Yours,\n> Laurenz Albe\n\n> From 5bf0b43fe73384a21f59d9ad1f7a8d7cbc81f8c4 Mon Sep 17 00:00:00 2001\n> From: Laurenz Albe <laurenz.albe@cybertec.at>\n> Date: Thu, 12 Jan 2023 15:41:56 +0100\n> Subject: [PATCH] Fix documentation for STORAGE PLAIN\n> \n> Commit 3e23b68dac0, which introduced single-byte varlena headers,\n> added documentation that STORAGE PLAIN would prevent such single-byte\n> headers. This has never been true.\n> ---\n> doc/src/sgml/storage.sgml | 4 +---\n> 1 file changed, 1 insertion(+), 3 deletions(-)\n> \n> diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml\n> index e5b9f3f1ff..4795a485d0 100644\n> --- a/doc/src/sgml/storage.sgml\n> +++ b/doc/src/sgml/storage.sgml\n> @@ -456,9 +456,7 @@ for storing <acronym>TOAST</acronym>-able columns on disk:\n> <listitem>\n> <para>\n> <literal>PLAIN</literal> prevents either compression or\n> - out-of-line storage; furthermore it disables use of single-byte headers\n> - for varlena types.\n> - This is the only possible strategy for\n> + out-of-line storage. This is the only possible strategy for\n> columns of non-<acronym>TOAST</acronym>-able data types.\n> </para>\n> </listitem>\n> -- \n> 2.39.0\n> \n\nWhere did we end with this?
Is a doc patch the solution?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 29 Sep 2023 18:19:58 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "On Fri, 2023-09-29 at 18:19 -0400, Bruce Momjian wrote:\n> On Thu, Jan 12, 2023 at 03:43:57PM +0100, Laurenz Albe wrote:\n> > On Tue, 2023-01-10 at 15:53 +0000, PG Doc comments form wrote:\n> > > https://www.postgresql.org/docs/devel/storage-toast.html - This is the\n> > > development version.\n> > > \n> > > > PLAIN prevents either compression or out-of-line storage; furthermore it\n> > > > disables use of single-byte headers for varlena types. This is the only\n> > > > possible strategy for columns of non-TOAST-able data types.\n> > > \n> > > However, it does allow \"single byte\" headers. How to verify this?\n> > > \n> > > CREATE EXTENSION pageinspect;\n> > > CREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\n> > > INSERT INTO test VALUES (repeat('A',10));\n> > > \n> > > Now peek into the page with pageinspect functions\n> > > \n> > > SELECT left(encode(t_data, 'hex'), 40) FROM\n> > > heap_page_items(get_raw_page('test', 0));\n> > > \n> > > This returned value of \"1741414141414141414141\".\n> > > Here the first byte 0x17 = 0001 0111 in binary.\n> > > Length + 1 is stored in the length bits (1-7). So Len = 0001011-1 = (11-1)\n> > > [base-10] = 10 [base-10]\n> > > which exactly matches the expected length. Further the data \"41\" repeated 10\n> > > times also indicates character A (65 or 0x41 in ASCII) repeated 10 times.\n> > > \n> > > So....This does **not** disable 1-B header. That sentence should be removed\n> > > from the documentation unless this is a bug.\n> > \n> > I think that the documentation is wrong.
The attached patch removes the\n> > offending half-sentence.\n> > \n> > Yours,\n> > Laurenz Albe\n> \n> > From 5bf0b43fe73384a21f59d9ad1f7a8d7cbc81f8c4 Mon Sep 17 00:00:00 2001\n> > From: Laurenz Albe <laurenz.albe@cybertec.at>\n> > Date: Thu, 12 Jan 2023 15:41:56 +0100\n> > Subject: [PATCH] Fix documentation for STORAGE PLAIN\n> > \n> > Commit 3e23b68dac0, which introduced single-byte varlena headers,\n> > added documentation that STORAGE PLAIN would prevent such single-byte\n> > headers.  This has never been true.\n> > ---\n> >  doc/src/sgml/storage.sgml | 4 +---\n> >  1 file changed, 1 insertion(+), 3 deletions(-)\n> > \n> > diff --git a/doc/src/sgml/storage.sgml b/doc/src/sgml/storage.sgml\n> > index e5b9f3f1ff..4795a485d0 100644\n> > --- a/doc/src/sgml/storage.sgml\n> > +++ b/doc/src/sgml/storage.sgml\n> > @@ -456,9 +456,7 @@ for storing <acronym>TOAST</acronym>-able columns on disk:\n> >      <listitem>\n> >       <para>\n> >        <literal>PLAIN</literal> prevents either compression or\n> > -      out-of-line storage; furthermore it disables use of single-byte headers\n> > -      for varlena types.\n> > -      This is the only possible strategy for\n> > +      out-of-line storage.  This is the only possible strategy for\n> >        columns of non-<acronym>TOAST</acronym>-able data types.\n> >       </para>\n> >      </listitem>\n> > -- \n> > 2.39.0\n> > \n> \n> Where did we end with this?
Is a doc patch the solution?\n\nI don't think this went anywhere, and a doc patch is not the solution.\n\nTom has argued convincingly that single-byte headers are an effect of the TOAST\nsystem, and that STORAGE PLAIN should disable all effects of TOAST.\n\nSo this would need a code patch.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sat, 30 Sep 2023 00:35:00 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Fri, 2023-09-29 at 18:19 -0400, Bruce Momjian wrote:\n>> Where did we end with this? Is a doc patch the solution?\n\n> I don't think this went anywhere, and a doc patch is not the solution.\n> Tom has argued convincingly that single-byte headers are an effect of the TOAST\n> system, and that STORAGE PLAIN should disable all effects of TOAST.\n\nWell, that was the original idea: you could use STORAGE PLAIN if you\nhad C code that wasn't yet toast-aware. However, given the lack of\ncomplaints, it seems there's no non-toast-aware code left anywhere.\nAnd that's not too surprising, because the evolutionary pressure to\nfix such code would be mighty strong, and a lot of time has passed.\n\nI'm now inclined to think that changing the docs is better than\nchanging the code; we'd be more likely to create new problems than\nfix anything useful.\n\nI wonder though if there's really just one place claiming that\nthat's how it works.
A trawl through the code comments might\nbe advisable.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 29 Sep 2023 18:45:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows single\n byte header" }, { "msg_contents": "On Fri, Sep 29, 2023 at 06:45:52PM -0400, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Fri, 2023-09-29 at 18:19 -0400, Bruce Momjian wrote:\n> >> Where did we end with this? Is a doc patch the solution?\n> \n> > I don't think this went anywhere, and a doc patch is not the solution.\n> > Tom has argued convincingly that single-byte headers are an effect of the TOAST\n> > system, and that STORAGE PLAIN should disable all effects of TOAST.\n> \n> Well, that was the original idea: you could use STORAGE PLAIN if you\n> had C code that wasn't yet toast-aware. However, given the lack of\n> complaints, it seems there's no non-toast-aware code left anywhere.\n> And that's not too surprising, because the evolutionary pressure to\n> fix such code would be mighty strong, and a lot of time has passed.\n> \n> I'm now inclined to think that changing the docs is better than\n> changing the code; we'd be more likely to create new problems than\n> fix anything useful.\n> \n> I wonder though if there's really just one place claiming that\n> that's how it works. A trawl through the code comments might\n> be advisable.\n\n[ Discussion moved to hackers, same subject.
]\n\nHere is the original thread from pgsql-docs:\n\n\thttps://www.postgresql.org/message-id/flat/167336599095.2667301.15497893107226841625%40wrigleys.postgresql.org\n\nThe report is about single-byte headers being used for varlena values\nwith PLAIN storage.\n\nHere is the reproducible report:\n\n\tCREATE EXTENSION pageinspect;\n\tCREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\n\tINSERT INTO test VALUES (repeat('A',10));\n\t\n\tNow peek into the page with pageinspect functions\n\t\n\tSELECT left(encode(t_data, 'hex'), 40) FROM\n\theap_page_items(get_raw_page('test', 0));\n\t\n\tThis returned value of \"1741414141414141414141\".\n\tHere the first byte 0x17 = 0001 0111 in binary.\n\tLength + 1 is stored in the length bits (1-7). So Len = 0001011-1 = (11-1)\n\t[base-10] = 10 [base-10]\n\twhich exactly matches the expected length. Further the data \"41\" repeated 10\n\ttimes also indicates character A (65 or 0x41 in ASCII) repeated 10 times.\n\nI researched this and thought it would be a case where we were lacking a\ncheck before creating a single-byte header, but I couldn't find anything\nmissing. I think the problem is that the _source_ tupleDesc attstorage\nattribute is being used to decide if we should use a short header, while\nit is really the storage type of the destination that we should be\nchecking. Unfortunately, I don't think the destination is accessible at\nthe location where we are deciding about a short header.\n\nI am confused how to proceed. I feel we need to fully understand why\nthis is happening before we adjust anything.
Here is a backtrace --- the\nshort header is being created in fill_val() and the attstorage value\nthere is 'x'/EXTENDED.\n\n---------------------------------------------------------------------------\n\n#0 fill_val (att=0x56306f61dae8, bit=0x0, bitmask=0x7ffcfcfc1fb4, dataP=0x7ffcfcfc1f90, infomask=0x56306f61e25c, datum=94766026487048, isnull=false) at heaptuple.c:278\n#1 0x000056306e7800eb in heap_fill_tuple (tupleDesc=0x56306f61dad0, values=0x56306f61dc20, isnull=0x56306f61dc28, data=0x56306f61e260 \"\", data_size=11, infomask=0x56306f61e25c, bit=0x0) at heaptuple.c:427\n#2 0x000056306e781708 in heap_form_tuple (tupleDescriptor=0x56306f61dad0, values=0x56306f61dc20, isnull=0x56306f61dc28) at heaptuple.c:1181\n#3 0x000056306ea13dcb in tts_virtual_copy_heap_tuple (slot=0x56306f61dbd8) at execTuples.c:280\n#4 0x000056306ea1346e in ExecCopySlotHeapTuple (slot=0x56306f61dbd8) at ../../../src/include/executor/tuptable.h:463\n#5 0x000056306ea14928 in tts_buffer_heap_copyslot (dstslot=0x56306f61e1a8, srcslot=0x56306f61dbd8) at execTuples.c:798\n#6 0x000056306ea4342e in ExecCopySlot (dstslot=0x56306f61e1a8, srcslot=0x56306f61dbd8) at ../../../src/include/executor/tuptable.h:487\n#7 0x000056306ea44785 in ExecGetInsertNewTuple (relinfo=0x56306f61d678, planSlot=0x56306f61dbd8) at nodeModifyTable.c:685\n#8 0x000056306ea49123 in ExecModifyTable (pstate=0x56306f61d470) at nodeModifyTable.c:3789\n#9 0x000056306ea0ef3c in ExecProcNodeFirst (node=0x56306f61d470) at execProcnode.c:464\n#10 0x000056306ea03702 in ExecProcNode (node=0x56306f61d470) at ../../../src/include/executor/executor.h:273\n#11 0x000056306ea05fe2 in ExecutePlan (estate=0x56306f61d228, planstate=0x56306f61d470, use_parallel_mode=false, operation=CMD_INSERT, sendTuples=false, numberTuples=0, direction=ForwardScanDirection, dest=0x56306f588170, execute_once=true) at execMain.c:1670\n#12 0x000056306ea03c63 in standard_ExecutorRun (queryDesc=0x56306f527888, direction=ForwardScanDirection, count=0, execute_once=true) 
at execMain.c:365\n#13 0x000056306ea03aee in ExecutorRun (queryDesc=0x56306f527888, direction=ForwardScanDirection, count=0, execute_once=true) at execMain.c:309\n#14 0x000056306ec70cf5 in ProcessQuery (plan=0x56306f588020, sourceText=0x56306f552a98 \"INSERT INTO test VALUES (repeat('A',10));\", params=0x0, queryEnv=0x0, dest=0x56306f588170, qc=0x7ffcfcfc25c0) at pquery.c:160\n#15 0x000056306ec72514 in PortalRunMulti (portal=0x56306f5cccf8, isTopLevel=true, setHoldSnapshot=false, dest=0x56306f588170, altdest=0x56306f588170, qc=0x7ffcfcfc25c0) at pquery.c:1277\n#16 0x000056306ec71b3a in PortalRun (portal=0x56306f5cccf8, count=9223372036854775807, isTopLevel=true, run_once=true, dest=0x56306f588170, altdest=0x56306f588170, qc=0x7ffcfcfc25c0) at pquery.c:791\n#17 0x000056306ec6b465 in exec_simple_query (query_string=0x56306f552a98 \"INSERT INTO test VALUES (repeat('A',10));\") at postgres.c:1273\n#18 0x000056306ec6fdd3 in PostgresMain (dbname=0x56306f58ab88 \"test\", username=0x56306f50ee68 \"postgres\") at postgres.c:4657\n#19 0x000056306ebb304c in BackendRun (port=0x56306f57dc20) at postmaster.c:4423\n#20 0x000056306ebb26fc in BackendStartup (port=0x56306f57dc20) at postmaster.c:4108\n#21 0x000056306ebaf134 in ServerLoop () at postmaster.c:1767\n#22 0x000056306ebaead5 in PostmasterMain (argc=1, argv=0x56306f50ce30) at postmaster.c:1466\n#23 0x000056306ea8108c in main (argc=1, argv=0x56306f50ce30) at main.c:198\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 20 Oct 2023 21:48:05 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "On Fri, Oct 20, 2023 at 09:48:05PM -0400, Bruce Momjian wrote:\n> Here is the original thread from pgsql-docs:\n> \n> 
\thttps://www.postgresql.org/message-id/flat/167336599095.2667301.15497893107226841625%40wrigleys.postgresql.org\n> \n> The report is about single-byte headers being used for varlena values\n> with PLAIN storage.\n> \n> Here is the reproducible report:\n> \n> \tCREATE EXTENSION pageinspect;\n> \tCREATE TABLE test(a VARCHAR(10000) STORAGE PLAIN);\n> \tINSERT INTO test VALUES (repeat('A',10));\n> \t\n> \tNow peek into the page with pageinspect functions\n> \t\n> \tSELECT left(encode(t_data, 'hex'), 40) FROM\n> \theap_page_items(get_raw_page('test', 0));\n> \t\n> \tThis returned value of \"1741414141414141414141\".\n> \tHere the first byte 0x17 = 0001 0111 in binary.\n> \tLength + 1 is stored in the length bits (1-7). So Len = 0001011-1 = (11-1)\n> \t[base-10] = 10 [base-10]\n> \twhich exactly matches the expected length. Further the data \"41\" repeated 10\n> \ttimes also indicates character A (65 or 0x41 in ASCII) repeated 10 times.\n> \n> I researched this and thought it would be a case where we were lacking a\n> check before creating a single-byte header, but I couldn't find anything\n> missing. I think the problem is that the _source_ tupleDesc attstorage\n> attribute is being used to decide if we should use a short header, while\n> it is really the storage type of the destination that we should be\n> checking. Unfortunately, I don't think the destination is accessible at\n> the location were we are deciding about a short header.\n> \n> I am confused how to proceed. I feel we need to fully understand why\n> this happening before we adjust anything. Here is a backtrace --- the\n> short header is being created in fill_val() and the attstorage value\n> there is 'x'/EXTENDED.\n\nI did some more research. 
It turns out that the source slot/planSlot is\npopulating its pg_attribute information via makeTargetEntry() and it\nhas no concept of a storage type.\n\nDigging further, I found that we cannot get rid of the use of\natt->attstorage != TYPSTORAGE_PLAIN in macros ATT_IS_PACKABLE and\nVARLENA_ATT_IS_PACKABLE macros in src/backend/access/common/heaptuple.c\nbecause there are internal uses of fill_val() that can't handle packed\nvarlena headers.\n\nI ended up with a doc patch that adds a C comment about this odd\nbehavior and removes doc text about PLAIN storage not using packed\nheaders.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Sat, 21 Oct 2023 21:56:13 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "On Sat, Oct 21, 2023 at 09:56:13PM -0400, Bruce Momjian wrote:\n> I did some more research. 
It turns out that the source slot/planSlot is\n> populating its pg_attribute information via makeTargetEntry() and it\n> has no concept of a storage type.\n> \n> Digging further, I found that we cannot get rid of the use of\n> att->attstorage != TYPSTORAGE_PLAIN in macros ATT_IS_PACKABLE and\n> VARLENA_ATT_IS_PACKABLE macros in src/backend/access/common/heaptuple.c\n> because there are internal uses of fill_val() that can't handle packed\n> varlena headers.\n> \n> I ended up with a doc patch that adds a C comment about this odd\n> behavior and removes doc text about PLAIN storage not using packed\n> headers.\n\nOops, patch attached.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.", "msg_date": "Sat, 21 Oct 2023 21:59:04 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" }, { "msg_contents": "On Sat, Oct 21, 2023 at 09:59:04PM -0400, Bruce Momjian wrote:\n> On Sat, Oct 21, 2023 at 09:56:13PM -0400, Bruce Momjian wrote:\n> > I did some more research. 
It turns out that the source slot/planSlot is\n> > populating its pg_attribute information via makeTargetEntry() and it\n> > has no concept of a storage type.\n> > \n> > Digging further, I found that we cannot get rid of the use of\n> > att->attstorage != TYPSTORAGE_PLAIN in macros ATT_IS_PACKABLE and\n> > VARLENA_ATT_IS_PACKABLE macros in src/backend/access/common/heaptuple.c\n> > because there are internal uses of fill_val() that can't handle packed\n> > varlena headers.\n> > \n> > I ended up with a doc patch that adds a C comment about this odd\n> > behavior and removes doc text about PLAIN storage not using packed\n> > headers.\n> \n> Oops, patch attached.\n\nPatch applied to all supported versions.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Tue, 31 Oct 2023 09:10:56 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: The documentation for storage type 'plain' actually allows\n single byte header" } ]
[ { "msg_contents": "This question is about ClockSweepTick function and the code is below.\nhttps://github.com/postgres/postgres/blob/24d2b2680a8d0e01b30ce8a41c4eb3b47aca5031/src/backend/storage/buffer/freelist.c#L146-L165\n\n The value of expected, NBuffers, wrapped variable is fixed in the while\nloop, so that when the value of expected variable is not equal to\nStrategyControl->nextVictimBuffer, CAS operation fails and the while loop\nwill be run kind-of infinitely.\nIt is possible for this problem to occur when ClockSweepTick function is\nconcurrently called and nextVictimBuffer is incremented by other process\nbefore CAS operation in the loop (ex: in this case, the value of expected\nvariable is NBuffers+1 while the value of nextVictimBuffer variable is\nNBuffers+2. so CAS operation fails)\nI think. `expected = originalVictim + 1;` line should be in while loop\n(before acquiring spin lock) so that, even in the case above, expected\nvariable is incremented for each loop and CAS operation will be successful\nat some point.\nIs my understanding correct? If so, I will send PR for fixing this issue.\n\nThank you in advance\nHayato Shiba\n\nThis question is about ClockSweepTick function and the code is below.https://github.com/postgres/postgres/blob/24d2b2680a8d0e01b30ce8a41c4eb3b47aca5031/src/backend/storage/buffer/freelist.c#L146-L165 The value of expected, NBuffers, wrapped variable is fixed in the while loop, so that when the value of expected variable is not equal to StrategyControl->nextVictimBuffer, CAS operation fails and the while loop will be run kind-of infinitely. It is possible for this problem to occur when ClockSweepTick function is concurrently called and nextVictimBuffer is incremented by other process before CAS operation in the loop (ex: in this case, the value of expected variable is NBuffers+1 while the value of nextVictimBuffer variable is NBuffers+2. so CAS operation fails) I think. 
`expected = originalVictim + 1;` line should be in while loop (before acquiring spin lock) so that, even in the case above, expected variable is incremented for each loop and CAS operation will be successful at some point. Is my understanding correct? If so, I will send PR for fixing this issue.Thank you in advanceHayato Shiba", "msg_date": "Wed, 11 Jan 2023 01:25:06 +0900", "msg_from": "=?UTF-8?B?5pav5rOi6Zq85paX?= <shibahayaton@gmail.com>", "msg_from_op": true, "msg_subject": "can while loop in ClockSweepTick function be kind of infinite loop in\n some cases?" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 01:25:06 +0900, 斯波隼斗 wrote:\n> This question is about ClockSweepTick function and the code is below.\n> https://github.com/postgres/postgres/blob/24d2b2680a8d0e01b30ce8a41c4eb3b47aca5031/src/backend/storage/buffer/freelist.c#L146-L165\n> \n> The value of expected, NBuffers, wrapped variable is fixed in the while\n> loop, so that when the value of expected variable is not equal to\n> StrategyControl->nextVictimBuffer, CAS operation fails and the while loop\n> will be run kind-of infinitely.\n> It is possible for this problem to occur when ClockSweepTick function is\n> concurrently called and nextVictimBuffer is incremented by other process\n> before CAS operation in the loop (ex: in this case, the value of expected\n> variable is NBuffers+1 while the value of nextVictimBuffer variable is\n> NBuffers+2. so CAS operation fails)\n> I think. `expected = originalVictim + 1;` line should be in while loop\n> (before acquiring spin lock) so that, even in the case above, expected\n> variable is incremented for each loop and CAS operation will be successful\n> at some point.\n> Is my understanding correct? If so, I will send PR for fixing this issue.\n\nYes, I think your understanding might be correct. 
Interesting that this\napparently has never occurred.\n\nYes, please send a patch.\n\nThanks,\n\nAndres\n\n\n", "msg_date": "Tue, 10 Jan 2023 09:39:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: can while loop in ClockSweepTick function be kind of infinite\n loop in some cases?" }, { "msg_contents": "On Tue, Jan 10, 2023 at 12:40 PM Andres Freund <andres@anarazel.de> wrote:\n> > I think. `expected = originalVictim + 1;` line should be in while loop\n> > (before acquiring spin lock) so that, even in the case above, expected\n> > variable is incremented for each loop and CAS operation will be successful\n> > at some point.\n> > Is my understanding correct? If so, I will send PR for fixing this issue.\n>\n> Yes, I think your understanding might be correct. Interesting that this\n> apparently has never occurred.\n\nDoesn't pg_atomic_compare_exchange_u32 set expected if it fails?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 13:11:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: can while loop in ClockSweepTick function be kind of infinite\n loop in some cases?" }, { "msg_contents": "Hi,\n\nOn 2023-01-10 13:11:35 -0500, Robert Haas wrote:\n> On Tue, Jan 10, 2023 at 12:40 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think. `expected = originalVictim + 1;` line should be in while loop\n> > > (before acquiring spin lock) so that, even in the case above, expected\n> > > variable is incremented for each loop and CAS operation will be successful\n> > > at some point.\n> > > Is my understanding correct? If so, I will send PR for fixing this issue.\n> >\n> > Yes, I think your understanding might be correct. 
Interesting that this\n> > apparently has never occurred.\n>\n> Doesn't pg_atomic_compare_exchange_u32 set expected if it fails?\n\nIndeed, so there's no problem.\n\nI wonder if we should make ->nextVictimBuffer a 64bit atomic. At the time the\nchanges went in we didn't (or rather, couldn't) rely on it, but these days we\ncould. I think with a 64bit number we could get rid of ->completePasses and\njust infer it from ->nextVictimBuffer/NBuffers, removing th need for the\nspinlock. It's very unlikely that 64bit would ever wrap, and even if, it'd\njust be a small inaccuracy in the assumed number of passes. OTOH, it's\ndoubtful the overflow handling / the spinlock matters performance wise.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 10 Jan 2023 10:58:56 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: can while loop in ClockSweepTick function be kind of infinite\n loop in some cases?" }, { "msg_contents": "Hi, Thank you for your quick reply\n\nI misunderstood the logic of pg_atomic_compare_exchange_u32, so the loop\ncannot be infinite.\n\n> I wonder if we should make ->nextVictimBuffer a 64bit atomic. At the time\nthe changes went in we didn't (or rather, couldn't) rely on it, but these\ndays we could. I think with a 64bit number we could get rid of\n->completePasses and just infer it from ->nextVictimBuffer/NBuffers,\nremoving th need for the spinlock. It's very unlikely that 64bit would\never wrap, and even if, it'd just be a small inaccuracy in the assumed\nnumber of passes. OTOH, it's doubtful the overflow handling / the spinlock\nmatters performance wise.\n\nI'm not sure why 64 bit was not used at the time, so I'm concerned about\nit.\nbut, except for it, you have a point and I completely agree with you. 
as\nyou have said, we should use 64 bit whose higher-order 32 bit indicates\ncompletePasses, and should remove spinlock.\nmaybe we don't have to exceptionally worry about the overflow here mainly\nbecause, even now, the completePasses can overflow and the possibility of\noverflow may not be so different so that the 64 bit atomic operation is\nbetter.\n\nif overflow would happen, passes_delta variable in the function called by\nbgwriter would be negative high value and it would lead to the failure of\nassert. (the code is below\nhttps://github.com/postgres/postgres/blob/d9d873bac67047cfacc9f5ef96ee488f2cb0f1c3/src/backend/storage/buffer/bufmgr.c#L2298-L2303\n\nDo you send patch for the replacement with 64 bit? If you don't mind, I\nwould like to send patch. ( or is there some procedure before sending patch?\n\nThanks\nhayato\n\n2023年1月11日(水) 3:59 Andres Freund <andres@anarazel.de>:\n\n> Hi,\n>\n> On 2023-01-10 13:11:35 -0500, Robert Haas wrote:\n> > On Tue, Jan 10, 2023 at 12:40 PM Andres Freund <andres@anarazel.de>\n> wrote:\n> > > > I think. `expected = originalVictim + 1;` line should be in while\n> loop\n> > > > (before acquiring spin lock) so that, even in the case above,\n> expected\n> > > > variable is incremented for each loop and CAS operation will be\n> successful\n> > > > at some point.\n> > > > Is my understanding correct? If so, I will send PR for fixing this\n> issue.\n> > >\n> > > Yes, I think your understanding might be correct. Interesting that this\n> > > apparently has never occurred.\n> >\n> > Doesn't pg_atomic_compare_exchange_u32 set expected if it fails?\n>\n> Indeed, so there's no problem.\n>\n> I wonder if we should make ->nextVictimBuffer a 64bit atomic. At the time\n> the\n> changes went in we didn't (or rather, couldn't) rely on it, but these days\n> we\n> could. I think with a 64bit number we could get rid of ->completePasses\n> and\n> just infer it from ->nextVictimBuffer/NBuffers, removing th need for the\n> spinlock. 
It's very unlikely that 64bit would ever wrap, and even if, it'd\n> just be a small inaccuracy in the assumed number of passes. OTOH, it's\n> doubtful the overflow handling / the spinlock matters performance wise.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\nHi, Thank you for your quick replyI misunderstood the logic of pg_atomic_compare_exchange_u32, so the loop cannot be infinite.> I wonder if we should make ->nextVictimBuffer a 64bit atomic. At the time the changes went in we didn't (or rather, couldn't) rely on it, but these days we could.  I think with a 64bit number we could get rid of ->completePasses and just infer it from ->nextVictimBuffer/NBuffers, removing th need for the spinlock.  It's very unlikely that 64bit would ever wrap, and even if, it'd just be a small inaccuracy in the assumed number of passes. OTOH, it's doubtful the overflow handling / the spinlock matters performance wise.I'm not sure why 64 bit was not used at the time, so I'm concerned about it. but, except for it, you have a point and I completely agree with you. as you have said,  we should use 64 bit whose higher-order 32 bit indicates completePasses, and should remove spinlock.maybe we don't have to exceptionally worry about the overflow here mainly because, even now, the completePasses can overflow and the possibility of overflow may not be so different so that the 64 bit atomic operation is better.if overflow would happen, passes_delta variable in the function called by bgwriter would be negative high value and it would lead to the failure of assert. (the code is belowhttps://github.com/postgres/postgres/blob/d9d873bac67047cfacc9f5ef96ee488f2cb0f1c3/src/backend/storage/buffer/bufmgr.c#L2298-L2303Do you send patch for the replacement with 64 bit? If you don't mind, I would like to send patch. 
( or is there some procedure before sending patch?Thankshayato2023年1月11日(水) 3:59 Andres Freund <andres@anarazel.de>:Hi,\n\nOn 2023-01-10 13:11:35 -0500, Robert Haas wrote:\n> On Tue, Jan 10, 2023 at 12:40 PM Andres Freund <andres@anarazel.de> wrote:\n> > > I think. `expected = originalVictim + 1;` line should be in while loop\n> > > (before acquiring spin lock) so that, even in the case above, expected\n> > > variable is incremented for each loop and CAS operation will be successful\n> > > at some point.\n> > > Is my understanding correct? If so, I will send PR for fixing this issue.\n> >\n> > Yes, I think your understanding might be correct. Interesting that this\n> > apparently has never occurred.\n>\n> Doesn't pg_atomic_compare_exchange_u32 set expected if it fails?\n\nIndeed, so there's no problem.\n\nI wonder if we should make ->nextVictimBuffer a 64bit atomic. At the time the\nchanges went in we didn't (or rather, couldn't) rely on it, but these days we\ncould.  I think with a 64bit number we could get rid of ->completePasses and\njust infer it from ->nextVictimBuffer/NBuffers, removing th need for the\nspinlock.  It's very unlikely that 64bit would ever wrap, and even if, it'd\njust be a small inaccuracy in the assumed number of passes. OTOH, it's\ndoubtful the overflow handling / the spinlock matters performance wise.\n\nGreetings,\n\nAndres Freund", "msg_date": "Wed, 11 Jan 2023 23:44:11 +0900", "msg_from": "=?UTF-8?B?5pav5rOi6Zq85paX?= <shibahayaton@gmail.com>", "msg_from_op": true, "msg_subject": "Re: can while loop in ClockSweepTick function be kind of infinite\n loop in some cases?" } ]
[ { "msg_contents": "Hi,\nI was reading src/backend/replication/logical/applyparallelworker.c .\nIn `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I\nthink the `ParallelApplyTxnHash` should be released.\n\nPlease see the patch.\n\nThanks\n\nHi,I was reading src/backend/replication/logical/applyparallelworker.c .In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I think the `ParallelApplyTxnHash` should be released.Please see the patch.Thanks", "msg_date": "Tue, 10 Jan 2023 09:25:29 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "releasing ParallelApplyTxnHash when pa_launch_parallel_worker returns\n NULL" }, { "msg_contents": "On Tue, Jan 10, 2023 at 9:25 AM Ted Yu <yuzhihong@gmail.com> wrote:\n\n> Hi,\n> I was reading src/backend/replication/logical/applyparallelworker.c .\n> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I\n> think the `ParallelApplyTxnHash` should be released.\n>\n> Please see the patch.\n>\n> Thanks\n>\nHere is the patch :-)", "msg_date": "Tue, 10 Jan 2023 09:26:17 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Tue, Jan 10, 2023 at 9:26 AM Ted Yu <yuzhihong@gmail.com> wrote:\n\n>\n>\n> On Tue, Jan 10, 2023 at 9:25 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>\n>> Hi,\n>> I was reading src/backend/replication/logical/applyparallelworker.c .\n>> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I\n>> think the `ParallelApplyTxnHash` should be released.\n>>\n>> Please see the patch.\n>>\n>> Thanks\n>>\n> Here is the patch :-)\n>\n\nIn `pa_process_spooled_messages_if_required`, the `pa_unlock_stream` call\nimmediately follows `pa_lock_stream`.\nI assume the following is the intended sequence of calls. 
If this is the\ncase, I can add it to the patch.\n\nCheers\n\ndiff --git a/src/backend/replication/logical/applyparallelworker.c\nb/src/backend/replication/logical/applyparallelworker.c\nindex 2e5914d5d9..9879b3fff2 100644\n--- a/src/backend/replication/logical/applyparallelworker.c\n+++ b/src/backend/replication/logical/applyparallelworker.c\n@@ -684,9 +684,9 @@ pa_process_spooled_messages_if_required(void)\n if (fileset_state == FS_SERIALIZE_IN_PROGRESS)\n {\n pa_lock_stream(MyParallelShared->xid, AccessShareLock);\n- pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n\n fileset_state = pa_get_fileset_state();\n+ pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n }\n\n /*\n\nOn Tue, Jan 10, 2023 at 9:26 AM Ted Yu <yuzhihong@gmail.com> wrote:On Tue, Jan 10, 2023 at 9:25 AM Ted Yu <yuzhihong@gmail.com> wrote:Hi,I was reading src/backend/replication/logical/applyparallelworker.c .In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I think the `ParallelApplyTxnHash` should be released.Please see the patch.ThanksHere is the patch :-) In `pa_process_spooled_messages_if_required`, the `pa_unlock_stream` call immediately follows `pa_lock_stream`.I assume the following is the intended sequence of calls. 
If this is the case, I can add it to the patch.Cheersdiff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.cindex 2e5914d5d9..9879b3fff2 100644--- a/src/backend/replication/logical/applyparallelworker.c+++ b/src/backend/replication/logical/applyparallelworker.c@@ -684,9 +684,9 @@ pa_process_spooled_messages_if_required(void)     if (fileset_state == FS_SERIALIZE_IN_PROGRESS)     {         pa_lock_stream(MyParallelShared->xid, AccessShareLock);-        pa_unlock_stream(MyParallelShared->xid, AccessShareLock);         fileset_state = pa_get_fileset_state();+        pa_unlock_stream(MyParallelShared->xid, AccessShareLock);     }     /*", "msg_date": "Tue, 10 Jan 2023 09:43:25 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Tue, Jan 10, 2023 at 9:43 AM Ted Yu <yuzhihong@gmail.com> wrote:\n\n>\n>\n> On Tue, Jan 10, 2023 at 9:26 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>\n>>\n>>\n>> On Tue, Jan 10, 2023 at 9:25 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>>\n>>> Hi,\n>>> I was reading src/backend/replication/logical/applyparallelworker.c .\n>>> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I\n>>> think the `ParallelApplyTxnHash` should be released.\n>>>\n>>> Please see the patch.\n>>>\n>>> Thanks\n>>>\n>> Here is the patch :-)\n>>\n>\n> In `pa_process_spooled_messages_if_required`, the `pa_unlock_stream` call\n> immediately follows `pa_lock_stream`.\n> I assume the following is the intended sequence of calls. 
If this is the\n> case, I can add it to the patch.\n>\n> Cheers\n>\n> diff --git a/src/backend/replication/logical/applyparallelworker.c\n> b/src/backend/replication/logical/applyparallelworker.c\n> index 2e5914d5d9..9879b3fff2 100644\n> --- a/src/backend/replication/logical/applyparallelworker.c\n> +++ b/src/backend/replication/logical/applyparallelworker.c\n> @@ -684,9 +684,9 @@ pa_process_spooled_messages_if_required(void)\n> if (fileset_state == FS_SERIALIZE_IN_PROGRESS)\n> {\n> pa_lock_stream(MyParallelShared->xid, AccessShareLock);\n> - pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n>\n> fileset_state = pa_get_fileset_state();\n> + pa_unlock_stream(MyParallelShared->xid, AccessShareLock);\n> }\n>\n> /*\n>\nLooking closer at the comment above this code and other part of the file,\nit seems the order is intentional.\n\nPlease disregard my email about `pa_process_spooled_messages_if_required`.\n\nOn Tue, Jan 10, 2023 at 9:43 AM Ted Yu <yuzhihong@gmail.com> wrote:On Tue, Jan 10, 2023 at 9:26 AM Ted Yu <yuzhihong@gmail.com> wrote:On Tue, Jan 10, 2023 at 9:25 AM Ted Yu <yuzhihong@gmail.com> wrote:Hi,I was reading src/backend/replication/logical/applyparallelworker.c .In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I think the `ParallelApplyTxnHash` should be released.Please see the patch.ThanksHere is the patch :-) In `pa_process_spooled_messages_if_required`, the `pa_unlock_stream` call immediately follows `pa_lock_stream`.I assume the following is the intended sequence of calls. 
If this is the case, I can add it to the patch.Cheersdiff --git a/src/backend/replication/logical/applyparallelworker.c b/src/backend/replication/logical/applyparallelworker.cindex 2e5914d5d9..9879b3fff2 100644--- a/src/backend/replication/logical/applyparallelworker.c+++ b/src/backend/replication/logical/applyparallelworker.c@@ -684,9 +684,9 @@ pa_process_spooled_messages_if_required(void)     if (fileset_state == FS_SERIALIZE_IN_PROGRESS)     {         pa_lock_stream(MyParallelShared->xid, AccessShareLock);-        pa_unlock_stream(MyParallelShared->xid, AccessShareLock);         fileset_state = pa_get_fileset_state();+        pa_unlock_stream(MyParallelShared->xid, AccessShareLock);     }     /* Looking closer at the comment above this code and other part of the file, it seems the order is intentional.Please disregard my email about `pa_process_spooled_messages_if_required`.", "msg_date": "Tue, 10 Jan 2023 10:37:45 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Wednesday, January 11, 2023 1:25 AM Ted Yu <yuzhihong@gmail.com> wrote:\r\n\r\n> I was reading src/backend/replication/logical/applyparallelworker.c .\r\n> In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I think the `ParallelApplyTxnHash` should be released.\r\n\r\nThanks for reporting.\r\n\r\nParallelApplyTxnHash is used to cache the state of streaming transactions being\r\napplied. 
There could be multiple streaming transactions being applied in\r\nparallel and their information were already saved in ParallelApplyTxnHash, so\r\nwe should not release them just because we don't have a worker available to\r\nhandle a new transaction here.\r\n\r\nBest Regards,\r\nHou zj\r\n", "msg_date": "Wed, 11 Jan 2023 02:12:40 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Tue, Jan 10, 2023 at 6:13 PM houzj.fnst@fujitsu.com <\nhouzj.fnst@fujitsu.com> wrote:\n\n> On Wednesday, January 11, 2023 1:25 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>\n> > I was reading src/backend/replication/logical/applyparallelworker.c .\n> > In `pa_allocate_worker`, when pa_launch_parallel_worker returns NULL, I\n> think the `ParallelApplyTxnHash` should be released.\n>\n> Thanks for reporting.\n>\n> ParallelApplyTxnHash is used to cache the state of streaming transactions\n> being\n> applied. There could be multiple streaming transactions being applied in\n> parallel and their information were already saved in ParallelApplyTxnHash,\n> so\n> we should not release them just because we don't have a worker available to\n> handle a new transaction here.\n>\n> Best Regards,\n> Hou zj\n>\nHi,\n\n        /* First time through, initialize parallel apply worker state\nhashtable. 
*/\n        if (!ParallelApplyTxnHash)\n\nI think it would be better if `ParallelApplyTxnHash` is created by the\nfirst successful parallel apply worker.\n\nPlease take a look at the new patch and see if it makes sense.\n\nCheers", "msg_date": "Tue, 10 Jan 2023 18:20:54 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Wednesday, January 11, 2023 10:21 AM Ted Yu <yuzhihong@gmail.com> wrote:\r\n> /* First time through, initialize parallel apply worker state hashtable. */\r\n> if (!ParallelApplyTxnHash)\r\n> \r\n> I think it would be better if `ParallelApplyTxnHash` is created by the first\r\n> successful parallel apply worker.\r\n\r\nThanks for the suggestion. 
But I am not sure if it's worth to changing the\n> order here, because It will only optimize the case where user enable\n> parallel\n> apply but never get an available worker which should be rare. And in such a\n> case, it'd be better to increase the number of workers or disable the\n> parallel mode.\n>\n> Best Regards,\n> Hou zj\n>\n\nI think even though the chance is rare, we shouldn't leak resource.\n\nThe `ParallelApplyTxnHash` shouldn't be created if there is no single apply\nworker.", "msg_date": "Tue, 10 Jan 2023 20:01:15 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Wed, Jan 11, 2023 at 9:31 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>\n> On Tue, Jan 10, 2023 at 7:55 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n>>\n>> On Wednesday, January 11, 2023 10:21 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>> >         /* First time through, initialize parallel apply worker state hashtable. 
*/\n>> > if (!ParallelApplyTxnHash)\n>> >\n>> > I think it would be better if `ParallelApplyTxnHash` is created by the first\n>> > successful parallel apply worker.\n>>\n>> Thanks for the suggestion. But I am not sure if it's worth to changing the\n>> order here, because It will only optimize the case where user enable parallel\n>> apply but never get an available worker which should be rare. And in such a\n>> case, it'd be better to increase the number of workers or disable the parallel mode.\n>>\n>\n>\n> I think even though the chance is rare, we shouldn't leak resource.\n>\n\nBut that is true iff we are never able to start the worker. Anyway, I\nthink it is probably fine either way but we can change it as per your\nsuggestion to make it more robust and probably for the code clarity\nsake. I'll push this tomorrow unless someone thinks otherwise.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 12 Jan 2023 08:24:07 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Wed, Jan 11, 2023 at 6:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Jan 11, 2023 at 9:31 AM Ted Yu <yuzhihong@gmail.com> wrote:\n> >\n> > On Tue, Jan 10, 2023 at 7:55 PM houzj.fnst@fujitsu.com <\n> houzj.fnst@fujitsu.com> wrote:\n> >>\n> >> On Wednesday, January 11, 2023 10:21 AM Ted Yu <yuzhihong@gmail.com>\n> wrote:\n> >> > /* First time through, initialize parallel apply worker state\n> hashtable. */\n> >> > if (!ParallelApplyTxnHash)\n> >> >\n> >> > I think it would be better if `ParallelApplyTxnHash` is created by\n> the first\n> >> > successful parallel apply worker.\n> >>\n> >> Thanks for the suggestion. But I am not sure if it's worth to changing\n> the\n> >> order here, because It will only optimize the case where user enable\n> parallel\n> >> apply but never get an available worker which should be rare. 
And in\n> such a\n> >> case, it'd be better to increase the number of workers or disable the\n> parallel mode.\n> >>\n> >\n> >\n> > I think even though the chance is rare, we shouldn't leak resource.\n> >\n>\n> But that is true iff we are never able to start the worker. Anyway, I\n> think it is probably fine either way but we can change it as per your\n> suggestion to make it more robust and probably for the code clarity\n> sake. I'll push this tomorrow unless someone thinks otherwise.\n>\n> --\n> With Regards,\n> Amit Kapila.\n>\n\nThanks Amit for the confirmation.\n\nCheers", "msg_date": "Wed, 11 Jan 2023 18:55:18 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" }, { "msg_contents": "On Thu, Jan 12, 2023 at 8:25 AM Ted Yu <yuzhihong@gmail.com> wrote:\n\n> On Wed, Jan 11, 2023 at 6:54 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Jan 11, 2023 at 9:31 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>> >\n>> > On Tue, Jan 10, 2023 at 7:55 PM houzj.fnst@fujitsu.com <houzj.fnst@fujitsu.com> wrote:\n>> >>\n>> >> On Wednesday, January 11, 2023 10:21 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>> >> >         /* First time through, initialize parallel apply worker state hashtable. */\n>> >> >         if (!ParallelApplyTxnHash)\n>> >> >\n>> >> > I think it would be better if `ParallelApplyTxnHash` is created by the first\n>> >> > successful parallel apply worker.\n>> >>\n>> >> Thanks for the suggestion. But I am not sure if it's worth to changing the\n>> >> order here, because It will only optimize the case where user enable parallel\n>> >> apply but never get an available worker which should be rare. And in such a\n>> >> case, it'd be better to increase the number of workers or disable the parallel mode.\n>> >>\n>> >\n>> >\n>> > I think even though the chance is rare, we shouldn't leak resource.\n>> >\n>>\n>> But that is true iff we are never able to start the worker. Anyway, I\n>> think it is probably fine either way but we can change it as per your\n>> suggestion to make it more robust and probably for the code clarity\n>> sake. 
I'll push this tomorrow unless someone thinks otherwise.\n>>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 13 Jan 2023 15:14:12 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: releasing ParallelApplyTxnHash when pa_launch_parallel_worker\n returns NULL" } ]
[ { "msg_contents": "Add new GUC createrole_self_grant.\n\nCan be set to the empty string, or to either or both of \"set\" or\n\"inherit\". If set to a non-empty value, a non-superuser who creates\na role (necessarily by relying up the CREATEROLE privilege) will\ngrant that role back to themselves with the specified options.\n\nThis isn't a security feature, because the grant that this feature\ntriggers can also be performed explicitly. Instead, it's a user experience\nfeature. A superuser would necessarily inherit the privileges of any\ncreated role and be able to access all such roles via SET ROLE;\nwith this patch, you can configure createrole_self_grant = 'set, inherit'\nto provide a similar experience for a user who has CREATEROLE but not\nSUPERUSER.\n\nDiscussion: https://postgr.es/m/CA+TgmobN59ct+Emmz6ig1Nua2Q-_o=r6DSD98KfU53kctq_kQw@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/e5b8a4c098ad6add39626a14475148872cd687e0\n\nModified Files\n--------------\ndoc/src/sgml/config.sgml | 33 +++++++++\ndoc/src/sgml/ref/create_role.sgml | 1 +\ndoc/src/sgml/ref/createuser.sgml | 1 +\nsrc/backend/commands/user.c | 97 ++++++++++++++++++++++++++-\nsrc/backend/utils/misc/guc_tables.c | 12 ++++\nsrc/backend/utils/misc/postgresql.conf.sample | 1 +\nsrc/include/commands/user.h | 10 ++-\nsrc/test/regress/expected/create_role.out | 33 +++++++++\nsrc/test/regress/sql/create_role.sql | 37 ++++++++++\n9 files changed, 220 insertions(+), 5 deletions(-)", "msg_date": "Tue, 10 Jan 2023 17:46:00 +0000", "msg_from": "Robert Haas <rhaas@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "Robert Haas <rhaas@postgresql.org> writes:\n> Add new GUC createrole_self_grant.\n> Can be set to the empty string, or to either or both of \"set\" or\n> \"inherit\". 
If set to a non-empty value, a non-superuser who creates\n> a role (necessarily by relying up the CREATEROLE privilege) will\n> grant that role back to themselves with the specified options.\n\n> This isn't a security feature, because the grant that this feature\n> triggers can also be performed explicitly.\n\n[ squint ... ] Are you sure it's not a security *hazard*, though?\n\nIt troubles me that we're introducing a command-semantics-altering\nGUC at all; we have substantial experience with regretting such\nchoices. It troubles me more that the semantics change bears on\nsecurity matters, and even more that you've made it USERSET.\nThat at least opens the door to unprivileged user X causing code\nbelonging to more-privileged user Y to do something other than\nwhat Y expected.\n\nI'll hold my tongue if you're willing to make it SUSET or higher.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 20:47:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Tue, Jan 10, 2023 at 8:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> [ squint ... ] Are you sure it's not a security *hazard*, though?\n\nI think you have to squint pretty hard to find a security hazard here.\nThe effect of this GUC is to control the set of privileges that a\nCREATEROLE user automatically grants to themselves. But granting\nyourself privileges does not normally lead to any sort of security\nprivilege. It's not completely impossible, I believe. For example,\nsuppose that I, as a CREATEROLE user who is not a superuser, execute\n\"CREATE ROLE shifty\". I then set up a schema that shifty can access\nand I cannot. I then put that schema into my search_path despite the\nfact that I haven't given myself access to it. Now, depending on the\nvalue of this setting, I might either implicitly inherit shifty's\nprivileges, or I might not. 
So, if I was expecting that I wouldn't,\nand I do, then I have now created a situation where if I do more dumb\nthings I could execute some shifty code that lets that shifty user\ntake over my account.\n\nBut, you know, if I'm that dumb, I could also hit myself in the head\nwith a hammer and the shifty guy could use the fact that I'm\nunconscious to fish the sticky note out of my wallet where,\npresumably, I keep my database password.\n\nThe bigger point here, I think, is that this GUC only controls default\nprivileges -- and we already have a system for default privileges that\nallows any user to give away privileges on virtually any object that\nthey create to anyone. Nothing about that system is superuser-only.\nThis system is far more restricted in its scope. It only allows you to\ngive privileges to yourself, not anyone else, and only if you're a\nCREATEROLE user who is not a SUPERUSER. It seems a bit crazy to say\nthat it's not a hazard for Alice to automatically grant every\npermission in the book to Emil every time she creates a table or\nschema or type or sequence or a function, but it is a hazard if Bob\ncan grant INHERIT and SET to himself on roles that he creates.\n\nThat said, in my original design, this was controlled via a different\nmechanism which was superuser-only. I was informed that made no sense,\nso I changed it. Now here we are.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 21:26:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 10, 2023 at 8:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> [ squint ... 
] Are you sure it's not a security *hazard*, though?\n\n> I think you have to squint pretty hard to find a security hazard here.\n\nMaybe, but I'd be sad if somebody manages to find one after this is\nout in the wild.\n\n> That said, in my original design, this was controlled via a different\n> mechanism which was superuser-only. I was informed that made no sense,\n> so I changed it. Now here we are.\n\nYeah. I concur that a SUSET GUC isn't much fun for a non-superuser\nCREATEROLE holder who might wish to adjust the default behavior they get.\nI also concur that it seems a bit far-fetched that a CREATEROLE holder\nmight create a SECURITY DEFINER function that would do something that\nwould be affected by this setting. Still, we have no field experience\nwith how these mechanisms will actually be used, so I'm worried.\n\nThe scenario I'm worried about could be closed, mostly, if we were willing\nto invent an intermediate GUC privilege level \"can be set interactively\nbut only by CREATEROLE holders\" (\"PGC_CRSET\"?). But that's an awful lot\nof infrastructure to add for one GUC. Are there any other GUCs where\nthat'd be a more useful choice than any we have now?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 21:40:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Tue, Jan 10, 2023 at 9:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah. I concur that a SUSET GUC isn't much fun for a non-superuser\n> CREATEROLE holder who might wish to adjust the default behavior they get.\n> I also concur that it seems a bit far-fetched that a CREATEROLE holder\n> might create a SECURITY DEFINER function that would do something that\n> would be affected by this setting. Still, we have no field experience\n> with how these mechanisms will actually be used, so I'm worried.\n\nAll right. 
I'm not that worried because I think any problems that crop\nup probably won't be that bad, primarily due to the extremely\nrestricted set of circumstances in which the GUC operates -- but\nthat's a judgement call, and reasonable people can think differently.\n\n> The scenario I'm worried about could be closed, mostly, if we were willing\n> to invent an intermediate GUC privilege level \"can be set interactively\n> but only by CREATEROLE holders\" (\"PGC_CRSET\"?). But that's an awful lot\n> of infrastructure to add for one GUC. Are there any other GUCs where\n> that'd be a more useful choice than any we have now?\n\nI don't quite understand what that would do. If a non-CREATEROLE user\nsets the GUC, absolutely nothing happens, because the code that is\ncontrolled by the GUC cannot be reached without CREATEROLE privileges.\n\nOf course, if it's possible for a non-CREATEROLE user to set the value\nthat a CREATEROLE user experiences, that'd be more of a problem --\nthough still insufficient to create a security vulnerability in and of\nitself -- but if user A can change the GUC settings that user B\nexperiences, why screw around with this when you could just set\nsearch_path?\n\nTo answer your question directly, though, I don't know of any other\nsetting where that would be a useful level. Up until this morning,\nCREATEROLE was not usable for any serious purpose because we've been\nshipping something that was broken by design for years, so it's\nprobably fortunate that not much depends on it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 22:10:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." 
}, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 10, 2023 at 9:40 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The scenario I'm worried about could be closed, mostly, if we were willing\n>> to invent an intermediate GUC privilege level \"can be set interactively\n>> but only by CREATEROLE holders\" (\"PGC_CRSET\"?).\n\n> Of course, if it's possible for a non-CREATEROLE user to set the value\n> that a CREATEROLE user experiences, that'd be more of a problem --\n\nThat's exactly the case I'm worried about, and it's completely reachable\nif a CREATEROLE user makes a SECURITY DEFINER function that executes\nan affected GRANT and is callable by an unprivileged user. Now, that\nprobably isn't terribly likely, and it's unclear that there'd be any\nserious security consequence even if the GRANT did do something\ndifferent than the author of the SECURITY DEFINER function was expecting.\nNonetheless, I'm feeling itchy about this chain of assumptions.\n\n> To answer your question directly, though, I don't know of any other\n> setting where that would be a useful level.\n\nYeah, I didn't think of one either. Also, even if we invented PGC_CRSET,\nit can't stop one CREATEROLE user from attacking another one, assuming\nthat there is some interesting attack that can be constructed here.\nI think the whole point of your recent patches is to not assume that\nCREATEROLE users are mutually trusting, so that's bad.\n\nBottom line is that a GUC doesn't feel like the right mechanism to use.\nWhat do you think about inventing a role property, or a grantable role\nthat controls this behavior?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 22:25:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." 
}, { "msg_contents": "On Tue, Jan 10, 2023 at 10:25 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Of course, if it's possible for a non-CREATEROLE user to set the value\n> > that a CREATEROLE user experiences, that'd be more of a problem --\n>\n> That's exactly the case I'm worried about, and it's completely reachable\n> if a CREATEROLE user makes a SECURITY DEFINER function that executes\n> an affected GRANT and is callable by an unprivileged user. Now, that\n> probably isn't terribly likely, and it's unclear that there'd be any\n> serious security consequence even if the GRANT did do something\n> different than the author of the SECURITY DEFINER function was expecting.\n> Nonetheless, I'm feeling itchy about this chain of assumptions.\n\nIf you want to make safe a SECURITY DEFINER function written using sql\nor plpgsql, you either have to schema-qualify every single reference\nor, more realistically, attach a SET clause to the function to set the\nsearch_path to a sane value during the time that the function is\nexecuting. The problem here can be handled the same way, except that\nit's needed in a vastly more limited set of circumstances: you have to\nbe calling a SECURITY DEFINER function that will execute CREATE ROLE\nas a non-superuser (and that user then needs to be sensitive to the\nvalue of this GUC in some security-relevant way). 
It might be good to\ndocument this -- I just noticed that the CREATE FUNCTION page has a\nsection on \"Writing SECURITY DEFINER Functions Safely\" which talks\nabout dealing with the search_path issues, and it seems like it would\nbe worth adding a sentence or two there to talk about this.\n\nIt might also be a good idea to make the section of the page that\nexplains the meaning of the SECURITY INVOKER and SECURITY DEFINER\nclauses cross-link to the section on writing such functions safely,\nbecause that section is way down at the bottom of the page and seems\neasy to miss.\n\nBut I'm not convinced that we need more than documentation changes here.\n\n> Bottom line is that a GUC doesn't feel like the right mechanism to use.\n> What do you think about inventing a role property, or a grantable role\n> that controls this behavior?\n\nWell, I originally had a pair of role properties called\nINHERITCREATEDROLES and SETCREATEDROLES, but nobody liked that,\nincluding you. One reason was probably that the names are long and\nugly, but this is also unlike existing role properties, which\ntypically can only be changed by a superuser, or by a CREATEROLE user\nwith ADMIN OPTION on the target role. Since you don't have admin\noption on yourself, that would mean you couldn't change this property\nfor yourself, which I wasn't too exercised about, but everyone else\nwho commented disliked it. We could go back to having those properties\nand have the rules for changing them be different than the roles for\nchanging other role properties, I suppose.\n\nBut I don't think that's better. Some of the existing role properties\n(SUPERUSER, REPLICATION, BYPASSRLS, CREATEROLE, CREATEDB) are\ncapabilities, and others (LOGIN, CONNECTION LIMIT, VALID UNTIL) are\nlimits established by the system administrator. This is neither: it's\na default. 
So if we put it into the role property system, then we're\nsaying this one particular default is a role property, whereas pretty\nmuch all of the others are GUCs. Now, admittedly, I did have it as a\nrole property originally, but that was before we decided that it made\nsense to give the CREATEROLE user, rather than the superuser, control\nover the value of the property. And I think we decided that for good\nreason, because the CREATEROLE user can always turn \"no\" into \"yes\" by\ngranting the created role back to themselves with additional options.\nThey can't necessarily turn \"yes\" into \"no,\" because we could create\nthe implicit grant with one or both of those options turned on and the\nCREATEROLE user can't undo that. But nobody thought that was a useful\nthing to enforce, and neither do I. Given that shift in thinking, I\nhave a hard time believing that it's a good idea to shove this option\ninto a system that is meant for capabilities or limits and has no\nexisting precedent for anything that behaves like a setting or user\npreference (except perhaps ALTER USER ... PASSWORD '...' but calling\nthat a preference might be a stretch).\n\nIf we used grantable roles, I suppose the design there would to invent\ntwo new predefined role and grant them to the CREATEROLE user, or not,\nto affect whether and how subsequent implicit self-grants by that user\nwould be performed. 
But that again takes the decision out of the hands\nof the CREATEROLE user, and it again puts a user preference into a\nsystem that we currently use only for capabilities.\n\nStepping back a second, I think that your underlying concern here is\nthat the entire GUC system is shaky from a security perspective.\nTrying to write SECURITY DEFINER functions or procedures in sql or\nplpgsql is not unlike trying to write a safe setuid shell script.\nAmong the problems with trying to do that is that the caller might\nestablish surprising values for PATH, IFS, or other environment\nvariables before calling your script, and if you're not careful,\nyou'll end up doing whatever the caller wants instead of what you\nintended to be doing. I think it's really justifiable to be worried\nabout the same kinds of problems within PostgreSQL, but I don't think\nthat the right solution is to have new settings opt out of the GUC\nmechanism on a retail basis. We need a solution that's going to work\nfor every GUC we have, in a consistent and understandable way, and if\nwe can't have that then we at least need a solution for search_path.\nIf we come up with such a solution, it seems likely that also adopting\nthat solution for createrole_self_grant would be a good idea. But if\nwe don't, I have trouble believing that doing something only for\ncreaterole_self_grant is really going to improve security.\n\nIn fact, it might make it harder to fix the real problems in this\narea. If we have all of our settings in one system, then any solution\nwe devise can apply to all of them equally. If we start storing some\nof them in other places, it's potentially more separate things that\nhave to be fixed. I don't want to make overly strong statements here\nbecause I don't think we really know what any of the fixes to these\nproblems are. If we can find some place to put this where it fits\nnicely and that place isn't the GUC system, well and good: I don't\nmind writing a patch to do it. 
But what I don't want to do is do\ncontortions to avoid relying on GUCs because we don't trust the GUC\nmechanism in general. I don't think that kind of thing will end well.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 13:24:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If you want to make safe a SECURITY DEFINER function written using sql\n> or plpgsql, you either have to schema-qualify every single reference\n> or, more realistically, attach a SET clause to the function to set the\n> search_path to a sane value during the time that the function is\n> executing. The problem here can be handled the same way, except that\n> it's needed in a vastly more limited set of circumstances: you have to\n> be calling a SECURITY DEFINER function that will execute CREATE ROLE\n> as a non-superuser (and that user then needs to be sensitive to the\n> value of this GUC in some security-relevant way). It might be good to\n> document this -- I just noticed that the CREATE FUNCTION page has a\n> section on \"Writing SECURITY DEFINER Functions Safely\" which talks\n> about dealing with the search_path issues, and it seems like it would\n> be worth adding a sentence or two there to talk about this.\n\nOK, I'd be satisfied with that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:00:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." 
}, { "msg_contents": "On Wed, Jan 11, 2023 at 4:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > If you want to make safe a SECURITY DEFINER function written using sql\n> > or plpgsql, you either have to schema-qualify every single reference\n> > or, more realistically, attach a SET clause to the function to set the\n> > search_path to a sane value during the time that the function is\n> > executing. The problem here can be handled the same way, except that\n> > it's needed in a vastly more limited set of circumstances: you have to\n> > be calling a SECURITY DEFINER function that will execute CREATE ROLE\n> > as a non-superuser (and that user then needs to be sensitive to the\n> > value of this GUC in some security-relevant way). It might be good to\n> > document this -- I just noticed that the CREATE FUNCTION page has a\n> > section on \"Writing SECURITY DEFINER Functions Safely\" which talks\n> > about dealing with the search_path issues, and it seems like it would\n> > be worth adding a sentence or two there to talk about this.\n>\n> OK, I'd be satisfied with that.\n\nOK, I'll draft a patch tomorrow.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:16:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Wed, Jan 11, 2023 at 2:16 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jan 11, 2023 at 4:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Robert Haas <robertmhaas@gmail.com> writes:\n> > > If you want to make safe a SECURITY DEFINER function written using sql\n> > > or plpgsql, you either have to schema-qualify every single reference\n> > > or, more realistically, attach a SET clause to the function to set the\n> > > search_path to a sane value during the time that the function is\n> > > executing. 
The problem here can be handled the same way, except that\n> > > it's needed in a vastly more limited set of circumstances: you have to\n> > > be calling a SECURITY DEFINER function that will execute CREATE ROLE\n> > > as a non-superuser (and that user then needs to be sensitive to the\n> > > value of this GUC in some security-relevant way). It might be good to\n> > > document this -- I just noticed that the CREATE FUNCTION page has a\n> > > section on \"Writing SECURITY DEFINER Functions Safely\" which talks\n> > > about dealing with the search_path issues, and it seems like it would\n> > > be worth adding a sentence or two there to talk about this.\n> >\n> > OK, I'd be satisfied with that.\n>\n> OK, I'll draft a patch tomorrow.\n>\n>\nJusted wanted to chime in and say Robert has eloquently put into words much\nof what I have been thinking here, and that I concur that guiding the DBA\nto use care with the power they have been provided is a sane position to\ntake.\n\n+1, and thank you.\n\nDavid J.", "msg_date": "Wed, 11 Jan 2023 17:52:46 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Wed, Jan 11, 2023 at 7:53 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> Justed wanted to chime in and say Robert has eloquently put into words much of what I have been thinking here, and that I concur that guiding the DBA to use care with the power they have been provided is a sane position to take.\n>\n> +1, and thank you.\n\nThanks!\n\nHere's a patch. In it I make three changes, only one of which is\ndirectly relevant to the topic at hand:\n\n1. Add a sentence to the documentation on writing SECURITY FUNCTIONS\nsafely concerning createrole_self_grant.\n2. Add a sentence to the documentation on SECURITY DEFINER referring\nto the section about writing such functions safely.\n3. Remove a note discussing the fact that pre-8.3 versions did not\nhave SET clauses for functions.\n\nI can separate this into multiple patches if desired. 
And of course\nyou, Tom, or others may have suggestions on which of these changes\nshould be included at all or how to word them better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 12 Jan 2023 10:11:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Thu, Jan 12, 2023 at 8:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Jan 11, 2023 at 7:53 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> > Justed wanted to chime in and say Robert has eloquently put into words\n> much of what I have been thinking here, and that I concur that guiding the\n> DBA to use care with the power they have been provided is a sane position\n> to take.\n> >\n> > +1, and thank you.\n>\n> Thanks!\n>\n> Here's a patch. In it I make three changes, only one of which is\n> directly relevant to the topic at hand:\n>\n> 1. Add a sentence to the documentation on writing SECURITY FUNCTIONS\n> safely concerning createrole_self_grant.\n> 2. Add a sentence to the documentation on SECURITY DEFINER referring\n> to the section about writing such functions safely.\n> 3. Remove a note discussing the fact that pre-8.3 versions did not\n> have SET clauses for functions.\n>\n> I can separate this into multiple patches if desired. And of course\n> you, Tom, or others may have suggestions on which of these changes\n> should be included at all or how to word them better.\n>\n>\nI'm still not really feeling how security definer is the problematic option\nhere. 
Security invoker scares me much more.\n\npostgres=# create role unpriv login;\nCREATE ROLE\npostgres=# create role creator createrole login;\nCREATE ROLE\npostgres=# grant pg_read_all_data to creator with admin option;\npostgres=#\ncreate procedure makeunprivpowerful() LANGUAGE sql AS $$\ngrant pg_read_all_data to unpriv;\n$$;\nCREATE PROCEDURE\npostgres=# alter procedure makeunprivpowerful() owner to unpriv;\nALTER PROCEDURE\n\npostgres=# \\c postgres unpriv\nYou are now connected to database \"postgres\" as user \"unpriv\".\npostgres=> call makeunprivpowerful();\nERROR: must have admin option on role \"pg_read_all_data\"\nCONTEXT: SQL function \"makeunprivpowerful\" statement 1\n\npostgres=# \\c postgres creator\nYou are now connected to database \"postgres\" as user \"creator\".\npostgres=> call makeunprivpowerful();\nCALL\n\nPersonally I think the best advice for a CREATEROLE granted user is to\nnever call routines they themselves don't own; or at least only those have\nreviewed thoroughly and understand the consequences of. Regardless of\nsecurity definer/invoker.\n\nIn short, a low-privilege user can basically run anything without much fear\nbecause they can't do harm anyway. Thus security invoker applies to them,\nand having security definer gives them the ability to do some things\nwithout needing to have permanent higher privileges. These are useful,\nsecurity related attributes with respect to them.\n\nFor a high-privilege I would argue that neither security invoker nor\ndefiner are relevant considerations from a security point-of-view. If you\nhave high-privilege you must know what it is you are executing before you\nexecute it and anything bad it causes you to do using your privileges, or\nhigher if you run a security definer function blindly, is an administrative\nfailure, not a security hole.\n\nI think it would be good to move the goalposts here a bit with respect to\nencouraging safe behavior. 
But I also don't really think that it is fair\nto make this a prerequisite for the feature.\n\nIf we cannot write a decent why sentence for the proposed paragraph I say\nwe don't commit it (the cross-ref should go in):\n\nIf the security definer function intends to create roles, and if it\nis running as a non-superuser, <varname>createrole_self_grant</varname>\nshould also be set to a known value using the <literal>SET</literal>\nclause.\n\nThis is a convenience feature that a CREATEROLE user can leverage if they\nso choose. Anything bad coming of it is going to be strictly less worse\nthan whatever can happen just because the CREATEROLE user is being\ncareless. Whomever gets the admin privilege grant from the superuser when\nthe role is created may or may not have two other self-granted memberships\non the newly created role. Do the two optional grants really mean anything\nimportant here compared to the newly created object and superuser-granted\nadmin privilege (which means that regardless of the GUC the same end state\ncan eventually be reached anyway)?\n\nDavid J.\n", "msg_date": "Thu, 12 Jan 2023 18:15:50 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "Hi,\n\nOn 2023-01-12 18:15:50 -0700, David G. Johnston wrote:\n> On Thu, Jan 12, 2023 at 8:11 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> > On Wed, Jan 11, 2023 at 7:53 PM David G. 
Johnston\n> > <david.g.johnston@gmail.com> wrote:\n> > > Justed wanted to chime in and say Robert has eloquently put into words\n> > much of what I have been thinking here, and that I concur that guiding the\n> > DBA to use care with the power they have been provided is a sane position\n> > to take.\n> > >\n> > > +1, and thank you.\n> >\n> > Thanks!\n> >\n> > Here's a patch. In it I make three changes, only one of which is\n> > directly relevant to the topic at hand:\n> >\n> > 1. Add a sentence to the documentation on writing SECURITY FUNCTIONS\n> > safely concerning createrole_self_grant.\n> > 2. Add a sentence to the documentation on SECURITY DEFINER referring\n> > to the section about writing such functions safely.\n> > 3. Remove a note discussing the fact that pre-8.3 versions did not\n> > have SET clauses for functions.\n> >\n> > I can separate this into multiple patches if desired. And of course\n> > you, Tom, or others may have suggestions on which of these changes\n> > should be included at all or how to word them better.\n> >\n> >\n> I'm still not really feeling how security definer is the problematic option\n> here. 
Security invoker scares me much more.\n\nI don't really see what that has to do with the topic at hand, unless you want\nto suggest removing the entire section about how to write secure security\ndefiner functions?\n\n\n> postgres=# create role unpriv login;\n> CREATE ROLE\n> postgres=# create role creator createrole login;\n> CREATE ROLE\n> postgres=# grant pg_read_all_data to creator with admin option;\n> postgres=#\n> create procedure makeunprivpowerful() LANGUAGE sql AS $$\n> grant pg_read_all_data to unpriv;\n> $$;\n> CREATE PROCEDURE\n> postgres=# alter procedure makeunprivpowerful() owner to unpriv;\n> ALTER PROCEDURE\n> \n> postgres=# \\c postgres unpriv\n> You are now connected to database \"postgres\" as user \"unpriv\".\n> postgres=> call makeunprivpowerful();\n> ERROR: must have admin option on role \"pg_read_all_data\"\n> CONTEXT: SQL function \"makeunprivpowerful\" statement 1\n> \n> postgres=# \\c postgres creator\n> You are now connected to database \"postgres\" as user \"creator\".\n> postgres=> call makeunprivpowerful();\n> CALL\n> \n> Personally I think the best advice for a CREATEROLE granted user is to\n> never call routines they themselves don't own; or at least only those have\n> reviewed thoroughly and understand the consequences of. Regardless of\n> security definer/invoker.\n> \n> In short, a low-privilege user can basically run anything without much fear\n> because they can't do harm anyway. Thus security invoker applies to them,\n> and having security definer gives them the ability to do some things\n> without needing to have permanent higher privileges. These are useful,\n> security related attributes with respect to them.\n> \n> For a high-privilege I would argue that neither security invoker nor\n> definer are relevant considerations from a security point-of-view. 
If you\n> have high-privilege you must know what it is you are executing before you\n> execute it and anything bad it causes you to do using your privileges, or\n> higher if you run a security definer function blindly, is an administrative\n> failure, not a security hole.\n\nThe point of the security definer section is to explain how to safely write\nsecurity definer functions that you grant to less privileged users. It's not\nabout whether it's safe to call a security invoker / definer function -\nindeed, if you don't trust the function author / owner, it's never safe to\ncall the function.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Jan 2023 15:46:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Fri, Jan 13, 2023 at 4:46 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> I don't really see what that has to do with the topic at hand, unless you\n> want\n> to suggest removing the entire section about how to write secure security\n> definer functions?\n>\n\nNot remove, but I'm not seeing why the introduction of this GUC requires\nany change to the documentation.\n\nI'll leave discussion of security invoker to the other thread going on\nright now.\n\n\n> The point of the security definer section is to explain how to safely write\n> security definer functions that you grant to less privileged users\n>\n\nYeah, we are really good at \"how\".\n\n+ If the security definer function intends to create roles, and if it\n+ is running as a non-superuser, <varname>createrole_self_grant</varname>\n+ should also be set to a known value using the <literal>SET</literal>\n+ clause.\n\nI'd like to know \"why\". Without knowing why we are adding this I can't\ngive it a +1. 
I want the patch to include the why.\n\nDavid J.\n", "msg_date": "Fri, 13 Jan 2023 18:29:00 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Fri, Jan 13, 2023 at 8:29 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n>> The point of the security definer section is to explain how to safely write\n>> security definer functions that you grant to less privileged users\n>\n> Yeah, we are really good at \"how\".\n>\n> + If the security definer function intends to create roles, and if it\n> + is running as a non-superuser, <varname>createrole_self_grant</varname>\n> + should also be set to a known value using the <literal>SET</literal>\n> + clause.\n>\n> I'd like to know \"why\". Without knowing why we are adding this I can't give it a +1. I want the patch to include the why.\n\nWhat don't you understand about the \"why\"? 
If your security-definer\nfunction relies on some GUC having some particular value for some\nsecurity-critical purpose, and the caller can substitute some other\nvalue, they might be able to create a security compromise. Since this\nGUC has some connection to security, there is at least some distant\npossibility of that happening. Reasonable people can, perhaps, differ\nabout how likely that is, but I don't really see what's confusing\nabout the theory. As a general statement about the human condition, if\nyou know that someone may be untrustworthy, you should be careful\nabout letting them influence your decisions. If you aren't, something\nbad may happen to you.\n\nThe whole thing about SECURITY INVOKER functions is really a separate\nissue. You can tell people how to write SECURITY DEFINER functions\nmore safely, and we do. You cannot tell them how to write SECURITY\nINVOKER functions more safely, because the direction of the attack is\nreversed. In the case of SECURITY DEFINER, the caller of the function\ncan attack the owner of the function. In the case of a SECURITY\nINVOKER function, the owner of the function can attack the caller of\nthe function. We can't document how to write security invoker\nfunctions safely because the author of the function is the one\npotentially making an attack, and therefore would do the opposite of\nwhatever advice we gave. 
We *could* add whole new sections to the\ndocumentation telling people to be careful about calling security\ninvoker functions, and that's a fine thing to discuss, but what I'm\ndoing here is following up an already-committed patch by adjusting\nparts of the existing documentation to account for the changes.\nInventing whole new sections of the documentation would be a job for a\nnew patch on a new thread, not a follow-up patch on an existing\nthread.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 14 Jan 2023 19:31:30 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Sat, Jan 14, 2023 at 5:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Jan 13, 2023 at 8:29 PM David G. Johnston\n> <david.g.johnston@gmail.com> wrote:\n> >> The point of the security definer section is to explain how to safely\n> write\n> >> security definer functions that you grant to less privileged users\n> >\n> > Yeah, we are really good at \"how\".\n> >\n> > + If the security definer function intends to create roles, and if it\n> > + is running as a non-superuser,\n> <varname>createrole_self_grant</varname>\n> > + should also be set to a known value using the <literal>SET</literal>\n> > + clause.\n> >\n> > I'd like to know \"why\". Without knowing why we are adding this I can't\n> give it a +1. I want the patch to include the why.\n>\n> What don't you understand about the \"why\"? If your security-definer\n> function relies on some GUC having some particular value for some\n> security-critical purpose, and the caller can substitute some other\n> value, they might be able to create a security compromise. Since this\n> GUC has some connection to security, there is at least some distant\n> possibility of that happening. 
Reasonable people can, perhaps, differ\n> about how likely that is, but I don't really see what's confusing\n> about the theory. As a general statement about the human condition, if\n> you know that someone may be untrustworthy, you should be careful\n> about letting them influence your decisions. If you aren't, something\n> bad may happen to you.\n>\n>\nOK, given all of that, I suggest reworking the first paragraph of security\ndefiner functions safely to something like the following:\n\n(Replace: Because a SECURITY DEFINER function is executed with the\nprivileges of the user that owns it, care is needed to ensure that the\nfunction cannot be misused. For security, search_path should be set to\nexclude any schemas writable by untrusted users.) with:\n\nThe execution of a SECURITY DEFINER function has two interacting behaviors\nthat make writing and administering such functions require extra care.\nWhile the privileges that come into play during execution are those of the\nfunction owner, the execution environment is inherited from the calling\ncontext. Therefore, any settings that the function relies upon must be\nspecified in the SET clause of the CREATE command (or within the body of\nthe function).\n\nOf particular importance is the search_path setting. The search_path\nshould be set to the bare minimum required for the function to operate and,\nmore importantly, not include any schemas writable by untrusted users.\n\n<existing wording>\nThis prevents malicious users [...]\n(existing example)\n[...] the function could be subverted by creating a temporary table named\npwds.\n</existing wording>\n\n<added note=\"specifically by this patch\">\nAnother setting of note (at least in the case that the function owner is\nnot a superuser) is createrole_self_grant. 
While the function owner has\ntheir own pg_db_role_setting preference for this setting, when wrapping\nexecution of CREATE ROLE within a function, particularly to be executed by\nothers, it is the executor's setting that would be in effect, not the\nowner's.\n</added>\n\n(existing wording regarding revoke from public)\n(existing example)\n\nDavid J.\n", "msg_date": "Sat, 14 Jan 2023 18:12:24 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." 
}, { "msg_contents": "On Sat, Jan 14, 2023 at 6:12 PM David G. Johnston <\ndavid.g.johnston@gmail.com> wrote:\n\n> While the function owner has their own pg_db_role_setting preference for\n> this setting,\n>\n\nShould we be pointing out that if the role with CREATEROLE isn't also a\nLOGIN role then there is little point to setting createrole_self_grant on\nit specifically? Instead this setting should be set for any user that can\nSET to the CREATEROLE role but does have a LOGIN attribute.\n\nDavid J.\n", "msg_date": "Sat, 14 Jan 2023 19:19:21 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Sat, Jan 14, 2023 at 8:12 PM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> OK, given all of that, I suggest reworking the first paragraph of security definer functions safely to something like the following:\n>\n> (Replace: Because a SECURITY DEFINER function is executed with the privileges of the user that owns it, care is needed to ensure that the function cannot be misused. For security, search_path should be set to exclude any schemas writable by untrusted users.) with:\n>\n> The execution of a SECURITY DEFINER function has two interacting behaviors that make writing and administering such functions require extra care. While the privileges that come into play during execution are those of the function owner, the execution environment is inherited from the calling context. 
Therefore, any settings that the function relies upon must be specified in the SET clause of the CREATE command (or within the body of the function).\n>\n> Of particular importance is the search_path setting. The search_path should be set to the bare minimum required for the function to operate and, more importantly, not include any schemas writable by untrusted users.\n>\n> <existing wording>\n> This prevents malicious users [...]\n> (existing example)\n> [...] the function could be subverted by creating a temporary table named pwds.\n> </existing wording>\n\nI find this wording less clear than what we have now. And I reiterate\nthat the purpose of the patch under discussion is to add a mention of\nthe new GUC to an existing section, not to rewrite that section -- or\nany other section -- of the documentation.\n\n> <added note=\"specifically by this patch\">\n> Another setting of note (at least in the case that the function owner is not a superuser) is createrole_self_grant. While the function owner has their own pg_db_role_setting preference for this setting, when wrapping execution of CREATE ROLE within a function, particularly to be executed by others, it is the executor's setting that would be in effect, not the owner's.\n> </added>\n\nI think these sentences are really contorted, and they are also\nfactually incorrect. For this setting to matter, you need (1) the\nfunction to be running CREATEROLE and (2) the owner of that function\nto not be a superuser. You've put those facts at opposite end of the\nsentence. You've also brought pg_db_role_setting into it, which\ndoesn't matter here because it's applied at login, and a security\ndefiner function doesn't log in as the user to which it switches.\nThere really is no such thing as \"the owner's\" setting. There may be a\nsetting which is applied to the owner's session if the owner logs in,\nbut there's no default value for all code run as the owner -- perhaps\nthere should be, but that's not how it works. 
I don't think we have\nmuch precedent for using the word \"executor\" to mean \"the user who\ncalled a function\" as opposed to \"the code that executed a planned\nquery\".\n\nI don't really think there's too much wrong with what I wrote in the\npatch as proposed, and I would like to get it committed and move on\nwithout getting drawn into a wide-ranging discussion of every way in\nwhich we might be able to improve the surrounding structure.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n", "msg_date": "Mon, 16 Jan 2023 10:26:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Monday, January 16, 2023, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n>\n> I don't really think there's too much wrong with what I wrote in the\n> patch as proposed, and I would like to get it committed and move on\n> without getting drawn into a wide-ranging discussion of every way in\n> which we might be able to improve the surrounding structure.\n>\n\nI’m moving on as well. Go with what you have. I have my personal\nunderstanding clarified at this point. If the docs need more work people\nwill ask questions to help guide such work.\n\nDavid J.\n", "msg_date": "Mon, 16 Jan 2023 08:33:48 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." }, { "msg_contents": "On Mon, Jan 16, 2023 at 10:33 AM David G. Johnston\n<david.g.johnston@gmail.com> wrote:\n> I’m moving on as well. Go with what you have. I have my personal understanding clarified at this point. If the docs need more work people will ask questions to help guide such work.\n\nYeah, I hope so.\n\nIt's becoming increasingly clear to me that we haven't put enough\neffort into clarifying what I will broadly call \"trust issues\" in the\ndocumentation. It's bad if you call untrusted code that runs as you,\nand it's bad if code that runs as you gets called by untrusted people\nfor whose antics you are not sufficiently prepared, and there are a\nlot of ways those things things can happen: direction function calls,\noperators, triggers, row-level security, views, index or materialized\nview rebuilds, etc. I think it would be good to have a general\ntreatment of those issues in the documentation written by a\nsecurity-conscious hacker or hackers who are really familiar both with\nthe behavior of the system and also able to make the security\nconsequences understandable to people who are not so deeply invested\nin PostgreSQL. I don't want to do that on this thread, but to the\nextent that you're arguing that the current treatment is inadequate,\nI'm fully in agreement with that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Jan 2023 10:49:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add new GUC createrole_self_grant." } ]
[ { "msg_contents": "Hi Hackers,\n\nvacuum is not able to clean up dead tuples when OldestXmin is not moving\n(because of a long running transaction or when hot_standby_feedback is\nbehind). Even though OldestXmin is not moved from the last time it checked,\nit keeps retrying every autovacuum_naptime and wastes CPU cycles and IOs\nwhen pages are not in memory. Can we not bypass the dead tuple collection\nand cleanup step until OldestXmin is advanced? Below log shows the vacuum\nrunning every 1 minute.\n\n2023-01-09 08:13:01.364 UTC [727219] LOG: automatic vacuum of table\n\"postgres.public.t1\": index scans: 0\n pages: 0 removed, 6960 remain, 6960 scanned (100.00% of total)\n tuples: 0 removed, 1572864 remain, 786432 are dead but not yet\nremovable\n removable cutoff: 852, which was 2 XIDs old when operation ended\n frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n index scan not needed: 0 pages from table (0.00% of total) had 0\ndead item identifiers removed\n avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n buffer usage: 13939 hits, 0 misses, 0 dirtied\n WAL usage: 0 records, 0 full page images, 0 bytes\n system usage: CPU: user: 0.15 s, system: 0.00 s, elapsed: 0.29 s\n2023-01-09 08:14:01.363 UTC [727289] LOG: automatic vacuum of table\n\"postgres.public.t1\": index scans: 0\n pages: 0 removed, 6960 remain, 6960 scanned (100.00% of total)\n tuples: 0 removed, 1572864 remain, 786432 are dead but not yet\nremovable\n removable cutoff: 852, which was 2 XIDs old when operation ended\n frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n index scan not needed: 0 pages from table (0.00% of total) had 0\ndead item identifiers removed\n avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n buffer usage: 13939 hits, 0 misses, 0 dirtied\n WAL usage: 0 records, 0 full page images, 0 bytes\n system usage: CPU: user: 0.14 s, system: 0.00 s, elapsed: 0.29 s\n\nThanks,\nSirisha", "msg_date": "Tue, 10 Jan 2023 13:46:19 -0800", "msg_from": "sirisha
chamarthi <sirichamarthi22@gmail.com>", "msg_from_op": true, "msg_subject": "Wasted Vacuum cycles when OldestXmin is not moving" }, { "msg_contents": "On Wed, Jan 11, 2023 at 3:16 AM sirisha chamarthi\n<sirichamarthi22@gmail.com> wrote:\n>\n> Hi Hackers,\n>\n> vacuum is not able to clean up dead tuples when OldestXmin is not moving (because of a long running transaction or when hot_standby_feedback is behind). Even though OldestXmin is not moved from the last time it checked, it keeps retrying every autovacuum_naptime and wastes CPU cycles and IOs when pages are not in memory. Can we not bypass the dead tuple collection and cleanup step until OldestXmin is advanced? Below log shows the vacuum running every 1 minute.\n>\n> 2023-01-09 08:13:01.364 UTC [727219] LOG: automatic vacuum of table \"postgres.public.t1\": index scans: 0\n> pages: 0 removed, 6960 remain, 6960 scanned (100.00% of total)\n> tuples: 0 removed, 1572864 remain, 786432 are dead but not yet removable\n> removable cutoff: 852, which was 2 XIDs old when operation ended\n> frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n> index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n> avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n> buffer usage: 13939 hits, 0 misses, 0 dirtied\n> WAL usage: 0 records, 0 full page images, 0 bytes\n> system usage: CPU: user: 0.15 s, system: 0.00 s, elapsed: 0.29 s\n> 2023-01-09 08:14:01.363 UTC [727289] LOG: automatic vacuum of table \"postgres.public.t1\": index scans: 0\n> pages: 0 removed, 6960 remain, 6960 scanned (100.00% of total)\n> tuples: 0 removed, 1572864 remain, 786432 are dead but not yet removable\n> removable cutoff: 852, which was 2 XIDs old when operation ended\n> frozen: 0 pages from table (0.00% of total) had 0 tuples frozen\n> index scan not needed: 0 pages from table (0.00% of total) had 0 dead item identifiers removed\n> avg read rate: 0.000 MB/s, avg write rate: 0.000 MB/s\n> buffer usage: 
13939 hits, 0 misses, 0 dirtied\n> WAL usage: 0 records, 0 full page images, 0 bytes\n> system usage: CPU: user: 0.14 s, system: 0.00 s, elapsed: 0.29 s\n\nCan you provide a patch and test case, if possible, a TAP test with\nand without patch?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:51:34 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Wasted Vacuum cycles when OldestXmin is not moving" } ]
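The optimization requested in the thread above, skipping the collection-and-cleanup pass while the removable cutoff is stuck, can be sketched as a small bookkeeping simulation. All names below are invented for illustration; this is not autovacuum code, and a real implementation would presumably still need to let anti-wraparound (aggressive) vacuums through regardless of the cutoff.

```python
# Hypothetical sketch: remember the OldestXmin seen at the end of each
# vacuum pass and skip the next pass if it has not advanced, since every
# "dead but not yet removable" tuple from last time is still not removable.

class VacuumScheduler:
    def __init__(self):
        self._last_cutoff = {}  # table name -> OldestXmin at last pass

    def should_vacuum(self, table, oldest_xmin):
        """Return True if a cleanup pass could remove anything new."""
        last = self._last_cutoff.get(table)
        if last is not None and oldest_xmin <= last:
            return False  # cutoff stuck: a full heap scan only burns CPU/IO
        return True

    def record_pass(self, table, oldest_xmin):
        self._last_cutoff[table] = oldest_xmin

sched = VacuumScheduler()
assert sched.should_vacuum("t1", 852)        # first attempt always runs
sched.record_pass("t1", 852)
assert not sched.should_vacuum("t1", 852)    # cutoff still 852: skip
assert sched.should_vacuum("t1", 900)        # cutoff advanced: run again
```

In the log above, both passes report the same removable cutoff (852), which is exactly the case a check like this would short-circuit.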
[ { "msg_contents": "Is it desirable to support specifying a level ?\n\nMaybe there's a concern about using high compression levels, but \nI'll start by asking if the feature is wanted at all.\n\nPrevious discussion at: 20210614012412.GA31772@telsasoft.com", "msg_date": "Tue, 10 Jan 2023 17:26:34 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "wal_compression = method:level" } ]
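Assuming the combined syntax floated in the message above (something like wal_compression = 'zstd:6'), the setting string could be split and validated roughly as follows. The method names and per-method default levels here are placeholders for illustration, not PostgreSQL's actual GUC machinery.

```python
# Placeholder defaults; a real GUC hook would validate against the
# compression methods the server was actually built with.
DEFAULT_LEVELS = {"pglz": None, "lz4": 1, "zstd": 3}

def parse_wal_compression(value):
    """Parse 'method' or 'method:level' into a (method, level) pair."""
    method, sep, level_str = value.partition(":")
    if method not in DEFAULT_LEVELS:
        raise ValueError(f"unrecognized compression method: {method!r}")
    if not sep:
        return method, DEFAULT_LEVELS[method]  # bare method, e.g. "zstd"
    if not level_str.lstrip("-").isdigit():
        raise ValueError(f"invalid compression level: {level_str!r}")
    return method, int(level_str)

assert parse_wal_compression("zstd") == ("zstd", 3)
assert parse_wal_compression("zstd:6") == ("zstd", 6)
```

A bare method name falls back to a default level, which keeps the existing one-word form of the setting working unchanged.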
[ { "msg_contents": "Hi all,\n\nI've been talking to other Timescale devs about a requested change to\npg_dump, and there's been quite a bit of back-and-forth to figure out\nwhat, exactly, we want. Any mistakes here are mine, but I think we've\nbeen able to distill it down to the following request:\n\nWe'd like to be allowed to change the schema for a table that's been\nmarked in the past with pg_extension_config_dump().\n\nUnless I'm missing something obvious (please, let it be that) there's no\nway to do this safely. Once you've marked an internal table as dumpable,\nits schema is effectively frozen if you want your dumps to work across\nversions, because otherwise you'll try to restore that \"catalog\" data\ninto a table that has different columns. And while sometimes you can\nmake that work, it doesn't in the general case.\n\nWe (Timescale) do already change the schemas today, but we pay the\nassociated costs in that dump/restore doesn't work without manual\nversion bookkeeping and user fiddling -- and in the worst cases, it\nappears to \"work\" across versions but leaves our catalog tables in an\ninconsistent state. So the request is to come up with a way to support\nthis case.\n\nSome options that have been proposed so far:\n\n1) Don't ask for a new feature, and instead try to ensure infinite\nbackwards compatibility for those tables.\n\nFor extension authors who have already done this -- and have likely done\nsome heavy architectural lifting to make it work -- this is probably the\nfirst thing that will come to mind, and it was the first thing I said,\ntoo.\n\nBut the more I say it, the less acceptable it feels. 
Not even Postgres\nis expected to maintain infinite catalog compatibility into the future.\nWe need to evolve our catalogs, too -- and we already provide the\nstandard update scripts to perform migrations of those tables, but a\ndump/restore doesn't have any way to use them today.\n\n2) Provide a way to record the exact version of an extension in a dump.\n\nBrute-force, but pretty much guaranteed to fix the cross-version\nproblem, because the dump can't be accidentally restored to an extension\nversion with a different catalog schema. Users then manually ALTER\nEXTENSION ... UPDATE (or we could even include that in the dump itself,\nas the final action). Doing this by default would punish extensions that\ndon't have this problem, so it would have to be opt-in in some way.\n\nIt's also unnecessarily strict IMO -- even if we don't have a config\ntable change in a new version, we'll still require the old extension\nversion to be available alongside the new version during a restore.\nMaybe a tweak on this idea would be to introduce a catversion for\nextensions.\n\n3) Provide a way to record the entire internal state of an extension in\na dump.\n\nEvery extension is already expected to handle the case where the\ninternal state is at version X but the installed extension is at version\nX+N, and the update scripts we provide will perform the necessary\nmigrations. But there's no way to reproduce this case using\ndump/restore, because dumping an extension omits its internals.\n\nIf a dump could instead include the entire internal state of an\nextension, then we'd be guaranteed to reproduce the exact situation that\nwe already have to support for an in-place upgrade. After a restore, the\nSQL is at version X, the installed extension is some equal or later\nversion, and all that remains is to run the update scripts, either\nmanually or within the dump itself.\n\nLike (2), I think there's no way you'd all accept this cost for every\nextension. 
It'd have to be opt-in.\n\n--\n\nHopefully that makes a certain amount of sense. Does it seem like a\nreasonable thing to ask?\n\nI'm happy to clarify anything above, and if you know of an obvious\nsolution I'm missing, I would love to be corrected. :D\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 10 Jan 2023 16:08:18 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Can we let extensions change their dumped catalog schemas?" }, { "msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> We'd like to be allowed to change the schema for a table that's been\n> marked in the past with pg_extension_config_dump().\n\n> Unless I'm missing something obvious (please, let it be that) there's no\n> way to do this safely. Once you've marked an internal table as dumpable,\n> its schema is effectively frozen if you want your dumps to work across\n> versions, because otherwise you'll try to restore that \"catalog\" data\n> into a table that has different columns. And while sometimes you can\n> make that work, it doesn't in the general case.\n\nI agree that's a problem, but it's not that we're arbitrarily prohibiting\nsomething that would work. What, concretely, do you think could be\ndone to improve the situation?\n\n> 2) Provide a way to record the exact version of an extension in a dump.\n\nDon't really see how that helps? I also fear that it will break\na bunch of use-cases that work fine today, which are exactly the\nones for which we originally defined pg_dump as *not* committing\nto a particular extension version.\n\nIt feels like what we really need here is some way to mutate the\nold format of an extension config table into the new format.\nSimple addition of new columns shouldn't be a problem (in fact,\nI think that works already, or could easily be made to). If you\nwant some ETL processing then it's harder :-(. 
Could an ON INSERT\ntrigger on an old config table transpose converted data into a\nnewer config table?\n\nAnother point that ought to be made here is that pg_dump is not\nthe only outside consumer of extension config data. You're likely\nto break some applications if you change a config table too much.\nThat's not an argument that we shouldn't try to make pg_dump more\nforgiving, but I'm not sure that we need to move heaven and earth.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 22:53:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we let extensions change their dumped catalog schemas?" }, { "msg_contents": "On Tue, Jan 10, 2023 at 7:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jacob Champion <jchampion@timescale.com> writes:\n> > Unless I'm missing something obvious (please, let it be that) there's no\n> > way to do this safely. Once you've marked an internal table as dumpable,\n> > its schema is effectively frozen if you want your dumps to work across\n> > versions, because otherwise you'll try to restore that \"catalog\" data\n> > into a table that has different columns. And while sometimes you can\n> > make that work, it doesn't in the general case.\n>\n> I agree that's a problem, but it's not that we're arbitrarily prohibiting\n> something that would work. What, concretely, do you think could be\n> done to improve the situation?\n\nConcretely, I think extensions should be able to invoke their update\nscripts at some point after a dump/restore cycle, whether\nautomatically or manually.\n\n> > 2) Provide a way to record the exact version of an extension in a dump.\n>\n> Don't really see how that helps?\n\nIf pg_dump recorded our extension using\n\n CREATE EXTENSION timescaledb VERSION <original version>\n\nthen we'd be able to migrate the changed catalog post-restore, using a\nstandard ALTER EXTENSION ... 
UPDATE.\n\n> I also fear that it will break\n> a bunch of use-cases that work fine today, which are exactly the\n> ones for which we originally defined pg_dump as *not* committing\n> to a particular extension version.\n\nRight, I think it would have to be opt-in. Say, a new control file\noption dump_version or some such.\n\n> It feels like what we really need here is some way to mutate the\n> old format of an extension config table into the new format.\n\nAgreed. We already provide mutation functions via the update scripts,\nso I think both proposal 2 and 3 do that. I'm curious about your\nopinion on option 3, since it would naively seem to make pg_dump do\n_less_ work for a given extension.\n\n> Simple addition of new columns shouldn't be a problem (in fact,\n> I think that works already, or could easily be made to). If you\n> want some ETL processing then it's harder :-(.\n\nOne sharp edge for the add-a-new-column case is where you give the new\ncolumn a default, and you want all of the old migrated rows to have\nsome non-default value to handle backwards compatibility. (But that\ncase is handled trivially if you _know_ that you're performing a\nmigration.)\n\n> Could an ON INSERT\n> trigger on an old config table transpose converted data into a\n> newer config table?\n\nYou mean something like, introduce table catalog_v2, and have all\nINSERTs to catalog_v1 migrate and redirect the rows? That seems like\nit could work today, though it would mean maintaining two different\nupgrade paths for the same table, migrating all users of the catalog\nto the new name, and needing to drop the old table at... some point\nafter the restore? I don't know if there would be performance concerns\nwith larger catalogs (in fact I'm not sure how big these catalogs\nget).\n\n> Another point that ought to be made here is that pg_dump is not\n> the only outside consumer of extension config data. 
You're likely\n> to break some applications if you change a config table too much.\n\nSuch as? We don't really want applications to be coupled against our\ninternals by accident, but we have to dump the internals to be able to\nreproduce the state of the system.\n\n> That's not an argument that we shouldn't try to make pg_dump more\n> forgiving, but I'm not sure that we need to move heaven and earth.\n\nAgreed. Hopefully we can find something that just moves a little earth. :D\n\nThanks!\n--Jacob\n\n\n", "msg_date": "Wed, 11 Jan 2023 10:27:29 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Can we let extensions change their dumped catalog schemas?" }, { "msg_contents": "Jacob Champion <jchampion@timescale.com> writes:\n> On Tue, Jan 10, 2023 at 7:53 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I also fear that it will break\n>> a bunch of use-cases that work fine today, which are exactly the\n>> ones for which we originally defined pg_dump as *not* committing\n>> to a particular extension version.\n\n> Right, I think it would have to be opt-in. Say, a new control file\n> option dump_version or some such.\n\nThat would require all the installed extensions to cope with this\nthe same way, which does not seem like a great assumption.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:03:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Can we let extensions change their dumped catalog schemas?" }, { "msg_contents": "On Wed, Jan 11, 2023 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Jacob Champion <jchampion@timescale.com> writes:\n> > Right, I think it would have to be opt-in. Say, a new control file\n> > option dump_version or some such.\n>\n> That would require all the installed extensions to cope with this\n> the same way, which does not seem like a great assumption.\n\nHow so? 
Most installed extensions would not opt into a version dump,\nI'd imagine.\n\nOr do you mean that the version dump would apply retroactively to\nolder versions of the extension, even if it wasn't needed in the past?\n\n--Jacob\n\n\n", "msg_date": "Thu, 12 Jan 2023 11:04:49 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Can we let extensions change their dumped catalog schemas?" }, { "msg_contents": "On 1/12/23 11:04, Jacob Champion wrote:\n> On Wed, Jan 11, 2023 at 1:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Jacob Champion <jchampion@timescale.com> writes:\n>>> Right, I think it would have to be opt-in. Say, a new control file\n>>> option dump_version or some such.\n>>\n>> That would require all the installed extensions to cope with this\n>> the same way, which does not seem like a great assumption.\n> \n> How so? Most installed extensions would not opt into a version dump,\n> I'd imagine.\n\nAs a concrete example, Timescale's extension control file could look\nlike this:\n\n default_version = '2.x.y'\n module_pathname = '$libdir/timescaledb-2.x.y'\n ...\n dump_version = true\n\nwhich would then cause pg_dump to issue a VERSION for its CREATE\nEXTENSION line. Other extensions would remain with the default\n(dump_version = false), so they'd be dumped without an explicit VERSION.\n\n(And in the case of option 3, the name of the control file option\nchanges -- dump_internals, maybe? -- but it still doesn't affect other\ninstalled extensions.)\n\n--Jacob\n\n\n", "msg_date": "Tue, 17 Jan 2023 15:18:26 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Can we let extensions change their dumped catalog schemas?" 
}, { "msg_contents": "On Tue, Jan 17, 2023 at 3:18 PM Jacob Champion <jchampion@timescale.com> wrote:\n> As a concrete example, Timescale's extension control file could look\n> like this:\n>\n> default_version = '2.x.y'\n> module_pathname = '$libdir/timescaledb-2.x.y'\n> ...\n> dump_version = true\n>\n> which would then cause pg_dump to issue a VERSION for its CREATE\n> EXTENSION line. Other extensions would remain with the default\n> (dump_version = false), so they'd be dumped without an explicit VERSION.\n\nEven more concretely, here's a draft patch. There's no nuance --\nsetting dump_version affects all past versions of the extension, and\nit can't be turned off at restore time. So extensions opting in would\nhave to be written to be side-by-side installable. (Ours is, in its\nown way, but the PGDG installers don't allow it -- which maybe\nhighlights a weakness in this approach.) I'm still not sure if this\naddresses Tom's concerns, or just adds new ones.\n\nWe could maybe give the user more control by overriding the default\nversion for an extension in a different TOC entry, and then add\noptions to ignore or include those version numbers during restore.\nThat doesn't address the side-by-side problem directly but gives an\nescape hatch.\n\nEarlier I wrote:\n\n> I'm curious about your\n> opinion on option 3, since it would naively seem to make pg_dump do\n> _less_ work for a given extension.\n\nThis was definitely naive :D We can't just make use of the\nbinary-upgrade machinery to dump extension internals, because it pins\nOIDs. So that might still be a valid approach, but it's not \"less\nwork.\"\n\nThanks,\n--Jacob", "msg_date": "Tue, 7 Feb 2023 10:16:17 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Can we let extensions change their dumped catalog schemas?" 
}, { "msg_contents": "On Tue, Feb 7, 2023 at 10:16 AM Jacob Champion <jchampion@timescale.com> wrote:\n> Even more concretely, here's a draft patch. There's no nuance --\n> setting dump_version affects all past versions of the extension, and\n> it can't be turned off at restore time. So extensions opting in would\n> have to be written to be side-by-side installable. (Ours is, in its\n> own way, but the PGDG installers don't allow it -- which maybe\n> highlights a weakness in this approach.) I'm still not sure if this\n> addresses Tom's concerns, or just adds new ones.\n\nAny general thoughts on this approach? I don't think it's baked enough\nfor registration yet, but I also don't know what approach would be\nbetter.\n\nGiven the recent chatter around extension versions in other threads\n[1, 2], I feel like there is a big gap between the Postgres core\nexpectations and what extension authors are actually doing when it\ncomes to handling version upgrades. I'd like to chip away at that,\nsomehow.\n\nThanks,\n--Jacob\n\n[1] https://www.postgresql.org/message-id/212074.1678301349%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/CA%2BTgmoYqK6nfP15SjuyO6t5jOmymG%3DqO7JOOVJdTOj96L0XJ1Q%40mail.gmail.com\n\n\n", "msg_date": "Wed, 8 Mar 2023 13:39:18 -0800", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": true, "msg_subject": "Re: Can we let extensions change their dumped catalog schemas?" } ]
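As a rough model of what the opt-in behavior discussed in this thread would change in the emitted SQL, here is a hypothetical sketch. The dump_version flag name comes from the thread's proposal, the statement shape is simplified, and none of this is actual pg_dump code.

```python
def create_extension_stmt(name, installed_version, dump_version=False):
    """Build the CREATE EXTENSION line a dump might emit for an extension."""
    stmt = f"CREATE EXTENSION IF NOT EXISTS {name}"
    if dump_version:
        # Pin the exact version the dump was taken from; after restore,
        # ALTER EXTENSION ... UPDATE can then migrate the config tables.
        stmt += f" VERSION '{installed_version}'"
    return stmt + " WITH SCHEMA public;"

# Default behavior stays version-less, as today:
assert (create_extension_stmt("hstore", "1.8")
        == "CREATE EXTENSION IF NOT EXISTS hstore WITH SCHEMA public;")
# Opting in pins the version, which is why the old extension version
# must remain installable side by side with the new one:
assert (create_extension_stmt("timescaledb", "2.9.3", dump_version=True)
        == "CREATE EXTENSION IF NOT EXISTS timescaledb"
           " VERSION '2.9.3' WITH SCHEMA public;")
```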
[ { "msg_contents": "I discussed this a bit in a different thread [0], but I thought it deserved\nits own thread.\n\nAfter setting wal_retrieve_retry_interval to 1ms in the tests, I noticed\nthat the recovery tests consistently take much longer. Upon further\ninspection, it looks like a similar race condition to the one described in\ne5d494d's commit message. With some added debug logs, I see that all of\nthe callers of MaybeStartWalReceiver() complete before SIGCHLD is\nprocessed, so ServerLoop() waits for a minute before starting the WAL\nreceiver.\n\nThe attached patch fixes this by adjusting DetermineSleepTime() to limit\nthe sleep to at most 100ms when WalReceiverRequested is set, similar to how\nthe sleep is limited when background workers must be restarted.\n\n[0] https://postgr.es/m/20221215224721.GA694065%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 10 Jan 2023 17:08:36 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "delay starting WAL receiver" }, { "msg_contents": "On Wed, Jan 11, 2023 at 2:08 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I discussed this a bit in a different thread [0], but I thought it deserved\n> its own thread.\n>\n> After setting wal_retrieve_retry_interval to 1ms in the tests, I noticed\n> that the recovery tests consistently take much longer. Upon further\n> inspection, it looks like a similar race condition to the one described in\n> e5d494d's commit message. 
With some added debug logs, I see that all of\n> the callers of MaybeStartWalReceiver() complete before SIGCHLD is\n> processed, so ServerLoop() waits for a minute before starting the WAL\n> receiver.\n>\n> The attached patch fixes this by adjusting DetermineSleepTime() to limit\n> the sleep to at most 100ms when WalReceiverRequested is set, similar to how\n> the sleep is limited when background workers must be restarted.\n\nIs the problem here that SIGCHLD is processed ...\n\n PG_SETMASK(&UnBlockSig); <--- here?\n\n selres = select(nSockets, &rmask, NULL, NULL, &timeout);\n\nMeanwhile the SIGCHLD handler code says:\n\n * Was it the wal receiver? If exit status is zero (normal) or one\n * (FATAL exit), we assume everything is all right just like normal\n * backends. (If we need a new wal receiver, we'll start one at the\n * next iteration of the postmaster's main loop.)\n\n... which is true, but that won't be reached for a while in this case\nif the timeout has already been set to 60s. Your patch makes that\n100ms, in that case, a time delay that by now attracts my attention\nlike a red rag to a bull (I don't know why you didn't make it 0).\n\nI'm not sure, but if I got that right, then I think the whole problem\nmight automatically go away with CF #4032. 
The SIGCHLD processing\ncode will run not when signals are unblocked before select() (that is\ngone), but instead *after* the event loop wakes up with WL_LATCH_SET,\nand runs the handler code in the regular user context before dropping\nthrough to the rest of the main loop.\n\n\n", "msg_date": "Wed, 11 Jan 2023 17:20:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: delay starting WAL receiver" }, { "msg_contents": "On Wed, Jan 11, 2023 at 5:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> (I don't know why you didn't make it 0)\n\n(Oh, I see why it had to be non-zero to avoiding burning CPU, ignore that part.)\n\n\n", "msg_date": "Wed, 11 Jan 2023 17:26:44 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: delay starting WAL receiver" }, { "msg_contents": "On Wed, Jan 11, 2023 at 05:20:38PM +1300, Thomas Munro wrote:\n> Is the problem here that SIGCHLD is processed ...\n> \n> PG_SETMASK(&UnBlockSig); <--- here?\n> \n> selres = select(nSockets, &rmask, NULL, NULL, &timeout);\n> \n> Meanwhile the SIGCHLD handler code says:\n> \n> * Was it the wal receiver? If exit status is zero (normal) or one\n> * (FATAL exit), we assume everything is all right just like normal\n> * backends. (If we need a new wal receiver, we'll start one at the\n> * next iteration of the postmaster's main loop.)\n> \n> ... which is true, but that won't be reached for a while in this case\n> if the timeout has already been set to 60s. Your patch makes that\n> 100ms, in that case, a time delay that by now attracts my attention\n> like a red rag to a bull (I don't know why you didn't make it 0).\n\nI think this is right. At the very least, it seems consistent with my\nobservations.\n\n> I'm not sure, but if I got that right, then I think the whole problem\n> might automatically go away with CF #4032. 
The SIGCHLD processing\n> code will run not when signals are unblocked before select() (that is\n> gone), but instead *after* the event loop wakes up with WL_LATCH_SET,\n> and runs the handler code in the regular user context before dropping\n> through to the rest of the main loop.\n\nYeah, with those patches, the problem goes away. IIUC the key part is that\nthe postmaster's latch gets set when SIGCHLD is received, so even if\nSIGUSR1 and SIGCHLD are processed out of order, WalReceiverPID gets cleared\nand we try to restart it shortly thereafter. I find this much easier to\nreason about.\n\nI'll go ahead and withdraw this patch from the commitfest. Thanks for\nchiming in.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 10 Jan 2023 21:47:54 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: delay starting WAL receiver" } ]
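For reference, the withdrawn patch's approach, capping the postmaster's sleep while WalReceiverRequested is set, can be modeled like this. The 100ms cap matches the patch description; everything else is a stand-in for the real DetermineSleepTime() logic, which this simulation does not reproduce.

```python
DEFAULT_SLEEP_MS = 60_000   # postmaster's normal select() timeout
WALRCV_RECHECK_MS = 100     # cap proposed by the (withdrawn) patch

def determine_sleep_ms(wal_receiver_requested, next_bgworker_restart_ms=None):
    """Pick the main-loop sleep, mimicking DetermineSleepTime()'s shape."""
    sleep = DEFAULT_SLEEP_MS
    if next_bgworker_restart_ms is not None:
        # Existing behavior: wake in time to restart background workers.
        sleep = min(sleep, next_bgworker_restart_ms)
    if wal_receiver_requested:
        # Patch's addition: don't wait a full minute just because the
        # start request raced with child-exit processing.
        sleep = min(sleep, WALRCV_RECHECK_MS)
    return sleep

assert determine_sleep_ms(False) == 60_000
assert determine_sleep_ms(True) == 100
assert determine_sleep_ms(True, next_bgworker_restart_ms=50) == 50
```

With the CF #4032 rework discussed above, the SIGCHLD processing is driven by the latch instead, so this cap becomes unnecessary.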
[ { "msg_contents": "Hi,\n\nI realized that in CreateDecodingContext() function, we update both\nslot->data.two_phase and two_phase_at without acquiring the spinlock:\n\n /* Mark slot to allow two_phase decoding if not already marked */\n if (ctx->twophase && !slot->data.two_phase)\n {\n slot->data.two_phase = true;\n slot->data.two_phase_at = start_lsn;\n ReplicationSlotMarkDirty();\n ReplicationSlotSave();\n SnapBuildSetTwoPhaseAt(ctx->snapshot_builder, start_lsn);\n }\n\nI think we should acquire the spinlock when updating fields of the\nreplication slot even by its owner. Otherwise readers could see\ninconsistent results. Looking at another place where we update\ntwo_phase_at, we acquire the spinlock:\n\n SpinLockAcquire(&slot->mutex);\n slot->data.confirmed_flush = ctx->reader->EndRecPtr;\n if (slot->data.two_phase)\n slot->data.two_phase_at = ctx->reader->EndRecPtr;\n SpinLockRelease(&slot->mutex);\n\nIt seems to me an oversight of commit a8fd13cab0b. I've attached the\nsmall patch to fix it.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 11 Jan 2023 11:07:05 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Spinlock is missing when updating two_phase of ReplicationSlot" }, { "msg_contents": "On Wed, Jan 11, 2023 at 11:07:05AM +0900, Masahiko Sawada wrote:\n> I think we should acquire the spinlock when updating fields of the\n> replication slot even by its owner. Otherwise readers could see\n> inconsistent results. Looking at another place where we update\n> two_phase_at, we acquire the spinlock:\n> \n> SpinLockAcquire(&slot->mutex);\n> slot->data.confirmed_flush = ctx->reader->EndRecPtr;\n> if (slot->data.two_phase)\n> slot->data.two_phase_at = ctx->reader->EndRecPtr;\n> SpinLockRelease(&slot->mutex);\n> \n> It seems to me an oversight of commit a8fd13cab0b. 
I've attached the\n> small patch to fix it.\n\nLooks right to me, the paths updating the data related to the slots\nare careful about that, even when it comes to fetching a slot from\nMyReplicationSlot. I have been looking around the slot code to see if\nthere are other inconsistencies, and did not notice anything standing\nout. Will fix..\n--\nMichael", "msg_date": "Wed, 11 Jan 2023 14:36:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Spinlock is missing when updating two_phase of ReplicationSlot" }, { "msg_contents": "On Wed, Jan 11, 2023 at 02:36:17PM +0900, Michael Paquier wrote:\n> Looks right to me, the paths updating the data related to the slots\n> are careful about that, even when it comes to fetching a slot from\n> MyReplicationSlot. I have been looking around the slot code to see if\n> there are other inconsistencies, and did not notice anything standing\n> out. Will fix..\n\nAnd done, thanks!\n--\nMichael", "msg_date": "Thu, 12 Jan 2023 13:42:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Spinlock is missing when updating two_phase of ReplicationSlot" }, { "msg_contents": "On Thu, Jan 12, 2023 at 1:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 11, 2023 at 02:36:17PM +0900, Michael Paquier wrote:\n> > Looks right to me, the paths updating the data related to the slots\n> > are careful about that, even when it comes to fetching a slot from\n> > MyReplicationSlot. I have been looking around the slot code to see if\n> > there are other inconsistencies, and did not notice anything standing\n> > out. 
Will fix..\n>\n> And done, thanks!\n\nThank you!\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Jan 2023 14:05:44 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Spinlock is missing when updating two_phase of ReplicationSlot" } ]
[ { "msg_contents": "I discussed this elsewhere [0], but I thought it deserved its own thread.\n\nAfter setting wal_retrieve_retry_interval to 1ms in the tests, I noticed\nthat some of the archiving tests began consistently failing on Windows. I\nbelieve the problem is that WaitForWALToBecomeAvailable() depends on the\ncall to WaitLatch() for wal_retrieve_retry_interval to ensure that signals\nare dispatched (i.e., pgwin32_dispatch_queued_signals()). With a low retry\ninterval, WaitForWALToBecomeAvailable() might skip the call to WaitLatch(),\nand the signals are never processed.\n\nThe attached patch fixes this by always calling WaitLatch(), even if\nwal_retrieve_retry_interval milliseconds have already elapsed and the\ntimeout is 0.\n\n[0] https://postgr.es/m/20221231235019.GA1223171%40nathanxps13\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 10 Jan 2023 22:11:16 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "low wal_retrieve_retry_interval causes missed signals on Windows" }, { "msg_contents": "Hi,\n\nOn 2023-01-10 22:11:16 -0800, Nathan Bossart wrote:\n> The attached patch fixes this by always calling WaitLatch(), even if\n> wal_retrieve_retry_interval milliseconds have already elapsed and the\n> timeout is 0.\n\nIt doesn't seem right to call WaitLatch() just for that purpose - nor\nnecessarily a complete fix.\n\nGiven that we check for interrupts in other parts of recovery with\nHandleStartupProcInterrupt(), which doesn't interact with latches, isn't the\nactual bug that HandleStartupProcInterrupt() doesn't contain the same black\nmagic that CHECK_FOR_INTERRUPTS() contains on windows? Namely this stuff:\n\n\n#ifndef WIN32\n...\n#else\n#define INTERRUPTS_PENDING_CONDITION() \\\n\t(unlikely(UNBLOCKED_SIGNAL_QUEUE()) ? 
pgwin32_dispatch_queued_signals() : 0, \\\n\t unlikely(InterruptPending))\n#endif\n\n/* Service interrupt, if one is pending and it's safe to service it now */\n#define CHECK_FOR_INTERRUPTS() \\\ndo { \\\n\tif (INTERRUPTS_PENDING_CONDITION()) \\\n\t\tProcessInterrupts(); \\\n} while(0)\n\n\nLooks like we have that bug in quite a few places... Some are \"protected\" by\nunconditional WaitLatch() calls, but at least pgarch.c, checkpointer.c via\nCheckpointWriteDelay() seem borked.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 12:48:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: low wal_retrieve_retry_interval causes missed signals on Windows" }, { "msg_contents": "On Wed, Jan 11, 2023 at 12:48:36PM -0800, Andres Freund wrote:\n> Given that we check for interrupts in other parts of recovery with\n> HandleStartupProcInterrupt(), which doesn't interact with latches, isn't the\n> actual bug that HandleStartupProcInterrupt() doesn't contain the same black\n> magic that CHECK_FOR_INTERRUPTS() contains on windows? Namely this stuff:\n\nYeah, this seems like a more comprehensive fix. I've attached a patch that\nadds this Windows signaling stuff to the HandleXXXInterrupts() functions in\nthe files you listed. Is this roughly what you had in mind? 
If so, I'll\nlook around for anywhere else it is needed.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 11 Jan 2023 15:26:45 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: low wal_retrieve_retry_interval causes missed signals on Windows" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 15:26:45 -0800, Nathan Bossart wrote:\n> On Wed, Jan 11, 2023 at 12:48:36PM -0800, Andres Freund wrote:\n> > Given that we check for interrupts in other parts of recovery with\n> > HandleStartupProcInterrupt(), which doesn't interact with latches, isn't the\n> > actual bug that HandleStartupProcInterrupt() doesn't contain the same black\n> > magic that CHECK_FOR_INTERRUPTS() contains on windows? Namely this stuff:\n> \n> Yeah, this seems like a more comprehensive fix. I've attached a patch that\n> adds this Windows signaling stuff to the HandleXXXInterrupts() functions in\n> the files you listed. Is this roughly what you had in mind? If so, I'll\n> look around for anywhere else it is needed.\n\nYes, that's what I roughly was thinking of. Although seeing the diff, I think\nit might be worth introducing a helper function that'd containing at least\npgwin32_dispatch_queued_signals() and ProcessProcSignalBarrier(). It's a bit\ncomplicated by ProcessProcSignalBarrier() only being applicable to shared\nmemory connected processes - excluding e.g. 
pgarch.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:40:14 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: low wal_retrieve_retry_interval causes missed signals on Windows" }, { "msg_contents": "On Wed, Jan 11, 2023 at 04:40:14PM -0800, Andres Freund wrote:\n> On 2023-01-11 15:26:45 -0800, Nathan Bossart wrote:\n>> On Wed, Jan 11, 2023 at 12:48:36PM -0800, Andres Freund wrote:\n>> > Given that we check for interrupts in other parts of recovery with\n>> > HandleStartupProcInterrupt(), which doesn't interact with latches, isn't the\n>> > actual bug that HandleStartupProcInterrupt() doesn't contain the same black\n>> > magic that CHECK_FOR_INTERRUPTS() contains on windows? Namely this stuff:\n>> \n>> Yeah, this seems like a more comprehensive fix. I've attached a patch that\n>> adds this Windows signaling stuff to the HandleXXXInterrupts() functions in\n>> the files you listed. Is this roughly what you had in mind? If so, I'll\n>> look around for anywhere else it is needed.\n> \n> Yes, that's what I roughly was thinking of. Although seeing the diff, I think\n> it might be worth introducing a helper function that'd containing at least\n> pgwin32_dispatch_queued_signals() and ProcessProcSignalBarrier(). It's a bit\n> complicated by ProcessProcSignalBarrier() only being applicable to shared\n> memory connected processes - excluding e.g. pgarch.\n\nAs of d75288f, the archiver should be connected to shared memory, so we\nmight be in luck. I guess we'd need to watch out for this if we want to\nback-patch it beyond v14. I'll work on a patch...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:59:14 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: low wal_retrieve_retry_interval causes missed signals on Windows" } ]
[ { "msg_contents": "> The confusion that 0001 is addressing is fair (cough, fc579e1, cough),\n> still I am wondering whether we could do a bit better to be more\n\nYeah, even after 0001 it's definitely suboptimal. I tried to keep the changes\nminimal to not distract from the main purpose of this patch. But I'll update\nthe patch to have some more. I'll respond to your other question first \n\n> In what is your proposal different from the following\n> entry in pg_ident.conf? As of:\n> mapname /^(.*)$ \\1\n\nIt's very different. I think easiest is to explain by example:\n\nIf there exist three users on the postgres server: admin, jelte and michael\n\nThen this rule (your suggested rule):\nmapname /^(.*)$ \\1\n\nIs equivalent to:\nmapname admin admin\nmapname jelte jelte\nmapname michael michael\n\nWhile with the \"all\" keyword you can create a rule like this:\nmapname admin all\n\nwhich is equivalent to:\nmapname admin admin\nmapname admin jelte\nmapname admin michael\n\n", "msg_date": "Wed, 11 Jan 2023 09:04:56 +0000", "msg_from": "Jelte Fennema <Jelte.Fennema@microsoft.com>", "msg_from_op": true, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Wed, Jan 11, 2023 at 09:04:56AM +0000, Jelte Fennema wrote:\n> It's very different. I think easiest is to explain by example:\n> \n> If there exist three users on the postgres server: admin, jelte and michael\n> \n> Then this rule (your suggested rule):\n> mapname /^(.*)$ \\1\n> \n> Is equivalent to:\n> mapname admin admin\n> mapname jelte jelte\n> mapname michael michael\n> \n> While with the \"all\" keyword you can create a rule like this:\n> mapname admin all\n> \n> which is equivalent to:\n> mapname admin admin\n> mapname admin jelte\n> mapname admin michael\n\nThanks for the explanation, I was missing your point. Hmm. 
On top\nof my mind, couldn't we also use a regexp for the pg-role rather than\njust a hardcoded keyword here then, so as it would be possible to\nallow a mapping to pass for a group of role names? \"all\" is just a\npattern to allow everything, at the end.\n--\nMichael", "msg_date": "Wed, 11 Jan 2023 20:05:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "> couldn't we also use a regexp for the pg-role rather than\n> just a hardcoded keyword here then, so as it would be possible to\n> allow a mapping to pass for a group of role names? \"all\" is just a\n> pattern to allow everything, at the end.\n\nThat's a good point. I hadn't realised that you added support for\nregexes in pg_hba.conf in 8fea868. Attached is a patchset\nwhere I reuse the pg_hba.conf code path to add support to\npg_ident.conf for: all, +group and regexes.\n\nThe main uncertainty I have is if the case insensitivity check is\nactually needed in check_role. It seems like a case insensitive\ncheck against the database user shouldn't actually be necessary.\nI only understand the need for the case insensitive check against\nthe system user. But I have too little experience with LDAP/kerberos\nto say for certain. So for now I kept the existing behaviour to\nnot regress.\n\nThe patchset also contains 3 preparatory patches with two refactoring\npasses and one small bugfix + test additions.\n\n> - renaming \"systemuser\" to \"system_user_token\" to outline that this is\n> not a simple string but an AuthToken with potentially a regexp?\n\nI decided against this, since now both system user and database user\nare tokens. Furthermore, compiler warnings should avoid any confusion\nagainst using this as a normal string. 
If you feel strongly about this\nthough, I'm happy to change this.\n\n\nOn Wed, 11 Jan 2023 at 14:34, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 11, 2023 at 09:04:56AM +0000, Jelte Fennema wrote:\n> > It's very different. I think easiest is to explain by example:\n> >\n> > If there exist three users on the postgres server: admin, jelte and michael\n> >\n> > Then this rule (your suggested rule):\n> > mapname /^(.*)$ \\1\n> >\n> > Is equivalent to:\n> > mapname admin admin\n> > mapname jelte jelte\n> > mapname michael michael\n> >\n> > While with the \"all\" keyword you can create a rule like this:\n> > mapname admin all\n> >\n> > which is equivalent to:\n> > mapname admin admin\n> > mapname admin jelte\n> > mapname admin michael\n>\n> Thanks for the explanation, I was missing your point. Hmm. On top\n> of my mind, couldn't we also use a regexp for the pg-role rather than\n> just a hardcoded keyword here then, so as it would be possible to\n> allow a mapping to pass for a group of role names? \"all\" is just a\n> pattern to allow everything, at the end.\n> --\n> Michael", "msg_date": "Wed, 11 Jan 2023 15:22:35 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Wed, Jan 11, 2023 at 03:22:35PM +0100, Jelte Fennema wrote:\n> The main uncertainty I have is if the case insensitivity check is\n> actually needed in check_role. It seems like a case insensitive\n> check against the database user shouldn't actually be necessary.\n> I only understand the need for the case insensitive check against\n> the system user. But I have too little experience with LDAP/kerberos\n> to say for certain. 
So for now I kept the existing behaviour to\n> not regress.\n\n if (!identLine->pg_user->quoted &&\n+ identLine->pg_user->string[0] != '+' &&\n+ !token_is_keyword(identLine->pg_user, \"all\") &&\n+ !token_has_regexp(identLine->pg_user) &&\nIf we finish by allowing a regexp for the PG user in an IdentLine, I\nwould choose to drop \"all\" entirely. Simpler is better when it comes\nto authentication, though we are working on getting things more..\nComplicated.\n\n+ Quoting a <replaceable>database-username</replaceable> containing\n+ <literal>\\1</literal> makes the <literal>\\1</literal>\n+ lose its special meaning.\n0002 and 0003 need careful thinking.\n\n+# Success as the regular expression matches and \\1 is replaced\n+reset_pg_ident($node, 'mypeermap', qq{/^$system_user(.*)\\$},\n+ 'test\\1mapuser');\n+test_role(\n+ $node,\n+ qq{testmapuser},\n+ 'peer',\n+ 0,\n+ 'with regular expression in user name map with \\1',\n+ log_like =>\n+ [qr/connection authenticated: identity=\"$system_user\" method=peer/]);\nRelying on kerberos to check the substitution pattern is a bit\nannoying.. I would be really tempted to extract and commit that\nindependently of the rest, actually, to provide some coverage of the\nsubstitution case in the peer test.\n\n> The patchset also contains 3 preparatory patches with two refactoring\n> passes and one small bugfix + test additions.\n\nApplied 0001, which looked fine and was an existing issue. At the\nend, I had no issues with the names you suggested.\n--\nMichael", "msg_date": "Thu, 12 Jan 2023 14:32:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "> Simpler is better when it comes to authentication\n\nI definitely agree with that, and if we didn't have existing\nparsing logic for pg_hba.conf I would agree. 
But given the existing\nlogic for pg_hba.conf, I think the path of least surprises is to\nsupport all of the same patterns that pg_hbac.conf supports.\n\nIt also makes the code simpler as we can simply reuse the\ncheck_role function, since that. I removed the lines you quoted\nsince those are actually not strictly necessary. They only change\nthe detection logic a bit in case of a \\1 existing in the string.\nAnd I'm not sure what the desired behaviour is for those.\n\n> I would be really tempted to extract and commit that\n> independently of the rest, actually, to provide some coverage of the\n> substitution case in the peer test.\n\nI split up that patch in two parts now and added the tests in a new 0001\npatch.\n\n> 0002 and 0003 need careful thinking.\n\n0002 should change no behaviour, since it simply stores the token in\nthe IdentLine struct, but doesn't start using the quoted or the regex field\nyet. 0003 is debatable indeed. To me it makes sense conceptually, but\nhaving a literal \\1 in a username seems like an unlikely scenario and\nthere might be pg_ident.conf files in existence where the \\1 is quoted\nthat would break because of this change. I haven't removed 0003 from\nthe patch set yet, but I kinda feel that the advantage is probably not\nworth the risk of breakage.\n\n0004 adds some breakage too. But there I think the advantages far outweigh\nthe risk of breakage. Both because added functionality is a much bigger\nadvantage, and because we only risk breaking when there exist users that\nare called \"all\", start with a literal + or start with a literal /.\nOnly \"all\" seems\nlike a somewhat reasonable username, but such a user existing seems\nunlikely to me given all its special meaning in pg_hba.conf. (I added this\nconsideration to the commit message)\n\n> > The main uncertainty I have is if the case insensitivity check is\n> > actually needed in check_role. 
It seems like a case insensitive\n> > check against the database user shouldn't actually be necessary.\n> > I only understand the need for the case insensitive check against\n> > the system user. But I have too little experience with LDAP/kerberos\n> > to say for certain. So for now I kept the existing behaviour to\n> > not regress.\n\nYou didn't write a response about this, but you did quote it. Did you intend\nto respond to it?\n\n> Applied 0001\n\nAwesome :)\n\n\nFinally, one separate thing I noticed is that regcomp_auth_token only\nchecks the / prefix, but doesn't check if the token was quoted or not.\nSo even if it's quoted it will be interpreted as a regex. Maybe we should\nchange that? At least for the regex parsing that is not released yet.", "msg_date": "Thu, 12 Jan 2023 10:10:02 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Thu, Jan 12, 2023 at 10:10:02AM +0100, Jelte Fennema wrote:\n> It also makes the code simpler as we can simply reuse the\n> check_role function, since that. I removed the lines you quoted\n> since those are actually not strictly necessary. They only change\n> the detection logic a bit in case of a \\1 existing in the string.\n> And I'm not sure what the desired behaviour is for those.\n\nHmm. This is a very good point. 0004 gets really easy to follow\nnow.\n\n> I split up that patch in two parts now and added the tests in a new 0001\n> patch.\n\nThanks, applied 0001.\n\n> 0002 should change no behaviour, since it simply stores the token in\n> the IdentLine struct, but doesn't start using the quoted or the regex field\n> yet. 0003 is debatable indeed. To me it makes sense conceptually, but\n> having a literal \\1 in a username seems like an unlikely scenario and\n> there might be pg_ident.conf files in existence where the \\1 is quoted\n> that would break because of this change. 
I haven't removed 0003 from\n> the patch set yet, but I kinda feel that the advantage is probably not\n> worth the risk of breakage.\n\n0003 would allow folks to use \\1 in a Postgres username if quoted. My\nchoice would be to agree with you here. Even if folks applying quotes\nwould not be able anymore to replace the pattern, the risk seems a bit\nremote? I would suspect that basically everybody does not rely on\n'\\1' being in the middle of pg-username string, using it only as a\nstrict replacement of the result coming from system-username to keep a\nsimpler mapping between the PG roles and the krb5/gss system roles.\nEven if they use a more complex schema that depends on strstr(),\nthings would break if they began the pg-username with quotes. Put it\nsimply, I'd agree with your 0003.\n\n> 0004 adds some breakage too. But there I think the advantages far outweigh\n> the risk of breakage. Both because added functionality is a much bigger\n> advantage, and because we only risk breaking when there exist users that\n> are called \"all\", start with a literal + or start with a literal /.\n> Only \"all\" seems\n> like a somewhat reasonable username, but such a user existing seems\n> unlikely to me given all its special meaning in pg_hba.conf. (I added this\n> consideration to the commit message)\n\nI don't see how much that's different from the recent discussion with\nregexps added for databases and users to pg_hba.conf. And consistency\nsounds pretty good to me here.\n\n> Finally, one separate thing I noticed is that regcomp_auth_token only\n> checks the / prefix, but doesn't check if the token was quoted or not.\n> So even if it's quoted it will be interpreted as a regex. Maybe we should\n> change that? At least for the regex parsing that is not released yet.\n\nregcomp_auth_token() should not decide to compile a regexp depending\non if an AuthToken is quoted or not. 
Regexps can have commas, and\nthis would impact the case of database or role lists in HBA entries.\nAnd that could be an issue with spaces as well? See the docs for\npatterns like:\ndb1,\"/^db\\d{2,4}$\",db2\n\nPoint taken that we don't care about lists for pg_ident entries,\nthough.\n\n> You didn't write a response about this, but you did quote it. Did you intend\n> to respond to it?\n\nNah, I should have deleted it. I had no useful opinion on this\nparticular point.\n--\nMichael", "msg_date": "Fri, 13 Jan 2023 11:09:11 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "> Even if folks applying quotes\n> would not be able anymore to replace the pattern, the risk seems a bit\n> remote?\n\nYeah I agree the risk is remote. To be clear, the main pattern I'm\nworried about breaking is simply \"\\1\". Where people had put\nquotes around \\1 for no reason. All in all, I'm fine if 0003 gets\nmerged, but I'd also be fine with it if it doesn't. Both the risk\nand the advantage seem fairly small.\n\n> I don't see how much that's different from the recent discussion with\n> regexps added for databases and users to pg_hba.conf. And consistency\n> sounds pretty good to me here.\n\nIt's not much different, except that here also all and + change their meaning\n(for pg_hba.conf those special cases already existed). Mainly I called it out\nbecause I realised this discussion was called out in that commit too.\n\n> Regexps can have commas\n\nThat's a really good reason to allow quoted regexes indeed. 
Even for pg_ident\nentries, commas in unquoted regexes would cause the AuthToken parsing to fail.\n\nIs there anything you still want to see changed about any of the patches?\n\n\n", "msg_date": "Fri, 13 Jan 2023 09:19:10 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Fri, Jan 13, 2023 at 09:19:10AM +0100, Jelte Fennema wrote:\n>> Even if folks applying quotes\n>> would not be able anymore to replace the pattern, the risk seems a bit\n>> remote?\n> \n> Yeah I agree the risk is remote. To be clear, the main pattern I'm\n> worried about breaking is simply \"\\1\". Where people had put\n> quotes around \\1 for no reason. All in all, I'm fine if 0003 gets\n> merged, but I'd also be fine with it if it doesn't. Both the risk\n> and the advantage seem fairly small.\n\nStill, I am having a few second thoughts about 0003 after thinking\nabout it over the weekend. Except if I am missing something, there\nare no issues with 0004 if we keep the current behavior of always\nreplacing \\1 even if pg-user is quoted? I would certainly add a new\ntest case either way.\n\n>> I don't see how much that's different from the recent discussion with\n>> regexps added for databases and users to pg_hba.conf. And consistency\n>> sounds pretty good to me here.\n> \n> It's not much different, except that here also all and + change their meaning\n> (for pg_hba.conf those special cases already existed). Mainly I called it out\n> because I realised this discussion was called out in that commit too.\n> \n>> Regexps can have commas\n> \n> That's a really good reason to allow quoted regexes indeed. 
Even for pg_ident\n> entries, commas in unquoted regexes would cause the AuthToken parsing to fail.\n> \n> Is there anything you still want to see changed about any of the patches?\n\n+ /*\n+ * Mark the token as quoted, so it will only be compared literally\n+ * and not for special meanings like, such as \"all\" and membership\n+ * checks using the + prefix.\n+ */\n+ expanded_pg_user_token = make_auth_token(expanded_pg_user, true);\nIt is critical to quote this AuthToken after the replacement, indeed.\nOr we are in big trouble.\n\n- /* no substitution, so copy the match */\n- expanded_pg_user = pstrdup(identLine->pg_user->string);\n+ expanded_pg_user_token = identLine->pg_user;\nPerhaps it would be simpler to use copy_auth_token() in this code path\nand always free the resulting token?\n\nIn the code path where system-user is a regexp, could it be better\nto skip the replacement of \\1 in the new AuthToken if pg-user is\nitself a regexp? The compiled regexp would be the same, but it could\nbe considered as a bit confusing, as it can be thought that the\ncompiled regexp of pg-user happened after the replacement?\n\nNo issues with 0002 after a second look, so applied to move on.\n--\nMichael", "msg_date": "Mon, 16 Jan 2023 14:22:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "> Still, I am having a few second thoughts about 0003 after thinking\n> about it over the weekend. Except if I am missing something, there\n> are no issues with 0004 if we keep the current behavior of always\n> replacing \\1 even if pg-user is quoted? I would certainly add a new\n> test case either way.\n\nYes, 0004 is not dependent on 003 at all. 
I attached a new version\nof 0003 where only a test and some documentation is added.\n\n> Perhaps it would be simpler to use copy_auth_token() in this code path\n> and always free the resulting token?\n\nI initially tried that when working on the patch, but copy_auth_token\n(surprisingly) doesn't copy the regex field into the new AuthToken.\nSo we'd have to regenerate it conditionally. Making the copy\nconditional seemed just as simple code-wise, with the added\nbonus that it's not doing a useless copy.\n\n> In the code path where system-user is a regexp, could it be better\n> to skip the replacement of \\1 in the new AuthToken if pg-user is\n> itself a regexp? The compiled regexp would be the same, but it could\n> be considered as a bit confusing, as it can be thought that the\n> compiled regexp of pg-user happened after the replacement?\n\nI updated 0004 to prioritize membership checks and regexes over\nsubstitution of \\1. I also added tests for this. Prioritizing \"all\" over\nsubstitution of \\1 is not necessary, since by definition \"all\" does\nnot include \\1.", "msg_date": "Mon, 16 Jan 2023 11:53:57 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Mon, Jan 16, 2023 at 11:53:57AM +0100, Jelte Fennema wrote:\n>> Perhaps it would be simpler to use copy_auth_token() in this code path\n>> and always free the resulting token?\n> \n> I initially tried that when working on the patch, but copy_auth_token\n> (surprisingly) doesn't copy the regex field into the new AuthToken.\n> So we'd have to regenerate it conditionally. 
Making the copy\n> conditional seemed just as simple code-wise, with the added\n> bonus that it's not doing a useless copy.\n\nOkay, I can live with that.\n\n>> In the code path where system-user is a regexp, could it be better\n>> to skip the replacement of \\1 in the new AuthToken if pg-user is\n>> itself a regexp? The compiled regexp would be the same, but it could\n>> be considered as a bit confusing, as it can be thought that the\n>> compiled regexp of pg-user happened after the replacement?\n> \n> I updated 0004 to prioritize membership checks and regexes over\n> substitution of \\1. I also added tests for this. Prioritizing \"all\" over\n> substitution of \\1 is not necessary, since by definition \"all\" does\n> not include \\1.\n\nThanks, 0003 is OK, so applied now.\n\n0004 looks fine as well, be it for the tests (I am hesitating to tweak\nthings a bit here actually for the role names), the code or the docs,\nstill I am planning a second lookup.\n--\nMichael", "msg_date": "Tue, 17 Jan 2023 14:10:18 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "> 0004 looks fine as well, be it for the tests (I am hesitating to tweak\n> things a bit here actually for the role names), the code or the docs,\n\nAnything I can do to help with this? Or will you do that yourself?\n\n\n", "msg_date": "Wed, 18 Jan 2023 10:35:29 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Wed, Jan 18, 2023 at 10:35:29AM +0100, Jelte Fennema wrote:\n> Anything I can do to help with this? Or will you do that yourself?\n\nNope. 
I just need some time to finish wrapping it, that's all.\n--\nMichael", "msg_date": "Thu, 19 Jan 2023 10:10:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Wed, Jan 18, 2023 at 10:35:29AM +0100, Jelte Fennema wrote:\n> Anything I can do to help with this? Or will you do that yourself?\n\nSo, I have done a second lookup, and tweaked a few things:\n- Addition of a macro for pg_strcasecmp(), to match with\ntoken_matches().\n- Fixed a bit the documentation.\n- Tweaked some comments and descriptions in the tests, I was rather\nfine with the role and group names.\n\nJelte, do you like this version?\n--\nMichael", "msg_date": "Thu, 19 Jan 2023 16:56:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "Looks good to me. One tiny typo in a comment that I noticed when going\nover the diff:\n\n+ * Mark the token as quoted, so it will only be compared literally\n+ * and not for some special meaning, such as \"all\" or a group\n+ * membership checks.\n\nshould be either:\n1. a group membership check\n2. group membership checks\n\nNow it's mixed singular and plural.\n\n\n", "msg_date": "Thu, 19 Jan 2023 12:23:16 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Thu, Jan 19, 2023 at 12:23:16PM +0100, Jelte Fennema wrote:\n> should be either:\n> 1. a group membership check\n> 2. group membership checks\n> \n> Now it's mixed singular and plural.\n\nThanks, fixed. 
And now applied the last patch.\n--\nMichael", "msg_date": "Fri, 20 Jan 2023 11:26:05 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "Hello,\n\nPlaying with this patch, I did not see descriptive comments in \npg_ident.conf.\n\nDoes it make sense to reflect changes to the PG-USERNAME field in the \npg_ident.conf.sample file?\n\nThe same relates to the regexp supportin the DATABASE and USER fieldsof \nthe pg_hba.conf.sample file(8fea8683).\n\n-----\nPavel Luzanov\n\n\n\n", "msg_date": "Mon, 13 Feb 2023 17:06:02 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Mon, 13 Feb 2023 at 15:06, Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n> Does it make sense to reflect changes to the PG-USERNAME field in the\n> pg_ident.conf.sample file?\n>\n> The same relates to the regexp supportin the DATABASE and USER fieldsof\n> the pg_hba.conf.sample file(8fea8683).\n\nThat definitely makes sense to me. 
When writing the patch I didn't\nrealise that there was also documentation in those files.\n\nI think it also makes sense to include usage of (some of) the features\nin the example files here:\nhttps://www.postgresql.org/docs/devel/auth-username-maps.html\n\n\n", "msg_date": "Mon, 13 Feb 2023 15:13:04 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Mon, Feb 13, 2023 at 03:13:04PM +0100, Jelte Fennema wrote:\n> On Mon, 13 Feb 2023 at 15:06, Pavel Luzanov <p.luzanov@postgrespro.ru> wrote:\n>> Does it make sense to reflect changes to the PG-USERNAME field in the\n>> pg_ident.conf.sample file?\n>>\n>> The same relates to the regexp supportin the DATABASE and USER fieldsof\n>> the pg_hba.conf.sample file(8fea8683).\n\nWhich comes down to blame me for both of them.\n\n> That definitely makes sense to me. When writing the patch I didn't\n> realise that there was also documentation in those files.\n> \n> I think it also makes sense to include usage of (some of) the features\n> in the example files here:\n> https://www.postgresql.org/docs/devel/auth-username-maps.html\n\nHmm, I am not sure that adding more examples in the sample files is\nworth the duplication with the docs.\n\nSo, please find attached a patch to close the gap the sample files,\nfor both things, with descriptions of all the field values they can\nuse.\n\nWhat do you think?\n--\nMichael", "msg_date": "Wed, 15 Feb 2023 16:11:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On 15.02.2023 10:11, Michael Paquier wrote:\n\n> Which comes down to blame me for both of them.\n\nMy only intention was to make postgres better.I'm sorry you took it that \nway.\n\n> So, please find attached a patch to close 
the gap in the sample files,\n> for both things, with descriptions of all the field values they can\n> use.\n\nA short and precise description. Nothing to add. Next time I will try to \noffer a patch at once.\n\n \n-----\nPavel Luzanov", "msg_date": "Wed, 15 Feb 2023 13:05:04 +0300", "msg_from": "Pavel Luzanov <p.luzanov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Wed, Feb 15, 2023 at 01:05:04PM +0300, Pavel Luzanov wrote:\n> On 15.02.2023 10:11, Michael Paquier wrote:\n>> Which comes down to blame me for both of them.\n> \n> My only intention was to make postgres better. I'm sorry you took it that\n> way.\n\nYou have no need to feel sorry about that. I really appreciate that\nyou took the time to report this issue, so don't worry. My point is\nthat I have committed this code, so basically it is my responsibility\nto take care of its maintenance.\n\n>> So, please find attached a patch to close the gap in the sample files,\n>> for both things, with descriptions of all the field values they can\n>> use.\n> \n> A short and precise description.
Nothing to add. Next time I will try to\n> offer a patch at once.\n\nIf you have a proposal of a patch, that's always nice to have, but you\nshould not feel obliged to do so, either.\n\nThanks a lot for the report, Pavel!\n--\nMichael", "msg_date": "Wed, 15 Feb 2023 19:25:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Wed, 15 Feb 2023 at 08:11, Michael Paquier <michael@paquier.xyz> wrote:\n> Hmm, I am not sure that adding more examples in the sample files is\n> worth the duplication with the docs.\n\nI think you misunderstood what I meant (because I admittedly didn't\nwrite it down clearly). I meant the docs for pg_ident don't include\nany examples (only descriptions of the new patterns). Attached is a\npatch that addresses that.\n\n> So, please find attached a patch to close the gap in the sample files,\n> for both things, with descriptions of all the field values they can\n> use.\n\nLGTM", "msg_date": "Wed, 15 Feb 2023 15:40:26 +0100", "msg_from": "Jelte Fennema <postgres@jeltef.nl>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" }, { "msg_contents": "On Wed, Feb 15, 2023 at 03:40:26PM +0100, Jelte Fennema wrote:\n> On Wed, 15 Feb 2023 at 08:11, Michael Paquier <michael@paquier.xyz> wrote:\n>> Hmm, I am not sure that adding more examples in the sample files is\n>> worth the duplication with the docs.\n> \n> I think you misunderstood what I meant (because I admittedly didn't\n> write it down clearly). I meant the docs for pg_ident don't include\n> any examples (only descriptions of the new patterns). Attached is a\n> patch that addresses that.\n\nShouldn't the paragraph above the example file of pg_ident.conf be\nupdated to reflect the changes you have added? It would be cleaner\nto split that into two sections.
For example, we could keep\nthe current example with bryanh, ann and bob as it is (splitting it\ninto its own <para>), and add a second example with all the new\npatterns?\n\n>> So, please find attached a patch to close the gap in the sample files,\n>> for both things, with descriptions of all the field values they can\n>> use.\n> \n> LGTM\n\nThanks for the review, applied this part.\n--\nMichael", "msg_date": "Thu, 16 Feb 2023 07:46:30 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [EXTERNAL] Re: [PATCH] Support using \"all\" for the db user in\n pg_ident.conf" } ]
[ { "msg_contents": "Hello hackers,\n\nCurrently, the Checkpointer process only reports SLRU statistics at server\nshutdown, leading to delayed statistics for SLRU flushes. This patch adds a\nflush of SLRU stats to the end of checkpoints.\n\nBest regards,\nAnthonin", "msg_date": "Wed, 11 Jan 2023 10:29:06 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Flush SLRU counters in checkpointer process" }, { "msg_contents": "Hi Anthonin,\n\n> This patch adds a flush of SLRU stats to the end of checkpoints.\n\nThe patch looks good to me and passes the tests but let's see if\nanyone else has any feedback.\n\nAlso I added a CF entry: https://commitfest.postgresql.org/42/4120/\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Wed, 11 Jan 2023 15:05:04 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Flush SLRU counters in checkpointer process" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 10:29:06 +0100, Anthonin Bonnefoy wrote:\n> Currently, the Checkpointer process only reports SLRU statistics at server\n> shutdown, leading to delayed statistics for SLRU flushes. This patch adds a\n> flush of SLRU stats to the end of checkpoints.\n\nHm.
I wonder if we should do this even earlier, by the\npgstat_report_checkpointer() calls in CheckpointWriteDelay().\n\nI'm inclined to move the pgstat_report_wal() and pgstat_report_slru() calls\ninto pgstat_report_checkpointer() to avoid needing to care about all the\nindividual places.\n\n\n> @@ -505,6 +505,7 @@ CheckpointerMain(void)\n> \t\t/* Report pending statistics to the cumulative stats system */\n> \t\tpgstat_report_checkpointer();\n> \t\tpgstat_report_wal(true);\n> +\t\tpgstat_report_slru(true);\n\nWhy do we need a force parameter if all callers use it?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 08:33:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Flush SLRU counters in checkpointer process" }, { "msg_contents": "On Wed, Jan 11, 2023 at 5:33 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-01-11 10:29:06 +0100, Anthonin Bonnefoy wrote:\n> > Currently, the Checkpointer process only reports SLRU statistics at\n> server\n> > shutdown, leading to delayed statistics for SLRU flushes. This patch\n> adds a\n> > flush of SLRU stats to the end of checkpoints.\n>\n> Hm. I wonder if we should do this even earlier, by the\n> pgstat_report_checkpointer() calls in CheckpointWriteDelay().\n>\n> I'm inclined to move the pgstat_report_wal() and pgstat_report_slru() calls\n> into pgstat_report_checkpointer() to avoid needing to care about all the\n> individual places.\n>\nThat would make sense. I've created a new patch with everything moved in\npgstat_report_checkpointer().\nI did split the checkpointer flush in a pgstat_flush_checkpointer()\nfunction as it seemed more readable.
Thoughts?\n\n\n> > @@ -505,6 +505,7 @@ CheckpointerMain(void)\n> >             /* Report pending statistics to the cumulative stats\n> system */\n> >             pgstat_report_checkpointer();\n> >             pgstat_report_wal(true);\n> > +           pgstat_report_slru(true);\n>\n> Why do we need a force parameter if all callers use it?\n\nGood point. I've written the same signature as pgstat_report_wal but\nthere's no need for the nowait parameter.", "msg_date": "Thu, 12 Jan 2023 09:45:31 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: Flush SLRU counters in checkpointer process" }, { "msg_contents": "On Thu, 12 Jan 2023 at 03:46, Anthonin Bonnefoy\n<anthonin.bonnefoy@datadoghq.com> wrote:\n>\n>\n> That would make sense. I've created a new patch with everything moved in pgstat_report_checkpointer().\n> I did split the checkpointer flush in a pgstat_flush_checkpointer() function as it seemed more readable. Thoughts?\n\nThis patch appears to need a rebase. Is there really any feedback\nneeded or is it ready for committer once it's rebased?\n\n\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Wed, 1 Mar 2023 14:45:57 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Flush SLRU counters in checkpointer process" }, { "msg_contents": "Here's the patch rebased with Andres' suggestions.\nHappy to update it if there's any additional change required.\n\n\nOn Wed, Mar 1, 2023 at 8:46 PM Gregory Stark (as CFM) <stark.cfm@gmail.com>\nwrote:\n\n> On Thu, 12 Jan 2023 at 03:46, Anthonin Bonnefoy\n> <anthonin.bonnefoy@datadoghq.com> wrote:\n> >\n> >\n> > That would make sense. I've created a new patch with everything moved in\n> pgstat_report_checkpointer().\n> > I did split the checkpointer flush in a pgstat_flush_checkpointer()\n> function as it seemed more readable. Thoughts?\n>\n> This patch appears to need a rebase.
Is there really any feedback\n> needed or is it ready for committer once it's rebased?\n>\n>\n>\n> --\n> Gregory Stark\n> As Commitfest Manager\n>", "msg_date": "Fri, 3 Mar 2023 09:06:23 +0100", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: Flush SLRU counters in checkpointer process" }, { "msg_contents": "> On 3 Mar 2023, at 09:06, Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com> wrote:\n> \n> Here's the patch rebased with Andres' suggestions. \n> Happy to update it if there's any additional change required.\n\nThis patch crashes 031_recovery_conflict with a SIGInvalid on Windows, can you\nplease investigate and see what might be going on there? The test passed about\n4 days ago on Windows so unless it's the CI being flaky it should be due to a\nrecent change.\n\nIf you don't have access to a Windows environment you can run your own\ninstrumented builds in your Github account with the CI files in the postgres\nrepo.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Mon, 3 Jul 2023 15:18:17 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Flush SLRU counters in checkpointer process" }, { "msg_contents": "I think I've managed to reproduce the issue.
The test I've added to check\nslru flush was the one failing in the regression suite.\n\nSELECT SUM(flushes) > :slru_flushes_before FROM pg_stat_slru;\n ?column?\n----------\n t\n\nThe origin seems to be a race condition on have_slrustats (\nhttps://github.com/postgres/postgres/blob/c8e1ba736b2b9e8c98d37a5b77c4ed31baf94147/src/backend/utils/activity/pgstat_slru.c#L161-L162\n).\nI will try to get a new patch with improved test stability.\n\n\nOn Mon, Jul 3, 2023 at 3:18 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 3 Mar 2023, at 09:06, Anthonin Bonnefoy <\n> anthonin.bonnefoy@datadoghq.com> wrote:\n> >\n> > Here's the patch rebased with Andres' suggestions.\n> > Happy to update it if there's any additional change required.\n>\n> This patch crashes 031_recovery_conflict with a SIGInvalid on Windows, can\n> you\n> please investigate and see what might be going on there? The test passed\n> about\n> 4 days ago on Windows so unless it's the CI being flaky it should be due\n> to a\n> recent change.\n>\n> If you don't have access to a Windows environment you can run your own\n> instrumented builds in your Github account with the CI files in the\n> postgres\n> repo.\n>\n> --\n> Daniel Gustafsson\n>\n>\n", "msg_date": "Wed, 19 Jul 2023 08:36:34 +0200", "msg_from": "Anthonin Bonnefoy <anthonin.bonnefoy@datadoghq.com>", "msg_from_op": true, "msg_subject": "Re: Flush SLRU counters in checkpointer process" }, { "msg_contents": "Hi,\n\n> I think I've managed to reproduce the issue. The test I've added to check slru flush was the one failing in the regression suite.\n\nA consensus was reached [1] to mark this patch as RwF for now. There\nare many patches to be reviewed and this one doesn't seem to be in the\nbest shape, so we have to prioritise. Please feel free to re-submit\nthe patch for the next commitfest.\n\n[1]: https://postgr.es/m/0737f444-59bb-ac1d-2753-873c40da0840%40eisentraut.org\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Mon, 4 Sep 2023 15:24:48 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: Flush SLRU counters in checkpointer process" } ]
[ { "msg_contents": "Hi,\n\nWhen I was reading the \"Logical Decoding Output Plugins\" chapter in pg-doc [1],\nI think in the summary section, only the callback message_cb is not described\nwhether it is required or optional, and the description of callback\nstream_prepare_cb seems inaccurate.\n\nAnd after the summary section, I think only the callback stream_xxx_cb section\nand the callback truncate_cb section are not described this tag (are they\nrequired or optional).\n\nI think we could improve these to be more reader friendly. So I tried to write\na patch for these and attach it.\n\nAny thoughts?\n\nRegards,\nWang Wei\n\n[1] - https://www.postgresql.org/docs/devel/logicaldecoding-output-plugin.html", "msg_date": "Wed, 11 Jan 2023 10:50:28 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Adjust the description of OutputPluginCallbacks in pg-doc" }, { "msg_contents": "On Wed, Jan 11, 2023 at 4:20 PM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> When I was reading the \"Logical Decoding Output Plugins\" chapter in pg-doc [1],\n> I think in the summary section, only the callback message_cb is not described\n> whether it is required or optional, and the description of callback\n> stream_prepare_cb seems inaccurate.\n>\n> And after the summary section, I think only the callback stream_xxx_cb section\n> and the callback truncate_cb section are not described this tag (are they\n> required or optional).\n>\n> I think we could improve these to be more reader friendly. So I tried to write\n> a patch for these and attach it.\n>\n> Any thoughts?\n>\n\nThis looks mostly good to me. I have made minor adjustments in the\nattached.
Do those make sense to you?\n\n-- \nWith Regards,\nAmit Kapila.", "msg_date": "Thu, 19 Jan 2023 16:47:38 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adjust the description of OutputPluginCallbacks in pg-doc" }, { "msg_contents": "On Thu, Jan 19, 2023 at 4:47 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 11, 2023 at 4:20 PM wangw.fnst@fujitsu.com\n> <wangw.fnst@fujitsu.com> wrote:\n> >\n> > When I was reading the \"Logical Decoding Output Plugins\" chapter in pg-doc [1],\n> > I think in the summary section, only the callback message_cb is not described\n> > whether it is required or optional, and the description of callback\n> > stream_prepare_cb seems inaccurate.\n> >\n> > And after the summary section, I think only the callback stream_xxx_cb section\n> > and the callback truncate_cb section are not described this tag (are they\n> > required or optional).\n> >\n> > I think we could improve these to be more reader friendly. So I tried to write\n> > a patch for these and attach it.\n> >\n> > Any thoughts?\n> >\n>\n> This looks mostly good to me. I have made minor adjustments in the\n> attached.
Do those make sense to you?\n>\n\nI forgot to mention that I intend to commit this only on HEAD as this\nis a doc improvement patch.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 19 Jan 2023 16:49:52 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adjust the description of OutputPluginCallbacks in pg-doc" }, { "msg_contents": "On Thurs, Jan 19, 2023 at 19:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\r\n> On Wed, Jan 11, 2023 at 4:20 PM wangw.fnst@fujitsu.com\r\n> <wangw.fnst@fujitsu.com> wrote:\r\n> >\r\n> > When I was reading the \"Logical Decoding Output Plugins\" chapter in pg-doc\r\n> [1],\r\n> > I think in the summary section, only the callback message_cb is not described\r\n> > whether it is required or optional, and the description of callback\r\n> > stream_prepare_cb seems inaccurate.\r\n> >\r\n> > And after the summary section, I think only the callback stream_xxx_cb\r\n> section\r\n> > and the callback truncate_cb section are not described this tag (are they\r\n> > required or optional).\r\n> >\r\n> > I think we could improve these to be more reader friendly. So I tried to write\r\n> > a patch for these and attach it.\r\n> >\r\n> > Any thoughts?\r\n> >\r\n> \r\n> This looks mostly good to me. I have made minor adjustments in the\r\n> attached.
Do those make sense to you?\r\n\r\nThanks for your improvement.\r\nIt makes sense to me.\r\n\r\nRegards,\r\nWang Wei\r\n", "msg_date": "Fri, 20 Jan 2023 02:33:40 +0000", "msg_from": "\"wangw.fnst@fujitsu.com\" <wangw.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Adjust the description of OutputPluginCallbacks in pg-doc" }, { "msg_contents": "On Fri, Jan 20, 2023 at 8:03 AM wangw.fnst@fujitsu.com\n<wangw.fnst@fujitsu.com> wrote:\n>\n> On Thurs, Jan 19, 2023 at 19:18 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Wed, Jan 11, 2023 at 4:20 PM wangw.fnst@fujitsu.com\n> > <wangw.fnst@fujitsu.com> wrote:\n> > >\n> > > When I was reading the \"Logical Decoding Output Plugins\" chapter in pg-doc\n> > [1],\n> > > I think in the summary section, only the callback message_cb is not described\n> > > whether it is required or optional, and the description of callback\n> > > stream_prepare_cb seems inaccurate.\n> > >\n> > > And after the summary section, I think only the callback stream_xxx_cb\n> > section\n> > > and the callback truncate_cb section are not described this tag (are they\n> > > required or optional).\n> > >\n> > > I think we could improve these to be more reader friendly. So I tried to write\n> > > a patch for these and attach it.\n> > >\n> > > Any thoughts?\n> > >\n> >\n> > This looks mostly good to me. I have made minor adjustments in the\n> > attached. Do those make sense to you?\n>\n> Thanks for your improvement.\n> It makes sense to me.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 20 Jan 2023 10:10:29 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Adjust the description of OutputPluginCallbacks in pg-doc" } ]
[ { "msg_contents": "The current hierarchy of object types is like this:\n\ndatabase\n\taccess method\n\tevent trigger\n\textension\n\tforeign data wrapper\n\tforeign server\n\tlanguage\n\tpublication\n\tschema\n\t\taggregate\n\t\tcollation\n\t\tconversion\n\t\tdomain\n\t\tfunction/procedure\n\t\tindex\n\t\toperator\n\t\toperator class\n\t\toperator family\n\t\tsequence\n\t\tstatistics\n\t\ttable/view\n\t\t\tpolicy\n\t\t\trule\n\t\t\ttrigger\n\t\ttext search configuration\n\t\ttext search dictionary\n\t\ttext search parser\n\t\ttext search template\n\t\ttype\n\tsubscription\nrole\ntablespace\n\nspecial:\n- cast\n- transform\n- user mapping\n\n\nHow does one decide whether something should be in a schema or not? The \ncurrent state feels intuitively correct, but I can't determine any firm \nway to decide.\n\nOver in the column encryption thread, the patch proposes to add various \nkey types as new object types. For simplicity, I just stuck them \ndirectly under database, but I don't know whether that is correct.\n\nThoughts?\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:32:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "What object types should be in schemas?"
}, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> The current hierarchy of object types is like this:\n> ...\n> How does one decide whether something should be in a schema or not?\n\nRoughly speaking, I think the intuition was \"if there are not likely\nto be a lot of objects of type X, maybe they don't need to be within\nschemas\".\n\nExtensions might be raised as a counterexample, but in that case\nI recall that there was a specific consideration: extensions can\ncontain (own) schemas, so it would be very confusing if they could\nalso be within schemas.\n\nI'm not sure about whether that holds for foreign data wrappers and\nforeign servers, but isn't that case mandated by the SQL spec?\n\nRoles and tablespaces aren't within schemas because they aren't\nwithin databases.\n\n> Over in the column encryption thread, the patch proposes to add various \n> key types as new object types. For simplicity, I just stuck them \n> directly under database, but I don't know whether that is correct.\n\nIs it reasonable for those to be per-database rather than cluster-wide?\nI don't immediately see a reason to have encrypted columns in shared\ncatalogs, but there would never be any chance of supporting that if\nthe keys live in per-database catalogs. (OTOH, perhaps there are\nsecurity reasons to keep them per-database, so I'm not insisting\nthat this is the right way.)\n\nIf we did make them cluster-wide then of course they'd be outside\nschemas too. If we don't, I'd lean slightly towards putting them\nwithin schemas, because that seems to be the default choice if you're\nnot sure. There probably aren't a huge number of text search parsers\neither, but they live within schemas.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 11 Jan 2023 10:53:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: What object types should be in schemas?"
}, { "msg_contents": "On 2023-Jan-11, Peter Eisentraut wrote:\n\n> How does one decide whether something should be in a schema or not? The\n> current state feels intuitively correct, but I can't determine any firm way\n> to decide.\n> \n> Over in the column encryption thread, the patch proposes to add various key\n> types as new object types. For simplicity, I just stuck them directly under\n> database, but I don't know whether that is correct.\n\nI think one important criterion to think about is how does encryption work\nwhen you have per-customer (or per-whatever) schemas. Is the concept of\na column encryption [objtype] a thing that you would like to set up per\ncustomer? In that case, you will probably want that object to live in\nthat customer's schema. Otherwise, you'll force the DBA to come up with\na naming scheme that includes the customer name in the column encryption\nobject.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n", "msg_date": "Thu, 12 Jan 2023 18:41:57 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: What object types should be in schemas?" }, { "msg_contents": "On 12.01.23 18:41, Alvaro Herrera wrote:\n> I think one important criterion to think about is how does encryption work\n> when you have per-customer (or per-whatever) schemas. Is the concept of\n> a column encryption [objtype] a thing that you would like to set up per\n> customer? In that case, you will probably want that object to live in\n> that customer's schema. Otherwise, you'll force the DBA to come up with\n> a naming scheme that includes the customer name in the column encryption\n> object.\n\nMakes sense.
In my latest patch I have moved these key objects into \nschemas.\n\n\n\n", "msg_date": "Wed, 25 Jan 2023 20:06:47 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: What object types should be in schemas?" } ]
[ { "msg_contents": "I find \\df+ much less useful than it should be because it tends to be\ncluttered up with source code. Now that we have \\sf, would it be reasonable\nto remove the source code from the \\df+ display? This would make it easier\nto see function permissions and comments. If somebody wants to see the full\ndefinition of a function they can always invoke \\sf on it.\n\nIf there is consensus on the idea in principle I will write up a patch.", "msg_date": "Wed, 11 Jan 2023 11:50:39 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Remove source code display from \\df+?" }, { "msg_contents": "st 11. 1. 2023 v 17:50 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> I find \\df+ much less useful than it should be because it tends to be\n> cluttered up with source code. Now that we have \\sf, would it be reasonable\n> to remove the source code from the \\df+ display? This would make it easier\n> to see function permissions and comments. If somebody wants to see the full\n> definition of a function they can always invoke \\sf on it.\n>\n> If there is consensus on the idea in principle I will write up a patch.\n>\n\n+1\n\nPavel", "msg_date": "Wed, 11 Jan 2023 18:18:58 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Wed, Jan 11, 2023 at 6:19 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> st 11. 1. 2023 v 17:50 odesílatel Isaac Morland <isaac.morland@gmail.com>\n> napsal:\n>\n>> I find \\df+ much less useful than it should be because it tends to be\n>> cluttered up with source code. Now that we have \\sf, would it be reasonable\n>> to remove the source code from the \\df+ display? This would make it easier\n>> to see function permissions and comments. If somebody wants to see the full\n>> definition of a function they can always invoke \\sf on it.\n>>\n>> If there is consensus on the idea in principle I will write up a patch.\n>>\n>\n> +1\n>\n>\n+1 but maybe with a twist. For any functions in a procedural language like\nplpgsql, it makes it pretty useless today. But when viewing an internal or\nC language function, it's short enough and still actually useful. Maybe\nsome combination where it would keep showing those for such language, but\nwould show \"use \\sf to view source\" for procedural languages?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>", "msg_date": "Wed, 11 Jan 2023 18:25:02 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "st 11. 1. 2023 v 18:25 odesílatel Magnus Hagander <magnus@hagander.net>\nnapsal:\n\n>\n>\n> On Wed, Jan 11, 2023 at 6:19 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> st 11. 1. 2023 v 17:50 odesílatel Isaac Morland <isaac.morland@gmail.com>\n>> napsal:\n>>\n>>> I find \\df+ much less useful than it should be because it tends to be\n>>> cluttered up with source code. Now that we have \\sf, would it be reasonable\n>>> to remove the source code from the \\df+ display? This would make it easier\n>>> to see function permissions and comments. If somebody wants to see the full\n>>> definition of a function they can always invoke \\sf on it.\n>>>\n>>> If there is consensus on the idea in principle I will write up a patch.\n>>>\n>>\n>> +1\n>>\n>>\n> +1 but maybe with a twist. For any functions in a procedural language\n> like plpgsql, it makes it pretty useless today. But when viewing an\n> internal or C language function, it's short enough and still actually\n> useful.
Maybe\n> some combination where it would keep showing those for such language, but\n> would show \"use \\sf to view source\" for procedural languages?\n>\n\nyes, it is almost necessary for C functions or functions in external\nlanguages.
}, { "msg_contents": "Right, for internal or C functions it just gives a symbol name or something\nsimilar. I've never been annoyed seeing that, just having pages of PL/PGSQL\n(I use a lot of that, possibly biased towards the “too much” direction)\ntake up all the space.\n\nA bit hacky, but what about only showing the first line of the source code?\nThen you would see link names for that type of function but the main\nbenefit of suppressing the full source code would be obtained. Or, show\nsource if it is a single line, otherwise “…” (as in, literally an ellipsis).\n\nOn Wed, 11 Jan 2023 at 12:31, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n>\n> st 11. 1. 2023 v 18:25 odesílatel Magnus Hagander <magnus@hagander.net>\n> napsal:\n>\n>>\n>>\n>> On Wed, Jan 11, 2023 at 6:19 PM Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>>>\n>>>\n>>> st 11. 1. 2023 v 17:50 odesílatel Isaac Morland <isaac.morland@gmail.com>\n>>> napsal:\n>>>\n>>>> I find \\df+ much less useful than it should be because it tends to be\n>>>> cluttered up with source code. Now that we have \\sf, would it be reasonable\n>>>> to remove the source code from the \\df+ display? This would make it easier\n>>>> to see function permissions and comments. If somebody wants to see the full\n>>>> definition of a function they can always invoke \\sf on it.\n>>>>\n>>>> If there is consensus on the idea in principle I will write up a patch.\n>>>>\n>>>\n>>> +1\n>>>\n>>>\n>> +1 but maybe with a twist. For any functions in a procedural language\n>> like plpgsql, it makes it pretty useless today. But when viewing an\n>> internal or C language function, it's short enough and still actually\n>> useful. Maybe some combination where it would keep showing those for such\n>> language, but would show \"use \\sf to view source\" for procedural languages?\n>>\n>\n> yes, it is almost necessary for C functions or functions in external\n> languages. 
Maybe it can be specified in pg_language if prosrc is really\n> source code or some reference.\n>\n>\n>\n>\n>\n>\n>> --\n>> Magnus Hagander\n>> Me: https://www.hagander.net/ <http://www.hagander.net/>\n>> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n>>\n>\n\nRight, for internal or C functions it just gives a symbol name or something similar. I've never been annoyed seeing that, just having pages of PL/PGSQL (I use a lot of that, possibly biased towards the “too much” direction) take up all the space.A bit hacky, but what about only showing the first line of the source code? Then you would see link names for that type of function but the main benefit of suppressing the full source code would be obtained. Or, show source if it is a single line, otherwise “…” (as in, literally an ellipsis).On Wed, 11 Jan 2023 at 12:31, Pavel Stehule <pavel.stehule@gmail.com> wrote:st 11. 1. 2023 v 18:25 odesílatel Magnus Hagander <magnus@hagander.net> napsal:On Wed, Jan 11, 2023 at 6:19 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:st 11. 1. 2023 v 17:50 odesílatel Isaac Morland <isaac.morland@gmail.com> napsal:I find \\df+ much less useful than it should be because it tends to be cluttered up with source code. Now that we have \\sf, would it be reasonable to remove the source code from the \\df+ display? This would make it easier to see function permissions and comments. If somebody wants to see the full definition of a function they can always invoke \\sf on it.If there is consensus on the idea in principle I will write up a patch.+1+1 but maybe with a twist. For any functions in a procedural language like plpgsql, it makes it pretty useless today. But when viewing an internal or C language function, it's short enough and still actually useful. 
Maybe some combination where it would keep showing those for such language, but would show \"use \\sf to view source\" for procedural languages?yes, it is almost necessary for C functions or functions in external languages. Maybe it can be specified in pg_language if prosrc is really source code or some reference. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 11 Jan 2023 12:57:45 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "Hi\n\nst 11. 1. 2023 v 18:57 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> Right, for internal or C functions it just gives a symbol name or\n> something similar. I've never been annoyed seeing that, just having pages\n> of PL/PGSQL (I use a lot of that, possibly biased towards the “too much”\n> direction) take up all the space.\n>\n> A bit hacky, but what about only showing the first line of the source\n> code? Then you would see link names for that type of function but the main\n> benefit of suppressing the full source code would be obtained. Or, show\n> source if it is a single line, otherwise “…” (as in, literally an ellipsis).\n>\n\nplease, don't send top post replies -\nhttps://en.wikipedia.org/wiki/Posting_style\n\nI don't think printing a few first rows is a good idea - usually there is\nnothing interesting (same is PL/Perl, PL/Python, ...)\n\nIf the proposed feature can be generic, then this information should be\nstored somewhere in pg_language. Or we can redesign usage of prosrc and\nprobin columns - but this can be a much more massive change.\n\nRegards\n\nPavel\n\n\n\n\n>\n> On Wed, 11 Jan 2023 at 12:31, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> st 11. 1. 
2023 v 18:25 odesílatel Magnus Hagander <magnus@hagander.net>\n>> napsal:\n>>\n>>>\n>>>\n>>> On Wed, Jan 11, 2023 at 6:19 PM Pavel Stehule <pavel.stehule@gmail.com>\n>>> wrote:\n>>>\n>>>>\n>>>>\n>>>> st 11. 1. 2023 v 17:50 odesílatel Isaac Morland <\n>>>> isaac.morland@gmail.com> napsal:\n>>>>\n>>>>> I find \\df+ much less useful than it should be because it tends to be\n>>>>> cluttered up with source code. Now that we have \\sf, would it be reasonable\n>>>>> to remove the source code from the \\df+ display? This would make it easier\n>>>>> to see function permissions and comments. If somebody wants to see the full\n>>>>> definition of a function they can always invoke \\sf on it.\n>>>>>\n>>>>> If there is consensus on the idea in principle I will write up a patch.\n>>>>>\n>>>>\n>>>> +1\n>>>>\n>>>>\n>>> +1 but maybe with a twist. For any functions in a procedural language\n>>> like plpgsql, it makes it pretty useless today. But when viewing an\n>>> internal or C language function, it's short enough and still actually\n>>> useful. Maybe some combination where it would keep showing those for such\n>>> language, but would show \"use \\sf to view source\" for procedural languages?\n>>>\n>>\n>> yes, it is almost necessary for C functions or functions in external\n>> languages. Maybe it can be specified in pg_language if prosrc is really\n>> source code or some reference.\n>>\n>>\n>>\n>>\n>>\n>>\n>>> --\n>>> Magnus Hagander\n>>> Me: https://www.hagander.net/ <http://www.hagander.net/>\n>>> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n>>>\n>>\n\nHist 11. 1. 2023 v 18:57 odesílatel Isaac Morland <isaac.morland@gmail.com> napsal:Right, for internal or C functions it just gives a symbol name or something similar. I've never been annoyed seeing that, just having pages of PL/PGSQL (I use a lot of that, possibly biased towards the “too much” direction) take up all the space.A bit hacky, but what about only showing the first line of the source code? 
Then you would see link names for that type of function but the main benefit of suppressing the full source code would be obtained. Or, show source if it is a single line, otherwise “…” (as in, literally an ellipsis).please, don't send top post replies - https://en.wikipedia.org/wiki/Posting_styleI don't think printing a few first rows is a good idea - usually there is nothing interesting (same is PL/Perl, PL/Python, ...)If the proposed feature can be generic, then this information should be stored somewhere in pg_language. Or we can redesign usage of prosrc and probin columns - but this can be a much more massive change. RegardsPavel On Wed, 11 Jan 2023 at 12:31, Pavel Stehule <pavel.stehule@gmail.com> wrote:st 11. 1. 2023 v 18:25 odesílatel Magnus Hagander <magnus@hagander.net> napsal:On Wed, Jan 11, 2023 at 6:19 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:st 11. 1. 2023 v 17:50 odesílatel Isaac Morland <isaac.morland@gmail.com> napsal:I find \\df+ much less useful than it should be because it tends to be cluttered up with source code. Now that we have \\sf, would it be reasonable to remove the source code from the \\df+ display? This would make it easier to see function permissions and comments. If somebody wants to see the full definition of a function they can always invoke \\sf on it.If there is consensus on the idea in principle I will write up a patch.+1+1 but maybe with a twist. For any functions in a procedural language like plpgsql, it makes it pretty useless today. But when viewing an internal or C language function, it's short enough and still actually useful. Maybe some combination where it would keep showing those for such language, but would show \"use \\sf to view source\" for procedural languages?yes, it is almost necessary for C functions or functions in external languages. Maybe it can be specified in pg_language if prosrc is really source code or some reference. 
--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 11 Jan 2023 19:10:54 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "Or, only show the source in \\df++. But it'd be a bit unfortunate if the\nC language function wasn't shown in \\df+\n\n\n", "msg_date": "Wed, 11 Jan 2023 12:16:23 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Wed, 11 Jan 2023 at 13:11, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\nplease, don't send top post replies -\n> https://en.wikipedia.org/wiki/Posting_style\n>\n\nSorry about that; I do know to do it properly and usually get it right.\nGMail doesn’t seem to have an option (that I can find) to leave no space at\nthe top and put my cursor at the bottom; it nudges pretty firmly in the\ndirection of top-posting. Thanks for the reminder.\n\n\n> I don't think printing a few first rows is a good idea - usually there is\n> nothing interesting (same is PL/Perl, PL/Python, ...)\n>\n> If the proposed feature can be generic, then this information should be\n> stored somewhere in pg_language. Or we can redesign usage of prosrc and\n> probin columns - but this can be a much more massive change.\n>\n>> <http://www.redpill-linpro.com/>\n>>>>\n>>>\nI’m looking for a quick win. So I think that means either drop the source\ncolumn entirely, or show single-line source values only and nothing or a\nplaceholder for anything that is more than one line, unless somebody comes\nup with another suggestion. 
Originally I was thinking just to remove\nentirely, but I’ve seen a couple of comments suggesting that people would\nfind it unfortunate if the source weren’t shown for internal/C functions,\nso now I’m leaning towards showing single-line values only.\n\nI agree that showing the first line or couple of lines isn't likely to be\nvery useful. The way I format my functions, the first line is always blank\nanyway: I write the bodies like this:\n\n$$\nBEGIN\n …\nEND;\n$$;\n\nEven if somebody uses a different style, the first line is probably just\n\"BEGIN\" or something equally formulaic.\n\nOn Wed, 11 Jan 2023 at 13:11, Pavel Stehule <pavel.stehule@gmail.com> wrote:please, don't send top post replies - https://en.wikipedia.org/wiki/Posting_styleSorry about that; I do know to do it properly and usually get it right. GMail doesn’t seem to have an option (that I can find) to leave no space at the top and put my cursor at the bottom; it nudges pretty firmly in the direction of top-posting. Thanks for the reminder. I don't think printing a few first rows is a good idea - usually there is nothing interesting (same is PL/Perl, PL/Python, ...)If the proposed feature can be generic, then this information should be stored somewhere in pg_language. Or we can redesign usage of prosrc and probin columns - but this can be a much more massive change. I’m looking for a quick win. So I think that means either drop the source column entirely, or show single-line source values only and nothing or a placeholder for anything that is more than one line, unless somebody comes up with another suggestion. Originally I was thinking just to remove entirely, but I’ve seen a couple of comments suggesting that people would find it unfortunate if the source weren’t shown for internal/C functions, so now I’m leaning towards showing single-line values only.I agree that showing the first line or couple of lines isn't likely to be very useful. 
The way I format my functions, the first line is always blank anyway: I write the bodies like this:$$BEGIN    …END;$$;Even if somebody uses a different style, the first line is probably just \"BEGIN\" or something equally formulaic.", "msg_date": "Wed, 11 Jan 2023 13:24:18 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Wed, Jan 11, 2023 at 7:24 PM Isaac Morland <isaac.morland@gmail.com>\nwrote:\n\n> On Wed, 11 Jan 2023 at 13:11, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n> please, don't send top post replies -\n>> https://en.wikipedia.org/wiki/Posting_style\n>>\n>\n> Sorry about that; I do know to do it properly and usually get it right.\n> GMail doesn’t seem to have an option (that I can find) to leave no space at\n> the top and put my cursor at the bottom; it nudges pretty firmly in the\n> direction of top-posting. Thanks for the reminder.\n>\n>\n>> I don't think printing a few first rows is a good idea - usually there is\n>> nothing interesting (same is PL/Perl, PL/Python, ...)\n>>\n>> If the proposed feature can be generic, then this information should be\n>> stored somewhere in pg_language. Or we can redesign usage of prosrc and\n>> probin columns - but this can be a much more massive change.\n>>\n>>> <http://www.redpill-linpro.com/>\n>>>>>\n>>>>\n> I’m looking for a quick win. So I think that means either drop the source\n> column entirely, or show single-line source values only and nothing or a\n> placeholder for anything that is more than one line, unless somebody comes\n> up with another suggestion. 
Originally I was thinking just to remove\n> entirely, but I’ve seen a couple of comments suggesting that people would\n> find it unfortunate if the source weren’t shown for internal/C functions,\n> so now I’m leaning towards showing single-line values only.\n>\n> I agree that showing the first line or couple of lines isn't likely to be\n> very useful. The way I format my functions, the first line is always blank\n> anyway: I write the bodies like this:\n>\n> $$\n> BEGIN\n> …\n> END;\n> $$;\n>\n> Even if somebody uses a different style, the first line is probably just\n> \"BEGIN\" or something equally formulaic.\n>\n\nThis is only about Internal and C, isn't it? Isn't the oid of these static,\nand identified by INTERNALlanguageId and ClanguageId respectively? So we\ncould just have the query show the prosrc column if the language oid is one\nof those two, and otherwise show \"Please use \\sf to view the source\"?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Wed, Jan 11, 2023 at 7:24 PM Isaac Morland <isaac.morland@gmail.com> wrote:On Wed, 11 Jan 2023 at 13:11, Pavel Stehule <pavel.stehule@gmail.com> wrote:please, don't send top post replies - https://en.wikipedia.org/wiki/Posting_styleSorry about that; I do know to do it properly and usually get it right. GMail doesn’t seem to have an option (that I can find) to leave no space at the top and put my cursor at the bottom; it nudges pretty firmly in the direction of top-posting. Thanks for the reminder. I don't think printing a few first rows is a good idea - usually there is nothing interesting (same is PL/Perl, PL/Python, ...)If the proposed feature can be generic, then this information should be stored somewhere in pg_language. Or we can redesign usage of prosrc and probin columns - but this can be a much more massive change. I’m looking for a quick win. 
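A rough sketch of the query shape Magnus describes — illustrative only, not the actual describe.c query, and keyed off the language name rather than hard-coded oids:

```sql
-- Show prosrc only for internal- and C-language functions; for
-- procedural languages the column is NULL and \sf is used instead.
SELECT p.proname,
       l.lanname AS "Language",
       CASE WHEN l.lanname IN ('internal', 'c')
            THEN p.prosrc
       END AS "Internal name"
FROM pg_catalog.pg_proc p
     JOIN pg_catalog.pg_language l ON l.oid = p.prolang;
```

A CASE with no ELSE branch yields NULL, which is exactly the "leave it blank for procedural languages" behavior under discussion.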
So I think that means either drop the source column entirely, or show single-line source values only and nothing or a placeholder for anything that is more than one line, unless somebody comes up with another suggestion. Originally I was thinking just to remove entirely, but I’ve seen a couple of comments suggesting that people would find it unfortunate if the source weren’t shown for internal/C functions, so now I’m leaning towards showing single-line values only.I agree that showing the first line or couple of lines isn't likely to be very useful. The way I format my functions, the first line is always blank anyway: I write the bodies like this:$$BEGIN    …END;$$;Even if somebody uses a different style, the first line is probably just \"BEGIN\" or something equally formulaic.This is only about Internal and C, isn't it? Isn't the oid of these static, and identified by INTERNALlanguageId and ClanguageId respectively? So we could just have the query show the prosrc column if the language oid is one of those two, and otherwise show \"Please use \\sf to view the source\"? --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 11 Jan 2023 19:31:31 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "st 11. 1. 
2023 v 19:31 odesílatel Magnus Hagander <magnus@hagander.net>\nnapsal:\n\n> On Wed, Jan 11, 2023 at 7:24 PM Isaac Morland <isaac.morland@gmail.com>\n> wrote:\n>\n>> On Wed, 11 Jan 2023 at 13:11, Pavel Stehule <pavel.stehule@gmail.com>\n>> wrote:\n>>\n>> please, don't send top post replies -\n>>> https://en.wikipedia.org/wiki/Posting_style\n>>>\n>>\n>> Sorry about that; I do know to do it properly and usually get it right.\n>> GMail doesn’t seem to have an option (that I can find) to leave no space at\n>> the top and put my cursor at the bottom; it nudges pretty firmly in the\n>> direction of top-posting. Thanks for the reminder.\n>>\n>>\n>>> I don't think printing a few first rows is a good idea - usually there\n>>> is nothing interesting (same is PL/Perl, PL/Python, ...)\n>>>\n>>> If the proposed feature can be generic, then this information should be\n>>> stored somewhere in pg_language. Or we can redesign usage of prosrc and\n>>> probin columns - but this can be a much more massive change.\n>>>\n>>>> <http://www.redpill-linpro.com/>\n>>>>>>\n>>>>>\n>> I’m looking for a quick win. So I think that means either drop the source\n>> column entirely, or show single-line source values only and nothing or a\n>> placeholder for anything that is more than one line, unless somebody comes\n>> up with another suggestion. Originally I was thinking just to remove\n>> entirely, but I’ve seen a couple of comments suggesting that people would\n>> find it unfortunate if the source weren’t shown for internal/C functions,\n>> so now I’m leaning towards showing single-line values only.\n>>\n>> I agree that showing the first line or couple of lines isn't likely to be\n>> very useful. 
The way I format my functions, the first line is always blank\n>> anyway: I write the bodies like this:\n>>\n>> $$\n>> BEGIN\n>> …\n>> END;\n>> $$;\n>>\n>> Even if somebody uses a different style, the first line is probably just\n>> \"BEGIN\" or something equally formulaic.\n>>\n>\n> This is only about Internal and C, isn't it? Isn't the oid of these\n> static, and identified by INTERNALlanguageId and ClanguageId respectively?\n> So we could just have the query show the prosrc column if the language oid\n> is one of those two, and otherwise show \"Please use \\sf to view the\n> source\"?\n>\n\nI think it can be acceptable - maybe we can rename the column \"source code\"\nlike \"internal name\" or some like that.\n\nagain I don't think printing \"Please use \\sf to view the source\"? \" often\ncan be user friendly. \\? is clear and \\sf is easy to use\n\n\n\n> --\n> Magnus Hagander\n> Me: https://www.hagander.net/ <http://www.hagander.net/>\n> Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n>\n\nst 11. 1. 2023 v 19:31 odesílatel Magnus Hagander <magnus@hagander.net> napsal:On Wed, Jan 11, 2023 at 7:24 PM Isaac Morland <isaac.morland@gmail.com> wrote:On Wed, 11 Jan 2023 at 13:11, Pavel Stehule <pavel.stehule@gmail.com> wrote:please, don't send top post replies - https://en.wikipedia.org/wiki/Posting_styleSorry about that; I do know to do it properly and usually get it right. GMail doesn’t seem to have an option (that I can find) to leave no space at the top and put my cursor at the bottom; it nudges pretty firmly in the direction of top-posting. Thanks for the reminder. I don't think printing a few first rows is a good idea - usually there is nothing interesting (same is PL/Perl, PL/Python, ...)If the proposed feature can be generic, then this information should be stored somewhere in pg_language. Or we can redesign usage of prosrc and probin columns - but this can be a much more massive change. I’m looking for a quick win. 
So I think that means either drop the source column entirely, or show single-line source values only and nothing or a placeholder for anything that is more than one line, unless somebody comes up with another suggestion. Originally I was thinking just to remove entirely, but I’ve seen a couple of comments suggesting that people would find it unfortunate if the source weren’t shown for internal/C functions, so now I’m leaning towards showing single-line values only.I agree that showing the first line or couple of lines isn't likely to be very useful. The way I format my functions, the first line is always blank anyway: I write the bodies like this:$$BEGIN    …END;$$;Even if somebody uses a different style, the first line is probably just \"BEGIN\" or something equally formulaic.This is only about Internal and C, isn't it? Isn't the oid of these static, and identified by INTERNALlanguageId and ClanguageId respectively? So we could just have the query show the prosrc column if the language oid is one of those two, and otherwise show \"Please use \\sf to view the source\"? I think it can be acceptable - maybe we can rename the column \"source code\" like \"internal name\" or some like that.again I don't think printing  \"Please use \\sf to view the source\"? \" often can be user friendly.  \\? is clear and \\sf is easy to use--  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Wed, 11 Jan 2023 19:38:56 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 11. 1. 2023 v 19:31 odesílatel Magnus Hagander <magnus@hagander.net>\n> napsal:\n>> This is only about Internal and C, isn't it? 
Isn't the oid of these\n>> static, and identified by INTERNALlanguageId and ClanguageId respectively?\n>> So we could just have the query show the prosrc column if the language oid\n>> is one of those two, and otherwise show \"Please use \\sf to view the\n>> source\"?\n\n> I think it can be acceptable - maybe we can rename the column \"source code\"\n> like \"internal name\" or some like that.\n\nYeah, \"source code\" has always been kind of a misnomer for these\nlanguages.\n\n> again I don't think printing \"Please use \\sf to view the source\"? \" often\n> can be user friendly. \\? is clear and \\sf is easy to use\n\nWe could shorten it to \"See \\sf\" or something like that. But if we change\nthe column header to \"internal name\" or the like, then the column just\nobviously doesn't apply for non-internal languages, so leaving it null\nshould be fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:11:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "st 11. 1. 2023 v 22:11 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n\n> Pavel Stehule <pavel.stehule@gmail.com> writes:\n> > st 11. 1. 2023 v 19:31 odesílatel Magnus Hagander <magnus@hagander.net>\n> > napsal:\n> >> This is only about Internal and C, isn't it? Isn't the oid of these\n> >> static, and identified by INTERNALlanguageId and ClanguageId\n> respectively?\n> >> So we could just have the query show the prosrc column if the language\n> oid\n> >> is one of those two, and otherwise show \"Please use \\sf to view the\n> >> source\"?\n>\n> > I think it can be acceptable - maybe we can rename the column \"source\n> code\"\n> > like \"internal name\" or some like that.\n>\n> Yeah, \"source code\" has always been kind of a misnomer for these\n> languages.\n>\n> > again I don't think printing \"Please use \\sf to view the source\"? \"\n> often\n> > can be user friendly. \\? 
is clear and \\sf is easy to use\n>\n> We could shorten it to \"See \\sf\" or something like that. But if we change\n> the column header to \"internal name\" or the like, then the column just\n> obviously doesn't apply for non-internal languages, so leaving it null\n> should be fine.\n>\n\n+1\n\n\n\n> regards, tom lane\n>\n\nst 11. 1. 2023 v 22:11 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 11. 1. 2023 v 19:31 odesílatel Magnus Hagander <magnus@hagander.net>\n> napsal:\n>> This is only about Internal and C, isn't it? Isn't the oid of these\n>> static, and identified by INTERNALlanguageId and ClanguageId respectively?\n>> So we could just have the query show the prosrc column if the language oid\n>> is one of those two, and otherwise show \"Please use \\sf to view the\n>> source\"?\n\n> I think it can be acceptable - maybe we can rename the column \"source code\"\n> like \"internal name\" or some like that.\n\nYeah, \"source code\" has always been kind of a misnomer for these\nlanguages.\n\n> again I don't think printing  \"Please use \\sf to view the source\"? \" often\n> can be user friendly.  \\? is clear and \\sf is easy to use\n\nWe could shorten it to \"See \\sf\" or something like that.  But if we change\nthe column header to \"internal name\" or the like, then the column just\nobviously doesn't apply for non-internal languages, so leaving it null\nshould be fine.+1 \n\n                        regards, tom lane", "msg_date": "Thu, 12 Jan 2023 06:22:45 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Thu, Jan 12, 2023 at 6:23 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> st 11. 1. 2023 v 22:11 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:\n>\n>> Pavel Stehule <pavel.stehule@gmail.com> writes:\n>> > st 11. 1. 
2023 v 19:31 odesílatel Magnus Hagander <magnus@hagander.net>\n>> > napsal:\n>> >> This is only about Internal and C, isn't it? Isn't the oid of these\n>> >> static, and identified by INTERNALlanguageId and ClanguageId\n>> respectively?\n>> >> So we could just have the query show the prosrc column if the language\n>> oid\n>> >> is one of those two, and otherwise show \"Please use \\sf to view the\n>> >> source\"?\n>>\n>> > I think it can be acceptable - maybe we can rename the column \"source\n>> code\"\n>> > like \"internal name\" or some like that.\n>>\n>> Yeah, \"source code\" has always been kind of a misnomer for these\n>> languages.\n>>\n>> > again I don't think printing \"Please use \\sf to view the source\"? \"\n>> often\n>> > can be user friendly. \\? is clear and \\sf is easy to use\n>>\n>> We could shorten it to \"See \\sf\" or something like that. But if we change\n>> the column header to \"internal name\" or the like, then the column just\n>> obviously doesn't apply for non-internal languages, so leaving it null\n>> should be fine.\n>>\n>\n> +1\n>\n>\nSure, that works for me as well. I agree the suggested text was way too\nlong, I was more thinking of \"something in this direction\" rather than that\nexact text. But yes, with a change of names, we can leave it NULL as well.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>\n\nOn Thu, Jan 12, 2023 at 6:23 AM Pavel Stehule <pavel.stehule@gmail.com> wrote:st 11. 1. 2023 v 22:11 odesílatel Tom Lane <tgl@sss.pgh.pa.us> napsal:Pavel Stehule <pavel.stehule@gmail.com> writes:\n> st 11. 1. 2023 v 19:31 odesílatel Magnus Hagander <magnus@hagander.net>\n> napsal:\n>> This is only about Internal and C, isn't it? 
Isn't the oid of these\n>> static, and identified by INTERNALlanguageId and ClanguageId respectively?\n>> So we could just have the query show the prosrc column if the language oid\n>> is one of those two, and otherwise show \"Please use \\sf to view the\n>> source\"?\n\n> I think it can be acceptable - maybe we can rename the column \"source code\"\n> like \"internal name\" or some like that.\n\nYeah, \"source code\" has always been kind of a misnomer for these\nlanguages.\n\n> again I don't think printing  \"Please use \\sf to view the source\"? \" often\n> can be user friendly.  \\? is clear and \\sf is easy to use\n\nWe could shorten it to \"See \\sf\" or something like that.  But if we change\nthe column header to \"internal name\" or the like, then the column just\nobviously doesn't apply for non-internal languages, so leaving it null\nshould be fine.+1Sure, that works for me as well. I agree the suggested text was way too long, I was more thinking of \"something in this direction\" rather than that exact text. But yes, with a change of names, we can leave it NULL as well. --  Magnus Hagander Me: https://www.hagander.net/ Work: https://www.redpill-linpro.com/", "msg_date": "Thu, 12 Jan 2023 16:03:53 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Thu, 12 Jan 2023 at 10:04, Magnus Hagander <magnus@hagander.net> wrote:\n\nWe could shorten it to \"See \\sf\" or something like that. But if we change\n>>> the column header to \"internal name\" or the like, then the column just\n>>> obviously doesn't apply for non-internal languages, so leaving it null\n>>> should be fine.\n>>>\n>>\n>> +1\n>>\n>>\n> Sure, that works for me as well. I agree the suggested text was way too\n> long, I was more thinking of \"something in this direction\" rather than that\n> exact text. 
But yes, with a change of names, we can leave it NULL as well.\n>\n\nThanks everybody. So based on the latest discussion I will:\n\n1) rename the column from “Source code” to “Internal name”; and\n2) change the contents to NULL except when the language (identified by oid)\nis INTERNAL or C.\n\nPatch forthcoming, I hope.\n", "msg_date": "Thu, 12 Jan 2023 12:06:13 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Thu, 12 Jan 2023 at 12:06, Isaac Morland <isaac.morland@gmail.com> wrote:\n\nThanks everybody. So based on the latest discussion I will:\n>\n> 1) rename the column from “Source code” to “Internal name”; and\n> 2) change the contents to NULL except when the language (identified by\n> oid) is INTERNAL or C.\n>\n> Patch forthcoming, I hope.\n>\n\nI've attached a patch for this.
It turns out to simplify the existing code\nin one way because the recently added call to pg_get_function_sqlbody() is\nno longer needed since it applies only to SQL functions, which will now\ndisplay as a blank column.\n\nI implemented the change and was surprised to see that no tests failed.\nTurns out that while there are several tests for \\df, there were none for\n\\df+. I added a couple, just using \\df+ on some functions that appear to me\nto be present on plain vanilla Postgres.\n\nI was initially concerned about translation support for the column heading,\nbut it turns out that \\dT+ already has a column with the exact same name so\nit appears we don’t need any new translations.\n\nI welcome comments and feedback. Now to try to find something manageable to\nreview.", "msg_date": "Tue, 17 Jan 2023 14:29:04 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "Hi\n\n\nút 17. 1. 2023 v 20:29 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> On Thu, 12 Jan 2023 at 12:06, Isaac Morland <isaac.morland@gmail.com>\n> wrote:\n>\n> Thanks everybody. So based on the latest discussion I will:\n>>\n>> 1) rename the column from “Source code” to “Internal name”; and\n>> 2) change the contents to NULL except when the language (identified by\n>> oid) is INTERNAL or C.\n>>\n>> Patch forthcoming, I hope.\n>>\n>\n> I've attached a patch for this. It turns out to simplify the existing code\n> in one way because the recently added call to pg_get_function_sqlbody() is\n> no longer needed since it applies only to SQL functions, which will now\n> display as a blank column.\n>\n> I implemented the change and was surprised to see that no tests failed.\n> Turns out that while there are several tests for \\df, there were none for\n> \\df+. 
I added a couple, just using \\df+ on some functions that appear to me\n> to be present on plain vanilla Postgres.\n>\n> I was initially concerned about translation support for the column\n> heading, but it turns out that \\dT+ already has a column with the exact\n> same name so it appears we don’t need any new translations.\n>\n> I welcome comments and feedback. Now to try to find something manageable\n> to review.\n>\n\nlooks well\n\nyou miss update psql documentation\n\nhttps://www.postgresql.org/docs/current/app-psql.html\n\nIf the form \\df+ is used, additional information about each function is\nshown, including volatility, parallel safety, owner, security\nclassification, access privileges, language, source code and description.\n\nyou should to assign your patch to commitfest app\n\nhttps://commitfest.postgresql.org/\n\nRegards\n\nPavel\n", "msg_date": "Wed, 18 Jan 2023 05:59:33 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Wed, 18 Jan 2023 at 00:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n\n>\n> út 17. 1. 2023 v 20:29 odesílatel Isaac Morland <isaac.morland@gmail.com>\n> napsal:\n>\n>>\n>> I welcome comments and feedback. Now to try to find something manageable\n>> to review.\n>>\n>\n> looks well\n>\n> you miss update psql documentation\n>\n> https://www.postgresql.org/docs/current/app-psql.html\n>\n> If the form \\df+ is used, additional information about each function is\n> shown, including volatility, parallel safety, owner, security\n> classification, access privileges, language, source code and description.\n>\n\nThanks, and sorry about that, it just completely slipped my mind.
I’ve\nattached a revised patch with a slight update of the psql documentation.\n\nyou should to assign your patch to commitfest app\n>\n> https://commitfest.postgresql.org/\n>\n\nI thought I had: https://commitfest.postgresql.org/42/4133/\n\nDid I miss something?", "msg_date": "Wed, 18 Jan 2023 10:27:46 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "st 18. 1. 2023 v 16:27 odesílatel Isaac Morland <isaac.morland@gmail.com>\nnapsal:\n\n> On Wed, 18 Jan 2023 at 00:00, Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>> út 17. 1. 2023 v 20:29 odesílatel Isaac Morland <isaac.morland@gmail.com>\n>> napsal:\n>>\n>>>\n>>> I welcome comments and feedback. Now to try to find something manageable\n>>> to review.\n>>>\n>>\n>> looks well\n>>\n>> you miss update psql documentation\n>>\n>> https://www.postgresql.org/docs/current/app-psql.html\n>>\n>> If the form \\df+ is used, additional information about each function is\n>> shown, including volatility, parallel safety, owner, security\n>> classification, access privileges, language, source code and description.\n>>\n>\n> Thanks, and sorry about that, it just completely slipped my mind. I’ve\n> attached a revised patch with a slight update of the psql documentation.\n>\n> you should to assign your patch to commitfest app\n>>\n>> https://commitfest.postgresql.org/\n>>\n>\n> I thought I had: https://commitfest.postgresql.org/42/4133/\n>\n\nok\n\n\n>\n> Did I miss something?\n>\n\nit looks well\n\nregards\n\nPavel\n", "msg_date": "Thu, 19 Jan 2023 05:02:08 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Wed, Jan 18, 2023 at 10:27:46AM -0500, Isaac Morland wrote:\n> On Wed, 18 Jan 2023 at 00:00, Pavel Stehule <pavel.stehule@gmail.com> wrote:\n> \n> > út 17. 1. 2023 v 20:29 odesílatel Isaac Morland <isaac.morland@gmail.com> napsal:\n> >\n> >> I welcome comments and feedback. Now to try to find something manageable\n> >> to review.\n> >\n> > looks well\n> >\n> > you miss update psql documentation\n> >\n> > https://www.postgresql.org/docs/current/app-psql.html\n> >\n> > If the form \\df+ is used, additional information about each function is\n> > shown, including volatility, parallel safety, owner, security\n> > classification, access privileges, language, source code and description.\n> \n> Thanks, and sorry about that, it just completely slipped my mind.
I’ve\n> attached a revised patch with a slight update of the psql documentation.\n> \n> you should to assign your patch to commitfest app\n> >\n> > https://commitfest.postgresql.org/\n> \n> I thought I had: https://commitfest.postgresql.org/42/4133/\n\nThis is failing tests:\nhttp://cfbot.cputube.org/isaac-morland.html\n\nIt seems like any \"make check\" would fail .. but did you also try\ncirrusci from your own github account?\n./src/tools/ci/README\n\nBTW, I think \"ELSE NULL\" could be omitted.\n\n-- \nJustin\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:30:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Thu, 19 Jan 2023 at 11:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Wed, Jan 18, 2023 at 10:27:46AM -0500, Isaac Morland wrote:\n> >\n> > I thought I had: https://commitfest.postgresql.org/42/4133/\n>\n> This is failing tests:\n> http://cfbot.cputube.org/isaac-morland.html\n>\n> It seems like any \"make check\" would fail .. but did you also try\n> cirrusci from your own github account?\n> ./src/tools/ci/README\n>\n\nI definitely ran \"make check\" but I did not realize there is also cirrusci.\nI will look at that. I'm having trouble interpreting the cfbot page to\nwhich you linked but I'll try to run cirrusci myself before worrying too\nmuch about that.\n\nRunning \"make check\" the first time I was surprised to see no failures - so\nI added tests for \\df+, which passed when I did \"make check\".\n\n\n> BTW, I think \"ELSE NULL\" could be omitted.\n>\n\nLooks like; I'll update. 
Might as well reduce the visual size of the code.\n", "msg_date": "Thu, 19 Jan 2023 13:02:14 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Thu, 19 Jan 2023 at 13:02, Isaac Morland <isaac.morland@gmail.com> wrote:\n\n> On Thu, 19 Jan 2023 at 11:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n>> On Wed, Jan 18, 2023 at 10:27:46AM -0500, Isaac Morland wrote:\n>> >\n>> > I thought I had: https://commitfest.postgresql.org/42/4133/\n>>\n>> This is failing tests:\n>> http://cfbot.cputube.org/isaac-morland.html\n>>\n>> It seems like any \"make check\" would fail .. but did you also try\n>> cirrusci from your own github account?\n>> ./src/tools/ci/README\n>>\n>\n> I definitely ran \"make check\" but I did not realize there is also\n> cirrusci. I will look at that.
I'm having trouble interpreting the cfbot\n> page to which you linked but I'll try to run cirrusci myself before\n> worrying too much about that.\n>\n> Running \"make check\" the first time I was surprised to see no failures -\n> so I added tests for \\df+, which passed when I did \"make check\".\n>\n>>\nIt turns out that my tests wanted the owner to be “vagrant” rather than\n“postgres”. This is apparently because I was running as that user (in a\nVagrant VM) when running the tests. Then I took that output and just made\nit the expected output. I’ve re-worked my build environment a bit so that I\nrun as “postgres” inside the Vagrant VM.\n\nWhat I don’t understand is why that didn’t break all the other tests. I\nwould have expected “postgres” to show up all over the expected results and\nbe changed to “vagrant” by what I did; so I should have seen all sorts of\ntest failures in the existing tests. Anyway, my new tests now have the\nproper value in the Owner column so let’s see what CI does with it.\n\nBTW, I think \"ELSE NULL\" could be omitted.\n>>\n>\n> Looks like; I'll update. Might as well reduce the visual size of the code.\n>\n\nI did this. I’m ambivalent about this because I usually think of CASE and\nsimilar constructs as needing to explicitly cover all possible cases but\nthe language does provide for the NULL default case so may as well use the\nfeature where applicable.", "msg_date": "Sun, 22 Jan 2023 00:18:34 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> It turns out that my tests wanted the owner to be “vagrant” rather than\n> “postgres”. This is apparently because I was running as that user (in a\n> Vagrant VM) when running the tests. Then I took that output and just made\n> it the expected output. 
I’ve re-worked my build environment a bit so that I\n> run as “postgres” inside the Vagrant VM.\n\nNope, that is not going to get past the buildfarm (hint: a lot of the\nBF animals run under \"buildfarm\" or some similar username). You have\nto make sure that your tests do not care what the name of the bootstrap\nsuperuser is.\n\n> What I don’t understand is why that didn’t break all the other tests.\n\nBecause all the committed tests are independent of that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Jan 2023 00:35:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Sun, Jan 22, 2023 at 12:18:34AM -0500, Isaac Morland wrote:\n> On Thu, 19 Jan 2023 at 13:02, Isaac Morland <isaac.morland@gmail.com> wrote:\n> > On Thu, 19 Jan 2023 at 11:30, Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >> On Wed, Jan 18, 2023 at 10:27:46AM -0500, Isaac Morland wrote:\n> >> >\n> >> > I thought I had: https://commitfest.postgresql.org/42/4133/\n> >>\n> >> This is failing tests:\n> >> http://cfbot.cputube.org/isaac-morland.html\n> >>\n> >> It seems like any \"make check\" would fail .. but did you also try\n> >> cirrusci from your own github account?\n> >> ./src/tools/ci/README\n> >\n> > I definitely ran \"make check\" but I did not realize there is also\n> > cirrusci. I will look at that. I'm having trouble interpreting the cfbot\n> > page to which you linked but I'll try to run cirrusci myself before\n> > worrying too much about that.\n> >\n> > Running \"make check\" the first time I was surprised to see no failures -\n> > so I added tests for \\df+, which passed when I did \"make check\".\n> >\n> It turns out that my tests wanted the owner to be “vagrant” rather than\n> “postgres”. This is apparently because I was running as that user (in a\n> Vagrant VM) when running the tests. Then I took that output and just made\n> it the expected output. 
I’ve re-worked my build environment a bit so that I\n> run as “postgres” inside the Vagrant VM.\n> \n> What I don’t understand is why that didn’t break all the other tests.\n\nProbably because the other tests avoid showing the owner, exactly\nbecause it varies depending on who runs the tests. Most of the \"plus\"\ncommands aren't tested, at least in the sql regression tests.\n\nWe should probably change one of the CI usernames to something other\nthan \"postgres\" to catch the case that someone hardcodes \"postgres\".\n\n> proper value in the Owner column so let’s see what CI does with it.\n\nOr better: see above about using it from your github account.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 21 Jan 2023 23:45:36 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Sun, 22 Jan 2023 at 00:45, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Jan 22, 2023 at 12:18:34AM -0500, Isaac Morland wrote:\n>\n\n\n> > It turns out that my tests wanted the owner to be “vagrant” rather than\n> > “postgres”. This is apparently because I was running as that user (in a\n> > Vagrant VM) when running the tests. Then I took that output and just made\n> > it the expected output. I’ve re-worked my build environment a bit so\n> that I\n> > run as “postgres” inside the Vagrant VM.\n> >\n> > What I don’t understand is why that didn’t break all the other tests.\n>\n> Probably because the other tests avoid showing the owner, exactly\n> because it varies depending on who runs the tests. Most of the \"plus\"\n> commands aren't tested, at least in the sql regression tests.\n>\n\nThanks for your patience. I didn’t think about it enough but everything you\nboth said makes sense.\n\nI’ve re-written the tests to create a test-specific role and functions so\nthere is no longer a dependency on the superuser name. 
I pondered the\nnotion of going with the flow and just leaving out the tests but that\nseemed like giving up too easily.\n\nWe should probably change one of the CI usernames to something other\n> than \"postgres\" to catch the case that someone hardcodes \"postgres\".\n>\n> > proper value in the Owner column so let’s see what CI does with it.\n>\n> Or better: see above about using it from your github account.\n\n\nYes, I will try to get this working before I try to make another\ncontribution.", "msg_date": "Sun, 22 Jan 2023 13:59:29 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On 2023-Jan-22, Isaac Morland wrote:\n\n> I’ve re-written the tests to create a test-specific role and functions so\n> there is no longer a dependency on the superuser name.\n\nThis one would fail the sanity check that all roles created by\nregression tests need to have names that start with \"regress_\".\n\n> I pondered the notion of going with the flow and just leaving out the\n> tests but that seemed like giving up too easily.\n\nI think avoiding even more untested code is good, so +1 for keeping at\nit.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"At least to kernel hackers, who really are human, despite occasional\nrumors to the contrary\" (LWN.net)\n\n\n", "msg_date": "Sun, 22 Jan 2023 20:15:49 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" 
}, { "msg_contents": "On Sun, 22 Jan 2023 at 14:26, Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2023-Jan-22, Isaac Morland wrote:\n>\n> > I’ve re-written the tests to create a test-specific role and functions so\n> > there is no longer a dependency on the superuser name.\n>\n> This one would fail the sanity check that all roles created by\n> regression tests need to have names that start with \"regress_\".\n>\n\nThanks for the correction. Now I feel like I've skipped some of the\nreadings!\n\nUpdated patch attached. Informally, I am adopting the regress_* policy for\nall object types.\n\n> I pondered the notion of going with the flow and just leaving out the\n> > tests but that seemed like giving up too easily.\n>\n> I think avoiding even more untested code is good, so +1 for keeping at\n> it.\n>", "msg_date": "Sun, 22 Jan 2023 14:53:48 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> On Sun, 22 Jan 2023 at 14:26, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> wrote:\n>> This one would fail the sanity check that all roles created by\n>> regression tests need to have names that start with \"regress_\".\n\n> Thanks for the correction. Now I feel like I've skipped some of the\n> readings!\n> Updated patch attached. Informally, I am adopting the regress_* policy for\n> all object types.\n\nThat's excessive. The policy Alvaro mentions applies to globally-visible\nobject names (i.e., database, role, and tablespace names), and it's there\nto try to ensure that doing \"make installcheck\" against a live\ninstallation won't clobber any non-test-created objects. There's no point\nin having such a policy within a test database --- its most likely effect\nthere would be to increase the risk that different test scripts step on\neach others' toes. 
If you feel a need for a name prefix for non-global\nobjects, use something based on the name of your test script.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 22 Jan 2023 15:04:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Sun, 22 Jan 2023 at 15:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > On Sun, 22 Jan 2023 at 14:26, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > wrote:\n> >> This one would fail the sanity check that all roles created by\n> >> regression tests need to have names that start with \"regress_\".\n>\n> > Thanks for the correction. Now I feel like I've skipped some of the\n> > readings!\n> > Updated patch attached. Informally, I am adopting the regress_* policy\n> for\n> > all object types.\n>\n> That's excessive. The policy Alvaro mentions applies to globally-visible\n> object names (i.e., database, role, and tablespace names), and it's there\n> to try to ensure that doing \"make installcheck\" against a live\n> installation won't clobber any non-test-created objects. There's no point\n> in having such a policy within a test database --- its most likely effect\n> there would be to increase the risk that different test scripts step on\n> each others' toes. If you feel a need for a name prefix for non-global\n> objects, use something based on the name of your test script.\n>\n\nI already used a test-specific prefix, then added \"regress_\" in front.\nPoint taken, however, on the difference between global and non-global\nobjects.\n\nBut now I'm having a problem I don't understand: the CI are still failling,\nbut not in the psql test. Instead, I get this:\n\n[20:11:17.624] +++ tap check in src/bin/pg_upgrade +++\n[20:11:17.624] [20:09:11] t/001_basic.pl ....... 
ok 106 ms ( 0.00 usr\n 0.00 sys + 0.06 cusr 0.02 csys = 0.08 CPU)\n[20:11:17.624]\n[20:11:17.624] # Failed test 'old and new dumps match after pg_upgrade'\n[20:11:17.624] # at t/002_pg_upgrade.pl line 362.\n[20:11:17.624] # got: '1'\n[20:11:17.624] # expected: '0'\n[20:11:17.624] # Looks like you failed 1 test of 13.\n[20:11:17.624] [20:11:17] t/002_pg_upgrade.pl ..\n[20:11:17.624] Dubious, test returned 1 (wstat 256, 0x100)\n[20:11:17.624] Failed 1/13 subtests\n[20:11:17.624] [20:11:17]\n[20:11:17.624]\n[20:11:17.624] Test Summary Report\n[20:11:17.624] -------------------\n[20:11:17.624] t/002_pg_upgrade.pl (Wstat: 256 Tests: 13 Failed: 1)\n[20:11:17.624] Failed test: 13\n[20:11:17.624] Non-zero exit status: 1\n[20:11:17.624] Files=2, Tests=21, 126 wallclock secs ( 0.01 usr 0.00 sys +\n 6.65 cusr 3.95 csys = 10.61 CPU)\n[20:11:17.624] Result: FAIL\n[20:11:17.624] make[2]: *** [Makefile:55: check] Error 1\n[20:11:17.625] make[1]: *** [Makefile:43: check-pg_upgrade-recurse] Error 2\n\nAs far as I can tell this is the only failure and doesn’t have anything to\ndo with my change. Unless the objects I added are messing it up? Unlike\nwhen the psql regression test was failing, I don’t see an indication of\nwhere I can see the diffs.", "msg_date": "Sun, 22 Jan 2023 16:28:21 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Sun, Jan 22, 2023 at 03:04:14PM -0500, Tom Lane wrote:\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > On Sun, 22 Jan 2023 at 14:26, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > wrote:\n> >> This one would fail the sanity check that all roles created by\n> >> regression tests need to have names that start with \"regress_\".\n> \n> > Thanks for the correction. Now I feel like I've skipped some of the\n> > readings!\n> > Updated patch attached. Informally, I am adopting the regress_* policy for\n> > all object types.\n> \n> That's excessive. The policy Alvaro mentions applies to globally-visible\n> object names (i.e., database, role, and tablespace names), and it's there\n> to try to ensure that doing \"make installcheck\" against a live\n> installation won't clobber any non-test-created objects.
There's no point\n> in having such a policy within a test database --- its most likely effect\n> there would be to increase the risk that different test scripts step on\n> each others' toes. If you feel a need for a name prefix for non-global\n> objects, use something based on the name of your test script.\n\nBut we *are* talking about the role to be created to allow stable output\nof \\df+ , so it's necessary to name it \"regress_*\". To appease\nENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, and to avoid clobbering\nglobal objects during \"installcheck\".\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Jan 2023 15:56:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Sun, 22 Jan 2023 at 16:56, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Jan 22, 2023 at 03:04:14PM -0500, Tom Lane wrote:\n>\n\n> That's excessive. The policy Alvaro mentions applies to globally-visible\n> > object names (i.e., database, role, and tablespace names), and it's there\n> > to try to ensure that doing \"make installcheck\" against a live\n> > installation won't clobber any non-test-created objects. There's no\n> point\n> > in having such a policy within a test database --- its most likely effect\n> > there would be to increase the risk that different test scripts step on\n> > each others' toes. If you feel a need for a name prefix for non-global\n> > objects, use something based on the name of your test script.\n>\n> But we *are* talking about the role to be created to allow stable output\n> of \\df+ , so it's necessary to name it \"regress_*\". To appease\n> ENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, and to avoid clobbering\n> global objects during \"installcheck\".\n>\n\nTom is talking about my informal policy of prefixing all objects. Only\nglobal objects need to be prefixed with regress_, but I prefixed everything\nI created (functions as well as the role). 
I actually called the\nrole regress_psql_df and used that entire role name as the prefix of my\nfunction names, so I think it unlikely that I’ll collide with anything else.\n", "msg_date": "Sun, 22 Jan 2023 17:04:50 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?"
}, { "msg_contents": "On Sun, Jan 22, 2023 at 04:28:21PM -0500, Isaac Morland wrote:\n> On Sun, 22 Jan 2023 at 15:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Isaac Morland <isaac.morland@gmail.com> writes:\n> > > On Sun, 22 Jan 2023 at 14:26, Alvaro Herrera <alvherre@alvh.no-ip.org>\n> > > wrote:\n> > >> This one would fail the sanity check that all roles created by\n> > >> regression tests need to have names that start with \"regress_\".\n> >\n> > > Thanks for the correction. Now I feel like I've skipped some of the\n> > > readings!\n> > > Updated patch attached. Informally, I am adopting the regress_* policy\n> > for\n> > > all object types.\n> >\n> > That's excessive. The policy Alvaro mentions applies to globally-visible\n> > object names (i.e., database, role, and tablespace names), and it's there\n> > to try to ensure that doing \"make installcheck\" against a live\n> > installation won't clobber any non-test-created objects. There's no point\n> > in having such a policy within a test database --- its most likely effect\n> > there would be to increase the risk that different test scripts step on\n> > each others' toes. If you feel a need for a name prefix for non-global\n> > objects, use something based on the name of your test script.\n> >\n> \n> I already used a test-specific prefix, then added \"regress_\" in front.\n> Point taken, however, on the difference between global and non-global\n> objects.\n> \n> But now I'm having a problem I don't understand: the CI are still failling,\n> but not in the psql test. 
Instead, I get this:\n> \n> [20:11:17.624] +++ tap check in src/bin/pg_upgrade +++\n\nYou'll find the diff in the \"artifacts\", but not a separate \"diff\" file.\nhttps://api.cirrus-ci.com/v1/artifact/task/6146418377752576/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n\n CREATE FUNCTION public.regress_psql_df_sql() RETURNS void\n LANGUAGE sql\n BEGIN ATOMIC\n- SELECT NULL::text;\n+ SELECT NULL::text AS text;\n END;\n\nIt's failing because after restoring the function, the column is named\n\"text\" - maybe it's a bug.\n\nTom's earlier point was that neither the function nor its owner needs to\nbe preserved (as is done to exercise pg_dump/restore/upgrade - surely\nfunctions are already tested). Dropping it when you're done running \\df\nwill avoid any possible issue with pg_upgrade.\n\nWere you able to test with your own github account ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Jan 2023 16:27:51 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Sun, 22 Jan 2023 at 17:27, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Jan 22, 2023 at 04:28:21PM -0500, Isaac Morland wrote:\n> > On Sun, 22 Jan 2023 at 15:04, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n\n> But now I'm having a problem I don't understand: the CI are still\n> failling,\n> > but not in the psql test. 
Instead, I get this:\n> >\n> > [20:11:17.624] +++ tap check in src/bin/pg_upgrade +++\n>\n> You'll find the diff in the \"artifacts\", but not a separate \"diff\" file.\n>\n> https://api.cirrus-ci.com/v1/artifact/task/6146418377752576/testrun/build/testrun/pg_upgrade/002_pg_upgrade/log/regress_log_002_pg_upgrade\n>\n> CREATE FUNCTION public.regress_psql_df_sql() RETURNS void\n> LANGUAGE sql\n> BEGIN ATOMIC\n> - SELECT NULL::text;\n> + SELECT NULL::text AS text;\n> END;\n>\n> It's failing because after restoring the function, the column is named\n> \"text\" - maybe it's a bug.\n>\n\nOK, thanks. I'd say I've uncovered a bug related to pg_upgrade, unless I’m\nmissing something. However, I've adjusted my patch so that nothing it\ncreates is kept. This seems tidier even without the test failure.\n\nTom's earlier point was that neither the function nor its owner needs to\n> be preserved (as is done to exercise pg_dump/restore/upgrade - surely\n> functions are already tested). Dropping it when you're done running \\df\n> will avoid any possible issue with pg_upgrade.\n>\n> Were you able to test with your own github account ?\n>\n\nI haven’t had a chance to try this. I must confess to being a bit confused\nby the distinction between running the CI tests and doing \"make check\";\nideally I would like to be able to run all the tests on my own machine\nwithout any external resources. But at the same time I don’t pretend to\nunderstand the full situation so I will try to use this when I get some\ntime.", "msg_date": "Sun, 22 Jan 2023 20:23:25 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Sun, Jan 22, 2023 at 08:23:25PM -0500, Isaac Morland wrote:\n> > Were you able to test with your own github account ?\n> \n> I haven’t had a chance to try this. 
I must confess to being a bit confused\n> by the distinction between running the CI tests and doing \"make check\";\n> ideally I would like to be able to run all the tests on my own machine\n> without any external resources. But at the same time I don’t pretend to\n> understand the full situation so I will try to use this when I get some\n> time.\n\nFirst: \"make check\" only runs the sql tests, and not the perl tests\n(including pg_upgrade) or isolation tests. check-world runs everything.\n\nOne difference from running it locally is that cirrus runs tests under\nfour OSes. Another is that it has a bunch of compilation flags and\nvariations to help catch errors (although it's currently missing\nENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS, so that wouldn't have been\ncaught). And another reason is that it runs in a \"clean\" environment,\nso (for example) it'd probably catch if you have local, uncommited\nchanges, or if you assumed that the username is \"postgres\" (earlier I\nsaid that it didn't, but actually the mac task runs as \"admin\").\n\nThe old way of doing things was for cfbot to \"inject\" the cirrus.yml\nfile and then push a branch to cirrusci to run tests; it made some sense\nfor people to mail a patch to the list to cause cfbot to run the tests\nunder cirrusci. The current/new way is that .cirrus.yml is in the\nsource tree, so anyone with a github account can do that. IMO it no\nlonger makes sense to send patches to the list \"to see\" if it passes\ntests. I encouraging those who haven't to try it.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 22 Jan 2023 20:37:32 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" 
}, { "msg_contents": "On Sun, 22 Jan 2023 at 21:37, Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Sun, Jan 22, 2023 at 08:23:25PM -0500, Isaac Morland wrote:\n> > > Were you able to test with your own github account ?\n> >\n> > I haven’t had a chance to try this. I must confess to being a bit\n> confused\n> > by the distinction between running the CI tests and doing \"make check\";\n> > ideally I would like to be able to run all the tests on my own machine\n> > without any external resources. But at the same time I don’t pretend to\n> > understand the full situation so I will try to use this when I get some\n> > time.\n>\n> First: \"make check\" only runs the sql tests, and not the perl tests\n> (including pg_upgrade) or isolation tests. check-world runs everything.\n>\n\nThanks very much. I should have remembered check-world, and of course the\nfact that the CI tests multiple platforms. I’ll go and do some\nreading/re-reading; now that I’ve gone through some parts of the process\nI’ll probably understand more.\n\nThe latest submission appears to have passed:\n\nhttp://cfbot.cputube.org/isaac-morland.html\n\nHowever, one of the jobs (Windows - Server 2019, MinGW64 - Meson) is paused\nand appears never to have run:\n\nhttps://cirrus-ci.com/task/6687014536347648\n\nOther than that, I think this is passing the tests.\n", "msg_date": "Sun, 22 Jan 2023 21:50:29 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" 
}, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> [ 0001-Remove-source-code-display-from-df-v6.patch ]\n\nPushed after some editorialization on the test case.\n\nOne thing I noticed while testing is that if you apply \\df+ to an\naggregate function, it will show \"Internal name\" of \"aggregate_dummy\".\nWhile that's an accurate description of what's in prosrc, it seems\nnot especially useful and perhaps indeed confusing to novices.\nSo I thought about suppressing it. However, that would require\na server-version-dependent test and I wasn't quite convinced it'd\nbe worth the trouble. Any thoughts on that?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 02 Mar 2023 17:20:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove source code display from \\df+?" }, { "msg_contents": "On Thu, 2 Mar 2023 at 17:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > [ 0001-Remove-source-code-display-from-df-v6.patch ]\n>\n> Pushed after some editorialization on the test case.\n>\n\nThanks!\n\nOne thing I noticed while testing is that if you apply \\df+ to an\n> aggregate function, it will show \"Internal name\" of \"aggregate_dummy\".\n> While that's an accurate description of what's in prosrc, it seems\n> not especially useful and perhaps indeed confusing to novices.\n> So I thought about suppressing it. However, that would require\n> a server-version-dependent test and I wasn't quite convinced it'd\n> be worth the trouble. Any thoughts on that?\n>\n\nI think it’s OK. Right now \\df+ claims that the source code for an\naggregate function is “aggregate_dummy”; that’s probably more untrue than\nsaying that its internal name is “aggregate_dummy”. 
There are several\nfeatures of aggregate functions that are always defined the same way by the\ncreation process; who’s to say they don’t all have a shared dummy internal\nname?\n\nOn Thu, 2 Mar 2023 at 17:20, Tom Lane <tgl@sss.pgh.pa.us> wrote:Isaac Morland <isaac.morland@gmail.com> writes:\n> [ 0001-Remove-source-code-display-from-df-v6.patch ]\n\nPushed after some editorialization on the test case.Thanks!\nOne thing I noticed while testing is that if you apply \\df+ to an\naggregate function, it will show \"Internal name\" of \"aggregate_dummy\".\nWhile that's an accurate description of what's in prosrc, it seems\nnot especially useful and perhaps indeed confusing to novices.\nSo I thought about suppressing it.  However, that would require\na server-version-dependent test and I wasn't quite convinced it'd\nbe worth the trouble.  Any thoughts on that?I think it’s OK. Right now \\df+ claims that the source code for an aggregate function is “aggregate_dummy”; that’s probably more untrue than saying that its internal name is “aggregate_dummy”. There are several features of aggregate functions that are always defined the same way by the creation process; who’s to say they don’t all have a shared dummy internal name?", "msg_date": "Thu, 2 Mar 2023 23:49:16 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove source code display from \\df+?" } ]
[ { "msg_contents": "Hi all,\n\n\nI would like to propose a new pg_dump option called --with-childs to \ninclude or exclude from a dump all child and partition tables when a \nparent table is specified using option -t/--table or -T/--exclude-table. \nThe whole tree is dumped with the root table.\n\n\nTo include all partitions or child tables with inheritance in a table \ndump, we usually use a wildcard, for example:\n\n\n     pg_dump -d mydb -t \"root_tbname*\" > out.sql\n\n\nThis supposes that all child/partition tables use the prefix root_tbname \nin their object name. This is often the case but, if you are as lucky as \nme, the partitions could have totally different names. No need to say \nthat for inheritance this is rarely the case. The other problem is that \nwith the wildcard you can also dump relations that are not concerned at \nall by what you want to dump. Using the --with-childs option will allow \nyou to just specify the root relation, and all child/partition definitions \nand/or data will be part of the dump.\n\n\n     pg_dump -d mydb --table \"root_tbname\" --with-childs > out.sql\n\n\nTo exclude a whole inheritance tree from a dump:\n\n\n     pg_dump -d mydb --exclude-table \"root_tbname\" --with-childs > out.sql\n\n\nAttached is the patch that adds this feature to pg_dump.\n\n\nIs there any interest in this feature?\n\n\nBest regards,\n\n-- \nGilles Darold\nhttps://www.migops.com/", "msg_date": "Wed, 11 Jan 2023 17:59:59 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "[Proposal] Allow pg_dump to include all child tables with the root\n table" }, { "msg_contents": "The following review has been posted through the commitfest application:\r\nmake installcheck-world: tested, passed\r\nImplements feature: tested, passed\r\nSpec compliant: not tested\r\nDocumentation: not tested\r\n\r\nHi\r\n\r\nthe patch applies fine on current master branch and it works as described. 
However, I would suggest changing the new option name from \"--with-childs\" to \"--with-partitions\" for several reasons. \r\n\r\n\"childs\" is grammatically incorrect and in the PG community, the term \"partitioned table\" is normally used to denote a parent table, and the term \"partition\" is used to denote the child table under the parent table. We should use these terms to stay consistent with the community.\r\n\r\nAlso, I would rephrase the documentation as:\r\n\r\nUsed in conjunction with <option>-t</option>/<option>--table</option> or <option>-T</option>/<option>--exclude-table</option> options to include or exclude partitions of the specified tables if any.\r\n\r\nthank you\r\n\r\nCary Huang\r\n================\r\nHighGo Software Canada\r\nwww.highgo.ca", "msg_date": "Fri, 24 Feb 2023 22:49:17 +0000", "msg_from": "Cary Huang <cary.huang@highgo.ca>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root\n table" }, { "msg_contents": "Hi,\n\nI'm not sure about the \"child\" -> \"partition\" change as it also selects\nchilds that are not partitions.\nI'm more dubious about the --with-childs option, I'd rather have\n--table-with-childs=<PATTERN> and --exclude-table-with-childs=<PATTERN>.\nThat will be clearer about what is what.\n\nI'm working on that, but have a hard time with test pg_dump/002_pg_dump\n(It's brand new to me)\n\nStéphane\n\nLe ven. 24 févr. 
2023 à 23:50, Cary Huang <cary.huang@highgo.ca> a écrit :\n\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: not tested\n> Documentation: not tested\n>\n> Hi\n>\n> the patch applies fine on current master branch and it works as described.\n> However, I would suggest changing the new option name from \"--with-childs\"\n> to \"--with-partitions\" for several reasons.\n>\n> \"childs\" is grammatically incorrect and in the PG community, the term\n> \"partitioned table\" is normally used to denote a parent table, and the term\n> \"partition\" is used to denote the child table under the parent table. We\n> should use these terms to stay consistent with the community.\n>\n> Also, I would rephrase the documentation as:\n>\n> Used in conjunction with <option>-t</option>/<option>--table</option> or\n> <option>-T</option>/<option>--exclude-table</option> options to include or\n> exclude partitions of the specified tables if any.\n>\n> thank you\n>\n> Cary Huang\n> ================\n> HighGo Software Canada\n> www.highgo.ca\n\n\n\n-- \n\"Où se posaient les hirondelles avant l'invention du téléphone ?\"\n -- Grégoire Lacroix\n", "msg_date": "Sat, 25 Feb 2023 17:40:17 +0100", "msg_from": "=?UTF-8?Q?St=C3=A9phane_Tachoires?= <stephane.tachoires@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Le 25/02/2023 à 16:40, Stéphane Tachoires a écrit :\n> Hi,\n>\n> I'm not sure about the \"child\" -> \"partition\" change as it also \n> selects childs that are not partitions.\n> I'm more dubious about the --with-childs option, I'd rather have \n> --table-with-childs=<PATTERN> and \n> --exclude-table-with-childs=<PATTERN>. That will be clearer about what \n> is what.\n>\n> I'm working on that, but have a hard time with \n> test pg_dump/002_pg_dump (It's brand new to me)\n>\n> Stéphane\n>\n> Le ven. 24 févr. 
2023 à 23:50, Cary Huang <cary.huang@highgo.ca> a écrit :\n>\n> The following review has been posted through the commitfest\n> application:\n> make installcheck-world:  tested, passed\n> Implements feature:       tested, passed\n> Spec compliant:           not tested\n> Documentation:            not tested\n>\n> Hi\n>\n> the patch applies fine on current master branch and it works as\n> described. However, I would suggest changing the new option name\n> from \"--with-childs\" to \"--with-partitions\" for several reasons.\n>\n> \"childs\" is grammatically incorrect and in the PG community, the\n> term \"partitioned table\" is normally used to denote a parent\n> table, and the term \"partition\" is used to denote the child table\n> under the parent table. We should use these terms to stay\n> consistent with the community.\n>\n> Also, I would rephrase the documentation as:\n>\n> Used in conjunction with\n> <option>-t</option>/<option>--table</option> or\n> <option>-T</option>/<option>--exclude-table</option> options to\n> include or exclude partitions of the specified tables if any.\n>\n> thank you\n>\n> Cary Huang\n> ================\n> HighGo Software Canada\n> www.highgo.ca <http://www.highgo.ca>\n>\n\nHi,\n\n\nThis is right this patch also works with inherited tables so \n--with-partitions can be confusing this is why --with-childs was chosen. \nBut I disagree the use of --table-with-childs and \n--exclude-table-with-childs because we already have the --table and \n--exclude-table, and it will add lot of code where we just need a switch \nto include children tables. 
Actually my first though was that this \nbehavior (dump child tables when the root table is dumped using --table) \nshould be the default in pg_dump but the problem is that it could break \nexisting scripts using pg_dump so I prefer to implement the \n--with-childs options.\n\n\nI think we can use --with-partitions, provided that it is clear in the \ndocumentation that this option also works with inheritance.\n\n\nAttached is a new patch v2 using the --with-partitions and the \ndocumentation fix.\n\n\n-- \nGilles Darold", "msg_date": "Sat, 25 Feb 2023 18:59:47 +0000", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Gilles Darold <gilles@migops.com> writes:\n> But I disagree the use of --table-with-childs and \n> --exclude-table-with-childs because we already have the --table and \n> --exclude-table, and it will add lot of code where we just need a switch \n> to include children tables.\n\nI quite dislike the idea of a separate --with-whatever switch, because\nit will (presumably) apply to all of your --table and --exclude-table\nswitches, where you may need it to apply to just some of them.\nSpelling considerations aside, attaching the property to the\nindividual switches seems far superior. And I neither believe that\nthis would add a lot of code, nor accept that as an excuse even if\nit's true.\n\nAs noted, \"childs\" is bad English and \"partitions\" is flat out wrong\n(unless you change it to recurse only to partitions, which doesn't\nseem like a better definition). We could go with\n--[exclude-]table-and-children, or maybe\n--[exclude-]table-and-child-tables, but those are getting into\ncarpal-tunnel-syndrome-inducing territory :-(. 
I lack a better\nnaming suggestion offhand.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 04 Mar 2023 14:18:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Le 04/03/2023 à 19:18, Tom Lane a écrit :\n> Gilles Darold <gilles@migops.com> writes:\n>> But I disagree the use of --table-with-childs and\n>> --exclude-table-with-childs because we already have the --table and\n>> --exclude-table, and it will add lot of code where we just need a switch\n>> to include children tables.\n> I quite dislike the idea of a separate --with-whatever switch, because\n> it will (presumably) apply to all of your --table and --exclude-table\n> switches, where you may need it to apply to just some of them.\n> Spelling considerations aside, attaching the property to the\n> individual switches seems far superior. And I neither believe that\n> this would add a lot of code, nor accept that as an excuse even if\n> it's true.y..\n\n\nRight, this is not a lot of code but just more code where I think we \njust need a switch. I much prefer that it applies to all --table / \n--exclude-table because this is generally the behavior we want for all \nroot/parent tables. But I agree that in some cases users could want that \nthis behavior applies to some selected tables only so the proposed new \noptions can answer to this need. Even if generally in similar cases \nseveral pg_dump commands can be used. This is just my opinion, I will \nadapt the patch to use the proposed new options.\n\n\nBut, what do you think about having pg_dump default to dump children \ntables with --table / --exclude-table? I was very surprised that this \nwas not the case the first time I see that. In this case we could add \n--[exclude-]table-no-child-tables. 
I think this form will be less used \nthan the form where we need the child tables to be dump with the parent \ntable, meaning that most of the time pg_dump commands using --table and \n--exclude-table will be kept untouched and those using more regexp to \ndump child tables could be simplified. I'm not sure that the backward \ncompatibility is an argument here to not change the default behavior of \npg_dump.\n\n--\n\nGilles\n\n\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Sun, 5 Mar 2023 08:03:33 +0000", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Le 04/03/2023 à 20:18, Tom Lane a écrit :\n> As noted, \"childs\" is bad English and \"partitions\" is flat out wrong\n> (unless you change it to recurse only to partitions, which doesn't\n> seem like a better definition). We could go with\n> --[exclude-]table-and-children, or maybe\n> --[exclude-]table-and-child-tables, but those are getting into\n> carpal-tunnel-syndrome-inducing territory 🙁. I lack a better\n> naming suggestion offhand.\n\n\nIn attachment is version 3 of the patch, it includes the use of options \nsuggested by Stephane and Tom:\n\n     --table-and-children,\n\n     --exclude-table-and-children\n\n     --exclude-table-data-and-children.\n\n  Documentation have been updated too.\n\n\nThanks\n\n-- \nGilles Darold", "msg_date": "Sat, 11 Mar 2023 19:51:01 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Le 11/03/2023 à 19:51, Gilles Darold a écrit :\n> Le 04/03/2023 à 20:18, Tom Lane a écrit :\n>> As noted, \"childs\" is bad English and \"partitions\" is flat out wrong\n>> (unless you change it to recurse only to partitions, which doesn't\n>> seem like a better definition).  
We could go with\n>> --[exclude-]table-and-children, or maybe\n>> --[exclude-]table-and-child-tables, but those are getting into\n>> carpal-tunnel-syndrome-inducing territory 🙁.  I lack a better\n>> naming suggestion offhand.\n>\n>\n> In attachment is version 3 of the patch, it includes the use of \n> options suggested by Stephane and Tom:\n>\n>     --table-and-children,\n>\n>     --exclude-table-and-children\n>\n>     --exclude-table-data-and-children.\n>\n>  Documentation have been updated too.\n>\n>\n> Thanks\n>\n\nNew version v4 of the patch attached with a typo in documentation fixed.\n\n-- \nGilles Darold.", "msg_date": "Sun, 12 Mar 2023 10:04:34 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Hi Gilles,\r\n\r\nOn Ubuntu 22.04.2 all deb's updated, I have an error on a test\r\nI'll try to find where and why, but I think you should know first.\r\n\r\n1/1 postgresql:pg_dump / pg_dump/002_pg_dump ERROR 24.40s\r\n exit status 1\r\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\r\n✀\r\n ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\r\nstderr:\r\n# Failed test 'only_dump_measurement: should dump CREATE TABLE\r\ntest_compression'\r\n# at /media/hddisk/stephane/postgresql/src/postgresql/src/bin/pg_dump/t/\r\n002_pg_dump.pl line 4729.\r\n# Review only_dump_measurement results in\r\n/media/hddisk/stephane/postgresql/build/testrun/pg_dump/002_pg_dump/data/tmp_test_iJxJ\r\n# Looks like you failed 1 test of 10264.\r\n\r\n(test program exited with status code 1)\r\n――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\r\n\r\n\r\nSummary of Failures:\r\n\r\n1/1 postgresql:pg_dump / 
pg_dump/002_pg_dump ERROR 24.40s exit\r\nstatus 1\r\n\r\n\r\nOk: 0\r\nExpected Fail: 0\r\nFail: 1\r\nUnexpected Pass: 0\r\nSkipped: 0\r\nTimeout: 0\r\n\r\nJoin, only_dump_measurement.sql from\r\n/media/hddisk/stephane/postgresql/build/testrun/pg_dump/002_pg_dump/data/tmp_test_iJxJ\r\nIf you need more information, please ask...\r\n\r\nStéphane.\r\n\r\n\r\nLe dim. 12 mars 2023 à 10:04, Gilles Darold <gilles@migops.com> a écrit :\r\n\r\n> Le 11/03/2023 à 19:51, Gilles Darold a écrit :\r\n> > Le 04/03/2023 à 20:18, Tom Lane a écrit :\r\n> >> As noted, \"childs\" is bad English and \"partitions\" is flat out wrong\r\n> >> (unless you change it to recurse only to partitions, which doesn't\r\n> >> seem like a better definition). We could go with\r\n> >> --[exclude-]table-and-children, or maybe\r\n> >> --[exclude-]table-and-child-tables, but those are getting into\r\n> >> carpal-tunnel-syndrome-inducing territory 🙁. I lack a better\r\n> >> naming suggestion offhand.\r\n> >\r\n> >\r\n> > In attachment is version 3 of the patch, it includes the use of\r\n> > options suggested by Stephane and Tom:\r\n> >\r\n> > --table-and-children,\r\n> >\r\n> > --exclude-table-and-children\r\n> >\r\n> > --exclude-table-data-and-children.\r\n> >\r\n> > Documentation have been updated too.\r\n> >\r\n> >\r\n> > Thanks\r\n> >\r\n>\r\n> New version v4 of the patch attached with a typo in documentation fixed.\r\n>\r\n> --\r\n> Gilles Darold.\r\n>\r\n\r\n\r\n-- \r\n\"Où se posaient les hirondelles avant l'invention du téléphone ?\"\r\n -- Grégoire Lacroix", "msg_date": "Sun, 12 Mar 2023 19:05:42 +0100", "msg_from": "=?UTF-8?Q?St=C3=A9phane_Tachoires?= <stephane.tachoires@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Le 12/03/2023 à 19:05, Stéphane Tachoires a écrit :\n>\n> Hi Gilles,\n>\n> On Ubuntu 22.04.2 all deb's updated, I have an error on a test\n> I'll try to find where and why, but I 
think you should know first.\n>\n> 1/1 postgresql:pg_dump / pg_dump/002_pg_dump        ERROR           \n>  24.40s   exit status 1\n> ―――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――― \n> ✀ \n>  ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\n> stderr:\n> #   Failed test 'only_dump_measurement: should dump CREATE TABLE \n> test_compression'\n> #   at \n> /media/hddisk/stephane/postgresql/src/postgresql/src/bin/pg_dump/t/002_pg_dump.pl \n> <http://002_pg_dump.pl> line 4729.\n> # Review only_dump_measurement results in \n> /media/hddisk/stephane/postgresql/build/testrun/pg_dump/002_pg_dump/data/tmp_test_iJxJ\n> # Looks like you failed 1 test of 10264.\n\nHi Stephane,\n\n\nOdd, I do not have this error when running make installcheck, I have the \nsame OS version as you.\n\n\n     +++ tap check in src/bin/pg_dump +++\n     t/001_basic.pl ................ ok\n     t/002_pg_dump.pl .............. ok\n     t/003_pg_dump_with_server.pl .. ok\n     t/010_dump_connstr.pl ......... ok\n     All tests successful.\n     Files=4, Tests=9531, 11 wallclock secs ( 0.33 usr  0.04 sys + 3.05 \ncusr  1.22 csys =  4.64 CPU)\n     Result: PASS\n\nAnyway this test must be fixed and this is done in version v5 of the \npatch attached here.\n\n\nThanks for the review.\n\n-- \nGilles Darold", "msg_date": "Mon, 13 Mar 2023 16:15:12 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Hi Gilles,\r\nV5 is ok (aside for LLVM 14 deprecated warnings, but that's another\r\nproblem) with meson compile and meson test on Ubuntu 20.04.2.\r\nCode fits well and looks standart, --help explain what it does clearly, and\r\ndocumentation is ok (but as a Français, I'm not an expert in English).\r\n\r\nStéphane\r\n\r\nLe lun. 
13 mars 2023 à 16:15, Gilles Darold <gilles@migops.com> a écrit :\r\n\r\n> Le 12/03/2023 à 19:05, Stéphane Tachoires a écrit :\r\n>\r\n>\r\n> Hi Gilles,\r\n>\r\n> On Ubuntu 22.04.2 all deb's updated, I have an error on a test\r\n> I'll try to find where and why, but I think you should know first.\r\n>\r\n> 1/1 postgresql:pg_dump / pg_dump/002_pg_dump ERROR\r\n> 24.40s exit status 1\r\n> ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\r\n> ✀\r\n> ――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――――\r\n> stderr:\r\n> # Failed test 'only_dump_measurement: should dump CREATE TABLE\r\n> test_compression'\r\n> # at /media/hddisk/stephane/postgresql/src/postgresql/src/bin/pg_dump/t/\r\n> 002_pg_dump.pl line 4729.\r\n> # Review only_dump_measurement results in\r\n> /media/hddisk/stephane/postgresql/build/testrun/pg_dump/002_pg_dump/data/tmp_test_iJxJ\r\n> # Looks like you failed 1 test of 10264.\r\n>\r\n>\r\n> Hi Stephane,\r\n>\r\n>\r\n> Odd, I do not have this error when running make installcheck, I have the\r\n> same OS version as you.\r\n>\r\n>\r\n> +++ tap check in src/bin/pg_dump +++\r\n> t/001_basic.pl ................ ok\r\n> t/002_pg_dump.pl .............. ok\r\n> t/003_pg_dump_with_server.pl .. ok\r\n> t/010_dump_connstr.pl ......... 
ok\r\n> All tests successful.\r\n> Files=4, Tests=9531, 11 wallclock secs ( 0.33 usr 0.04 sys + 3.05\r\n> cusr 1.22 csys = 4.64 CPU)\r\n> Result: PASS\r\n>\r\n> Anyway this test must be fixed and this is done in version v5 of the patch\r\n> attached here.\r\n>\r\n>\r\n> Thanks for the review.\r\n>\r\n> --\r\n> Gilles Darold\r\n>\r\n>\r\n\r\n-- \r\n\"Où se posaient les hirondelles avant l'invention du téléphone ?\"\r\n -- Grégoire Lacroix", "msg_date": "Tue, 14 Mar 2023 10:49:24 +0100", "msg_from": "=?UTF-8?Q?St=C3=A9phane_Tachoires?= <stephane.tachoires@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nV5 is ok (aside for LLVM 14 deprecated warnings, but that's another problem) with meson compile and meson test on Ubuntu 20.04.2.\r\nCode fits well and looks standard, --help explains what it does 
clearly, and documentation is ok (but as a Français, I'm not an expert in English).", "msg_date": "Tue, 14 Mar 2023 09:50:32 +0000", "msg_from": "stephane tachoires <stephane.tachoires@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root\n table" }, { "msg_contents": "Le 14/03/2023 à 10:50, stephane tachoires a écrit :\n> The following review has been posted through the commitfest application:\n> make installcheck-world: tested, passed\n> Implements feature: tested, passed\n> Spec compliant: tested, passed\n> Documentation: tested, passed\n>\n> V5 is ok (aside for LLVM 14 deprecated warnings, but that's another problem) with meson compile and meson test on Ubuntu 20.04.2.\n> Code fits well and looks standard, --help explains what it does clearly, and documentation is ok (but as a Français, I'm not an expert in English).\n\n\nThanks Stéphane, I've changed commit fest status to \"Ready for committers\".\n\n-- \nGilles Darold\n\n\n\n", "msg_date": "Tue, 14 Mar 2023 11:44:59 +0100", "msg_from": "Gilles Darold <gilles@migops.com>", "msg_from_op": true, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" }, { "msg_contents": "Gilles Darold <gilles@migops.com> writes:\n> Thanks Stéphane, I've changed commit fest status to \"Ready for committers\".\n\nPushed with some minor editing.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 14 Mar 2023 16:10:16 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Proposal] Allow pg_dump to include all child tables with the\n root table" } ]
[ { "msg_contents": "Hi,\n\nThe use of the ringbuffer in VACUUM often causes very substantial slowdowns.\n\nThe primary reason for that is that often most/all the buffers in the\nringbuffer have been dirtied when they were processed, with an associated WAL\nrecord. When we then reuse the buffer via the (quite small) ringbuffer, we\nneed to flush the WAL before reclaiming the buffer. A synchronous flush of\nthe WAL every few buffers ends up as a very significant bottleneck, unless you\nhave local low-latency durable storage (i.e. local storage with some sort of\npower protection).\n\nThe slowdown caused by the frequent WAL flushes is very significant.\n\nA secondary issue, when we end up doing multiple passes, we'll have to re-read\ndata into shared_buffers, when we just wrote it out and evicted it.\n\n\nAn example:\n\nOn a local SSD with decently fast fdatasync([1]). Table size is 3322MB, with ~10%\nupdated, ~30% deleted tuples, and a single index. m_w_m is large enough to do\nthis in one pass. I used pg_prewarm() of another relation to ensure the\nvacuumed table isn't in s_b (otherwise ringbuffers aren't doing anything).\n\ns_b ringbuffer enabled time wal_syncs wal_sync_time\n128MB 1 77797ms\n128MB 0 13676ms 241 2538ms\n8GB 1 72834ms 23976 51989ms\n8GB 0 9544ms 150 1634ms\n\n\nsee [2] for logs / stats of the 8GB run. All the data here is in the OS page\ncache, so we don't even pay the real-price for reading the data multiple\ntimes.\n\n\nOn cloud hardware with higher fsync latency I've seen > 15x time differences\nbetween using the ringbuffers and avoiding them by using pg_prewarm.\n\n\nOf course there's a good reason we have the ringbuffer - we don't want\nmaintenance operations to completely disturb the buffer pool and harm the\nproduction workload. But if, e.g., the database isn't available due to\nanti-wraparound measures, there's no other workload to protect, and the\nringbuffer substantially reduces availability. 
Initial data loads could\nsimilarly benefit.\n\n\nTherefore I'd like to add an option to the VACUUM command to use to disable\nthe use of the ringbuffer. Not sure about the name yet.\n\n\nI think we should auto-enable that mode once we're using the failsafe mode,\nsimilar to [auto]vacuum cost delays getting disabled\n(c.f. lazy_check_wraparound_failsafe()). If things are bad enough that we're\nsoon going to shut down, we want to be aggressive.\n\n\nGreetings,\n\nAndres Freund\n\n[1] according to pg_test_fsync:\nfdatasync 769.189 ops/sec 1300 usecs/op\n\n\n[2]\n\nFor the s_b 128MB case:\n\nringbuffers enabled:\n\n2023-01-11 10:24:58.726 PST [355353][client backend][2/19:0][psql] INFO: aggressively vacuuming \"postgres.public.copytest_0\"\n2023-01-11 10:26:19.488 PST [355353][client backend][2/19:0][psql] INFO: finished vacuuming \"postgres.public.copytest_0\": index scans: 1\n\tpages: 0 removed, 424975 remain, 424975 scanned (100.00% of total)\n\ttuples: 4333300 removed, 6666700 remain, 0 are dead but not yet removable\n\tremovable cutoff: 2981, which was 0 XIDs old when operation ended\n\tnew relfrozenxid: 2981, which is 102 XIDs ahead of previous value\n\tfrozen: 424975 pages from table (100.00% of total) had 6666700 tuples frozen\n\tindex scan needed: 424975 pages from table (100.00% of total) had 4325101 dead item identifiers removed\n\tindex \"copytest_0_id_idx\": pages: 10032 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n\tI/O timings: read: 2284.292 ms, write: 4325.009 ms\n\tavg read rate: 83.203 MB/s, avg write rate: 83.199 MB/s\n\tbuffer usage: 425044 hits, 860113 misses, 860074 dirtied\n\tWAL usage: 1709902 records, 434990 full page images, 2273501683 bytes\n\tsystem usage: CPU: user: 11.62 s, system: 11.86 s, elapsed: 80.76 s\n\n┌─────────────┬─────────┬────────────┬──────────────────┬───────────┬──────────┬────────────────┬───────────────┬───────────────────────────────┐\n│ wal_records │ wal_fpi │ wal_bytes │ wal_buffers_full │ wal_write │ 
wal_sync │ wal_write_time │ wal_sync_time │ stats_reset │\n├─────────────┼─────────┼────────────┼──────────────────┼───────────┼──────────┼────────────────┼───────────────┼───────────────────────────────┤\n│ 8092795 │ 443356 │ 2999358740 │ 1569 │ 28651 │ 27081 │ 1874.391 │ 59895.674 │ 2023-01-11 10:24:58.664859-08 │\n└─────────────┴─────────┴────────────┴──────────────────┴───────────┴──────────┴────────────────┴───────────────┴───────────────────────────────┘\n\n\nringbuffers disabled:\n\n2023-01-11 10:23:05.081 PST [355054][client backend][2/19:0][psql] INFO: aggressively vacuuming \"postgres.public.copytest_0\"\n2023-01-11 10:23:18.755 PST [355054][client backend][2/19:0][psql] INFO: finished vacuuming \"postgres.public.copytest_0\": index scans: 1\n\tpages: 0 removed, 424979 remain, 424979 scanned (100.00% of total)\n\ttuples: 4333300 removed, 6666700 remain, 0 are dead but not yet removable\n\tremovable cutoff: 2879, which was 0 XIDs old when operation ended\n\tnew relfrozenxid: 2879, which is 102 XIDs ahead of previous value\n\tfrozen: 424979 pages from table (100.00% of total) had 6666700 tuples frozen\n\tindex scan needed: 424979 pages from table (100.00% of total) had 4325176 dead item identifiers removed\n\tindex \"copytest_0_id_idx\": pages: 10032 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n\tI/O timings: read: 1247.366 ms, write: 2888.756 ms\n\tavg read rate: 491.485 MB/s, avg write rate: 491.395 MB/s\n\tbuffer usage: 424927 hits, 860242 misses, 860083 dirtied\n\tWAL usage: 1709918 records, 434994 full page images, 2273503049 bytes\n\tsystem usage: CPU: user: 5.42 s, system: 6.26 s, elapsed: 13.67 s\n\n┌─────────────┬─────────┬────────────┬──────────────────┬───────────┬──────────┬────────────────┬───────────────┬───────────────────────────────┐\n│ wal_records │ wal_fpi │ wal_bytes │ wal_buffers_full │ wal_write │ wal_sync │ wal_write_time │ wal_sync_time │ stats_reset 
│\n├─────────────┼─────────┼────────────┼──────────────────┼───────────┼──────────┼────────────────┼───────────────┼───────────────────────────────┤\n│ 8092963 │ 443362 │ 2999373996 │ 212190 │ 212333 │ 241 │ 1209.516 │ 2538.706 │ 2023-01-11 10:23:05.004783-08 │\n└─────────────┴─────────┴────────────┴──────────────────┴───────────┴──────────┴────────────────┴───────────────┴───────────────────────────────┘\n\n\n\nFor the s_b 8GB case:\n\nringbuffers enabled:\n\n2023-01-11 10:04:12.479 PST [352665][client backend][2/19:0][psql] INFO: aggressively vacuuming \"postgres.public.copytest_0\"\n2023-01-11 10:05:25.312 PST [352665][client backend][2/19:0][psql] INFO: finished vacuuming \"postgres.public.copytest_0\": index scans: 1\n\tpages: 0 removed, 424977 remain, 424977 scanned (100.00% of total)\n\ttuples: 4333300 removed, 6666700 remain, 0 are dead but not yet removable\n\tremovable cutoff: 2675, which was 0 XIDs old when operation ended\n\tnew relfrozenxid: 2675, which is 102 XIDs ahead of previous value\n\tfrozen: 424977 pages from table (100.00% of total) had 6666700 tuples frozen\n\tindex scan needed: 424977 pages from table (100.00% of total) had 4325066 dead item identifiers removed\n\tindex \"copytest_0_id_idx\": pages: 10032 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n\tI/O timings: read: 2610.875 ms, write: 4177.842 ms\n\tavg read rate: 81.688 MB/s, avg write rate: 87.515 MB/s\n\tbuffer usage: 523611 hits, 761552 misses, 815868 dirtied\n\tWAL usage: 1709910 records, 434992 full page images, 2273502729 bytes\n\tsystem usage: CPU: user: 11.00 s, system: 11.86 s, elapsed: 72.83 s\n┌─────────────┬─────────┬────────────┬──────────────────┬───────────┬──────────┬────────────────┬───────────────┬───────────────────────────────┐\n│ wal_records │ wal_fpi │ wal_bytes │ wal_buffers_full │ wal_write │ wal_sync │ wal_write_time │ wal_sync_time │ stats_reset 
│\n├─────────────┼─────────┼────────────┼──────────────────┼───────────┼──────────┼────────────────┼───────────────┼───────────────────────────────┤\n│ 8092707 │ 443358 │ 2999354090 │ 42259 │ 66227 │ 23976 │ 2050.963 │ 51989.099 │ 2023-01-11 10:04:12.404054-08 │\n└─────────────┴─────────┴────────────┴──────────────────┴───────────┴──────────┴────────────────┴───────────────┴───────────────────────────────┘\n\n\nringbuffers disabled:\n\n2023-01-11 10:08:48.414 PST [353287][client backend][3/19:0][psql] INFO: aggressively vacuuming \"postgres.public.copytest_0\"\n2023-01-11 10:08:57.956 PST [353287][client backend][3/19:0][psql] INFO: finished vacuuming \"postgres.public.copytest_0\": index scans: 1\n\tpages: 0 removed, 424977 remain, 424977 scanned (100.00% of total)\n\ttuples: 4333300 removed, 6666700 remain, 0 are dead but not yet removable\n\tremovable cutoff: 2777, which was 0 XIDs old when operation ended\n\tnew relfrozenxid: 2777, which is 102 XIDs ahead of previous value\n\tfrozen: 424977 pages from table (100.00% of total) had 6666700 tuples frozen\n\tindex scan needed: 424976 pages from table (100.00% of total) had 4325153 dead item identifiers removed\n\tindex \"copytest_0_id_idx\": pages: 10032 in total, 0 newly deleted, 0 currently deleted, 0 reusable\n\tI/O timings: read: 1040.230 ms, write: 0.000 ms\n\tavg read rate: 312.016 MB/s, avg write rate: 356.242 MB/s\n\tbuffer usage: 904078 hits, 381084 misses, 435101 dirtied\n\tWAL usage: 1709908 records, 434992 full page images, 2273499663 bytes\n\tsystem usage: CPU: user: 5.57 s, system: 2.26 s, elapsed: 9.54 s\n\n┌─────────────┬─────────┬────────────┬──────────────────┬───────────┬──────────┬────────────────┬───────────────┬───────────────────────────────┐\n│ wal_records │ wal_fpi │ wal_bytes │ wal_buffers_full │ wal_write │ wal_sync │ wal_write_time │ wal_sync_time │ stats_reset 
│\n├─────────────┼─────────┼────────────┼──────────────────┼───────────┼──────────┼────────────────┼───────────────┼───────────────────────────────┤\n│ 8092933 │ 443358 │ 2999364596 │ 236354 │ 236398 │ 150 │ 1166.314 │ 1634.408 │ 2023-01-11 10:08:48.350328-08 │\n└─────────────┴─────────┴────────────┴──────────────────┴───────────┴──────────┴────────────────┴───────────────┴───────────────────────────────┘\n\n\n", "msg_date": "Wed, 11 Jan 2023 10:27:20 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Jan 11, 2023 at 10:27 AM Andres Freund <andres@anarazel.de> wrote:\n> Therefore I'd like to add an option to the VACUUM command to use to disable\n> the use of the ringbuffer. Not sure about the name yet.\n\nSounds like a good idea.\n\n> I think we should auto-enable that mode once we're using the failsafe mode,\n> similar to [auto]vacuum cost delays getting disabled\n> (c.f. lazy_check_wraparound_failsafe()). 
If things are bad enough that we're\n> soon going to shut down, we want to be aggressive.\n\n+1\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 11 Jan 2023 10:35:19 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 10:27:20 -0800, Andres Freund wrote:\n> On cloud hardware with higher fsync latency I've seen > 15x time differences\n> between using the ringbuffers and avoiding them by using pg_prewarm.\n\nA slightly edited version of what I've in the past to defeat the ringbuffers\nusing pg_prewarm, as I think it might be useful for others:\n\nWITH what_rel AS (\n SELECT 'copytest_0'::regclass AS vacuum_me\n),\nwhat_to_prefetch AS (\n SELECT vacuum_me, greatest(heap_blks_total - 1, 0) AS last_block,\n CASE WHEN phase = 'scanning heap' THEN heap_blks_scanned ELSE heap_blks_vacuumed END AS current_pos\n FROM what_rel, pg_stat_progress_vacuum\n WHERE relid = vacuum_me AND phase IN ('scanning heap', 'vacuuming heap')\n)\nSELECT\n vacuum_me, current_pos,\n pg_prewarm(vacuum_me, 'buffer', 'main', current_pos, least(current_pos + 10000, last_block))\nFROM what_to_prefetch\n\\watch 0.1\n\nHaving this running in the background brings the s_b=128MB, ringbuffer enabled\ncase down from 77797ms to 14838ms. Close to the version with the ringbuffer\ndisabled.\n\n\nUnfortunately, afaik, that trick isn't currently possible for the index vacuum\nphase, as we don't yet expose the current scan position. 
And not every index\nmight be as readily prefetchable as just prefetching the next 10k blocks from\nthe current position.\n\nThat's not too bad if your indexes are small, but unfortunately that's not\nalways the case...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 10:54:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 10:35:19 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 11, 2023 at 10:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > Therefore I'd like to add an option to the VACUUM command to use to disable\n> > the use of the ringbuffer. Not sure about the name yet.\n> \n> Sounds like a good idea.\n\nAny idea about the name? The obvious thing is to reference ring buffers in the\noption name, but that's more of an implementation detail...\n\nSome ideas:\n\nUSE_RING_BUFFERS on|off\nSCAN_PROTECTION on|off\nREUSE_BUFFERS on|off\nLIMIT_BUFFER_USAGE on|off\n\nRegards,\n\nAndres\n\n\n", "msg_date": "Wed, 11 Jan 2023 10:58:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Jan 11, 2023 at 10:58 AM Andres Freund <andres@anarazel.de> wrote:\n> Any idea about the name? The obvious thing is to reference ring buffers in the\n> option name, but that's more of an implementation detail...\n\nWhat are the chances that anybody using this feature via a manual\nVACUUM command will also use INDEX_CLEANUP off? It's not really\nsupposed to be used routinely, at all. Right? It's just for\nemergencies.\n\nPerhaps it can be tied to INDEX_CLEANUP=off? 
That makes it hard to get\njust the behavior you want when testing VACUUM, but maybe that doesn't\nmatter.\n\nRealistically, most of the value here comes from changing the failsafe\nbehavior, which doesn't require the user to know anything about\nVACUUM. I know that AWS has reduced the vacuum_failsafe_age default on\nRDS to 1.2 billion (a decision made before I joined Amazon), so it is\nalready something AWS lean on quite a bit where available.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:06:26 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 11:06:26 -0800, Peter Geoghegan wrote:\n> On Wed, Jan 11, 2023 at 10:58 AM Andres Freund <andres@anarazel.de> wrote:\n> > Any idea about the name? The obvious thing is to reference ring buffers in the\n> > option name, but that's more of an implementation detail...\n> \n> What are the chances that anybody using this feature via a manual\n> VACUUM command will also use INDEX_CLEANUP off? It's not really\n> supposed to be used routinely, at all. Right? It's just for\n> emergencies.\n\nI think it's also quite useful for e.g. vacuuming after initial data loads or\nif you need to do a first vacuum after a lot of bloat accumulated due to a\nstuck transaction.\n\n\n> Perhaps it can be tied to INDEX_CLEANUP=off? That makes it hard to get\n> just the behavior you want when testing VACUUM, but maybe that doesn't\n> matter.\n\nI don't like that - it's also quite useful to disable use of ringbuffers when\nyou actually need to clean up indexes. 
Especially when we have a lot of dead\ntuples we'll rescan indexes over and over...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:18:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Jan 11, 2023 at 11:18 AM Andres Freund <andres@anarazel.de> wrote:\n> I don't like that - it's also quite useful to disable use of ringbuffers when\n> you actually need to clean up indexes. Especially when we have a lot of dead\n> tuples we'll rescan indexes over and over...\n\nThat's a fair point.\n\nMy vote goes to \"REUSE_BUFFERS\", then.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:20:51 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Jan 11, 2023 at 10:58:54AM -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-11 10:35:19 -0800, Peter Geoghegan wrote:\n> > On Wed, Jan 11, 2023 at 10:27 AM Andres Freund <andres@anarazel.de> wrote:\n> > > Therefore I'd like to add an option to the VACUUM command to use to disable\n> > > the use of the ringbuffer. Not sure about the name yet.\n> > \n> > Sounds like a good idea.\n> \n> Any idea about the name? 
The obvious thing is to reference ring buffers in the\n> option name, but that's more of an implementation detail...\n> \n> Some ideas:\n> \n> USE_RING_BUFFERS on|off\n> REUSE_BUFFERS on|off\n\n+1 for either of these.\n\nI don't think it's an issue to expose implementation details here.\nAnyone who wants to change this will know about the implementation\ndetails that they're changing, and it's better to refer to it by the\nsame/similar name and not by some other name that's hard to find.\n\nBTW I can't see that the ring buffer is currently exposed in any\nuser-facing docs for COPY/ALTER/VACUUM/CREATE ?\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 11 Jan 2023 14:38:34 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 14:38:34 -0600, Justin Pryzby wrote:\n> On Wed, Jan 11, 2023 at 10:58:54AM -0800, Andres Freund wrote:\n> > Some ideas:\n> > \n> > USE_RING_BUFFERS on|off\n> > REUSE_BUFFERS on|off\n> \n> +1 for either of these.\n\nThen I'd go for REUSE_BUFFERS. What made you prefer it over\nLIMIT_BUFFER_USAGE?\n\nUSE_BUFFER_ACCESS_STRATEGY would be a name tied to the implementation that's\nnot awful, I think..\n\n\n> I don't think it's an issue to expose implementation details here.\n> Anyone who wants to change this will know about the implementation\n> details that they're changing, and it's better to refer to it by the\n> same/similar name and not by some other name that's hard to find.\n\nA ringbuffer could refer to a lot of things other than something limiting\nbuffer usage, that's why I don't like it.\n\n\n> BTW I can't see that the ring buffer is currently exposed in any\n> user-facing docs for COPY/ALTER/VACUUM/CREATE ?\n\nYea, there's surprisingly little in the docs about it, given how much it\ninfluences behaviour. 
It's mentioned in tablesample-method.sgml, but without\nexplanation - and it's a page documenting C API...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 13:09:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Wed, Jan 11, 2023 at 11:18 AM Andres Freund <andres@anarazel.de> wrote:\n>> I don't like that - it's also quite useful to disable use of ringbuffers when\n>> you actually need to clean up indexes. Especially when we have a lot of dead\n>> tuples we'll rescan indexes over and over...\n\n> That's a fair point.\n\n> My vote goes to \"REUSE_BUFFERS\", then.\n\nI wonder whether it could make sense to allow a larger ringbuffer size,\nrather than just the limit cases of \"on\" and \"off\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:18:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 16:18:34 -0500, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Wed, Jan 11, 2023 at 11:18 AM Andres Freund <andres@anarazel.de> wrote:\n> >> I don't like that - it's also quite useful to disable use of ringbuffers when\n> >> you actually need to clean up indexes. Especially when we have a lot of dead\n> >> tuples we'll rescan indexes over and over...\n> \n> > That's a fair point.\n> \n> > My vote goes to \"REUSE_BUFFERS\", then.\n> \n> I wonder whether it could make sense to allow a larger ringbuffer size,\n> rather than just the limit cases of \"on\" and \"off\".\n\nI can see that making sense, particularly if we were to later extend this to\nother users of ringbuffers. E.g. 
COPYs us of the ringbuffer makes loading of\ndata > 16MB but also << s_b vastly slower, but it can still be very important\nto use if there's lots of parallel processes loading data.\n\nMaybe BUFFER_USAGE_LIMIT, with a value from -1 to N, with -1 indicating the\ndefault value, 0 preventing use of a buffer access strategy, and 1..N\nindicating the size in blocks?\n\nWould we want to set an upper limit lower than implied by the memory limit for\nthe BufferAccessStrategy allocation?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 13:39:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Jan 11, 2023 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-01-11 16:18:34 -0500, Tom Lane wrote:\n> > Peter Geoghegan <pg@bowt.ie> writes:\n> > > On Wed, Jan 11, 2023 at 11:18 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > >> I don't like that - it's also quite useful to disable use of\n> ringbuffers when\n> > >> you actually need to clean up indexes. Especially when we have a lot\n> of dead\n> > >> tuples we'll rescan indexes over and over...\n> >\n> > > That's a fair point.\n> >\n> > > My vote goes to \"REUSE_BUFFERS\", then.\n> >\n> > I wonder whether it could make sense to allow a larger ringbuffer size,\n> > rather than just the limit cases of \"on\" and \"off\".\n>\n> I can see that making sense, particularly if we were to later extend this\n> to\n> other users of ringbuffers. E.g. 
COPYs us of the ringbuffer makes loading\n> of\n> data > 16MB but also << s_b vastly slower, but it can still be very\n> important\n> to use if there's lots of parallel processes loading data.\n>\n>\nShould we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n\"temp_buffers\" settings?\n\nThen give VACUUM a (BUFFER_POOL=ring*|shared) option?\n\nI think making DBAs aware of this dynamic and making the ring buffer usage\nuser-facing is beneficial in its own right (at least, the concept that\nchanges done by vacuum don't impact shared_buffers, regardless of how that\nnon-impact manifests). But I don't see much benefit trying to come up with\na different name.\n\nDavid J.\n\nOn Wed, Jan 11, 2023 at 2:39 PM Andres Freund <andres@anarazel.de> wrote:Hi,\n\nOn 2023-01-11 16:18:34 -0500, Tom Lane wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n> > On Wed, Jan 11, 2023 at 11:18 AM Andres Freund <andres@anarazel.de> wrote:\n> >> I don't like that - it's also quite useful to disable use of ringbuffers when\n> >> you actually need to clean up indexes. Especially when we have a lot of dead\n> >> tuples we'll rescan indexes over and over...\n> \n> > That's a fair point.\n> \n> > My vote goes to \"REUSE_BUFFERS\", then.\n> \n> I wonder whether it could make sense to allow a larger ringbuffer size,\n> rather than just the limit cases of \"on\" and \"off\".\n\nI can see that making sense, particularly if we were to later extend this to\nother users of ringbuffers. E.g. 
COPYs us of the ringbuffer makes loading of\ndata > 16MB but also << s_b vastly slower, but it can still be very important\nto use if there's lots of parallel processes loading data.Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and \"temp_buffers\" settings?Then give VACUUM a (BUFFER_POOL=ring*|shared) option?I think making DBAs aware of this dynamic and making the ring buffer usage user-facing is beneficial in its own right (at least, the concept that changes done by vacuum don't impact shared_buffers, regardless of how that non-impact manifests).  But I don't see much benefit trying to come up with a different name.David J.", "msg_date": "Wed, 11 Jan 2023 17:26:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 17:26:19 -0700, David G. Johnston wrote:\n> Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> \"temp_buffers\" settings?\n\nThe different types of ring buffers have different sizes, for good reasons. So\nI don't see that working well. I also think it'd be more often useful to\ncontrol this on a statement basis - if you have a parallel import tool that\nstarts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. Of\ncourse each session can change the ring buffer settings, but still.\n\n\n> Then give VACUUM a (BUFFER_POOL=ring*|shared) option?\n\nThat seems likely to mislead, because it'd still use shared buffers when the\nblocks are already present. The ring buffers aren't a separate buffer pool,\nthey're a subset of the normal bufferpool. 
Lookup is done normally, only when\na page isn't found, the search for a victim buffer first tries to use a buffer\nfrom the ring.\n\n\n> I think making DBAs aware of this dynamic and making the ring buffer usage\n> user-facing is beneficial in its own right (at least, the concept that\n> changes done by vacuum don't impact shared_buffers, regardless of how that\n> non-impact manifests).\n\nVACUUM can end up dirtying all of shared buffers, even with the ring buffer in\nuse...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:36:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nSo, I attached a rough implementation of both the autovacuum failsafe\nreverts to shared buffers and the vacuum option (no tests or docs or\nanything).\n\nThe first three patches in the set are just for enabling use of shared\nbuffers in failsafe mode for autovacuum. I haven't actually ensured it\nworks (i.e. triggering failsafe mode and checking the stats for whether\nor not shared buffers were used).\n\nI was wondering about the status of the autovacuum wraparound failsafe\ntest suggested in [1]. I don't see it registered for the March's\ncommitfest. I'll probably review it since it will be useful for this\npatchset.\n\nThe first patch in the set is to free the BufferAccessStrategy object\nthat is made in do_autovacuum() -- I don't see when the memory context\nit is allocated in is destroyed, so it seems like it might be a leak?\n\nThe last patch in the set is a trial implementation of the VACUUM option\nsuggested -- BUFFER_USAGE_LIMIT. 
More on that below.\n\nOn Wed, Jan 11, 2023 at 4:39 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-01-11 16:18:34 -0500, Tom Lane wrote:\n> > Peter Geoghegan <pg@bowt.ie> writes:\n> > > On Wed, Jan 11, 2023 at 11:18 AM Andres Freund <andres@anarazel.de>\n> wrote:\n> > >> I don't like that - it's also quite useful to disable use of\n> ringbuffers when\n> > >> you actually need to clean up indexes. Especially when we have a lot\n> of dead\n> > >> tuples we'll rescan indexes over and over...\n> >\n> > > That's a fair point.\n> >\n> > > My vote goes to \"REUSE_BUFFERS\", then.\n> >\n> > I wonder whether it could make sense to allow a larger ringbuffer size,\n> > rather than just the limit cases of \"on\" and \"off\".\n>\n> I can see that making sense, particularly if we were to later extend this\n> to\n> other users of ringbuffers. E.g. COPYs us of the ringbuffer makes loading\n> of\n> data > 16MB but also << s_b vastly slower, but it can still be very\n> important\n> to use if there's lots of parallel processes loading data.\n>\n> Maybe BUFFER_USAGE_LIMIT, with a value from -1 to N, with -1 indicating the\n> default value, 0 preventing use of a buffer access strategy, and 1..N\n> indicating the size in blocks?\n>\n>\nI have found the implementation you suggested very hard to use.\nThe attached fourth patch in the set implements it the way you suggest.\nI had to figure out what number to set the BUFFER_USAGE_LIMIT to -- and,\nsince I don't specify shared buffers in units of nbuffer, it's pretty\nannoying to have to figure out a valid number.\n\nI think that it would be better to have it be either a percentage of\nshared buffers or a size in units of bytes/kb/mb like that of shared\nbuffers.\n\nUsing a fraction or percentage appeals to me because you don't need to\nreference your shared buffers setting and calculate what size you want\nto set it to. 
Also, parsing the size in different units sounds like more\nwork.\n\nUnfortunately, the fraction doesn't really work if we cap the ring size\nof a buffer access strategy to NBuffers / 8. Also, there are other\nissues like what would 0% and 100% mean.\n\nI have a list of other questions, issues, and TODOs related to the code\nI wrote to implement BUFFER_USAGE_LIMIT, but I'm not sure those are\nworth discussing until we shape up the interface.\n\n\n> Would we want to set an upper limit lower than implied by the memory limit\n> for\n> the BufferAccessStrategy allocation?\n>\n>\nSo, I was wondering what you thought about NBuffers / 8 (the current\nlimit). Does it make sense?\n\nIf we clamp the user-specified value to this, I think we definitely need\nto inform them through some kind of logging or message. I am sure there\nare lots of other gucs doing this -- do you know any off the top of your\nhead?\n\n- Melanie\n\n [1]\nhttps://www.postgresql.org/message-id/flat/CAB8KJ%3Dj1b3kscX8Cg5G%3DQ39ZQsv2x4URXsuTueJLz%3DfcvJ3eoQ%40mail.gmail.com#ee67664e85c4d11596a92cc71780d29c", "msg_date": "Wed, 22 Feb 2023 16:32:53 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-02-22 16:32:53 -0500, Melanie Plageman wrote:\n> I was wondering about the status of the autovacuum wraparound failsafe\n> test suggested in [1]. I don't see it registered for the March's\n> commitfest. I'll probably review it since it will be useful for this\n> patchset.\n\nIt's pretty hard to make it work reliably. 
I was suggesting somewhere that we\nought to add an EMERGENCY parameter to manual VACUUMs to allow testing that\npath a tad more easily.\n\n\n> The first patch in the set is to free the BufferAccessStrategy object\n> that is made in do_autovacuum() -- I don't see when the memory context\n> it is allocated in is destroyed, so it seems like it might be a leak?\n\nThe backend shuts down just after, so that's not a real issue. Not that it'd\nhurt to fix it.\n\n\n> > I can see that making sense, particularly if we were to later extend this\n> > to\n> > other users of ringbuffers. E.g. COPYs us of the ringbuffer makes loading\n> > of\n> > data > 16MB but also << s_b vastly slower, but it can still be very\n> > important\n> > to use if there's lots of parallel processes loading data.\n> >\n> > Maybe BUFFER_USAGE_LIMIT, with a value from -1 to N, with -1 indicating the\n> > default value, 0 preventing use of a buffer access strategy, and 1..N\n> > indicating the size in blocks?\n\n> I have found the implementation you suggested very hard to use.\n> The attached fourth patch in the set implements it the way you suggest.\n> I had to figure out what number to set the BUFER_USAGE_LIMIT to -- and,\n> since I don't specify shared buffers in units of nbuffer, it's pretty\n> annoying to have to figure out a valid number.\n\nI think we should be able to parse it in a similar way to how we parse\nshared_buffers. 
You could even implement this as a GUC that is then set by\nVACUUM (similar to how VACUUM FREEZE is implemented).\n\n\n> I think that it would be better to have it be either a percentage of\n> shared buffers or a size in units of bytes/kb/mb like that of shared\n> buffers.\n\nI don't think a percentage of shared buffers works particularly well - you\nvery quickly run into the ringbuffer becoming impractically big.\n\n\n> > Would we want to set an upper limit lower than implied by the memory limit\n> > for\n> > the BufferAccessStrategy allocation?\n> >\n> >\n> So, I was wondering what you thought about NBuffers / 8 (the current\n> limit). Does it make sense?\n\nThat seems *way* too big. Imagine how large allocations we'd end up with a\nshared_buffers size of a few TB.\n\nI'd probably make it a hard error at 1GB and a silent cap at NBuffers / 2 or\nsuch.\n\n\n> @@ -547,7 +547,7 @@ bt_check_every_level(Relation rel, Relation heaprel, bool heapkeyspace,\n> \tstate->targetcontext = AllocSetContextCreate(CurrentMemoryContext,\n> \t\t\t\t\t\t\t\t\t\t\t\t \"amcheck context\",\n> \t\t\t\t\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\n> -\tstate->checkstrategy = GetAccessStrategy(BAS_BULKREAD);\n> +\tstate->checkstrategy = GetAccessStrategy(BAS_BULKREAD, -1);\n> \n> \t/* Get true root block from meta-page */\n> \tmetapage = palloc_btree_page(state, BTREE_METAPAGE);\n\nChanging this everywhere seems pretty annoying, particularly because I suspect\na bunch of extensions also use GetAccessStrategy(). 
How about a\nGetAccessStrategyExt(), GetAccessStrategyCustomSize() or such?\n\n\n> BufferAccessStrategy\n> -GetAccessStrategy(BufferAccessStrategyType btype)\n> +GetAccessStrategy(BufferAccessStrategyType btype, int buffers)\n> {\n> \tBufferAccessStrategy strategy;\n> \tint\t\t\tring_size;\n> +\tconst char *strategy_name = btype_get_name(btype);\n\nShouldn't be executed when we don't need it.\n\n\n> +\tif (btype != BAS_VACUUM)\n> +\t{\n> +\t\tif (buffers == 0)\n> +\t\t\telog(ERROR, \"Use of shared buffers unsupported for buffer access strategy: %s. nbuffers must be -1.\",\n> +\t\t\t\t\tstrategy_name);\n> +\n> +\t\tif (buffers > 0)\n> +\t\t\telog(ERROR, \"Specification of ring size in buffers unsupported for buffer access strategy: %s. nbuffers must be -1.\",\n> +\t\t\t\t\tstrategy_name);\n> +\t}\n> +\n> +\t// TODO: DEBUG logging message for dev?\n> +\tif (buffers == 0)\n> +\t\tbtype = BAS_NORMAL;\n\nGetAccessStrategy() often can be executed hundreds of thousands of times a\nsecond, so I'm very sceptical that adding log messages to it useful.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 27 Feb 2023 13:21:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Thu, Jan 12, 2023 at 6:06 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> On 2023-01-11 17:26:19 -0700, David G. Johnston wrote:\n> > Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> > \"temp_buffers\" settings?\n>\n> The different types of ring buffers have different sizes, for good reasons. So\n> I don't see that working well. I also think it'd be more often useful to\n> control this on a statement basis - if you have a parallel import tool that\n> starts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. 
Of\n> course each session can change the ring buffer settings, but still.\n\nHow about having GUCs for each ring buffer (bulk_read_ring_buffers,\nbulk_write_ring_buffers, vacuum_ring_buffers - ah, 3 more new GUCs)?\nThese options can help especially when statement level controls aren't\neasy to add (COPY, CREATE TABLE AS/CTAS, REFRESH MAT VIEW/RMV)? If\nneeded users can also set them at the system level. For instance, one\ncan set bulk_write_ring_buffers to other than 16MB or -1 to disable\nthe ring buffer to use shared_buffers and run a bunch of bulk write\nqueries.\n\nAlthough I'm not quite opposing the idea of statement level controls\n(like the VACUUM one proposed here), it is better to make these ring\nbuffer sizes configurable across the system to help with the other\nsimilar cases e.g., a CTAS or RMV can help subsequent reads from\nshared buffers if ring buffer is skipped.\n\nThoughts?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Feb 2023 13:46:32 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Thu, Feb 23, 2023 at 3:03 AM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Hi,\n>\n> So, I attached a rough implementation of both the autovacuum failsafe\n> reverts to shared buffers and the vacuum option (no tests or docs or\n> anything).\n\nThanks for the patches. I have some comments.\n\n0001:\n1. I don't quite understand the need for this 0001 patch. Firstly,\nbstrategy allocated once per autovacuum worker in AutovacMemCxt which\ngoes away with the process. Secondly, the worker exits after\ndo_autovacuum() with which memory context is gone. I think this patch\nis unnecessary unless I'm missing something.\n\n0002:\n1. Don't we need to remove vac_strategy for analyze.c as well? 
It's\neven more meaningless there than in vacuum.c, as we're passing bstrategy to\nall required functions.\n\n0004:\n1. I think no multiple sentences in a single error message. How about\n\"of %d, changing it to %d\"?\n+ elog(WARNING, \"buffer_usage_limit %d is below the\nminimum buffer_usage_limit of %d. setting it to %d\",\n\n2. Typically, postgres error messages start with lowercase letters,\nhints and detail messages start with uppercase letters.\n+ if (buffers == 0)\n+ elog(ERROR, \"Use of shared buffers unsupported for buffer\naccess strategy: %s. nbuffers must be -1.\",\n+ strategy_name);\n+\n+ if (buffers > 0)\n+ elog(ERROR, \"Specification of ring size in buffers\nunsupported for buffer access strategy: %s. nbuffers must be -1.\",\n+ strategy_name);\n\n3. A function for this seems unnecessary, especially when a static\narray would do the needful, something like forkNames[].\n+static const char *\n+btype_get_name(BufferAccessStrategyType btype)\n+{\n+ switch (btype)\n+ {\n\n4. Why are these assumptions needed? Can't we simplify by doing\nvalidations on the new buffers parameter only when the btype is\nBAS_VACUUM?\n+ if (buffers == 0)\n+ elog(ERROR, \"Use of shared buffers unsupported for buffer\naccess strategy: %s. nbuffers must be -1.\",\n+ strategy_name);\n\n+ // TODO: DEBUG logging message for dev?\n+ if (buffers == 0)\n+ btype = BAS_NORMAL;\n\n5. 
Is this change needed for this patch?\n default:\n elog(ERROR, \"unrecognized buffer access strategy: %d\",\n- (int) btype);\n- return NULL; /* keep compiler quiet */\n+ (int) btype);\n+\n+ pg_unreachable();\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Feb 2023 14:22:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Thank you to all reviewers!\n\nThis email is in answer to the three reviews\ndone since my last version. Attached is v2 and inline below is replies\nto all the review comments.\n\nThe main difference between this version and the previous version is\nthat I've added a guc, buffer_usage_limit and the VACUUM option\nBUFFER_USAGE_LIMIT is now to be specified in size (like kB, MB, etc).\n\nI currently only use the guc value for VACUUM, but it is meant to be\nused for all buffer access strategies and is configurable at the session\nlevel.\n\nI would prefer that we had the option of resizing the buffer access\nstrategy object per table being autovacuumed. Since autovacuum reloads\nthe config file between tables, this would be quite possible.\n\nI started implementing this, but stopped because the code is not really\nin a good state for that.\n\nIn fact, I'm not very happy with my implementation at all because I\nthink given the current structure of vacuum() and vacuum_rel(), it will\npotentially make the code more confusing.\n\nI don't like how autovacuum and vacuum use vacuum_rel() and vacuum()\ndifferently (autovacuum always calls vacuum() with a list containing a\nsingle relation). And vacuum() takes buffer access strategy as a\nparameter, supposedly so that autovacuum can change the buffer access\nstrategy object per call, but it doesn't do that. 
And then vacuum() and\nvacuum_rel() go and access VacuumParams at various places with no rhyme\nor reason -- seemingly just based on the random availability of other\nobjects whose state they would like to check on. So, IMO, in adding a\n\"buffers\" parameter to VacuumParams, I am asking for confusion in\nautovacuum code with table-level VacuumParams containing an value for\nbuffers that isn't used.\n\n\nOn Mon, Feb 27, 2023 at 4:21 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-02-22 16:32:53 -0500, Melanie Plageman wrote:\n> > The first patch in the set is to free the BufferAccessStrategy object\n> > that is made in do_autovacuum() -- I don't see when the memory context\n> > it is allocated in is destroyed, so it seems like it might be a leak?\n>\n> The backend shuts down just after, so that's not a real issue. Not that it'd\n> hurt to fix it.\n\nI've dropped that patch from the set.\n\n> > > I can see that making sense, particularly if we were to later extend this\n> > > to\n> > > other users of ringbuffers. E.g. COPYs us of the ringbuffer makes loading\n> > > of\n> > > data > 16MB but also << s_b vastly slower, but it can still be very\n> > > important\n> > > to use if there's lots of parallel processes loading data.\n> > >\n> > > Maybe BUFFER_USAGE_LIMIT, with a value from -1 to N, with -1 indicating the\n> > > default value, 0 preventing use of a buffer access strategy, and 1..N\n> > > indicating the size in blocks?\n>\n> > I have found the implementation you suggested very hard to use.\n> > The attached fourth patch in the set implements it the way you suggest.\n> > I had to figure out what number to set the BUFER_USAGE_LIMIT to -- and,\n> > since I don't specify shared buffers in units of nbuffer, it's pretty\n> > annoying to have to figure out a valid number.\n>\n> I think we should be able to parse it in a similar way to how we parse\n> shared_buffers. 
You could even implement this as a GUC that is then set by\n> VACUUM (similar to how VACUUM FREEZE is implemented).\n\nin the attached v2, I've used parse_int() to do this.\n\n\n> > I think that it would be better to have it be either a percentage of\n> > shared buffers or a size in units of bytes/kb/mb like that of shared\n> > buffers.\n>\n> I don't think a percentage of shared buffers works particularly well - you\n> very quickly run into the ringbuffer becoming impractically big.\n\nIt is now a size.\n\n> > > Would we want to set an upper limit lower than implied by the memory limit\n> > > for\n> > > the BufferAccessStrategy allocation?\n> > >\n> > >\n> > So, I was wondering what you thought about NBuffers / 8 (the current\n> > limit). Does it make sense?\n>\n> That seems *way* too big. Imagine how large allocations we'd end up with a\n> shared_buffers size of a few TB.\n>\n> I'd probably make it a hard error at 1GB and a silent cap at NBuffers / 2 or\n> such.\n\nWell, as I mentioned NBuffers / 8 is the current GetAccessStrategy()\ncap.\n\nIn the attached patchset, I have introduced a hard cap of 16GB which is\nenforced for the VACUUM option and for the buffer_usage_limit guc. I\nkept the \"silent cap\" at NBuffers / 8 but am open to changing it to\nNBuffers / 2 if we think it is okay for its silent cap to be different\nthan GetAccessStrategy()'s cap.\n\n\n> > @@ -547,7 +547,7 @@ bt_check_every_level(Relation rel, Relation heaprel, bool heapkeyspace,\n> > state->targetcontext = AllocSetContextCreate(CurrentMemoryContext,\n> > \"amcheck context\",\n> > ALLOCSET_DEFAULT_SIZES);\n> > - state->checkstrategy = GetAccessStrategy(BAS_BULKREAD);\n> > + state->checkstrategy = GetAccessStrategy(BAS_BULKREAD, -1);\n> >\n> > /* Get true root block from meta-page */\n> > metapage = palloc_btree_page(state, BTREE_METAPAGE);\n>\n> Changing this everywhere seems pretty annoying, particularly because I suspect\n> a bunch of extensions also use GetAccessStrategy(). 
How about a\n> GetAccessStrategyExt(), GetAccessStrategyCustomSize() or such?\n\nYes, I don't know what I was thinking. Changed it to\nGetAccessStrategyExt() -- though now I am thinking I don't like Ext and\nwant to change it.\n\n> > BufferAccessStrategy\n> > -GetAccessStrategy(BufferAccessStrategyType btype)\n> > +GetAccessStrategy(BufferAccessStrategyType btype, int buffers)\n> > {\n> > BufferAccessStrategy strategy;\n> > int ring_size;\n> > + const char *strategy_name = btype_get_name(btype);\n>\n> Shouldn't be executed when we don't need it.\n\nI got rid of it for now.\n\n> > + if (btype != BAS_VACUUM)\n> > + {\n> > + if (buffers == 0)\n> > + elog(ERROR, \"Use of shared buffers unsupported for buffer access strategy: %s. nbuffers must be -1.\",\n> > + strategy_name);\n> > +\n> > + if (buffers > 0)\n> > + elog(ERROR, \"Specification of ring size in buffers unsupported for buffer access strategy: %s. nbuffers must be -1.\",\n> > + strategy_name);\n> > + }\n> > +\n> > + // TODO: DEBUG logging message for dev?\n> > + if (buffers == 0)\n> > + btype = BAS_NORMAL;\n>\n> GetAccessStrategy() often can be executed hundreds of thousands of times a\n> second, so I'm very sceptical that adding log messages to it useful.\n\nSo, in the case of vacuum and autovacuum, I don't see how\nGetAccessStrategyExt() could be called hundreds of thousands of times a\nsecond. It is not even called for each table being vacuumed -- it is\nonly called before vacuuming a list of tables.\n\nOn Tue, Feb 28, 2023 at 3:16 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> On Thu, Jan 12, 2023 at 6:06 AM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > On 2023-01-11 17:26:19 -0700, David G. Johnston wrote:\n> > > Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> > > \"temp_buffers\" settings?\n> >\n> > The different types of ring buffers have different sizes, for good reasons. So\n> > I don't see that working well. 
I also think it'd be more often useful to\n> > control this on a statement basis - if you have a parallel import tool that\n> > starts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. Of\n> > course each session can change the ring buffer settings, but still.\n>\n> How about having GUCs for each ring buffer (bulk_read_ring_buffers,\n> bulk_write_ring_buffers, vacuum_ring_buffers - ah, 3 more new GUCs)?\n> These options can help especially when statement level controls aren't\n> easy to add (COPY, CREATE TABLE AS/CTAS, REFRESH MAT VIEW/RMV)? If\n> needed users can also set them at the system level. For instance, one\n> can set bulk_write_ring_buffers to other than 16MB or -1 to disable\n> the ring buffer to use shared_buffers and run a bunch of bulk write\n> queries.\n\nSo, I've rebelled a bit and implemented a single guc,\nbuffer_usage_limit, in the attached patchset. Users can set it at the\nsession or system level or they can specify BUFFER_USAGE_LIMIT to\nvacuum. It is the same size for all operations. By default all of this\nwould be the same as it is now.\n\nThe attached patchset does not use the guc for any operations except\nVACUUM, though. I will add on another patch if people still feel\nstrongly that we cannot have a single guc. If the other operations use\nthis guc, I think we could get much of the same flexibility as having\nmultiple gucs by just being able to set it at the session level (or\nhaving command options).\n\n> Although I'm not quite opposing the idea of statement level controls\n> (like the VACUUM one proposed here), it is better to make these ring\n> buffer sizes configurable across the system to help with the other\n> similar cases e.g., a CTAS or RMV can help subsequent reads from\n> shared buffers if ring buffer is skipped.\n\nYes, I've done both.\n\n\nOn Tue, Feb 28, 2023 at 3:52 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Feb 23, 2023 at 3:03 AM Melanie Plageman\n> 0001:\n> 1. 
I don't quite understand the need for this 0001 patch. Firstly,\n> bstrategy allocated once per autovacuum worker in AutovacMemCxt which\n> goes away with the process. Secondly, the worker exits after\n> do_autovacuum() with which memory context is gone. I think this patch\n> is unnecessary unless I'm missing something.\n\nI've dropped this one.\n\n> 0002:\n> 1. Don't we need to remove vac_strategy for analyze.c as well? It's\n> pretty-meaningless there than vacuum.c as we're passing bstrategy to\n> all required functions.\n\nSo, it is a bit harder to remove it from analyze because acquire_func\nfunc doesn't take the buffer access strategy as a parameter and\nacquire_sample_rows uses the vac_context global variable to pass to\ntable_scan_analyze_next_block().\n\nWe could change acquire_func, but it looks like FDW uses it, so I'm not\nsure. It would be more for consistency than as a performance win, as I\nimagine analyze is less of a problem than vacuum (i.e. it is probably\nreading fewer blocks and probably not dirtying them [unless it does\non-access pruning?]).\n\nI haven't done this in the attached set.\n\n> 0004:\n> 1. I think no multiple sentences in a single error message. How about\n> \"of %d, changing it to %d\"?\n> + elog(WARNING, \"buffer_usage_limit %d is below the\n> minimum buffer_usage_limit of %d. setting it to %d\",\n\nI've removed this message, but if I put back a message about clamping, I\nwill remember this note.\n\n> 2. Typically, postgres error messages start with lowercase letters,\n> hints and detail messages start with uppercase letters.\n> + if (buffers == 0)\n> + elog(ERROR, \"Use of shared buffers unsupported for buffer\n> access strategy: %s. nbuffers must be -1.\",\n> + strategy_name);\n> +\n> + if (buffers > 0)\n> + elog(ERROR, \"Specification of ring size in buffers\n> unsupported for buffer access strategy: %s. nbuffers must be -1.\",\n> + strategy_name);\n\nThanks! 
I've removed some of the error messages for now, but, for the\nones I kept, I think they are consistent now with this pattern.\n\n\n> 3. A function for this seems unnecessary, especially when a static\n> array would do the needful, something like forkNames[].\n\nI've removed it for now.\n\n>\n> 4. Why are these assumptions needed? Can't we simplify by doing\n> validations on the new buffers parameter only when the btype is\n> BAS_VACUUM?\n> + if (buffers == 0)\n> + elog(ERROR, \"Use of shared buffers unsupported for buffer\n> access strategy: %s. nbuffers must be -1.\",\n> + strategy_name);\n>\n> + // TODO: DEBUG logging message for dev?\n> + if (buffers == 0)\n> + btype = BAS_NORMAL;\n\nSo, I've moved validation to the vacuum option parsing for the vacuum\noption and am using the guc infrastructure to check min and max for the\nguc value.\n\n> 5. Is this change needed for this patch?\n> default:\n> elog(ERROR, \"unrecognized buffer access strategy: %d\",\n> - (int) btype);\n> - return NULL; /* keep compiler quiet */\n> + (int) btype);\n> +\n> + pg_unreachable();\n\nThe pg_unreachable() is removed, as I've left GetAccessStrategy()\nuntouched.\n\n- Melanie", "msg_date": "Wed, 8 Mar 2023 20:28:03 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "> On Tue, Feb 28, 2023 at 3:16 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> > On Thu, Jan 12, 2023 at 6:06 AM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > On 2023-01-11 17:26:19 -0700, David G. Johnston wrote:\n> > > > Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> > > > \"temp_buffers\" settings?\n> > >\n> > > The different types of ring buffers have different sizes, for good reasons. 
So\n> > > I don't see that working well. I also think it'd be more often useful to\n> > > control this on a statement basis - if you have a parallel import tool that\n> > > starts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. Of\n> > > course each session can change the ring buffer settings, but still.\n> >\n> > How about having GUCs for each ring buffer (bulk_read_ring_buffers,\n> > bulk_write_ring_buffers, vacuum_ring_buffers - ah, 3 more new GUCs)?\n> > These options can help especially when statement level controls aren't\n> > easy to add (COPY, CREATE TABLE AS/CTAS, REFRESH MAT VIEW/RMV)? If\n> > needed users can also set them at the system level. For instance, one\n> > can set bulk_write_ring_buffers to other than 16MB or -1 to disable\n> > the ring buffer to use shared_buffers and run a bunch of bulk write\n> > queries.\n\nIn attached v3, I've changed the name of the guc from buffer_usage_limit\nto vacuum_buffer_usage_limit, since it is only used for vacuum and\nautovacuum.\n\nI haven't added the other suggested strategy gucs, as those could easily\nbe done in a future patchset.\n\nI've also changed GetAccessStrategyExt() to GetAccessStrategyWithSize()\n-- similar to initArrayResultWithSize().\n\nAnd I've added tab completion for BUFFER_USAGE_LIMIT so that it is\neasier to try out my patch.\n\nMost of the TODOs in the code are related to the question of how\nautovacuum uses the guc vacuum_buffer_usage_limit. autovacuum creates\nthe buffer access strategy ring in do_autovacuum() before looping\nthrough and vacuuming tables. It passes this strategy object on to\nvacuum(). Since we reuse the same strategy object for all tables in a\ngiven invocation of do_autovacuum(), only failsafe autovacuum would\nchange buffer access strategies. This is probably okay, but it does mean\nthat the table-level VacuumParams variable, ring_size, means something\ndifferent for autovacuum than vacuum. Autovacuum workers will always\nhave set it to -1. 
We won't ever reach code in vacuum() which relies on\nVacuumParams->ring_size as long as autovacuum workers pass a non-NULL\nBufferAccessStrategy object to vacuum(), though.\n\n- Melanie", "msg_date": "Sat, 11 Mar 2023 09:55:33 -0500", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, Mar 11, 2023 at 09:55:33AM -0500, Melanie Plageman wrote:\n> Subject: [PATCH v3 2/3] use shared buffers when failsafe active\n> \n> +\t\t/*\n> +\t\t * Assume the caller who allocated the memory for the\n> +\t\t * BufferAccessStrategy object will free it.\n> +\t\t */\n> +\t\tvacrel->bstrategy = NULL;\n\nThis comment could use elaboration:\n\n** VACUUM normally restricts itself to a small ring buffer; but in\nfailsafe mode, in order to process tables as quickly as possible, allow\nit to leave behind a large number of dirty buffers.\n\n> Subject: [PATCH v3 3/3] add vacuum option to specify ring_size and guc\n\n> #define INT_ACCESS_ONCE(var)\t((int)(*((volatile int *)&(var))))\n> +#define bufsize_limit_to_nbuffers(bufsize) (bufsize * 1024 / BLCKSZ)\n\nMacros are normally capitalized\n\nIt's a good idea to write \"(bufsize)\", in case someone passes \"a+b\".\n\n> @@ -586,6 +587,45 @@ GetAccessStrategy(BufferAccessStrategyType btype)\n> +BufferAccessStrategy\n> +GetAccessStrategyWithSize(BufferAccessStrategyType btype, int ring_size)\n\nMaybe it would make sense for GetAccessStrategy() to call\nGetAccessStrategyWithSize(). 
Or maybe not.\n\n> +\t\t{\"vacuum_buffer_usage_limit\", PGC_USERSET, RESOURCES_MEM,\n> +\t\t\tgettext_noop(\"Sets the buffer pool size for operations employing a buffer access strategy.\"),\n\nThe description should mention vacuum, if that's the scope of the GUC's\nbehavior.\n\n> +#vacuum_buffer_usage_limit = -1 # size of vacuum buffer access strategy ring.\n> +\t\t\t\t# -1 to use default,\n> +\t\t\t\t# 0 to disable vacuum buffer access strategy and use shared buffers\n> +\t\t\t\t# > 0 to specify size\n\nIf I'm not wrong, there's still no documentation about \"ring buffers\" or\npostgres' \"strategy\". Which seems important to do for this patch, along\nwith other documentation.\n\nThis patch should add support in vacuumdb.c. And maybe a comment about\nadding support there, since it's annoying when the release notes one\nyear say \"support VACUUM (FOO)\" and then one year later say \"support\nvacuumdb --foo\".\n\n-- \nJustin", "msg_date": "Sat, 11 Mar 2023 13:16:19 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, 11 Mar 2023 at 16:55, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> > On Tue, Feb 28, 2023 at 3:16 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > > On Thu, Jan 12, 2023 at 6:06 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > On 2023-01-11 17:26:19 -0700, David G. Johnston wrote:\n> > > > > Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> > > > > \"temp_buffers\" settings?\n> > > >\n> > > > The different types of ring buffers have different sizes, for good reasons. So\n> > > > I don't see that working well. 
I also think it'd be more often useful to\n> > > > control this on a statement basis - if you have a parallel import tool that\n> > > > starts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. Of\n> > > > course each session can change the ring buffer settings, but still.\n> > >\n> > > How about having GUCs for each ring buffer (bulk_read_ring_buffers,\n> > > bulk_write_ring_buffers, vacuum_ring_buffers - ah, 3 more new GUCs)?\n> > > These options can help especially when statement level controls aren't\n> > > easy to add (COPY, CREATE TABLE AS/CTAS, REFRESH MAT VIEW/RMV)? If\n> > > needed users can also set them at the system level. For instance, one\n> > > can set bulk_write_ring_buffers to other than 16MB or -1 to disable\n> > > the ring buffer to use shared_buffers and run a bunch of bulk write\n> > > queries.\n>\n> In attached v3, I've changed the name of the guc from buffer_usage_limit\n> to vacuum_buffer_usage_limit, since it is only used for vacuum and\n> autovacuum.\n\nSorry for arriving late to this thread, but what about sizing the ring\ndynamically? From what I gather the primary motivation for larger ring\nsize is avoiding WAL flushes due to dirty buffer writes. We already\ncatch that event with StrategyRejectBuffer(). So maybe a dynamic\nsizing algorithm could be applied to the ringbuffer. Make the buffers\narray in strategy capable of holding up to the limit of buffers, but\nset ring size conservatively. If we have to flush WAL, double the ring\nsize (up to the limit). 
If we loop around the ring without flushing,\ndecrease the ring size by a small amount to let clock sweep reclaim\nthem for use by other backends.\n\n-- \nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 13 Mar 2023 14:37:51 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Thanks for your interest in this patch!\n\nOn Mon, Mar 13, 2023 at 8:38 AM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Sat, 11 Mar 2023 at 16:55, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > > On Tue, Feb 28, 2023 at 3:16 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > > On Thu, Jan 12, 2023 at 6:06 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > >\n> > > > > On 2023-01-11 17:26:19 -0700, David G. Johnston wrote:\n> > > > > > Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> > > > > > \"temp_buffers\" settings?\n> > > > >\n> > > > > The different types of ring buffers have different sizes, for good reasons. So\n> > > > > I don't see that working well. I also think it'd be more often useful to\n> > > > > control this on a statement basis - if you have a parallel import tool that\n> > > > > starts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. Of\n> > > > > course each session can change the ring buffer settings, but still.\n> > > >\n> > > > How about having GUCs for each ring buffer (bulk_read_ring_buffers,\n> > > > bulk_write_ring_buffers, vacuum_ring_buffers - ah, 3 more new GUCs)?\n> > > > These options can help especially when statement level controls aren't\n> > > > easy to add (COPY, CREATE TABLE AS/CTAS, REFRESH MAT VIEW/RMV)? If\n> > > > needed users can also set them at the system level. 
For instance, one\n> > > > can set bulk_write_ring_buffers to other than 16MB or -1 to disable\n> > > > the ring buffer to use shared_buffers and run a bunch of bulk write\n> > > > queries.\n> >\n> > In attached v3, I've changed the name of the guc from buffer_usage_limit\n> > to vacuum_buffer_usage_limit, since it is only used for vacuum and\n> > autovacuum.\n>\n> Sorry for arriving late to this thread, but what about sizing the ring\n> dynamically? From what I gather the primary motivation for larger ring\n> size is avoiding WAL flushes due to dirty buffer writes. We already\n> catch that event with StrategyRejectBuffer(). So maybe a dynamic\n> sizing algorithm could be applied to the ringbuffer. Make the buffers\n> array in strategy capable of holding up to the limit of buffers, but\n> set ring size conservatively. If we have to flush WAL, double the ring\n> size (up to the limit). If we loop around the ring without flushing,\n> decrease the ring size by a small amount to let clock sweep reclaim\n> them for use by other backends.\n\nSo, the original motivation of this patch was to allow autovacuum in\nfailsafe mode to abandon use of a buffer access strategy, since, at that\npoint, there is no reason to hold back. The idea was expanded to be an\noption to explicit vacuum, since users often must initiate an explicit\nvacuum after a forced shutdown due to transaction ID wraparound.\n\nAs for routine vacuuming and the other buffer access strategies, I think\nthere is an argument for configurability based on operator knowledge --\nperhaps your workload will use the data you are COPYing as soon as the\nCOPY finishes, so you might as well disable a buffer access strategy or\nuse a larger fraction of shared buffers. 
Also, the ring sizes were\nselected sixteen years ago and average server memory and data set sizes\nhave changed.\n\nStrategyRejectBuffer() will allow bulkreads to, as you say, use more\nbuffers than the original ring size, since it allows them to kick\ndirty buffers out of the ring and claim new shared buffers.\n\nBulkwrites and vacuums, however, will inevitably dirty buffers and\nrequire flushing the buffer (and thus flushing the associated WAL) when\nreusing them. Bulkwrites and vacuum do not kick dirtied buffers out of\nthe ring, since dirtying buffers is their common case. A dynamic\nresizing like the one you suggest would likely devolve to vacuum and\nbulkwrite strategies always using the max size.\n\nAs for decreasing the ring size, buffers are only \"added\" to the ring\nlazily and, technically, as it is now, buffers which have been added\nto the ring can always be reclaimed by the clocksweep (as long as\nthey are not pinned). The buffer access strategy is more of a\nself-imposed restriction than it is a reservation. Since the ring is\nsmall and the buffers are being frequently reused, odds are the usage\ncount will be 1 and we will be the one who set it to 1, but there is no\nguarantee. If, when attempting to reuse the buffer, its usage count is\n> 1 (or it is pinned), we also will kick it out of the ring and go look\nfor a replacement buffer.\n\nI do think that it is a bit unreasonable to expect users to know how\nlarge they would like to make their buffer access strategy ring. What we\nwant is some way of balancing different kinds of workloads and\nmaintenance tasks reasonably. If your database has no activity because\nit is the middle of the night or it was shutdown because of transaction\nid wraparound, there is no reason why vacuum should limit the number of\nbuffers it uses.
I'm sure there are many other such examples.\n\n- Melanie\n\n\n", "msg_date": "Tue, 14 Mar 2023 20:29:09 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Thanks for the review!\n\nOn Sat, Mar 11, 2023 at 2:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Mar 11, 2023 at 09:55:33AM -0500, Melanie Plageman wrote:\n> > Subject: [PATCH v3 2/3] use shared buffers when failsafe active\n> >\n> > + /*\n> > + * Assume the caller who allocated the memory for the\n> > + * BufferAccessStrategy object will free it.\n> > + */\n> > + vacrel->bstrategy = NULL;\n>\n> This comment could use elaboration:\n>\n> ** VACUUM normally restricts itself to a small ring buffer; but in\n> failsafe mode, in order to process tables as quickly as possible, allow\n> the leaving behind large number of dirty buffers.\n\nAgreed. It definitely needs a comment like this. I will update in the\nnext version along with addressing your other feedback (after sorting\nout some of the other points in this mail on which I still have\nquestions).\n\n> > Subject: [PATCH v3 3/3] add vacuum option to specify ring_size and guc\n>\n> > #define INT_ACCESS_ONCE(var) ((int)(*((volatile int *)&(var))))\n> > +#define bufsize_limit_to_nbuffers(bufsize) (bufsize * 1024 / BLCKSZ)\n>\n> Macros are normally be capitalized\n\nYes, there doesn't seem to be a great amount of consistency around\nthis... See pgstat.c read_chunk_s and bufmgr.c BufHdrGetBlock and\nfriends. Though there are probably more capitalized than not. Since it\ndoes a bit of math and returns a value, I wanted to convey that it was\nmore like a function. Also, since the name was long, I thought all-caps\nwould be hard to read. 
However, if you or others feel strongly, I am\nattached neither to the capitalization nor to the name at all (what do\nyou think of the name?).\n\n> It's a good idea to write \"(bufsize)\", in case someone passes \"a+b\".\n\nAh yes, this is a good idea -- I always miss at least one set of\nparentheses when writing a macro. In this case, I didn't think of it\nsince the current caller couldn't pass an expression.\n\n> > @@ -586,6 +587,45 @@ GetAccessStrategy(BufferAccessStrategyType btype)\n> > +BufferAccessStrategy\n> > +GetAccessStrategyWithSize(BufferAccessStrategyType btype, int ring_size)\n>\n> Maybe it would make sense for GetAccessStrategy() to call\n> GetAccessStrategyWithSize(). Or maybe not.\n\nYou mean instead of having anyone call GetAccessStrategyWithSize()?\nWe would need to change the signature of GetAccessStrategy() then -- and\nthere are quite a few callers.\n\n>\n> > + {\"vacuum_buffer_usage_limit\", PGC_USERSET, RESOURCES_MEM,\n> > + gettext_noop(\"Sets the buffer pool size for operations employing a buffer access strategy.\"),\n>\n> The description should mention vacuum, if that's the scope of the GUC's\n> behavior.\n\nGood catch. Will update in next version.\n\n> > +#vacuum_buffer_usage_limit = -1 # size of vacuum buffer access strategy ring.\n> > + # -1 to use default,\n> > + # 0 to disable vacuum buffer access strategy and use shared buffers\n> > + # > 0 to specify size\n>\n> If I'm not wrong, there's still no documentation about \"ring buffers\" or\n> postgres' \"strategy\". Which seems important to do for this patch, along\n> with other documentation.\n\nYes, it is. I have been thinking about where in the docs to add it (the\ndocs about buffer access strategies). 
Any ideas?\n\n> This patch should add support in vacuumdb.c.\n\nOh, I had totally forgotten about vacuumdb.\n\n> And maybe a comment about adding support there, since it's annoying\n> when it the release notes one year say \"support VACUUM (FOO)\" and then\n> one year later say \"support vacuumdb --foo\".\n\nI'm not sure I totally follow. Do you mean to add a comment to\nExecVacuum() saying to add support to vacuumdb when adding a new option\nto vacuum?\n\nIn the past, did people forget to add support to vacuumdb for new vacuum\noptions or did they forget to document that they did that or did they\nforgot to include that they did that in the release notes?\n\n- Melanie\n\n\n", "msg_date": "Tue, 14 Mar 2023 20:56:58 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, Mar 11, 2023 at 11:55 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> > On Tue, Feb 28, 2023 at 3:16 AM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > > On Thu, Jan 12, 2023 at 6:06 AM Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > On 2023-01-11 17:26:19 -0700, David G. Johnston wrote:\n> > > > > Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> > > > > \"temp_buffers\" settings?\n> > > >\n> > > > The different types of ring buffers have different sizes, for good reasons. So\n> > > > I don't see that working well. I also think it'd be more often useful to\n> > > > control this on a statement basis - if you have a parallel import tool that\n> > > > starts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. 
Of\n> > > > course each session can change the ring buffer settings, but still.\n> > >\n> > > How about having GUCs for each ring buffer (bulk_read_ring_buffers,\n> > > bulk_write_ring_buffers, vacuum_ring_buffers - ah, 3 more new GUCs)?\n> > > These options can help especially when statement level controls aren't\n> > > easy to add (COPY, CREATE TABLE AS/CTAS, REFRESH MAT VIEW/RMV)? If\n> > > needed users can also set them at the system level. For instance, one\n> > > can set bulk_write_ring_buffers to other than 16MB or -1 to disable\n> > > the ring buffer to use shared_buffers and run a bunch of bulk write\n> > > queries.\n>\n> In attached v3, I've changed the name of the guc from buffer_usage_limit\n> to vacuum_buffer_usage_limit, since it is only used for vacuum and\n> autovacuum.\n>\n> I haven't added the other suggested strategy gucs, as those could easily\n> be done in a future patchset.\n>\n> I've also changed GetAccessStrategyExt() to GetAccessStrategyWithSize()\n> -- similar to initArrayResultWithSize().\n>\n> And I've added tab completion for BUFFER_USAGE_LIMIT so that it is\n> easier to try out my patch.\n>\n> Most of the TODOs in the code are related to the question of how\n> autovacuum uses the guc vacuum_buffer_usage_limit. autovacuum creates\n> the buffer access strategy ring in do_autovacuum() before looping\n> through and vacuuming tables. It passes this strategy object on to\n> vacuum(). Since we reuse the same strategy object for all tables in a\n> given invocation of do_autovacuum(), only failsafe autovacuum would\n> change buffer access strategies. This is probably okay, but it does mean\n> that the table-level VacuumParams variable, ring_size, means something\n> different for autovacuum than vacuum. Autovacuum workers will always\n> have set it to -1. 
We won't ever reach code in vacuum() which relies on\n> VacuumParams->ring_size as long as autovacuum workers pass a non-NULL\n> BufferAccessStrategy object to vacuum(), though.\n\nI've not reviewed the patchset in depth yet but I got assertion\nfailure and SEGV when using the buffer_usage_limit parameter.\n\npostgres(1:471180)=# vacuum (buffer_usage_limit 10000000000) ;\n2023-03-15 17:10:02.947 JST [471180] ERROR: buffer_usage_limit for a\nvacuum must be between -1 and 16777216. 10000000000 is invalid. at\ncharacter 9\n\nThe message show the max value is 16777216, but when I set it, I got\nan assertion failure:\n\npostgres(1:470992)=# vacuum (buffer_usage_limit 16777216) ;\nTRAP: failed Assert(\"ring_size < MAX_BAS_RING_SIZE_KB\"), File:\n\"freelist.c\", Line: 606, PID: 470992\n\nThen when I used 1 byte lower value, 16777215, I got a SEGV:\n\npostgres(1:471180)=# vacuum (buffer_usage_limit 16777215) ;\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: 2023-03-15\n17:10:59.404 JST [471159] LOG: server process (PID 471180) was\nterminated by signal 11: Segmentation fault\n\nFinally, when I used a more lower value, 16777100, I got a memory\nallocation error:\n\npostgres(1:471361)=# vacuum (buffer_usage_limit 16777100) ;\n2023-03-15 17:12:17.853 JST [471361] ERROR: invalid memory alloc\nrequest size 18446744073709551572\n\nProbably vacuum_buffer_usage_limit also has the same issue.\n\nAlso, should we support a table option for vacuum_buffer_usage_limit as well?\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 15 Mar 2023 17:31:20 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, 15 Mar 2023 at 02:29, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> As for routine vacuuming and the other buffer access strategies, I think\n> there is an argument for configurability based on operator knowledge --\n> perhaps your workload will use the data you are COPYing as soon as the\n> COPY finishes, so you might as well disable a buffer access strategy or\n> use a larger fraction of shared buffers. Also, the ring sizes were\n> selected sixteen years ago and average server memory and data set sizes\n> have changed.\n\nTo be clear I'm not at all arguing against configurability. I was\nthinking that dynamic use could make the configuration simpler by self\ntuning to use no more buffers than is useful.\n\n> StrategyRejectBuffer() will allow bulkreads to, as you say, use more\n> buffers than the original ring size, since it allows them to kick\n> dirty buffers out of the ring and claim new shared buffers.\n>\n> Bulkwrites and vacuums, however, will inevitably dirty buffers and\n> require flushing the buffer (and thus flushing the associated WAL) when\n> reusing them. 
Bulkwrites and vacuum do not kick dirtied buffers out of\n> the ring, since dirtying buffers is their common case. A dynamic\n> resizing like the one you suggest would likely devolve to vacuum and\n> bulkwrite strategies always using the max size.\n\nI think it should self stabilize around the point where the WAL is\neither flushed by other commit activity, WAL writer or WAL buffers\nfilling up. Writing out their own dirtied buffers will still happen,\njust the associated WAL flushes will be in larger chunks and possibly\ndone by other processes.\n\n> As for decreasing the ring size, buffers are only \"added\" to the ring\n> lazily and, technically, as it is now, buffers which have been added\n> added to the ring can always be reclaimed by the clocksweep (as long as\n> they are not pinned). The buffer access strategy is more of a\n> self-imposed restriction than it is a reservation. Since the ring is\n> small and the buffers are being frequently reused, odds are the usage\n> count will be 1 and we will be the one who set it to 1, but there is no\n> guarantee. If, when attempting to reuse the buffer, its usage count is\n> > 1 (or it is pinned), we also will kick it out of the ring and go look\n> for a replacement buffer.\n\nRight, but while the buffer is actively used by the ring it is\nunlikely that clocksweep will find it at usage 0 as the ring buffer\nshould cycle more often than the clocksweep. Whereas if the ring stops\nusing a buffer, clocksweep will eventually come and reclaim it. And if\nthe ring shrinking decision turns out to be wrong before the\nclocksweep gets around to reusing it, we can bring the same buffer\nback into the ring.\n\n> I do think that it is a bit unreasonable to expect users to know how\n> large they would like to make their buffer access strategy ring. What we\n> want is some way of balancing different kinds of workloads and\n> maintenance tasks reasonably. 
If your database has no activity because\n> it is the middle of the night or it was shutdown because of transaction\n> id wraparound, there is no reason why vacuum should limit the number of\n> buffers it uses. I'm sure there are many other such examples.\n\nIdeally yes, though I am not hopeful of finding a solution that does\nthis any time soon. Just to take your example, if a nightly\nmaintenance job wipes out the shared buffer contents slightly\noptimizing its non time-critical work and then causes morning user\nvisible load to have big latency spikes due to cache misses, that's\nnot a good tradeoff either.\n\n--\nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 15 Mar 2023 12:45:58 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, 15 Mar 2023 at 02:57, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> > > Subject: [PATCH v3 3/3] add vacuum option to specify ring_size and guc\n> >\n> > > #define INT_ACCESS_ONCE(var) ((int)(*((volatile int *)&(var))))\n> > > +#define bufsize_limit_to_nbuffers(bufsize) (bufsize * 1024 / BLCKSZ)\n> >\n> > Macros are normally be capitalized\n>\n> Yes, there doesn't seem to be a great amount of consistency around\n> this... See pgstat.c read_chunk_s and bufmgr.c BufHdrGetBlock and\n> friends. Though there are probably more capitalized than not. Since it\n> does a bit of math and returns a value, I wanted to convey that it was\n> more like a function. Also, since the name was long, I thought all-caps\n> would be hard to read. 
However, if you or others feel strongly, I am\n> attached neither to the capitalization nor to the name at all (what do\n> you think of the name?).\n\nA static inline function seems like a less surprising and more type\nsafe solution for this.\n\n-- \nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 15 Mar 2023 12:48:22 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Tue, Mar 14, 2023 at 08:56:58PM -0400, Melanie Plageman wrote:\n> On Sat, Mar 11, 2023 at 2:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> > > @@ -586,6 +587,45 @@ GetAccessStrategy(BufferAccessStrategyType btype)\n> > > +BufferAccessStrategy\n> > > +GetAccessStrategyWithSize(BufferAccessStrategyType btype, int ring_size)\n> >\n> > Maybe it would make sense for GetAccessStrategy() to call\n> > GetAccessStrategyWithSize(). Or maybe not.\n> \n> You mean instead of having anyone call GetAccessStrategyWithSize()?\n> We would need to change the signature of GetAccessStrategy() then -- and\n> there are quite a few callers.\n\nI mean to avoid code duplication, GetAccessStrategy() could \"Select ring\nsize to use\" and then call into GetAccessStrategyWithSize(). Maybe it's\ncounter to your intent or otherwise not worth it to save 8 LOC.\n\n> > > +#vacuum_buffer_usage_limit = -1 # size of vacuum buffer access strategy ring.\n> > > + # -1 to use default,\n> > > + # 0 to disable vacuum buffer access strategy and use shared buffers\n> > > + # > 0 to specify size\n> >\n> > If I'm not wrong, there's still no documentation about \"ring buffers\" or\n> > postgres' \"strategy\". Which seems important to do for this patch, along\n> > with other documentation.\n> \n> Yes, it is. I have been thinking about where in the docs to add it (the\n> docs about buffer access strategies). 
Any ideas?\n\nThis patch could add something to the vacuum manpage and to the appendix.\nAnd maybe references from the shared_buffers and pg_buffercache\nmanpages.\n\n> > This patch should add support in vacuumdb.c.\n> \n> Oh, I had totally forgotten about vacuumdb.\n\n:)\n\n> > And maybe a comment about adding support there, since it's annoying\n> > when it the release notes one year say \"support VACUUM (FOO)\" and then\n> > one year later say \"support vacuumdb --foo\".\n> \n> I'm not sure I totally follow. Do you mean to add a comment to\n> ExecVacuum() saying to add support to vacuumdb when adding a new option\n> to vacuum?\n\nYeah, like:\n/* Options added here should also be added to vacuumdb.c */\n\n> In the past, did people forget to add support to vacuumdb for new vacuum\n> options or did they forget to document that they did that or did they\n> forgot to include that they did that in the release notes?\n\nThe first. Maybe not often, it's not important whether it's in the\noriginal patch, but it's odd if the vacuumdb option isn't added until\nthe following release, which then shows up as a separate \"feature\".\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 15 Mar 2023 19:14:24 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Thanks for the reviews and for trying the patch!\n\nOn Wed, Mar 15, 2023 at 4:31 AM Masahiko Sawada <sawada.mshk@gmail.com> wrote:\n>\n> On Sat, Mar 11, 2023 at 11:55 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > > On Tue, Feb 28, 2023 at 3:16 AM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > > On Thu, Jan 12, 2023 at 6:06 AM Andres Freund <andres@anarazel.de> wrote:\n> > > > >\n> > > > > On 2023-01-11 17:26:19 -0700, David G. 
Johnston wrote:\n> > > > > > Should we just add \"ring_buffers\" to the existing \"shared_buffers\" and\n> > > > > > \"temp_buffers\" settings?\n> > > > >\n> > > > > The different types of ring buffers have different sizes, for good reasons. So\n> > > > > I don't see that working well. I also think it'd be more often useful to\n> > > > > control this on a statement basis - if you have a parallel import tool that\n> > > > > starts NCPU COPYs you'd want a smaller buffer than a single threaded COPY. Of\n> > > > > course each session can change the ring buffer settings, but still.\n> > > >\n> > > > How about having GUCs for each ring buffer (bulk_read_ring_buffers,\n> > > > bulk_write_ring_buffers, vacuum_ring_buffers - ah, 3 more new GUCs)?\n> > > > These options can help especially when statement level controls aren't\n> > > > easy to add (COPY, CREATE TABLE AS/CTAS, REFRESH MAT VIEW/RMV)? If\n> > > > needed users can also set them at the system level. For instance, one\n> > > > can set bulk_write_ring_buffers to other than 16MB or -1 to disable\n> > > > the ring buffer to use shared_buffers and run a bunch of bulk write\n> > > > queries.\n> >\n> > In attached v3, I've changed the name of the guc from buffer_usage_limit\n> > to vacuum_buffer_usage_limit, since it is only used for vacuum and\n> > autovacuum.\n> >\n> > I haven't added the other suggested strategy gucs, as those could easily\n> > be done in a future patchset.\n> >\n> > I've also changed GetAccessStrategyExt() to GetAccessStrategyWithSize()\n> > -- similar to initArrayResultWithSize().\n> >\n> > And I've added tab completion for BUFFER_USAGE_LIMIT so that it is\n> > easier to try out my patch.\n> >\n> > Most of the TODOs in the code are related to the question of how\n> > autovacuum uses the guc vacuum_buffer_usage_limit. autovacuum creates\n> > the buffer access strategy ring in do_autovacuum() before looping\n> > through and vacuuming tables. It passes this strategy object on to\n> > vacuum(). 
Since we reuse the same strategy object for all tables in a\n> > given invocation of do_autovacuum(), only failsafe autovacuum would\n> > change buffer access strategies. This is probably okay, but it does mean\n> > that the table-level VacuumParams variable, ring_size, means something\n> > different for autovacuum than vacuum. Autovacuum workers will always\n> > have set it to -1. We won't ever reach code in vacuum() which relies on\n> > VacuumParams->ring_size as long as autovacuum workers pass a non-NULL\n> > BufferAccessStrategy object to vacuum(), though.\n>\n> I've not reviewed the patchset in depth yet but I got assertion\n> failure and SEGV when using the buffer_usage_limit parameter.\n>\n> postgres(1:471180)=# vacuum (buffer_usage_limit 10000000000) ;\n> 2023-03-15 17:10:02.947 JST [471180] ERROR: buffer_usage_limit for a\n> vacuum must be between -1 and 16777216. 10000000000 is invalid. at\n> character 9\n>\n> The message show the max value is 16777216, but when I set it, I got\n> an assertion failure:\n>\n> postgres(1:470992)=# vacuum (buffer_usage_limit 16777216) ;\n> TRAP: failed Assert(\"ring_size < MAX_BAS_RING_SIZE_KB\"), File:\n> \"freelist.c\", Line: 606, PID: 470992\n>\n> Then when I used 1 byte lower value, 16777215, I got a SEGV:\n>\n> postgres(1:471180)=# vacuum (buffer_usage_limit 16777215) ;\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: 2023-03-15\n> 17:10:59.404 JST [471159] LOG: server process (PID 471180) was\n> terminated by signal 11: Segmentation fault\n>\n> Finally, when I used a more lower value, 16777100, I got a memory\n> allocation error:\n>\n> postgres(1:471361)=# vacuum (buffer_usage_limit 16777100) ;\n> 2023-03-15 17:12:17.853 JST [471361] ERROR: invalid memory alloc\n> request size 18446744073709551572\n>\n> Probably vacuum_buffer_usage_limit also has the same issue.\n\nOh dear--it seems I had an integer overflow when calculating the number\nof buffers using the specified buffer size in the macro:\n\n #define bufsize_limit_to_nbuffers(bufsize) (bufsize * 1024 / BLCKSZ)\n\nIn the attached v4, I've changed that to:\n\n static inline int\n bufsize_limit_to_nbuffers(int bufsize_limit_kb)\n {\n int blcksz_kb = BLCKSZ / 1024;\n\n Assert(blcksz_kb > 0);\n\n return bufsize_limit_kb / blcksz_kb;\n }\n\nThis should address Justin's suggestions and Ants' concern about the\nmacro as well.\n\nAlso, I was missing the = in the Assert(ring_size <= MAX_BAS_RING_SIZE)\nI've fixed that as well, so it should work for you to specify up to 16777216.\n\n> Also, should we support a table option for vacuum_buffer_usage_limit as well?\n\nHmm. Since this is meant more for balancing resource usage globally, it\ndoesn't make as much sense as a table option to me. 
But, I could be\nconvinced.\n\nOn Sat, Mar 11, 2023 at 2:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sat, Mar 11, 2023 at 09:55:33AM -0500, Melanie Plageman wrote:\n> > Subject: [PATCH v3 2/3] use shared buffers when failsafe active\n> >\n> > + /*\n> > + * Assume the caller who allocated the memory for the\n> > + * BufferAccessStrategy object will free it.\n> > + */\n> > + vacrel->bstrategy = NULL;\n>\n> This comment could use elaboration:\n>\n> ** VACUUM normally restricts itself to a small ring buffer; but in\n> failsafe mode, in order to process tables as quickly as possible, allow\n> the leaving behind large number of dirty buffers.\n\nI've added a comment in attached v4 which is a bit different than Justin's\nsuggestion but still more verbose than the previous comment.\n\n> > Subject: [PATCH v3 3/3] add vacuum option to specify ring_size and guc\n> > + {\"vacuum_buffer_usage_limit\", PGC_USERSET, RESOURCES_MEM,\n> > + gettext_noop(\"Sets the buffer pool size for operations employing a buffer access strategy.\"),\n>\n> The description should mention vacuum, if that's the scope of the GUC's\n> behavior.\n\nI've updated this in v4.\n\n> > +#vacuum_buffer_usage_limit = -1 # size of vacuum buffer access strategy ring.\n> > + # -1 to use default,\n> > + # 0 to disable vacuum buffer access strategy and use shared buffers\n> > + # > 0 to specify size\n>\n> If I'm not wrong, there's still no documentation about \"ring buffers\" or\n> postgres' \"strategy\". Which seems important to do for this patch, along\n> with other documentation.\n\nSo, on the topic of \"other documentation\", I have, at least, added docs\nfor the vacuum_buffer_usage_limit guc and the BUFFER_USAGE option to\nVACUUM and the buffer-usage-limit parameter to vacuumdb.\n\n> This patch should add support in vacuumdb.c. 
And maybe a comment about\n> adding support there, since it's annoying when it the release notes one\n> year say \"support VACUUM (FOO)\" and then one year later say \"support\n> vacuumdb --foo\".\n\nSo, v4 adds support for buffer-usage-limit to vacuumdb. There are a few\nissues. The main one is that no other vacuumdb option takes a size as a\nparameter. I couldn't actually find any other client with a parameter\nspecified as a size.\n\nMy VACUUM option code is using the GUC size parsing code from\nparse_int() -- including the unit flag GUC_UNIT_KB. Now that vacuumdb\nalso needs to parse sizes, I think we'll need to lift the parse_int()\ncode and the unit_conversion struct and\nunit_conversion_memory_unit_conversion_table out of guc.c and put it\nsomewhere that it can be accessed for more than guc parsing (e.g. option\nparsing).\n\nFor vacuumdb in this version, I just specified buffer-usage-limit is\nonly in kB and thus can only be specified as an int.\n\nIf we had something like pg_parse_size() in common, would this make\nsense? It would be a little bit of work to figure out what to do about\nthe flags, etc.\n\nAnother issue is the server-side guc\n#define MAX_BAS_RING_SIZE_KB (16 * 1024 * 1024)\nI just redefined it in vacuumdb code. 
I'm not sure what the preferred\nmethod for dealing with this is.\n\nI know this validation would get done server-side if I just passed the\nuser-specified option through, but all of the other vacuumdb options\nappear to be doing min/max boundary validation on the client side.\n\nOn Wed, Mar 15, 2023 at 8:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Tue, Mar 14, 2023 at 08:56:58PM -0400, Melanie Plageman wrote:\n> > On Sat, Mar 11, 2023 at 2:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > > > @@ -586,6 +587,45 @@ GetAccessStrategy(BufferAccessStrategyType btype)\n> > > > +BufferAccessStrategy\n> > > > +GetAccessStrategyWithSize(BufferAccessStrategyType btype, int ring_size)\n> > >\n> > > Maybe it would make sense for GetAccessStrategy() to call\n> > > GetAccessStrategyWithSize(). Or maybe not.\n> >\n> > You mean instead of having anyone call GetAccessStrategyWithSize()?\n> > We would need to change the signature of GetAccessStrategy() then -- and\n> > there are quite a few callers.\n>\n> I mean to avoid code duplication, GetAccessStrategy() could \"Select ring\n> size to use\" and then call into GetAccessStrategyWithSize(). Maybe it's\n> counter to your intent or otherwise not worth it to save 8 LOC.\n\nOh, that's a cool idea. I will think on it.\n\n> > > > +#vacuum_buffer_usage_limit = -1 # size of vacuum buffer access strategy ring.\n> > > > + # -1 to use default,\n> > > > + # 0 to disable vacuum buffer access strategy and use shared buffers\n> > > > + # > 0 to specify size\n> > >\n> > > If I'm not wrong, there's still no documentation about \"ring buffers\" or\n> > > postgres' \"strategy\". Which seems important to do for this patch, along\n> > > with other documentation.\n> >\n> > Yes, it is. I have been thinking about where in the docs to add it (the\n> > docs about buffer access strategies). 
Any ideas?\n>\n> This patch could add something to the vacuum manpage and to the appendix.\n> And maybe references from the shared_buffers and pg_buffercache\n> manpages.\n\nSo, I was thinking it would be good to have some documentation in\ngeneral about Buffer Access Strategies (i.e. not just for vacuum). It\nwould have been nice to have something to reference from the pg_stat_io\ndocs that describe what buffer access strategies are.\n\n> > > And maybe a comment about adding support there, since it's annoying\n> > > when the release notes one year say \"support VACUUM (FOO)\" and then\n> > > one year later say \"support vacuumdb --foo\".\n> >\n> > I'm not sure I totally follow. Do you mean to add a comment to\n> > ExecVacuum() saying to add support to vacuumdb when adding a new option\n> > to vacuum?\n>\n> Yeah, like:\n> /* Options added here should also be added to vacuumdb.c */\n\nI've added a little something to the comment above the VacuumParams\nstruct.\n\n> > In the past, did people forget to add support to vacuumdb for new vacuum\n> > options or did they forget to document that they did that or did they\n> > forget to include that they did that in the release notes?\n>\n> The first. 
Maybe not often, it's not important whether it's in the\n> original patch, but it's odd if the vacuumdb option isn't added until\n> the following release, which then shows up as a separate \"feature\".\n\nI've squished in the code for adding the parameter to vacuumdb in a\nsingle commit with the guc and vacuum option, but I will separate it out\nafter some of the basics get sorted.\n\n- Melanie", "msg_date": "Wed, 15 Mar 2023 21:03:10 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Mar 15, 2023 at 9:03 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Sat, Mar 11, 2023 at 2:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Sat, Mar 11, 2023 at 09:55:33AM -0500, Melanie Plageman wrote:\n> > This patch should add support in vacuumdb.c. And maybe a comment about\n> > adding support there, since it's annoying when it the release notes one\n> > year say \"support VACUUM (FOO)\" and then one year later say \"support\n> > vacuumdb --foo\".\n>\n> So, v4 adds support for buffer-usage-limit to vacuumdb. There are a few\n> issues. The main one is that no other vacuumdb option takes a size as a\n> parameter. I couldn't actually find any other client with a parameter\n> specified as a size.\n>\n> My VACUUM option code is using the GUC size parsing code from\n> parse_int() -- including the unit flag GUC_UNIT_KB. Now that vacuumdb\n> also needs to parse sizes, I think we'll need to lift the parse_int()\n> code and the unit_conversion struct and\n> unit_conversion_memory_unit_conversion_table out of guc.c and put it\n> somewhere that it can be accessed for more than guc parsing (e.g. 
option\n> parsing).\n>\n> For vacuumdb in this version, I just specified buffer-usage-limit is\n> only in kB and thus can only be specified as an int.\n>\n> If we had something like pg_parse_size() in common, would this make\n> sense? It would be a little bit of work to figure out what to do about\n> the flags, etc.\n>\n> Another issue is the server-side guc\n> #define MAX_BAS_RING_SIZE_KB (16 * 1024 * 1024)\n> I just redefined it in vacuumdb code. I'm not sure what the preferred\n> method for dealing with this is.\n>\n> I know this validation would get done server-side if I just passed the\n> user-specified option through, but all of the other vacuumdb options\n> appear to be doing min/max boundary validation on the client side.\n\nSo, after discussing vacuumdb client-side validation off-list with Jelte,\nI realized that I was trying to do too much there.\n\nAttached v5 passes the contents of the buffer-usage-limit option to\nvacuumdb unvalidated into the VACUUM command string which vacuumdb\nbuilds. This solves most of the problems.\n\nI also improved the error messages coming from VACUUM\n(buffer_usage_limit) handling.\n\n- Melanie", "msg_date": "Thu, 16 Mar 2023 20:35:21 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "> Subject: [PATCH v4 3/3] add vacuum[db] option to specify ring_size and guc\n\n> + Specifies the ring buffer size to be used for a given invocation of\n> + <command>VACUUM</command> or instance of autovacuum. This size is\n> + converted to a number of shared buffers which will be reused as part of\n\nI'd say \"specifies the size of shared_buffers to be reused as ..\"\n\n> + a <literal>Buffer Access Strategy</literal>. <literal>0</literal> will\n> + disable use of a <literal>Buffer Access Strategy</literal>.\n> + <literal>-1</literal> will set the size to a default of <literal>256\n> + kB</literal>. 
The maximum ring buffer size is <literal>16 GB</literal>.\n> + Though you may set <varname>vacuum_buffer_usage_limit</varname> below\n> + <literal>128 kB</literal>, it will be clamped to <literal>128\n> + kB</literal> at runtime. The default value is <literal>-1</literal>.\n> + This parameter can be set at any time.\n\nGUC docs usually also say something like \n\"If this value is specified without units, it is taken as ..\"\n\n> + is used to calculate a number of shared buffers which will be reused as\n\n*the* number?\n\n> + <command>VACUUM</command>. The analyze stage and parallel vacuum workers\n> + do not use this size.\n\nI think what you mean is that vacuum's heap scan stage uses the\nstrategy, but the index scan/cleanup phases doesn't?\n\n> + The size in kB of the ring buffer used for vacuuming. This size is used\n> + to calculate a number of shared buffers which will be reused as part of\n\n*the* number\n\n> +++ b/doc/src/sgml/ref/vacuumdb.sgml\n\nThe docs here duplicate the sql-vacuum docs. It seems better to refer\nto the vacuum page for details, like --parallel does.\n\n\nUnrelated: it would be nice if the client-side options were documented\nseparately from the server-side options. Especially due to --jobs and\n--parallel.\n\n> +\t\t\tif (!parse_int(vac_buffer_size, &result, GUC_UNIT_KB, NULL))\n> +\t\t\t{\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> +\t\t\t\t\t\t\terrmsg(\"buffer_usage_limit for a vacuum must be between -1 and %d. 
%s is invalid.\",\n> +\t\t\t\t\t\t\t\t\tMAX_BAS_RING_SIZE_KB, vac_buffer_size),\n> +\t\t\t\t\t\t\tparser_errposition(pstate, opt->location)));\n> +\t\t\t}\n> +\n> +\t\t\t/* check for out-of-bounds */\n> +\t\t\tif (result < -1 || result > MAX_BAS_RING_SIZE_KB)\n> +\t\t\t{\n> +\t\t\t\tereport(ERROR,\n> +\t\t\t\t\t\t(errcode(ERRCODE_SYNTAX_ERROR),\n> +\t\t\t\t\t\t\terrmsg(\"buffer_usage_limit for a vacuum must be between -1 and %d\",\n> +\t\t\t\t\t\t\t\t\tMAX_BAS_RING_SIZE_KB),\n> +\t\t\t\t\t\t\tparser_errposition(pstate, opt->location)));\n> +\t\t\t}\n\nI think these checks could be collapsed into a single ereport().\n\nif !parse_int() || (result < -1 || result > MAX_BAS_RINGSIZE_KB):\n\tereport(ERROR,\n\t\terrcode(ERRCODE_SYNTAX_ERROR),\n\t\terrmsg(\"buffer_usage_limit for a vacuum must be an integer between -1 and %d\",\n\t\t\tMAX_BAS_RING_SIZE_KB),\n\nThere was a recent, similar, and unrelated suggestion here:\nhttps://www.postgresql.org/message-id/20230314.135859.260879647537075548.horikyota.ntt%40gmail.com\n\n> +#vacuum_buffer_usage_limit = -1 # size of vacuum buffer access strategy ring.\n> +\t\t\t\t# -1 to use default,\n> +\t\t\t\t# 0 to disable vacuum buffer access strategy and use shared buffers\n\nI think it's confusing to say \"and use shared buffers\", since\n\"strategies\" also use shared_buffers. It seems better to remove those 4\nwords.\n\n> @@ -550,6 +563,13 @@ vacuum_one_database(ConnParams *cparams,\n> \t\tpg_fatal(\"cannot use the \\\"%s\\\" option on server versions older than PostgreSQL %s\",\n> \t\t\t\t \"--parallel\", \"13\");\n> \n> +\t// TODO: this is a problem: if the user specifies this option with -1 in a\n> +\t// version before 16, it will not produce an error message. it also won't\n> +\t// do anything, but that still doesn't seem right.\n\nActually, that seems fine to me. If someone installs v16 vacuumdb, they\ncan run it against old servers and specify the option as -1 without it\nfailing with an error. 
I don't know if anyone will find that useful,\nbut it doesn't seem unreasonable.\n\nI still think adding something to the glossary would be good.\n\nBuffer Access Strategy:\nA circular/ring buffer used for reading or writing data pages from/to\nthe operating system. Ring buffers are used for sequential scans of\nlarge tables, VACUUM, COPY FROM, CREATE TABLE AS SELECT, ALTER TABLE,\nand CLUSTER. By using only a limited portion of >shared_buffers<, the\nring buffer avoids avoids evicting large amounts of data whenever a\nbackend performs bulk I/O operations. Use of a ring buffer also forces\nthe backend to write out its own dirty pages, rather than leaving them\nbehind to be cleaned up by other backends.\n\nIf there's a larger section added than a glossary entry, the text could\nbe promoted from src/backend/storage/buffer/README to doc/.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 18 Mar 2023 13:30:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Thanks for the review!\n\nAttached is an updated v6.\n\nv6 has some updates and corrections. It has two remaining TODOs in the\ncode: one is around what value to initialize the ring_size to in\nVacuumParams, the other is around whether or not parallel vacuum index\nworkers should in fact stick with the default buffer access strategy\nsizes.\n\nI also separated vacuumdb into its own commit.\n\nI also have addressed Justin's review feedback.\n\nOn Sat, Mar 18, 2023 at 2:30 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> > Subject: [PATCH v4 3/3] add vacuum[db] option to specify ring_size and guc\n>\n> > + Specifies the ring buffer size to be used for a given invocation of\n> > + <command>VACUUM</command> or instance of autovacuum. 
This size is\n> > + converted to a number of shared buffers which will be reused as part of\n>\n> I'd say \"specifies the size of shared_buffers to be reused as ..\"\n\nI've included \"shared_buffers\" in the description.\n\n> > + a <literal>Buffer Access Strategy</literal>. <literal>0</literal> will\n> > + disable use of a <literal>Buffer Access Strategy</literal>.\n> > + <literal>-1</literal> will set the size to a default of <literal>256\n> > + kB</literal>. The maximum ring buffer size is <literal>16 GB</literal>.\n> > + Though you may set <varname>vacuum_buffer_usage_limit</varname> below\n> > + <literal>128 kB</literal>, it will be clamped to <literal>128\n> > + kB</literal> at runtime. The default value is <literal>-1</literal>.\n> > + This parameter can be set at any time.\n>\n> GUC docs usually also say something like\n> \"If this value is specified without units, it is taken as ..\"\n\nI had updated this in v5 with slightly different wording, but I now am\nusing the wording you suggested (which does appear standard in the rest\nof the docs).\n\n>\n> > + is used to calculate a number of shared buffers which will be reused as\n>\n> *the* number?\n\nupdated.\n\n>\n> > + <command>VACUUM</command>. The analyze stage and parallel vacuum workers\n> > + do not use this size.\n>\n> I think what you mean is that vacuum's heap scan stage uses the\n> strategy, but the index scan/cleanup phases doesn't?\n\nYes, non-parallel index vacuum and cleanup will use whatever value you\nspecify but parallel workers make their own buffer access strategy\nobject. I've updated the docs to indicate that they will use the default\nsize for this.\n\n\n>\n> > + The size in kB of the ring buffer used for vacuuming. This size is used\n> > + to calculate a number of shared buffers which will be reused as part of\n>\n> *the* number\n\nfixed.\n\n> > +++ b/doc/src/sgml/ref/vacuumdb.sgml\n>\n> The docs here duplicate the sql-vacuum docs. 
It seems better to refer\n> to the vacuum page for details, like --parallel does.\n\nGood idea.\n\n>\n> Unrelated: it would be nice if the client-side options were documented\n> separately from the server-side options. Especially due to --jobs and\n> --parallel.\n\nYes, that would be helpful.\n\n> > + if (!parse_int(vac_buffer_size, &result, GUC_UNIT_KB, NULL))\n> > + {\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"buffer_usage_limit for a vacuum must be between -1 and %d. %s is invalid.\",\n> > + MAX_BAS_RING_SIZE_KB, vac_buffer_size),\n> > + parser_errposition(pstate, opt->location)));\n> > + }\n> > +\n> > + /* check for out-of-bounds */\n> > + if (result < -1 || result > MAX_BAS_RING_SIZE_KB)\n> > + {\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_SYNTAX_ERROR),\n> > + errmsg(\"buffer_usage_limit for a vacuum must be between -1 and %d\",\n> > + MAX_BAS_RING_SIZE_KB),\n> > + parser_errposition(pstate, opt->location)));\n> > + }\n>\n> I think these checks could be collapsed into a single ereport().\n>\n> if !parse_int() || (result < -1 || result > MAX_BAS_RINGSIZE_KB):\n> ereport(ERROR,\n> errcode(ERRCODE_SYNTAX_ERROR),\n> errmsg(\"buffer_usage_limit for a vacuum must be an integer between -1 and %d\",\n> MAX_BAS_RING_SIZE_KB),\n>\n> There was a recent, similar, and unrelated suggestion here:\n> https://www.postgresql.org/message-id/20230314.135859.260879647537075548.horikyota.ntt%40gmail.com\n\nSo, these have been updated/improved in v5. I still didn't combine them.\nI see what you are saying about combining them (and I checked the link\nyou shared), but in this case, having them separate allows me to provide\ninfo using the hintmsg passed to parse_int() about why it failed during\nparse_int -- which could be something not related to range. 
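A toy illustration of that two-stage approach (the function name and messages here are invented; the real code uses parse_int() and ereport()): keeping the parse step separate means the caller can surface a hint about why parsing failed, distinct from a range violation:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define MAX_BAS_RING_SIZE_KB (16 * 1024 * 1024)

/*
 * Toy two-stage check: the first stage reports *why* parsing failed
 * (the hint), the second reports a range violation.  Returns NULL on
 * success.  Names and messages are illustrative only.
 */
static const char *
check_buffer_usage_limit(const char *value, int *result)
{
	char	   *end;
	long		val = strtol(value, &end, 10);

	if (end == value)
		return "value is not a number";	/* parse-stage hint */
	if (*end != '\0' && strcmp(end, "kB") != 0)
		return "invalid unit";			/* parse-stage hint */
	if (val < -1 || val > MAX_BAS_RING_SIZE_KB)
		return "value out of range";	/* separate range error */
	*result = (int) val;
	return NULL;						/* ok */
}
```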
So, I think\nit makes sense to keep them separate.\n\n> > +#vacuum_buffer_usage_limit = -1 # size of vacuum buffer access strategy ring.\n> > + # -1 to use default,\n> > + # 0 to disable vacuum buffer access strategy and use shared buffers\n>\n> I think it's confusing to say \"and use shared buffers\", since\n> \"strategies\" also use shared_buffers. It seems better to remove those 4\n> words.\n\nGot it. I've gone ahead and done that.\n\n> > @@ -550,6 +563,13 @@ vacuum_one_database(ConnParams *cparams,\n> > pg_fatal(\"cannot use the \\\"%s\\\" option on server versions older than PostgreSQL %s\",\n> > \"--parallel\", \"13\");\n> >\n> > + // TODO: this is a problem: if the user specifies this option with -1 in a\n> > + // version before 16, it will not produce an error message. it also won't\n> > + // do anything, but that still doesn't seem right.\n>\n> Actually, that seems fine to me. If someone installs v16 vacuumdb, they\n> can run it against old servers and specify the option as -1 without it\n> failing with an error. I don't know if anyone will find that useful,\n> but it doesn't seem unreasonable.\n\nI sort of skirted around this by removing any validation from vacuumdb\n(present in v5 and still the case in v6). Now, the parameter is a string\nand I check if it is non-NULL when the version is < 16. However, this\nwill no longer have the property that someone can use v16 vacuumdb and\npass buffer-usage-limit and have it not fail. I think that is okay,\nthough, since they might be confused thinking it was doing something.\n\n> I still think adding something to the glossary would be good.\n>\n> Buffer Access Strategy:\n> A circular/ring buffer used for reading or writing data pages from/to\n> the operating system. Ring buffers are used for sequential scans of\n> large tables, VACUUM, COPY FROM, CREATE TABLE AS SELECT, ALTER TABLE,\n> and CLUSTER. 
By using only a limited portion of >shared_buffers<, the\n> ring buffer avoids evicting large amounts of data whenever a\n> backend performs bulk I/O operations. Use of a ring buffer also forces\n> the backend to write out its own dirty pages, rather than leaving them\n> behind to be cleaned up by other backends.\n\nYes, I have taken some ideas from here and added a separate commit\nbefore all the others adding Buffer Access Strategy to the\ndocumentation.\n\n> If there's a larger section added than a glossary entry, the text could\n> be promoted from src/backend/storage/buffer/README to doc/.\n\nThis is a good idea. I think we provided enough information in the\nglossary (as far as users would care) if it weren't for the new\nbuffer_usage_limit guc, which probably merits more explanation about how\nit interacts with buffer access strategies. Since it is only used for\nvacuum now, do you think such a thing would belong in VACUUM-related\ndocumentation? Like somewhere in [1]?\n\nOn Wed, Mar 15, 2023 at 9:03 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> On Wed, Mar 15, 2023 at 8:14 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > On Tue, Mar 14, 2023 at 08:56:58PM -0400, Melanie Plageman wrote:\n> > > On Sat, Mar 11, 2023 at 2:16 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> >\n> > > > > @@ -586,6 +587,45 @@ GetAccessStrategy(BufferAccessStrategyType btype)\n> > > > > +BufferAccessStrategy\n> > > > > +GetAccessStrategyWithSize(BufferAccessStrategyType btype, int ring_size)\n> > > >\n> > > > Maybe it would make sense for GetAccessStrategy() to call\n> > > > GetAccessStrategyWithSize(). 
Or maybe not.\n> > >\n> > > You mean instead of having anyone call GetAccessStrategyWithSize()?\n> > > We would need to change the signature of GetAccessStrategy() then -- and\n> > > there are quite a few callers.\n> >\n> > I mean to avoid code duplication, GetAccessStrategy() could \"Select ring\n> > size to use\" and then call into GetAccessStrategyWithSize(). Maybe it's\n> > counter to your intent or otherwise not worth it to save 8 LOC.\n>\n> Oh, that's a cool idea. I will think on it.\n\nSo, I thought about doing a version of this by adding a helper which did\nthe allocation of the BufferAccessStrategy object given a number of\nbuffers that could be called by both GetAccessStrategy() and\nGetAccessStrategyWithSize(). I decided not to because I wanted to emit a\ndebug message if the size of the ring was clamped lower or higher than\nthe user would expect -- but only do this in GetAccessStrategyWithSize()\nsince there is no user expectation in the use of GetAccessStrategy().\n\n- Melanie\n\n[1] https://www.postgresql.org/docs/devel/routine-vacuuming.html", "msg_date": "Sun, 19 Mar 2023 18:50:16 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Mar 15, 2023 at 6:46 AM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Wed, 15 Mar 2023 at 02:29, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > As for routine vacuuming and the other buffer access strategies, I think\n> > there is an argument for configurability based on operator knowledge --\n> > perhaps your workload will use the data you are COPYing as soon as the\n> > COPY finishes, so you might as well disable a buffer access strategy or\n> > use a larger fraction of shared buffers. 
Also, the ring sizes were\n> > selected sixteen years ago and average server memory and data set sizes\n> > have changed.\n>\n> To be clear I'm not at all arguing against configurability. I was\n> thinking that dynamic use could make the configuration simpler by self\n> tuning to use no more buffers than is useful.\n\nYes, but I am struggling with how we would define \"useful\".\n\n> > StrategyRejectBuffer() will allow bulkreads to, as you say, use more\n> > buffers than the original ring size, since it allows them to kick\n> > dirty buffers out of the ring and claim new shared buffers.\n> >\n> > Bulkwrites and vacuums, however, will inevitably dirty buffers and\n> > require flushing the buffer (and thus flushing the associated WAL) when\n> > reusing them. Bulkwrites and vacuum do not kick dirtied buffers out of\n> > the ring, since dirtying buffers is their common case. A dynamic\n> > resizing like the one you suggest would likely devolve to vacuum and\n> > bulkwrite strategies always using the max size.\n>\n> I think it should self stabilize around the point where the WAL is\n> either flushed by other commit activity, WAL writer or WAL buffers\n> filling up. Writing out their own dirtied buffers will still happen,\n> just the associated WAL flushes will be in larger chunks and possibly\n> done by other processes.\n\nThey will have to write out any WAL associated with modifications to the\ndirty buffer before flushing it, so I'm not sure I understand how this\nwould work.\n\n> > As for decreasing the ring size, buffers are only \"added\" to the ring\n> > lazily and, technically, as it is now, buffers which have been added\n> > added to the ring can always be reclaimed by the clocksweep (as long as\n> > they are not pinned). The buffer access strategy is more of a\n> > self-imposed restriction than it is a reservation. 
Since the ring is\n> > small and the buffers are being frequently reused, odds are the usage\n> > count will be 1 and we will be the one who set it to 1, but there is no\n> > guarantee. If, when attempting to reuse the buffer, its usage count is\n> > > 1 (or it is pinned), we also will kick it out of the ring and go look\n> > for a replacement buffer.\n>\n> Right, but while the buffer is actively used by the ring it is\n> unlikely that clocksweep will find it at usage 0 as the ring buffer\n> should cycle more often than the clocksweep. Whereas if the ring stops\n> using a buffer, clocksweep will eventually come and reclaim it. And if\n> the ring shrinking decision turns out to be wrong before the\n> clocksweep gets around to reusing it, we can bring the same buffer\n> back into the ring.\n\nI can see what you mean about excluding a buffer from the ring being a\nmore effective way of allowing it to be reclaimed. However, I'm not sure\nI understand the use case. If the operation, say vacuum, is actively\nusing the buffer and keeping its usage count at one, then what would be\nthe criteria for it to decide to stop using it?\n\nAlso, if vacuum used the buffer once and then didn't reuse it but, for\nsome reason, the vacuum isn't over, it isn't any different at that point\nthan some other buffer with a usage count of one. It isn't any harder\nfor it to be reclaimed by the clocksweep.\n\nThe argument I could see for decreasing the size even when the buffers\nare being used by the operation employing the strategy is if there is\npressure from other workloads to use those buffers. But, designing a\nsystem that would reclaim buffers when needed by other workloads is more\ncomplicated than what is being proposed here.\n\n> > I do think that it is a bit unreasonable to expect users to know how\n> > large they would like to make their buffer access strategy ring. What we\n> > want is some way of balancing different kinds of workloads and\n> > maintenance tasks reasonably. 
If your database has no activity because\n> > it is the middle of the night or it was shutdown because of transaction\n> > id wraparound, there is no reason why vacuum should limit the number of\n> > buffers it uses. I'm sure there are many other such examples.\n>\n> Ideally yes, though I am not hopeful of finding a solution that does\n> this any time soon. Just to take your example, if a nightly\n> maintenance job wipes out the shared buffer contents slightly\n> optimizing its non time-critical work and then causes morning user\n> visible load to have big latency spikes due to cache misses, that's\n> not a good tradeoff either.\n\nYes, that is a valid concern.\n\n- Melanie\n\n\n", "msg_date": "Sun, 19 Mar 2023 18:59:12 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Mon, 20 Mar 2023 at 00:59, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Wed, Mar 15, 2023 at 6:46 AM Ants Aasma <ants@cybertec.at> wrote:\n> >\n> > On Wed, 15 Mar 2023 at 02:29, Melanie Plageman\n> > <melanieplageman@gmail.com> wrote:\n> > > As for routine vacuuming and the other buffer access strategies, I think\n> > > there is an argument for configurability based on operator knowledge --\n> > > perhaps your workload will use the data you are COPYing as soon as the\n> > > COPY finishes, so you might as well disable a buffer access strategy or\n> > > use a larger fraction of shared buffers. Also, the ring sizes were\n> > > selected sixteen years ago and average server memory and data set sizes\n> > > have changed.\n> >\n> > To be clear I'm not at all arguing against configurability. 
I was\n> > thinking that dynamic use could make the configuration simpler by self\n> > tuning to use no more buffers than is useful.\n>\n> Yes, but I am struggling with how we would define \"useful\".\n\nFor copy and vacuum, the only reason I can see for keeping visited\nbuffers around is to avoid flushing WAL or at least doing it in larger\nbatches. Once the ring is big enough that WAL doesn't need to be\nflushed on eviction, making it bigger only wastes space that could be\nused by something that is not going to be evicted soon.\n\n> > > StrategyRejectBuffer() will allow bulkreads to, as you say, use more\n> > > buffers than the original ring size, since it allows them to kick\n> > > dirty buffers out of the ring and claim new shared buffers.\n> > >\n> > > Bulkwrites and vacuums, however, will inevitably dirty buffers and\n> > > require flushing the buffer (and thus flushing the associated WAL) when\n> > > reusing them. Bulkwrites and vacuum do not kick dirtied buffers out of\n> > > the ring, since dirtying buffers is their common case. A dynamic\n> > > resizing like the one you suggest would likely devolve to vacuum and\n> > > bulkwrite strategies always using the max size.\n> >\n> > I think it should self stabilize around the point where the WAL is\n> > either flushed by other commit activity, WAL writer or WAL buffers\n> > filling up. Writing out their own dirtied buffers will still happen,\n> > just the associated WAL flushes will be in larger chunks and possibly\n> > done by other processes.\n>\n> They will have to write out any WAL associated with modifications to the\n> dirty buffer before flushing it, so I'm not sure I understand how this\n> would work.\n\nBy the time the dirty buffer needs eviction the WAL associated with it\ncan already be written out by concurrent commits, WAL writer or by WAL\nbuffers filling up. 
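A toy model of that self-stabilizing idea (the policy and numbers are invented purely for illustration): an eviction only forces a WAL flush while one loop around the ring completes faster than concurrent activity flushes WAL anyway, so the ring only needs to grow until those two rates meet:

```c
#include <assert.h>

/*
 * Toy model: given that the ring advances one buffer per
 * "time_per_buffer" units and concurrent activity (commits, WAL
 * writer, WAL buffers filling up) flushes WAL every
 * "concurrent_flush_interval" units, return the ring size at which a
 * full loop no longer outpaces the concurrent flushes.  Invented for
 * illustration only -- not a proposed implementation.
 */
static int
stable_ring_size(int time_per_buffer, int concurrent_flush_interval)
{
	int			ring = 1;

	/* grow while a full loop still beats the concurrent flush interval */
	while (ring * time_per_buffer < concurrent_flush_interval)
		ring++;
	return ring;
}
```

Growing past that point buys nothing in this model, which matches the argument that extra ring space is then better left to the rest of shared_buffers.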
The bigger the ring is, the higher the chance that\none of these will happen before we loop around.\n\n> > > As for decreasing the ring size, buffers are only \"added\" to the ring\n> > > lazily and, technically, as it is now, buffers which have been added\n> > > added to the ring can always be reclaimed by the clocksweep (as long as\n> > > they are not pinned). The buffer access strategy is more of a\n> > > self-imposed restriction than it is a reservation. Since the ring is\n> > > small and the buffers are being frequently reused, odds are the usage\n> > > count will be 1 and we will be the one who set it to 1, but there is no\n> > > guarantee. If, when attempting to reuse the buffer, its usage count is\n> > > > 1 (or it is pinned), we also will kick it out of the ring and go look\n> > > for a replacement buffer.\n> >\n> > Right, but while the buffer is actively used by the ring it is\n> > unlikely that clocksweep will find it at usage 0 as the ring buffer\n> > should cycle more often than the clocksweep. Whereas if the ring stops\n> > using a buffer, clocksweep will eventually come and reclaim it. And if\n> > the ring shrinking decision turns out to be wrong before the\n> > clocksweep gets around to reusing it, we can bring the same buffer\n> > back into the ring.\n>\n> I can see what you mean about excluding a buffer from the ring being a\n> more effective way of allowing it to be reclaimed. However, I'm not sure\n> I understand the use case. If the operation, say vacuum, is actively\n> using the buffer and keeping its usage count at one, then what would be\n> the criteria for it to decide to stop using it?\n\nThe criteria for reducing ring size could be that we have cycled the\nring buffer n times without having to do any WAL flushes.\n\n> Also, if vacuum used the buffer once and then didn't reuse it but, for\n> some reason, the vacuum isn't over, it isn't any different at that point\n> than some other buffer with a usage count of one. 
It isn't any harder\n> for it to be reclaimed by the clocksweep.\n>\n> The argument I could see for decreasing the size even when the buffers\n> are being used by the operation employing the strategy is if there is\n> pressure from other workloads to use those buffers. But, designing a\n> system that would reclaim buffers when needed by other workloads is more\n> complicated than what is being proposed here.\n\nI don't think any specific reclaim is needed, if the ring stops using\na buffer *and* there is pressure from other workloads the buffer will\nget used for other stuff by the normal clocksweep. If the ring keeps\nusing it then the normal clocksweep is highly unlikely to find it with\nusage count 0. If there is no concurrent allocation pressure, the ring\ncan start using it again if that turns out to be necessary (probably\nshould still check that it hasn't been reused by someone else).\n--\n\nAnts Aasma\nSenior Database Engineer\nwww.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 21 Mar 2023 12:03:21 +0200", "msg_from": "Ants Aasma <ants@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Tue, Mar 21, 2023 at 6:03 AM Ants Aasma <ants@cybertec.at> wrote:\n>\n> On Mon, 20 Mar 2023 at 00:59, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Wed, Mar 15, 2023 at 6:46 AM Ants Aasma <ants@cybertec.at> wrote:\n> > >\n> > > On Wed, 15 Mar 2023 at 02:29, Melanie Plageman\n> > > <melanieplageman@gmail.com> wrote:\n> > > > As for routine vacuuming and the other buffer access strategies, I think\n> > > > there is an argument for configurability based on operator knowledge --\n> > > > perhaps your workload will use the data you are COPYing as soon as the\n> > > > COPY finishes, so you might as well disable a buffer access strategy or\n> > > > use a larger fraction of shared buffers. 
Also, the ring sizes were\n> > > > selected sixteen years ago and average server memory and data set sizes\n> > > > have changed.\n> > >\n> > > To be clear I'm not at all arguing against configurability. I was\n> > > thinking that dynamic use could make the configuration simpler by self\n> > > tuning to use no more buffers than is useful.\n> >\n> > Yes, but I am struggling with how we would define \"useful\".\n>\n> For copy and vacuum, the only reason I can see for keeping visited\n> buffers around is to avoid flushing WAL or at least doing it in larger\n> batches. Once the ring is big enough that WAL doesn't need to be\n> flushed on eviction, making it bigger only wastes space that could be\n> used by something that is not going to be evicted soon.\n\nWell, I think if you know you will use the data you are COPYing right\naway in your normal workload, it could be useful to have the ring be\nlarge or to disable use of the ring. And, for vacuum, if you need to get\nit done as quickly as possible, again, it could be useful to have the\nring be large or disable use of the ring.\n\n> > > > StrategyRejectBuffer() will allow bulkreads to, as you say, use more\n> > > > buffers than the original ring size, since it allows them to kick\n> > > > dirty buffers out of the ring and claim new shared buffers.\n> > > >\n> > > > Bulkwrites and vacuums, however, will inevitably dirty buffers and\n> > > > require flushing the buffer (and thus flushing the associated WAL) when\n> > > > reusing them. Bulkwrites and vacuum do not kick dirtied buffers out of\n> > > > the ring, since dirtying buffers is their common case. A dynamic\n> > > > resizing like the one you suggest would likely devolve to vacuum and\n> > > > bulkwrite strategies always using the max size.\n> > >\n> > > I think it should self stabilize around the point where the WAL is\n> > > either flushed by other commit activity, WAL writer or WAL buffers\n> > > filling up. 
Writing out their own dirtied buffers will still happen,\n> > > just the associated WAL flushes will be in larger chunks and possibly\n> > > done by other processes.\n> >\n> > They will have to write out any WAL associated with modifications to the\n> > dirty buffer before flushing it, so I'm not sure I understand how this\n> > would work.\n>\n> By the time the dirty buffer needs eviction the WAL associated with it\n> can already be written out by concurrent commits, WAL writer or by WAL\n> buffers filling up. The bigger the ring is, the higher the chance that\n> one of these will happen before we loop around.\n\nAh, I think I understand the idea now. So, I think it is an interesting\nidea to try and find the goldilocks size for the ring buffer. It is\nespecially interesting to me in the case in which we are enlarging the\nring.\n\nHowever, given that concurrent workload variability, machine I/O latency\nfluctuations, etc, we will definitely have to set a max value of some\nkind anyway for the ring size. So, this seems more like a complimentary\nfeature to vacuum_buffer_usage_limit. If we added some kind of adaptive\nsizing in a later version, we could emphasize in the guidance for\nsetting vacuum_buffer_usage_limit that it is the *maximum* size you\nwould like to allow vacuum to use. And, of course, there are the other\noperations which use buffer access strategies.\n\n> > > > As for decreasing the ring size, buffers are only \"added\" to the ring\n> > > > lazily and, technically, as it is now, buffers which have been added\n> > > > added to the ring can always be reclaimed by the clocksweep (as long as\n> > > > they are not pinned). The buffer access strategy is more of a\n> > > > self-imposed restriction than it is a reservation. Since the ring is\n> > > > small and the buffers are being frequently reused, odds are the usage\n> > > > count will be 1 and we will be the one who set it to 1, but there is no\n> > > > guarantee. 
If, when attempting to reuse the buffer, its usage count is\n> > > > > 1 (or it is pinned), we also will kick it out of the ring and go look\n> > > > for a replacement buffer.\n> > >\n> > > Right, but while the buffer is actively used by the ring it is\n> > > unlikely that clocksweep will find it at usage 0 as the ring buffer\n> > > should cycle more often than the clocksweep. Whereas if the ring stops\n> > > using a buffer, clocksweep will eventually come and reclaim it. And if\n> > > the ring shrinking decision turns out to be wrong before the\n> > > clocksweep gets around to reusing it, we can bring the same buffer\n> > > back into the ring.\n> >\n> > I can see what you mean about excluding a buffer from the ring being a\n> > more effective way of allowing it to be reclaimed. However, I'm not sure\n> > I understand the use case. If the operation, say vacuum, is actively\n> > using the buffer and keeping its usage count at one, then what would be\n> > the criteria for it to decide to stop using it?\n>\n> The criteria for reducing ring size could be that we have cycled the\n> ring buffer n times without having to do any WAL flushes.\n>\n> > Also, if vacuum used the buffer once and then didn't reuse it but, for\n> > some reason, the vacuum isn't over, it isn't any different at that point\n> > than some other buffer with a usage count of one. It isn't any harder\n> > for it to be reclaimed by the clocksweep.\n> >\n> > The argument I could see for decreasing the size even when the buffers\n> > are being used by the operation employing the strategy is if there is\n> > pressure from other workloads to use those buffers. But, designing a\n> > system that would reclaim buffers when needed by other workloads is more\n> > complicated than what is being proposed here.\n>\n> I don't think any specific reclaim is needed, if the ring stops using\n> a buffer *and* there is pressure from other workloads the buffer will\n> get used for other stuff by the normal clocksweep. 
If the ring keeps\n> using it then the normal clocksweep is highly unlikely to find it with\n> usage count 0. If there is no concurrent allocation pressure, the ring\n> can start using it again if that turns out to be necessary (probably\n> should still check that it hasn't been reused by someone else).\n\nYes, you don't need a specific reclaim mechanism. But you would want to\nbe quite conservative about decreasing the ring size (given workload\nvariation and machine variations such as bursting in the cloud) and\nprobably not do so simply because the operation using the strategy\ndoesn't absolutely need the buffer but also because other concurrent\nworkloads really need the buffer. And, it seems complicated to determine\nif other workloads do need the buffer.\n\n- Melanie\n\n\n", "msg_date": "Sat, 25 Mar 2023 17:18:08 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Mon, 20 Mar 2023 at 11:50, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Attached is an updated v6.\n\nI had a look over the v6-0001 patch. There are a few things I think\ncould be done better:\n\n\"Some operations will access a large number of pages at a time\", does\nthis really need \"at a time\"? I think it's more relevant that the\noperation uses a large number of pages.\n\nMissing <firstterm> around Buffer Access Strategy.\n\nVarious things could be linked to other sections of the glossary, e.g.\npages could link to glossary-data-page, shared buffers could link to\nglossary-shared-memory and WAL could link to glossary-wal.\n\nThe final paragraph should have <command> tags around the various\ncommands that you list.\n\nI have adjusted those and slightly reworded a few other things. 
See\nthe attached .diff which can be applied atop of v6-0001.\n\nDavid", "msg_date": "Fri, 31 Mar 2023 16:54:31 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Thu, Mar 30, 2023 at 11:54 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Mon, 20 Mar 2023 at 11:50, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> > Attached is an updated v6.\n>\n> I had a look over the v6-0001 patch. There are a few things I think\n> could be done better:\n>\n> \"Some operations will access a large number of pages at a time\", does\n> this really need \"at a time\"? I think it's more relevant that the\n> operation uses a large number of pages.\n>\n> Missing <firstterm> around Buffer Access Strategy.\n>\n> Various things could be linked to other sections of the glossary, e.g.\n> pages could link to glossary-data-page, shared buffers could link to\n> glossary-shared-memory and WAL could link to glossary-wal.\n>\n> The final paragraph should have <command> tags around the various\n> commands that you list.\n>\n> I have adjusted those and slightly reworded a few other things. See\n> the attached .diff which can be applied atop of v6-0001.\n\nThere was one small typo keeping this from compiling. Also a repeated\nword. I've fixed these. I also edited a bit of indentation and tweaked\nsome wording. Diff attached (to be applied on top of your diff).\n\n- Melanie", "msg_date": "Fri, 31 Mar 2023 09:52:08 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, 1 Apr 2023 at 02:52, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> There was one small typo keeping this from compiling. Also a repeated\n> word. I've fixed these. 
I also edited a bit of indentation and tweaked\n> some wording. Diff attached (to be applied on top of your diff).\n\nThanks for fixing that mistake.\n\nFor reference, I had changed things to end lines early so that the\nglossterm tags could be on a line of their own without breaking to a\nnew line. The rest of the file seems to be done that way, so I thought\nwe'd better stick to it.\n\nI swapped out \"associated WAL\" for \"unflushed WAL\". I didn't agree\nthat the WAL that would be flushed would have any particular\nassociation with the to-be-written page.\n\nI dropped CTAS since I didn't see any other mention in the docs about\nthat. I could maybe see the sense in making reference to the\nabbreviated form if we were going to mention it again and didn't want\nto spell the whole thing out each time, but that's not the case here.\n\nI pushed the result.\n\nDavid\n\n\n", "msg_date": "Sat, 1 Apr 2023 10:47:23 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Fri, Mar 31, 2023 at 5:47 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 1 Apr 2023 at 02:52, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > There was one small typo keeping this from compiling. Also a repeated\n> > word. I've fixed these. I also edited a bit of indentation and tweaked\n> > some wording. Diff attached (to be applied on top of your diff).\n>\n> Thanks for fixing that mistake.\n>\n> For reference, I had changed things to end lines early so that the\n> glossterm tags could be on a line of their own without breaking to a\n> new line. The rest of the file seems to be done that way, so I thought\n> we'd better stick to it.\n>\n> I swapped out \"associated WAL\" for \"unflushed WAL\". 
I didn't agree\n> that the WAL that would be flushed would have any particular\n> association with the to-be-written page.\n>\n> I dropped CTAS since I didn't see any other mention in the docs about\n> that. I could maybe see the sense in making reference to the\n> abbreviated form if we were going to mention it again and didn't want\n> to spell the whole thing out each time, but that's not the case here.\n>\n> I pushed the result.\n\nCool!\n\nI've attached v7 with that commit dropped and with support for parallel\nvacuum workers to use the same number of buffers in their own Buffer\nAccess Strategy ring as the main vacuum phase did. I also updated the\ndocs to indicate that vacuum_buffer_usage_limit is per backend (not per\ninstance of VACUUM).\n\n- Melanie", "msg_date": "Fri, 31 Mar 2023 19:57:36 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, 1 Apr 2023 at 12:57, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> I've attached v7 with that commit dropped and with support for parallel\n> vacuum workers to use the same number of buffers in their own Buffer\n> Access Strategy ring as the main vacuum phase did. I also updated the\n> docs to indicate that vacuum_buffer_usage_limit is per backend (not per\n> instance of VACUUM).\n\n(was just replying about v6-0002 when this came in. Replying here instead)\n\nFor v7-0001, can we just get rid of both of those static globals? I'm\ngobsmacked by the existing \"A few variables that don't seem worth\npassing around as parameters\" comment. 
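The sizing arithmetic each worker applies is trivial; here is a standalone sketch of it (the helper name is mine, not the patch's, and it assumes the default 8kB block size):

```c
#include <assert.h>

#define BLCKSZ 8192             /* assumed default PostgreSQL block size */

/*
 * Hypothetical helper, not taken from the patch: translate a
 * buffer-usage budget given in kB into a whole number of ring buffers.
 * Each parallel vacuum worker applying the same kB budget therefore
 * ends up with a ring of the same size as the leader's.
 */
static int
ring_buffers_from_kb(int budget_kb)
{
    return (int) (((long) budget_kb * 1024) / BLCKSZ);
}
```

With the 256kB default that works out to the familiar 32-buffer ring.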
Not wanting to pass parameters\naround is a horrible excuse for adding global variables, even static\nones.\n\nAttached is what I propose in .diff form so that the CFbot can run on\nyour v7 patches without picking this up.\n\nI considered if we could switch memory contexts before calling\nexpand_vacuum_rel() and get_all_vacuum_rels(), but I see, at least in\nthe case of expand_vacuum_rel() that we'd probably want to list_free()\nthe output of find_all_inheritors() to save that from leaking into the\nvac_context. It seems safe just to switch into the vac_context only\nwhen we really want to keep that memory around. (I do think switching\nin each iteration of the foreach(part_lc, part_oids) loop is\nexcessive, however. Just not enough for me to want to change it)\n\nDavid", "msg_date": "Sat, 1 Apr 2023 13:05:19 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, Apr 01, 2023 at 01:05:19PM +1300, David Rowley wrote:\n> Attached is what I propose in .diff form so that the CFbot can run on\n> your v7 patches without picking this up.\n\nBut it processes .diff, too\n\nhttps://wiki.postgresql.org/wiki/Cfbot#Which_attachments_are_considered_to_be_patches.3F\n\n\n", "msg_date": "Fri, 31 Mar 2023 19:13:38 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Fri, Mar 31, 2023 at 8:05 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 1 Apr 2023 at 12:57, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > I've attached v7 with that commit dropped and with support for parallel\n> > vacuum workers to use the same number of buffers in their own Buffer\n> > Access Strategy ring as the main vacuum phase did. 
I also updated the\n> > docs to indicate that vacuum_buffer_usage_limit is per backend (not per\n> > instance of VACUUM).\n>\n> (was just replying about v6-0002 when this came in. Replying here instead)\n>\n> For v7-0001, can we just get rid of both of those static globals? I'm\n> gobsmacked by the existing \"A few variables that don't seem worth\n> passing around as parameters\" comment. Not wanting to pass parameters\n> around is a horrible excuse for adding global variables, even static\n> ones.\n\nMakes sense to me.\n\n> Attached is what I propose in .diff form so that the CFbot can run on\n> your v7 patches without picking this up.\n\nYour diff LGTM.\n\nEarlier upthread in [1], Bharath had mentioned in a review comment about\nremoving the global variables that he would have expected the analogous\nglobal in analyze.c to also be removed (vac_strategy [and analyze.c also\nhas anl_context]).\n\nI looked into doing this, and this is what I found out (see full\nrationale in [2]):\n\n> it is a bit harder to remove it from analyze because acquire_func\n> doesn't take the buffer access strategy as a parameter and\n> acquire_sample_rows uses the vac_context global variable to pass to\n> table_scan_analyze_next_block().\n\nI don't know if this is worth mentioning in the commit removing the\nother globals? Maybe it will just make it more confusing...\n\n> I considered if we could switch memory contexts before calling\n> expand_vacuum_rel() and get_all_vacuum_rels(), but I see, at least in\n> the case of expand_vacuum_rel() that we'd probably want to list_free()\n> the output of find_all_inheritors() to save that from leaking into the\n> vac_context. It seems safe just to switch into the vac_context only\n> when we really want to keep that memory around. (I do think switching\n> in each iteration of the foreach(part_lc, part_oids) loop is\n> excessive, however. Just not enough for me to want to change it)\n\nYes, I see what you mean. 
Your decision makes sense to me.\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CALj2ACXKgAQpKsCPi6ox%2BK5JLDB9TAxeObyVOfrmgTjqmc0aAA%40mail.gmail.com\n[2] https://www.postgresql.org/message-id/CAAKRu_brtqmd4e7kwEeKjySP22y4ywF32M7pvpi%2Bx5txgF0%2Big%40mail.gmail.com\n\n\n", "msg_date": "Fri, 31 Mar 2023 20:24:35 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nI was just doing some cleanup on the main patch in this set and realized\nthat it was missing a few things. One of which is forbidding the\nBUFFER_USAGE_LIMIT with VACUUM FULL since VACUUM FULL does not use a\nBAS_VACUUM strategy.\n\nVACUUM FULL technically uses a bulkread buffer access strategy for\nreading the original relation if its number of blocks is > number of\nshared buffers / 4 (see initscan()). The new rel writing is done using\nsmgrextend/write directly and doesn't go through shared buffers. I\nthink it is a stretch to try and use the size passed in to VACUUM by\nBUFFER_USAGE_LIMIT for the bulkread strategy ring.\n\nAs for forbidding the combination, I noticed that when VACUUM FULL is\nspecified with INDEX_CLEANUP OFF, there is no syntax error but the\nINDEX_CLEANUP option is simply ignored. 
This is documented behavior.\n\nI somehow feel like VACUUM (FULL, BUFFER_USAGE_LIMIT 'x') should error\nout instead of silently not using the buffer usage limit, though.\n\nI am looking for others' opinions.\n\n- Melanie\n\n\n", "msg_date": "Sat, 1 Apr 2023 13:29:13 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, Apr 01, 2023 at 01:29:13PM -0400, Melanie Plageman wrote:\n> Hi,\n> \n> I was just doing some cleanup on the main patch in this set and realized\n> that it was missing a few things. One of which is forbidding the\n> BUFFER_USAGE_LIMIT with VACUUM FULL since VACUUM FULL does not use a\n> BAS_VACUUM strategy.\n> \n> VACUUM FULL technically uses a bulkread buffer access strategy for\n> reading the original relation if its number of blocks is > number of\n> shared buffers / 4 (see initscan()). The new rel writing is done using\n> smgrextend/write directly and doesn't go through shared buffers. I\n> think it is a stretch to try and use the size passed in to VACUUM by\n> BUFFER_USAGE_LIMIT for the bulkread strategy ring.\n\nWhen you say that it's a stretch, do you mean that it'd be a pain to add\narguments to handful of functions to pass down the setting ? Or that\nit's unclear if doing so would be the desirable/needed/intended/expected\nbehavior ?\n\nI think if VACUUM FULL were going to allow a configurable strategy size,\nthen so should CLUSTER. But it seems fine if they don't.\n\nI wonder if maybe strategy should be configurable in some more generic\nway, like a GUC. At one point I had a patch to allow INSERT to use\nstrategy buffers (not just INSERT SELECT). And that's still pretty\ndesirable. Also COPY. I've seen load spikes caused by pg_dumping\ntables which are just below 25% of shared_buffers. 
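To put a rough number on that, here is a toy model of the eviction (purely illustrative, nothing from PostgreSQL itself): a sequential scan of N distinct pages can displace up to N cache slots when it uses the shared cache freely, but only ring-size slots when it recycles a private ring.

```c
#include <assert.h>

/*
 * Toy model, not PostgreSQL code: upper bound on the number of distinct
 * cache slots a sequential scan displaces.  With no ring (ring_size 0)
 * the scan can churn through as many slots as it reads pages; with a
 * ring it keeps recycling the same few buffers.
 */
static int
slots_touched(int cache_slots, int scan_pages, int ring_size)
{
    int limit = (ring_size > 0) ? ring_size : cache_slots;

    return (scan_pages < limit) ? scan_pages : limit;
}
```

So a dump of one such table without a ring can turn over a fifth or more of the cache, while the same scan through a small ring barely touches it.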
Which is exacerbated\nbecause pg_dump deliberately orders tables by size, so those tables are\ndumped one after another, each causing eviction of ~20% of shared\nbuffers. And exacerbated some more because TOAST don't seem to use a\nring buffer in that case.\n\n> I somehow feel like VACUUM (FULL, BUFFER_USAGE_LIMIT 'x') should error\n> out instead of silently not using the buffer usage limit, though.\n> \n> I am looking for others' opinions.\n\nSorry, no opinion here :)\n\nOne thing is that it's fine to take something that previously throw an\nerror and change it to not throw an error anymore. But it's undesirable\nto do the opposite. For that reason, there's may be a tendency to add\nerrors for cases like this.\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 1 Apr 2023 12:57:23 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, Apr 1, 2023 at 1:57 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Sat, Apr 01, 2023 at 01:29:13PM -0400, Melanie Plageman wrote:\n> > Hi,\n> >\n> > I was just doing some cleanup on the main patch in this set and realized\n> > that it was missing a few things. One of which is forbidding the\n> > BUFFER_USAGE_LIMIT with VACUUM FULL since VACUUM FULL does not use a\n> > BAS_VACUUM strategy.\n> >\n> > VACUUM FULL technically uses a bulkread buffer access strategy for\n> > reading the original relation if its number of blocks is > number of\n> > shared buffers / 4 (see initscan()). The new rel writing is done using\n> > smgrextend/write directly and doesn't go through shared buffers. I\n> > think it is a stretch to try and use the size passed in to VACUUM by\n> > BUFFER_USAGE_LIMIT for the bulkread strategy ring.\n>\n> When you say that it's a stretch, do you mean that it'd be a pain to add\n> arguments to handful of functions to pass down the setting ? 
Or that\n> it's unclear if doing so would be the desirable/needed/intended/expected\n> behavior ?\n\nMore that I don't think it makes sense. VACUUM FULL only uses a buffer\naccess strategy (BAS_BULKREAD) for reading the original relation in and\nnot for writing the new one. It has different concerns because its\nbehavior is totally different from regular vacuum. It is not modifying\nthe original buffers (AFAIK) and the amount of WAL it is generating is\ndifferent. Also, no matter what, the new relation won't be in shared\nbuffers because of VACUUM FULL using the smgr functions directly. So, I\nthink that allowing the two options together is confusing for the user\nbecause it seems to imply we can give them some benefit that we cannot.\n\n> I wonder if maybe strategy should be configurable in some more generic\n> way, like a GUC. At one point I had a patch to allow INSERT to use\n> strategy buffers (not just INSERT SELECT). And that's still pretty\n> desirable. Also COPY. I've seen load spikes caused by pg_dumping\n> tables which are just below 25% of shared_buffers. Which is exacerbated\n> because pg_dump deliberately orders tables by size, so those tables are\n> dumped one after another, each causing eviction of ~20% of shared\n> buffers. And exacerbated some more because TOAST don't seem to use a\n> ring buffer in that case.\n\nYes, it is probably worth exploring how configurable or dynamic Buffer\nAccess Strategies should be for other users (e.g. 
not just VACUUM).\nHowever, since the ring sizes wouldn't be the same for all the different\noperations, it is probably easier to start with a single kind of\noperation and go from there.\n\n> > I somehow feel like VACUUM (FULL, BUFFER_USAGE_LIMIT 'x') should error\n> > out instead of silently not using the buffer usage limit, though.\n> >\n> > I am looking for others' opinions.\n>\n> Sorry, no opinion here :)\n>\n> One thing is that it's fine to take something that previously throw an\n> error and change it to not throw an error anymore. But it's undesirable\n> to do the opposite. For that reason, there's may be a tendency to add\n> errors for cases like this.\n\nSo, I have made it error out when you specify BUFFER_USAGE_LIMIT with\nVACUUM FULL or VACUUM ONLY_DATABASE_STATS. However, if you specify\nbuffer_usage_limit -1 with either of these options, it will not error\nout. I don't love this, but I noticed that VACUUM (FULL, PARALLEL 0)\ndoes not error out, while VACUUM (FULL, PARALLEL X) where X > 0 does.\n\nIf I want to error out when BUFFER_USAGE_LIMIT specified at all but\nstill do so at the bottom of ExecVacuum() with the rest of the vacuum\noption sanity checking, I will probably need to add a flag bit for\nVacuumParams->options.\n\nI was wondering why some \"sanity checking\" of vacuum options is done in\nExecVacuum() and some in vacuum() (it isn't just split by what is\napplicable to autovacuum and what isn't).\n\nI noticed that even in cases where we don't use the strategy object we\nstill made it, which I thought seemed like a bit of a waste and easy to\nfix. I've added a commit which does not make the BufferAccessStrategy\nobject when VACUUM FULL or VACUUM ONLY_DATABASE_STATS are specified. 
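The decision itself reduces to a small predicate; this is my own paraphrase of it, not the patch's code:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Sketch (invented names): a BAS_VACUUM strategy object is only worth
 * building when neither FULL nor ONLY_DATABASE_STATS is in effect,
 * since those code paths never allocate buffers through the vacuum
 * ring anyway.
 */
static bool
vacuum_needs_strategy(bool is_full, bool only_database_stats)
{
    return !is_full && !only_database_stats;
}
```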
I\nnoticed that we also don't use the strategy for VACUUM (PROCESS_MAIN\nfalse, PROCESS_TOAST false), but it didn't seem worth handling this very\nspecific case, so I didn't.\n\nv8 attached has the prohibitions specified above (including for\nvacuumdb, as relevant) as well as some cleanup, added test cases, and\nupdated documentation.\n\n0001 is essentially unmodified (i.e. I didn't do anything with the other\nglobal variable David mentioned).\n\nI still have a few open questions:\n- what the initial value of ring_size for autovacuum should be (see the\n one remaining TODO in the code)\n- should ANALYZE allow specifying BUFFER_USAGE_LIMIT since it uses the guc\n value when that is set?\n- should INDEX_CLEANUP off cause VACUUM to use shared buffers and\n disable use of a strategy (like failsafe vacuum)\n- should we add anything to VACUUM VERBOSE output about the number of\n reuses of strategy buffers?\n- Should we make BufferAccessStrategyData non-opaque so that we don't\n have to add a getter for nbuffers. 
I could have implemented this in\n another way, but I don't really see why BufferAccessStrategyData\n should be opaque\n\n- Melanie", "msg_date": "Sun, 2 Apr 2023 16:11:47 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, 1 Apr 2023 at 13:24, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> Your diff LGTM.\n>\n> Earlier upthread in [1], Bharath had mentioned in a review comment about\n> removing the global variables that he would have expected the analogous\n> global in analyze.c to also be removed (vac_strategy [and analyze.c also\n> has anl_context]).\n>\n> I looked into doing this, and this is what I found out (see full\n> rationale in [2]):\n>\n> > it is a bit harder to remove it from analyze because acquire_func\n> > doesn't take the buffer access strategy as a parameter and\n> > acquire_sample_rows uses the vac_context global variable to pass to\n> > table_scan_analyze_next_block().\n>\n> I don't know if this is worth mentioning in the commit removing the\n> other globals? Maybe it will just make it more confusing...\n\nI did look at that, but it seems a little tricky to make work unless\nthe AcquireSampleRowsFunc signature was changed. To me, it just does\nnot seem worth doing that to get rid of the two globals in analyze.c.\n\nI pushed the patch with just the vacuum.c changes.\n\nDavid\n\n\n", "msg_date": "Mon, 3 Apr 2023 17:09:37 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "I've now pushed up v8-0004. 
Can rebase the remaining 2 patches on top\nof master again and resend?\n\nOn Mon, 3 Apr 2023 at 08:11, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> I still have a few open questions:\n> - what the initial value of ring_size for autovacuum should be (see the\n> one remaining TODO in the code)\n\nI assume you're talking about the 256KB BAS_VACUUM one set in\nGetAccessStrategy()? I don't think this patch should be doing anything\nto change those defaults. Anything that does that should likely have\na new thread and come with analysis or reasoning about why the newly\nproposed defaults are better than the old ones.\n\n> - should ANALYZE allow specifying BUFFER_USAGE_LIMIT since it uses the guc\n> value when that is set?\n\nThat's a good question...\n\n> - should INDEX_CLEANUP off cause VACUUM to use shared buffers and\n> disable use of a strategy (like failsafe vacuum)\n\nI don't see why it should. It seems strange to have one option\nmagically make changes to some other option.\n\n> - should we add anything to VACUUM VERBOSE output about the number of\n> reuses of strategy buffers?\n\nSounds like this would require an extra array of counter variables in\nBufferAccessStrategyData? I think it might be a bit late to start\nexperimenting with this.\n\n> - Should we make BufferAccessStrategyData non-opaque so that we don't\n> have to add a getter for nbuffers. I could have implemented this in\n> another way, but I don't really see why BufferAccessStrategyData\n> should be opaque\n\nIf nothing outside of the .c file requires access then there's little\nneed to make the members known outside of the file. 
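For illustration, the generic shape of that pattern in C looks like this (names invented, not freelist.c's code — only the typedef and the functions would appear in a header, never the members):

```c
#include <assert.h>
#include <stdlib.h>

typedef struct RingState RingState;     /* opaque to callers */

struct RingState                        /* full definition stays in one .c file */
{
    int nbuffers;
};

/* Constructor keeps allocation details private to this file. */
static RingState *
ring_create(int nbuffers)
{
    RingState *r = malloc(sizeof(RingState));

    r->nbuffers = nbuffers;
    return r;
}

/* The accessor: the one sanctioned way to read the size from outside. */
static int
ring_get_buffer_count(const RingState *r)
{
    return r->nbuffers;
}
```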
Same as you'd want\nto make classes private rather than public when possible in OOP.\n\nIf you do come up with a reason to be able to determine the size of\nthe BufferAccessStrategy from outside freelist.c, I'd say an accessor\nmethod is the best way.\n\nDavid\n\nDavid\n\n\n", "msg_date": "Mon, 3 Apr 2023 23:56:57 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Mon, Apr 3, 2023 at 1:09 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 1 Apr 2023 at 13:24, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > Your diff LGTM.\n> >\n> > Earlier upthread in [1], Bharath had mentioned in a review comment about\n> > removing the global variables that he would have expected the analogous\n> > global in analyze.c to also be removed (vac_strategy [and analyze.c also\n> > has anl_context]).\n> >\n> > I looked into doing this, and this is what I found out (see full\n> > rationale in [2]):\n> >\n> > > it is a bit harder to remove it from analyze because acquire_func\n> > > doesn't take the buffer access strategy as a parameter and\n> > > acquire_sample_rows uses the vac_context global variable to pass to\n> > > table_scan_analyze_next_block().\n> >\n> > I don't know if this is worth mentioning in the commit removing the\n> > other globals? Maybe it will just make it more confusing...\n>\n> I did look at that, but it seems a little tricky to make work unless\n> the AcquireSampleRowsFunc signature was changed. To me, it just does\n> not seem worth doing that to get rid of the two globals in analyze.c.\n\nYes, I came to basically the same conclusion.\n\nOn Mon, Apr 3, 2023 at 7:57 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> I've now pushed up v8-0004. 
Can rebase the remaining 2 patches on top\n> of master again and resend?\n\nv9 attached.\n\n> On Mon, 3 Apr 2023 at 08:11, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > I still have a few open questions:\n> > - what the initial value of ring_size for autovacuum should be (see the\n> > one remaining TODO in the code)\n>\n> I assume you're talking about the 256KB BAS_VACUUM one set in\n> GetAccessStrategy()? I don't think this patch should be doing anything\n> to change those defaults. Anything that does that should likely have\n> a new thread and come with analysis or reasoning about why the newly\n> proposed defaults are better than the old ones.\n\nI actually was talking about something much more trivial but a little\nmore confusing.\n\nIn table_recheck_autovac(), I initialize the\nautovac_table->at_params.ring_size to the value of the\nvacuum_buffer_usage_limit guc. However, autovacuum makes its own\nBufferAccessStrategy object (instead of relying on vacuum() to do it)\nand passes that in to vacuum(). So, if we wanted autovacuum to disable\nuse of a strategy (and use as many shared buffers as it likes), it would\npass in NULL to vacuum(). If vauum_buffer_usage_limit is not 0, then we\nwould end up making and using a BufferAccessStrategy in vacuum().\n\nIf we instead initialized autovac_table->at_params.ring_size to 0, even\nif the passed in BufferAccessStrategy is NULL, we wouldn't make a ring\nfor autovacuum. Right now, we don't disable the strategy for autovacuum\nexcept in failsafe mode. 
And it is unclear when or why we would want to.\n\nI also thought it might be weird to have the value of the ring_size be\ninitialized to something other than the value of\nvacuum_buffer_usage_limit for autovacuum, since it is supposed to use\nthat guc value.\n\nIn fact, right now, we don't use the autovac_table->at_params.ring_size\nset in table_recheck_autovac() when making the ring in do_autovacuum()\nbut instead use the guc directly.\n\nI actually don't really like how vacuum() relies on the\nBufferAccessStrategy parameter being NULL for autovacuum and feel like\nthere is a more intuitive way to handle all this. But, I didn't want to\nmake major changes at this point.\n\nAnyway, the above is quite a bit more analysis than the issue is really\nworth. We should pick something and then document it in a comment.\n\n> > - should ANALYZE allow specifying BUFFER_USAGE_LIMIT since it uses the guc\n> > value when that is set?\n>\n> That's a good question...\n\nI kinda think we should just skip it. It adds to the surface area of the\nfeature.\n\n> > - should INDEX_CLEANUP off cause VACUUM to use shared buffers and\n> > disable use of a strategy (like failsafe vacuum)\n>\n> I don't see why it should. It seems strange to have one option\n> magically make changes to some other option.\n\nSure, sounds good.\n\n> > - should we add anything to VACUUM VERBOSE output about the number of\n> > reuses of strategy buffers?\n>\n> Sounds like this would require an extra array of counter variables in\n> BufferAccessStrategyData? I think it might be a bit late to start\n> experimenting with this.\n\nMakes sense. I hadn't thought through the implementation. We count reuses in\npg_stat_io data structures but that is global and not per\nBufferAccessStrategyData instance, so I agree to scrapping this idea.\n\n> > - Should we make BufferAccessStrategyData non-opaque so that we don't\n> > have to add a getter for nbuffers. 
I could have implemented this in
> > another way, but I don't really see why BufferAccessStrategyData
> > should be opaque
>
> If nothing outside of the .c file requires access then there's little
> need to make the members known outside of the file. Same as you'd want
> to make classes private rather than public when possible in OOP.
>
> If you do come up with a reason to be able to determine the size of
> the BufferAccessStrategy from outside freelist.c, I'd say an accessor
> method is the best way.

In the main patch, I wanted access to the number of buffers so that
parallel vacuum workers could make their own rings the same size. I
added an accessor, but it looked a bit silly so I thought I would ask if
we needed to keep the data structure opaque. It isn't called frequently
enough to worry about the function call overhead. Though the accessor
could use a better name than the one I chose.

- Melanie", "msg_date": "Mon, 3 Apr 2023 10:49:43 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Tue, 4 Apr 2023 at 02:49, Melanie Plageman <melanieplageman@gmail.com> wrote:
> v9 attached.

I've made a pass on the v9-0001 patch only. Here's what I noted down:

v9-0001:

1. In the documentation and comments, generally we always double-space
after a period. I see quite often you're not following this.

2. Doc: We don't generally break tags within paragraphs into
multiple lines. You're doing that quite a bit, e.g:

 linkend="glossary-buffer-access-strategy">Buffer Access
 Strategy</glossterm>. <literal>0</literal> will disable use of a

2. This is not a command

 <command>BUFFER_USAGE_LIMIT</command> parameter.

<option> is probably what you want.

3. I'm not sure I agree that it's a good idea to refer to the strategy
with multiple different names. 
Here you've called it a \"ring buffer\",\nbut in the next sentence, you're calling it a Buffer Access Strategy.\n\n Specifies the ring buffer size for <command>VACUUM</command>. This size\n is used to calculate the number of shared buffers which will be reused as\n part of a <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer\n Access Strategy</glossterm>. <literal>0</literal> disables use of a\n\n4. Can you explain your choice in not just making < 128 a hard error\nrather than clamping?\n\nI guess it means checks like this are made more simple, but that does\nnot seem like a good enough reason:\n\n/* check for out-of-bounds */\nif (result < -1 || result > MAX_BAS_RING_SIZE_KB)\n\n\npostgres=# vacuum (parallel -1) pg_class;\nERROR: parallel workers for vacuum must be between 0 and 1024\n\nMaybe the above is a good guide to follow.\n\nTo allow you to get rid of the clamping code, you'd likely need an\nassign hook function for vacuum_buffer_usage_limit.\n\n5. I see vacuum.sgml is full of inconsistencies around the use of\n<literal> vs <option>. I was going to complain about your:\n\n <literal>ONLY_DATABASE_STATS</literal> option. If\n <literal>ANALYZE</literal> is also specified, the\n <literal>BUFFER_USAGE_LIMIT</literal> value is used for both the vacuum\n\nbut I see you've likely just copied what's nearby.\n\nThere are also plenty of usages of <option> in that file. I'd rather\nsee you use <option>. Maybe there can be some other patch that sweeps\nthe entire docs to look for <literal>OPTION_NAME</literal> and\nreplaces them to use <option>.\n\n6. I was surprised to see you've added both\nGetAccessStrategyWithSize() and GetAccessStrategyWithNBuffers(). I\nthink the former is suitable for both. GetAccessStrategyWithNBuffers()\nseems to be just used once outside of freelist.c\n\n7. I don't think bas_nbuffers() is a good name for an external\nfunction. StrategyGetBufferCount() seems better.\n\n8. 
I don't quite follow this comment:\n\n/*\n* TODO: should this be 0 so that we are sure that vacuum() never\n* allocates a new bstrategy for us, even if we pass in NULL for that\n* parameter? maybe could change how failsafe NULLs out bstrategy if\n* so?\n*/\n\nCan you explain under what circumstances would vacuum() allocate a\nbstrategy when do_autovacuum() would not? Is this a case of a config\nreload where someone changes vacuum_buffer_usage_limit from 0 to\nsomething non-zero? If so, perhaps do_autovacuum() needs to detect\nthis and allocate a strategy rather than having vacuum() do it once\nper table (wastefully).\n\n9. buffer/README. I think it might be overkill to document details\nabout how the new vacuum option works in a section talking about\nBuffer Ring Replacement Strategy. Perhaps it just worth something\nlike:\n\n\"In v16, the 256KB ring was made configurable by way of the\nvacuum_buffer_usage_limit GUC and the BUFFER_USAGE_LIMIT VACUUM\noption.\"\n\n10. I think if you do #4 then you can get rid of all the range checks\nand DEBUG1 elogs in GetAccessStrategyWithSize().\n\n11. This seems a bit badly done:\n\nint vacuum_buffer_usage_limit = -1;\n\nint VacuumCostPageHit = 1; /* GUC parameters for vacuum */\nint VacuumCostPageMiss = 2;\nint VacuumCostPageDirty = 20;\n\nI'd class vacuum_buffer_usage_limit as a \"GUC parameters for vacuum\"\ntoo. Probably the CamelCase naming should be followed too.\n\n\n12. ANALYZE too?\n\n{\"vacuum_buffer_usage_limit\", PGC_USERSET, RESOURCES_MEM,\ngettext_noop(\"Sets the buffer pool size for VACUUM and autovacuum.\"),\n\n13. VacuumParams.ring_size has no comments explaining what it is.\n\n14. vacuum_buffer_usage_limit seems to be lumped in with unrelated GUCs\n\nextern PGDLLIMPORT int maintenance_work_mem;\nextern PGDLLIMPORT int max_parallel_maintenance_workers;\n+extern PGDLLIMPORT int vacuum_buffer_usage_limit;\n\nextern PGDLLIMPORT int VacuumCostPageHit;\nextern PGDLLIMPORT int VacuumCostPageMiss;\n\n\n15. 
No comment explaining what these are:

#define MAX_BAS_RING_SIZE_KB (16 * 1024 * 1024)
#define MIN_BAS_RING_SIZE_KB 128

16. Parameter names in function declaration and definition don't match in:

extern BufferAccessStrategy
GetAccessStrategyWithNBuffers(BufferAccessStrategyType btype, int
nbuffers);
extern BufferAccessStrategy
GetAccessStrategyWithSize(BufferAccessStrategyType btype, int
nbuffers);

Also, line wraps at 79 chars. (80 including line feed)

17. If you want to test the 16GB upper limit, maybe going 1KB (or
8KB?) rather than 1GB over 16GB is better? 16777217kB, I think.

VACUUM (BUFFER_USAGE_LIMIT '17 GB') vac_option_tab;

David


", "msg_date": "Tue, 4 Apr 2023 12:37:10 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Mon, Apr 3, 2023 at 8:37 PM David Rowley <dgrowleyml@gmail.com> wrote:
>
> On Tue, 4 Apr 2023 at 02:49, Melanie Plageman <melanieplageman@gmail.com> wrote:
> > v9 attached.
>
> I've made a pass on the v9-0001 patch only. Here's what I noted down:

Thanks for the review!

Attached v10 addresses the review feedback below.

Remaining TODOs:
- tests
- do something about config reload changing GUC

> v9-0001:
>
> 1. In the documentation and comments, generally we always double-space
> after a period. I see quite often you're not following this.

I've gone through and done this. I noticed after building the docs that
it doesn't seem to affect how many spaces are after a period in the
rendered docs, but I suppose it affects readability when editing the
sgml files.

> 2. Doc: We don't generally break tags within paragraphs into
> multiple lines. You're doing that quite a bit, e.g:
>
> linkend="glossary-buffer-access-strategy">Buffer Access
> Strategy</glossterm>. 
<literal>0</literal> will disable use of a\n\nI've updated all of the ones I could find that I did this with.\n\n> 2. This is not a command\n>\n> <command>BUFFER_USAGE_LIMIT</command> parameter.\n>\n> <option> is probably what you want.\n\nI have gone through and attempted to correct all\noption/command/application tag usages.\n\n> 3. I'm not sure I agree that it's a good idea to refer to the strategy\n> with multiple different names. Here you've called it a \"ring buffer\",\n> but in the next sentence, you're calling it a Buffer Access Strategy.\n>\n> Specifies the ring buffer size for <command>VACUUM</command>. This size\n> is used to calculate the number of shared buffers which will be reused as\n> part of a <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer\n> Access Strategy</glossterm>. <literal>0</literal> disables use of a\n\nI've updated this to always prefix any use of ring with \"Buffer Access\nStrategy\". I don't know how you'll feel about it. It felt awkward in\nsome places to use Buffer Access Strategy as a complete stand-in for\nring buffer.\n\n> 4. Can you explain your choice in not just making < 128 a hard error\n> rather than clamping?\n>\n> I guess it means checks like this are made more simple, but that does\n> not seem like a good enough reason:\n>\n> /* check for out-of-bounds */\n> if (result < -1 || result > MAX_BAS_RING_SIZE_KB)\n>\n> postgres=# vacuum (parallel -1) pg_class;\n> ERROR: parallel workers for vacuum must be between 0 and 1024\n>\n> Maybe the above is a good guide to follow.\n>\n> To allow you to get rid of the clamping code, you'd likely need an\n> assign hook function for vacuum_buffer_usage_limit.\n\nI've added a check hook and replicated the same restrictions in\nExecVacuum() where it parses the limit. I have included enforcement of\nthe conditional limit that the ring cannot occupy more than 1/8 of\nshared buffers. 
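To make that 1/8 rule concrete, it amounts to something like this self-contained sketch (mirroring the StrategyGetClampedBufsize() logic quoted elsewhere in this thread; BLCKSZ is fixed at 8192 purely for illustration, and the names are otherwise hypothetical):

```c
#include <assert.h>

#define BLCKSZ 8192				/* assumed block size for this sketch */

/*
 * Clamp a requested ring size (in kB) so that the ring never occupies
 * more than 1/8 of shared buffers. nbuffers stands in for NBuffers,
 * the number of shared buffers.
 */
static int
clamp_ring_size_kb(int requested_kb, int nbuffers)
{
	int		blcksz_kb = BLCKSZ / 1024;
	int		sb_limit_kb = nbuffers / 8 * blcksz_kb;

	return (requested_kb < sb_limit_kb) ? requested_kb : sb_limit_kb;
}
```

So with 128 MB of shared buffers (16384 buffers of 8 kB), anything above 16 MB would be silently clamped to 16 MB.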
The immediate consequence of this was that my tests were\nno longer stable (except for the integer overflow one).\nI have removed them for now until I can come up with a better testing\nstrategy.\n\nOn the topic of testing, I also thought we should remove the\nVACUUM(BUFFER_USAGE_LIMIT X, PARALLEL X) test. Though the parallel\nworkers do make their own strategy rings and such a test would be\ncovering some code, I am hesitant to write a test that would never\nreally fail. The observable behavior of not using a strategy will be\neither 1) basically nothing or 2) the same for parallel and\nnon-parallel. What do you think?\n\n> 5. I see vacuum.sgml is full of inconsistencies around the use of\n> <literal> vs <option>. I was going to complain about your:\n>\n> <literal>ONLY_DATABASE_STATS</literal> option. If\n> <literal>ANALYZE</literal> is also specified, the\n> <literal>BUFFER_USAGE_LIMIT</literal> value is used for both the vacuum\n>\n> but I see you've likely just copied what's nearby.\n>\n> There are also plenty of usages of <option> in that file. I'd rather\n> see you use <option>. Maybe there can be some other patch that sweeps\n> the entire docs to look for <literal>OPTION_NAME</literal> and\n> replaces them to use <option>.\n\nI haven't done the separate sweep patch, but I have updated my own\nusages in this set.\n\n> 6. I was surprised to see you've added both\n> GetAccessStrategyWithSize() and GetAccessStrategyWithNBuffers(). I\n> think the former is suitable for both. GetAccessStrategyWithNBuffers()\n> seems to be just used once outside of freelist.c\n\nThis has been updated and reorganized.\n\n> 7. I don't think bas_nbuffers() is a good name for an external\n> function. StrategyGetBufferCount() seems better.\n\nI've used this name.\n\n> 8. I don't quite follow this comment:\n>\n> /*\n> * TODO: should this be 0 so that we are sure that vacuum() never\n> * allocates a new bstrategy for us, even if we pass in NULL for that\n> * parameter? 
maybe could change how failsafe NULLs out bstrategy if\n> * so?\n> */\n>\n> Can you explain under what circumstances would vacuum() allocate a\n> bstrategy when do_autovacuum() would not? Is this a case of a config\n> reload where someone changes vacuum_buffer_usage_limit from 0 to\n> something non-zero? If so, perhaps do_autovacuum() needs to detect\n> this and allocate a strategy rather than having vacuum() do it once\n> per table (wastefully).\n\nHmm. Yes, I started hacking on this, but I think it might be a bit\ntricky to get right. I think it would make sense to check if\nvacuum_buffer_usage_limit goes from 0 to not 0 or from not 0 to 0 and\nallow disabling and enabling the buffer access strategy, however, I'm\nnot sure we want to allow changing the size during an autovacuum\nworker's run. I started writing code to just allow enabling and\ndisabling, but I'm a little concerned that the distinction will be\ndifficult to understand for the user with no obvious indication of what\nis happening. That is, you change the size and it silently does nothing,\nbut you set it to/from 0 and it silently does something?\n\nOne alternative for now is to save the ring size before looping through\nthe relations in do_autovacuum() and always restore that value in\ntab->at_params.ring_size in table_recheck_autovac().\n\nI'm not sure what to do.\n\n> 9. buffer/README. I think it might be overkill to document details\n> about how the new vacuum option works in a section talking about\n> Buffer Ring Replacement Strategy. Perhaps it just worth something\n> like:\n>\n> \"In v16, the 256KB ring was made configurable by way of the\n> vacuum_buffer_usage_limit GUC and the BUFFER_USAGE_LIMIT VACUUM\n> option.\"\n\nI've made the change you suggested.\n\n> 10. I think if you do #4 then you can get rid of all the range checks\n> and DEBUG1 elogs in GetAccessStrategyWithSize().\n\nDone.\n\n> 11. 
This seems a bit badly done:\n>\n> int vacuum_buffer_usage_limit = -1;\n>\n> int VacuumCostPageHit = 1; /* GUC parameters for vacuum */\n> int VacuumCostPageMiss = 2;\n> int VacuumCostPageDirty = 20;\n>\n> I'd class vacuum_buffer_usage_limit as a \"GUC parameters for vacuum\"\n> too. Probably the CamelCase naming should be followed too.\n\nI've made this change.\n\n> 12. ANALYZE too?\n>\n> {\"vacuum_buffer_usage_limit\", PGC_USERSET, RESOURCES_MEM,\n> gettext_noop(\"Sets the buffer pool size for VACUUM and autovacuum.\"),\n\nI've mentioned this here and also added the option for ANALYZE.\n\n> 13. VacuumParams.ring_size has no comments explaining what it is.\n\nI've added one.\n\n> 14. vacuum_buffer_usage_limit seems to be lumped in with unrelated GUCs\n>\n> extern PGDLLIMPORT int maintenance_work_mem;\n> extern PGDLLIMPORT int max_parallel_maintenance_workers;\n> +extern PGDLLIMPORT int vacuum_buffer_usage_limit;\n>\n> extern PGDLLIMPORT int VacuumCostPageHit;\n> extern PGDLLIMPORT int VacuumCostPageMiss;\n\nI've moved it down a line.\n\n> 15. No comment explaining what these are:\n>\n> #define MAX_BAS_RING_SIZE_KB (16 * 1024 * 1024)\n> #define MIN_BAS_RING_SIZE_KB 128\n\nI've added one.\n\n> 16. Parameter names in function declaration and definition don't match in:\n>\n> extern BufferAccessStrategy\n> GetAccessStrategyWithNBuffers(BufferAccessStrategyType btype, int\n> nbuffers);\n> extern BufferAccessStrategy\n> GetAccessStrategyWithSize(BufferAccessStrategyType btype, int\n> nbuffers);\n\nI've fixed this.\n\n> Also, line wraps at 79 chars. (80 including line feed)\n\nI've fixed that function prototype instance of it.\n\nIn general line wrap limit + pgindent can be quite challenging. I often\nbreak something onto multiple lines to appease the line limit and then\npgindent will add an absurd number of tabs to align the second line in a\nway that looks truly awful. I try to make local variables when this is a\nproblem, but it is often quite annoying to do that. 
I wish there was
some way to make pgindent do something different in these cases.

> 17. If you want to test the 16GB upper limit, maybe going 1KB (or
> 8KB?) rather than 1GB over 16GB is better? 16777217kB, I think.
>
> VACUUM (BUFFER_USAGE_LIMIT '17 GB') vac_option_tab;

I've removed this test for now until I figure out a way to actually hit
this reliably with different-sized shared buffers.

- Melanie", "msg_date": "Tue, 4 Apr 2023 13:53:15 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, 5 Apr 2023 at 05:53, Melanie Plageman <melanieplageman@gmail.com> wrote:
> Attached v10 addresses the review feedback below.

Thanks. Here's another round on v10-0001:

v10-0001:

1. The following documentation fragment does not seem to be aligned
with the code:

 <literal>16 GB</literal>. The minimum size is the lesser
 of 1/8 the size of shared buffers and <literal>128 KB</literal>. The
 default value is <literal>-1</literal>. If this value is specified

The relevant code is:

static inline int
StrategyGetClampedBufsize(int bufsize_kb)
{
 int sb_limit_kb;
 int blcksz_kb = BLCKSZ / 1024;

 Assert(blcksz_kb > 0);

 sb_limit_kb = NBuffers / 8 * blcksz_kb;

 return Min(sb_limit_kb, bufsize_kb);
}

The code seems to mean that the *maximum* is the lesser of 16GB and
shared_buffers / 8. You're saying it's the minimum.


2. I think you could get rid of the double "Buffer Access Strategy" in:

 <glossterm linkend="glossary-buffer-access-strategy">Buffer
Access Strategy</glossterm>.
 <literal>0</literal> will disable use of a <literal>Buffer
Access Strategy</literal>.
 <literal>-1</literal> will set the size to a default of
 <literal>256 KB</literal>. The maximum size is

The maximum size is\n\nhow about:\n\n <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer\nAccess Strategy</glossterm>.\n A setting of <literal>0</literal> will allow the operation to use any\n number of <varname>shared_buffers</varname>, whereas\n <literal>-1</literal> will set the size to a default of\n <literal>256 KB</literal>. The maximum size is\n\n\n3. In the following snippet you can use <xref linkend=\"sql-vacuum\"/>\nor just <command>VACUUM</command>. There are examples of both in that\nfile. I don't have a preference as it which, but I think what you've\ngot isn't great.\n\n <link linkend=\"sql-vacuum\"><command>VACUUM</command></link> and\n <link linkend=\"sql-analyze\"><command>ANALYZE</command></link>\n\n4. I wonder if there's a reason this needs to be written in the\noverview of ANALYZE.\n\n <command>ANALYZE</command> uses a\n <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer Access\nStrategy</glossterm>\n when reading in the sample data. The number of buffers consumed for this can\n be controlled by <xref linkend=\"guc-vacuum-buffer-usage-limit\"/> or by using\n the <option>BUFFER_USAGE_LIMIT</option> option.\n\nI think it's fine just to mention it under BUFFER_USAGE_LIMIT. It just\ndoes not seem fundamental enough to be worth being upfront about it.\nThe other things mentioned in that section don't seem related to\nparameters, so there might be no better place for those to go. That's\nnot the case for what you're adding here.\n\n5. I think I'd rather see the details spelt out here rather than\ntelling the readers to look at what VACUUM does:\n\n Specifies the\n <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer\nAccess Strategy</glossterm>\n ring buffer size for <command>ANALYZE</command>. See the\n <link linkend=\"sql-vacuum\"><command>VACUUM</command></link> option with\n the same name.\n\n\n6. 
When I asked about restricting the valid values of\nvacuum_buffer_usage_limit to -1 / 0 or 128 KB to 16GB, I didn't expect\nyou to code in the NBuffers / 8 check. We shouldn't chain\ndependencies between GUCs like that. Imagine someone editing their\npostgresql.conf after realising shared_buffers is too large for their\nRAM, they reduce it and restart. The database fails to start because\nvacuum_buffer_usage_limit is too large! Angry DBA?\n\nTake what's already written about vacuum_failsafe_age as your guidance on this:\n\n\"The default is 1.6 billion transactions. Although users can set this\nvalue anywhere from zero to 2.1 billion, VACUUM will silently adjust\nthe effective value to no less than 105% of\nautovacuum_freeze_max_age.\"\n\nHere we just document the silent capping. You can still claim the\nvalid range is 128KB to 16GB in the docs. You can mention the 1/8th of\nshared buffers cap same as what's mentioned about \"105%\" above.\n\nWhen I mentioned #4 and #10 in my review of the v9-0001 patch, I just\nwanted to not surprise users who do vacuum_buffer_usage_limit = 64 and\nmagically get 128.\n\n7. In ExecVacuum(), similar to #6 from above, it's also not great that\nyou're raising an ERROR based on if StrategyGetClampedBufsize() clamps\nor not. If someone has a script that does:\n\nVACUUM (BUFFER_USAGE_LIMIT '1 GB'); it might be annoying that it stops\nworking when someone adjusts shared buffers from 10GB to 6GB.\n\nI really think the NBuffers / 8 clamping just should be done inside\nGetAccessStrategyWithSize().\n\n8. I think this ERROR in vacuum.c should mention that 0 is a valid value.\n\nereport(ERROR,\n(errcode(ERRCODE_INVALID_PARAMETER_VALUE),\nerrmsg(\"buffer_usage_limit for a vacuum must be between %d KB and %d KB\",\nMIN_BAS_RING_SIZE_KB, MAX_BAS_RING_SIZE_KB)));\n\nI doubt there's a need to mention -1 as that's the same as not\nspecifying BUFFER_USAGE_LIMIT.\n\n9. 
The following might be worthy of a comment explaining the order of\nprecedence of how we choose the size:\n\nif (params->ring_size == -1)\n{\nif (VacuumBufferUsageLimit == -1)\nbstrategy = GetAccessStrategy(BAS_VACUUM);\nelse\nbstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n}\nelse\nbstrategy = GetAccessStrategyWithSize(BAS_VACUUM, params->ring_size);\n\n10. I wonder if you need to keep bufsize_limit_to_nbuffers(). It's\njust used once and seems trivial enough just to write that code inside\nGetAccessStrategyWithSize().\n\n11. It's probably worth putting the valid range in the sample config:\n\n#vacuum_buffer_usage_limit = -1 # size of vacuum and analyze buffer\naccess strategy ring.\n# -1 to use default,\n# 0 to disable vacuum buffer access strategy\n# > 0 to specify size <-- here\n\n12. Is bufmgr.h the right place for these?\n\n/*\n * Upper and lower hard limits for the Buffer Access Strategy ring size\n * specified by the vacuum_buffer_usage_limit GUC and BUFFER_USAGE_LIMIT option\n * to VACUUM and ANALYZE.\n */\n#define MAX_BAS_RING_SIZE_KB (16 * 1024 * 1024)\n#define MIN_BAS_RING_SIZE_KB 128\n\nYour comment implies they're VACUUM / ANALYZE limits. If we want to\nimpose these limits to all access strategies then these seem like good\nnames and location, otherwise, I imagine miscadmin.h is the correct\nplace. If so, they'll likely want to be renamed to something more\nVACUUM specific. I don't particularly have a preference. 128 -\n1677216 seem like reasonable limits for any buffer access strategy.\n\n13. I think check_vacuum_buffer_usage_limit() does not belong in\nfreelist.c. Maybe vacuum.c?\n\n14. Not related to this patch, but why do we have half the vacuum\nrelated GUCs in vacuum.c and the other half in globals.c? I see\nvacuum_freeze_table_age is defined in vacuum.c but is also needed in\nautovacuum.c, so that rules out the globals.c ones being for vacuum.c\nand autovacuum.c. It seems a bit messy. 
I'm not really sure where\nVacuumBufferUsageLimit should go now.\n\n> Remaining TODOs:\n> - tests\n> - do something about config reload changing GUC\n\nShouldn't table_recheck_autovac() pfree/palloc a new strategy if the\nsize changes?\n\nI'm not sure what the implications are with that and the other patch\nyou're working on to allow vacuum config changes mid-vacuum. We'll\nneed to be careful and not immediately break that if that gets\ncommitted then this does or vice-versa.\n\nDavid\n\n\n", "msg_date": "Wed, 5 Apr 2023 15:14:17 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Tue, Apr 4, 2023 at 8:14 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> 14. Not related to this patch, but why do we have half the vacuum\n> related GUCs in vacuum.c and the other half in globals.c? I see\n> vacuum_freeze_table_age is defined in vacuum.c but is also needed in\n> autovacuum.c, so that rules out the globals.c ones being for vacuum.c\n> and autovacuum.c. It seems a bit messy. I'm not really sure where\n> VacuumBufferUsageLimit should go now.\n\nvacuum_freeze_table_age is an abomination, even compared to the rest\nof these GUCs. It was added around the time the visibility map first\nwent in, and so is quite a bit more recent than\nautovacuum_freeze_max_age.\n\nBefore the introduction of the visibility map, we only had\nautovacuum_freeze_max_age, and it was used to schedule antiwraparound\nautovacuums -- there was no such thing as aggressive VACUUMs (just\nantiwraparound autovacuums), and no need for autovacuum_freeze_max_age\nat all. 
So autovacuum_freeze_max_age was just for autovacuum.c code.\nThere was only one type of VACUUM, and they always advanced\nrelfrozenxid to the same degree.\n\nWith the introduction of the visibility map, we needed to have\nautovacuum_freeze_max_age in vacuum.c for the first time, to deal with\ninterpreting the then-new vacuum_freeze_table_age GUC correctly. We\nsilently truncate the vacuum_freeze_table_age setting so that it never\nexceeds 95% of autovacuum_freeze_max_age (the\n105%-of-autovacuum_freeze_max_age vacuum_failsafe_age thing that\nyou're discussing is symmetric). So after 2009\nautovacuum_freeze_max_age actually plays an important role in VACUUM,\nthe command, and not just autovacuum.\n\nThis is related to the problem of the autovacuum_freeze_max_age\nreloption being completely broken [1]. If autovacuum_freeze_max_age\nwas still purely just an autovacuum.c scheduling thing, then there\nwould be no problem with having a separate reloption of the same name.\nThere are big problems precisely because vacuum.c doesn't do anything\nwith the autovacuum_freeze_max_age reloption. Though it does okay with\nthe autovacuum_freeze_table_age reloption, which gets passed in. (Yes,\nit's called autovacuum_freeze_table_age in reloption form -- not\nvacuum_freeze_table_age, like the GUC).\n\nNote that the decision to ignore the reloption version of\nautovacuum_freeze_max_age in the failsafe's\n105%-of-autovacuum_freeze_max_age thing was a deliberate one. The\nautovacuum_freeze_max_age GUC is authoritative in the sense that it\ncannot be overridden locally, except in the direction of making\naggressive VACUUMs happen more frequently for a given table (so they\ncan't be less frequent via reloption configuration). 
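For reference, the silent truncation described above works out to roughly this standalone sketch (illustrative names only -- the real check lives in vacuum.c; the 95% is computed with integer arithmetic here so the example stays exact):

```c
#include <assert.h>

/*
 * Sketch of the silent cap: the effective vacuum_freeze_table_age
 * never exceeds 95% of autovacuum_freeze_max_age.
 */
static int
effective_freeze_table_age(int freeze_table_age, int freeze_max_age)
{
	int		cap = freeze_max_age / 20 * 19; /* 95%, integer math */

	return (freeze_table_age > cap) ? cap : freeze_table_age;
}
```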
I suppose you'd
have to untangle that mess if you wanted to fix the
autovacuum_freeze_max_age reloption issue I go into in [1].

[1] https://postgr.es/m/CAH2-Wz=DJAokY_GhKJchgpa8k9t_H_OVOvfPEn97jGNr9W=deg@mail.gmail.com
-- 
Peter Geoghegan


", "msg_date": "Tue, 4 Apr 2023 21:36:55 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,

On 2023-04-04 13:53:15 -0400, Melanie Plageman wrote:
> > 8. I don't quite follow this comment:
> >
> > /*
> > * TODO: should this be 0 so that we are sure that vacuum() never
> > * allocates a new bstrategy for us, even if we pass in NULL for that
> > * parameter? maybe could change how failsafe NULLs out bstrategy if
> > * so?
> > */
> >
> > Can you explain under what circumstances would vacuum() allocate a
> > bstrategy when do_autovacuum() would not? Is this a case of a config
> > reload where someone changes vacuum_buffer_usage_limit from 0 to
> > something non-zero? If so, perhaps do_autovacuum() needs to detect
> > this and allocate a strategy rather than having vacuum() do it once
> > per table (wastefully).

Hm. I don't much like that we use a single strategy for multiple tables
today. That way even tiny tables never end up in shared_buffers. But that's
really a discussion for a different thread. However, if we were to use a
per-table bstrategy, it'd be a lot easier to react to changes of the config.


I doubt it's worth adding complications to the code for changing the size of
the ringbuffer during an ongoing vacuum scan, at least for 16. 
Reacting to
enabling/disabling the ringbuffer altogether seems a bit more important, but
still not crucial compared to making it configurable at all.

I think it'd be OK to add a comment saying something like "XXX: In the future
we might want to react to configuration changes of the ring buffer size during
a vacuum" or such.

WRT the TODO specifically: Yes, passing in 0 seems to make sense. I don't
see a reason not to do so? But perhaps there's a better solution:

Perhaps the best solution for the autovac vs interactive vacuum issue would
be to move the allocation of the bstrategy to ExecVacuum()?


Random note while looking at the code:
ISTM that adding handling of -1 in GetAccessStrategyWithSize() would make the
code more readable. Instead of

		if (params->ring_size == -1)
		{
			if (VacuumBufferUsageLimit == -1)
				bstrategy = GetAccessStrategy(BAS_VACUUM);
			else
				bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);
		}
		else
			bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, params->ring_size);

you could just have something like:
 bstrategy = GetAccessStrategyWithSize(BAS_VACUUM,
 params->ring_size != -1 ? params->ring_size : VacuumBufferUsageLimit);

by falling back to the default values from GetAccessStrategy().

Or even more "extremely", you could entirely remove references to
VacuumBufferUsageLimit and handle that in freelist.c

Greetings,

Andres Freund


", "msg_date": "Wed, 5 Apr 2023 10:05:53 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "v11 attached with updates detailed below.

On Tue, Apr 4, 2023 at 11:14 PM David Rowley <dgrowleyml@gmail.com> wrote:
>
> On Wed, 5 Apr 2023 at 05:53, Melanie Plageman <melanieplageman@gmail.com> wrote:
> > Attached v10 addresses the review feedback below.
>
> Thanks. 
Here's another round on v10-0001:
>
> 1. The following documentation fragment does not seem to be aligned
> with the code:
>
> <literal>16 GB</literal>. The minimum size is the lesser
> of 1/8 the size of shared buffers and <literal>128 KB</literal>. The
> default value is <literal>-1</literal>. If this value is specified
>
> The relevant code is:
>
> static inline int
> StrategyGetClampedBufsize(int bufsize_kb)
> {
> int sb_limit_kb;
> int blcksz_kb = BLCKSZ / 1024;
>
> Assert(blcksz_kb > 0);
>
> sb_limit_kb = NBuffers / 8 * blcksz_kb;
>
> return Min(sb_limit_kb, bufsize_kb);
> }
>
> The code seems to mean that the *maximum* is the lesser of 16GB and
> shared_buffers / 8. You're saying it's the minimum.

Good catch. Fixed.

> 2. I think you could get rid of the double "Buffer Access Strategy" in:
>
> <glossterm linkend="glossary-buffer-access-strategy">Buffer
> Access Strategy</glossterm>.
> <literal>0</literal> will disable use of a <literal>Buffer
> Access Strategy</literal>.
> <literal>-1</literal> will set the size to a default of
> <literal>256 KB</literal>. The maximum size is
>
> how about:
>
> <glossterm linkend="glossary-buffer-access-strategy">Buffer
> Access Strategy</glossterm>.
> A setting of <literal>0</literal> will allow the operation to use any
> number of <varname>shared_buffers</varname>, whereas
> <literal>-1</literal> will set the size to a default of
> <literal>256 KB</literal>. The maximum size is

I've made these changes.

> 3. In the following snippet you can use <xref linkend="sql-vacuum"/>
> or just <command>VACUUM</command>. There are examples of both in that
> file. I don't have a preference as to which, but I think what you've
> got isn't great.
>
> <link linkend="sql-vacuum"><command>VACUUM</command></link> and
> <link linkend="sql-analyze"><command>ANALYZE</command></link>

I've updated it to use the link. 
I thought it would be nice to have the\nlink in case the reader wants to look at the BUFFER_USAGE_LIMIT option\ndocs there.\n\n> 4. I wonder if there's a reason this needs to be written in the\n> overview of ANALYZE.\n>\n> <command>ANALYZE</command> uses a\n> <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer Access\n> Strategy</glossterm>\n> when reading in the sample data. The number of buffers consumed for this can\n> be controlled by <xref linkend=\"guc-vacuum-buffer-usage-limit\"/> or by using\n> the <option>BUFFER_USAGE_LIMIT</option> option.\n>\n> I think it's fine just to mention it under BUFFER_USAGE_LIMIT. It just\n> does not seem fundamental enough to be worth being upfront about it.\n> The other things mentioned in that section don't seem related to\n> parameters, so there might be no better place for those to go. That's\n> not the case for what you're adding here.\n\nI updated this.\n\n> 5. I think I'd rather see the details spelt out here rather than\n> telling the readers to look at what VACUUM does:\n>\n> Specifies the\n> <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer\n> Access Strategy</glossterm>\n> ring buffer size for <command>ANALYZE</command>. See the\n> <link linkend=\"sql-vacuum\"><command>VACUUM</command></link> option with\n> the same name.\n\nI've updated it to contain the same text, as relevant, as the VACUUM\noption contains. Note that both rely on the vacuum_buffer_usage_limit\nGUC documentation for a description of upper and lower bounds.\n\n> 6. When I asked about restricting the valid values of\n> vacuum_buffer_usage_limit to -1 / 0 or 128 KB to 16GB, I didn't expect\n> you to code in the NBuffers / 8 check. We shouldn't chain\n> dependencies between GUCs like that. Imagine someone editing their\n> postgresql.conf after realising shared_buffers is too large for their\n> RAM, they reduce it and restart. The database fails to start because\n> vacuum_buffer_usage_limit is too large! 
Angry DBA?\n>\n> Take what's already written about vacuum_failsafe_age as your guidance on this:\n>\n> \"The default is 1.6 billion transactions. Although users can set this\n> value anywhere from zero to 2.1 billion, VACUUM will silently adjust\n> the effective value to no less than 105% of\n> autovacuum_freeze_max_age.\"\n>\n> Here we just document the silent capping. You can still claim the\n> valid range is 128KB to 16GB in the docs. You can mention the 1/8th of\n> shared buffers cap same as what's mentioned about \"105%\" above.\n>\n> When I mentioned #4 and #10 in my review of the v9-0001 patch, I just\n> wanted to not surprise users who do vacuum_buffer_usage_limit = 64 and\n> magically get 128.\n\n\n> 7. In ExecVacuum(), similar to #6 from above, it's also not great that\n> you're raising an ERROR based on if StrategyGetClampedBufsize() clamps\n> or not. If someone has a script that does:\n>\n> VACUUM (BUFFER_USAGE_LIMIT '1 GB'); it might be annoying that it stops\n> working when someone adjusts shared buffers from 10GB to 6GB.\n>\n> I really think the NBuffers / 8 clamping just should be done inside\n> GetAccessStrategyWithSize().\n\nGot it. I've done what you suggested.\nI had some logic issues as well that I fixed and reordered the code.\n\n> 8. I think this ERROR in vacuum.c should mention that 0 is a valid value.\n>\n> ereport(ERROR,\n> (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> errmsg(\"buffer_usage_limit for a vacuum must be between %d KB and %d KB\",\n> MIN_BAS_RING_SIZE_KB, MAX_BAS_RING_SIZE_KB)));\n>\n> I doubt there's a need to mention -1 as that's the same as not\n> specifying BUFFER_USAGE_LIMIT.\n\nI've done this. I didn't say that 0 meant disabling the strategy. Do you\nthink that would be useful?\n\n> 9. 
The following might be worthy of a comment explaining the order of\n> precedence of how we choose the size:\n>\n> if (params->ring_size == -1)\n> {\n> if (VacuumBufferUsageLimit == -1)\n> bstrategy = GetAccessStrategy(BAS_VACUUM);\n> else\n> bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n> }\n> else\n> bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, params->ring_size);\n\nI've updated this. Also, after doing so, I realized the if/else logic\nhere and in ExecVacuum() could be better and updated the ordering to\nmore closely mirror the human readable logic.\n\n> 10. I wonder if you need to keep bufsize_limit_to_nbuffers(). It's\n> just used once and seems trivial enough just to write that code inside\n> GetAccessStrategyWithSize().\n\nI've gotten rid of it.\n\n> 11. It's probably worth putting the valid range in the sample config:\n>\n> #vacuum_buffer_usage_limit = -1 # size of vacuum and analyze buffer\n> access strategy ring.\n> # -1 to use default,\n> # 0 to disable vacuum buffer access strategy\n> # > 0 to specify size <-- here\n\nDone.\n\n> 12. Is bufmgr.h the right place for these?\n>\n> /*\n> * Upper and lower hard limits for the Buffer Access Strategy ring size\n> * specified by the vacuum_buffer_usage_limit GUC and BUFFER_USAGE_LIMIT option\n> * to VACUUM and ANALYZE.\n> */\n> #define MAX_BAS_RING_SIZE_KB (16 * 1024 * 1024)\n> #define MIN_BAS_RING_SIZE_KB 128\n>\n> Your comment implies they're VACUUM / ANALYZE limits. If we want to\n> impose these limits to all access strategies then these seem like good\n> names and location, otherwise, I imagine miscadmin.h is the correct\n> place. If so, they'll likely want to be renamed to something more\n> VACUUM specific. I don't particularly have a preference. 
128 -\n> 16777216 seem like reasonable limits for any buffer access strategy.\n\nI don't assert on these limits in GetAccessStrategyWithSize(), and,\nsince the rest of the code is mainly only concerned with vacuum, I think\nit is better to make these limits vacuum-specific. If we decide to make\nother access strategies configurable, we can generalize these macros\nthen. As such, I have moved them into miscadmin.h.\n\n> 13. I think check_vacuum_buffer_usage_limit() does not belong in\n> freelist.c. Maybe vacuum.c?\n\nI've moved it to vacuum.c. I put it above ExecVacuum() since that would\nbe correct alphabetically, but I'm not sure if it would be better to\nmove it down since ExecVacuum() is the \"main entry point\".\n\n> 14. Not related to this patch, but why do we have half the vacuum\n> related GUCs in vacuum.c and the other half in globals.c? I see\n> vacuum_freeze_table_age is defined in vacuum.c but is also needed in\n> autovacuum.c, so that rules out the globals.c ones being for vacuum.c\n> and autovacuum.c. It seems a bit messy. I'm not really sure where\n> VacuumBufferUsageLimit should go now.\n\nI've left it where it is and added a (helpful?) comment.\n\n> > Remaining TODOs:\n> > - tests\n> > - do something about config reload changing GUC\n>\n> Shouldn't table_recheck_autovac() pfree/palloc a new strategy if the\n> size changes?\n\nSee thoughts about that below in response to Andres' mail.\n\n> I'm not sure what the implications are with that and the other patch\n> you're working on to allow vacuum config changes mid-vacuum. We'll\n> need to be careful and not immediately break that if that gets\n> committed then this does or vice-versa.\n\nWe can think hard about this. If we go with adding a TODO for the size,\nand keeping the same ring, it won't be a problem.\n\nOn Wed, Apr 5, 2023 at 1:05 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2023-04-04 13:53:15 -0400, Melanie Plageman wrote:\n> > > 8. 
I don't quite follow this comment:\n> > >\n> > > /*\n> > > * TODO: should this be 0 so that we are sure that vacuum() never\n> > > * allocates a new bstrategy for us, even if we pass in NULL for that\n> > > * parameter? maybe could change how failsafe NULLs out bstrategy if\n> > > * so?\n> > > */\n> > >\n> > > Can you explain under what circumstances would vacuum() allocate a\n> > > bstrategy when do_autovacuum() would not? Is this a case of a config\n> > > reload where someone changes vacuum_buffer_usage_limit from 0 to\n> > > something non-zero? If so, perhaps do_autovacuum() needs to detect\n> > > this and allocate a strategy rather than having vacuum() do it once\n> > > per table (wastefully).\n>\n> Hm. I don't much like that we use a single strategy for multiple tables\n> today. That way even tiny tables never end up in shared_buffers. But that's\n> really a discussion for a different thread. However, if were to use a\n> per-table bstrategy, it'd be a lot easier to react to changes of the config.\n\nAgreed. I was wondering if it is okay to do the palloc()/pfree() for\nevery table given that some may be small.\n\n> I doubt it's worth adding complications to the code for changing the size of\n> the ringbuffer during an ongoing vacuum scan, at least for 16. Reacting to\n> enabling/disbling the ringbuffer alltogether seems a bit more important, but\n> still not crucial compared to making it configurable at all.\n>\n> I think it'd be OK to add a comment saying something like \"XXX: In the future\n> we might want to react to configuration changes of the ring buffer size during\n> a vacuum\" or such.\n\nI've added the XXX to the autovacuum code. I think you mean it also\ncould be considered for VACUUM, but I've refrained from mentioning that\nfor now.\n\n> WRT to the TODO specifically: Yes, passing in 0 seems to make sense. I don't\n> see a reason not to do so? 
But perhaps there's a better solution:\n\nI've done that (passed in a 0), I was concerned that future code may\nreference this vacuum param and expect it to be aligned with the Buffer\nAccess Strategy in use. Really only vacuum() should be referencing the\nparams, so, perhaps it is not an issue...\n\nOkay, now I've convinced myself that it is better to allocate the\nstrategy in ExecVacuum(). Then we can get rid of the\nVacuumParams->ring_size altogether.\n\nI haven't done that in this version because of the below concern (re: it\nbeing appropriate to allocate the strategy in ExecVacuum() given its\ncurrent concern/focus).\n\n> Perhaps the best solution for autovac vs interactive vacuum issue would be to\n> move the allocation of the bstrategy to ExecVacuum()?\n\nThought about this -- I did think it might be a bit weird since\nExecVacuum() mainly does option handling and sanity checking. Doing\nBuffer Access Strategy allocation seemed a bit out of place. I've left\nit as is, but would be happy to change it if the consensus is that this\nis better.\n\n> Random note while looking at the code:\n> ISTM that adding handling of -1 in GetAccessStrategyWithSize() would make the\n> code more readable. Instead of\n>\n> if (params->ring_size == -1)\n> {\n> if (VacuumBufferUsageLimit == -1)\n> bstrategy = GetAccessStrategy(BAS_VACUUM);\n> else\n> bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n> }\n> else\n> bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, params->ring_size);\n>\n> you could just have something like:\n> bstrategy = GetAccessStrategyWithSize(BAS_VACUUM,\n> params->ring_size != -1 ? params->ring_size : VacuumBufferUsageLimit);\n>\n> by falling back to the default values from GetAccessStrategy().\n>\n> Or even more \"extremely\", you could entirely remove references to\n> VacuumBufferUsageLimit and handle that in freelist.c\n\nHmm. 
I see what you mean.\n\nI've updated it to this, which is a bit better.\n\n if (params->ring_size > -1)\n bstrategy = GetAccessStrategyWithSize(BAS_VACUUM,\nparams->ring_size);\n else if (VacuumBufferUsageLimit > -1)\n bstrategy = GetAccessStrategyWithSize(BAS_VACUUM,\nVacuumBufferUsageLimit);\n else\n bstrategy = GetAccessStrategy(BAS_VACUUM);\n\nNot referencing VacuumBufferUsageLimit except in freelist.c is more\nchallenging because I think it would be weird to have\nGetAccessStrategyWithSize() call GetAccessStrategy() which then calls\nGetAccessStrategyWithSize().\n\n- Melanie", "msg_date": "Wed, 5 Apr 2023 15:25:52 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Apr 5, 2023 at 1:05 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-04-04 13:53:15 -0400, Melanie Plageman wrote:\n> > > 8. I don't quite follow this comment:\n> > >\n> > > /*\n> > > * TODO: should this be 0 so that we are sure that vacuum() never\n> > > * allocates a new bstrategy for us, even if we pass in NULL for that\n> > > * parameter? maybe could change how failsafe NULLs out bstrategy if\n> > > * so?\n> > > */\n> > >\n> > > Can you explain under what circumstances would vacuum() allocate a\n> > > bstrategy when do_autovacuum() would not? Is this a case of a config\n> > > reload where someone changes vacuum_buffer_usage_limit from 0 to\n> > > something non-zero? If so, perhaps do_autovacuum() needs to detect\n> > > this and allocate a strategy rather than having vacuum() do it once\n> > > per table (wastefully).\n>\n> Hm. I don't much like that we use a single strategy for multiple tables\n> today. That way even tiny tables never end up in shared_buffers. But that's\n> really a discussion for a different thread. 
However, if were to use a\n> per-table bstrategy, it'd be a lot easier to react to changes of the config.\n>\n>\n> I doubt it's worth adding complications to the code for changing the size of\n> the ringbuffer during an ongoing vacuum scan, at least for 16. Reacting to\n> enabling/disbling the ringbuffer alltogether seems a bit more important, but\n> still not crucial compared to making it configurable at all.\n>\n> I think it'd be OK to add a comment saying something like \"XXX: In the future\n> we might want to react to configuration changes of the ring buffer size during\n> a vacuum\" or such.\n>\n> WRT to the TODO specifically: Yes, passing in 0 seems to make sense. I don't\n> see a reason not to do so? But perhaps there's a better solution:\n>\n> Perhaps the best solution for autovac vs interactive vacuum issue would be to\n> move the allocation of the bstrategy to ExecVacuum()?\n\nSo, I started looking into allocating the bstrategy in ExecVacuum().\n\nWhile doing so, I was trying to understand if the \"sanity checking\" in\nvacuum() could possibly apply to autovacuum, and I don't really see how.\n\nAFAICT, autovacuum does not ever set VACOPT_DISABLE_PAGE_SKIPPING or\nVACOPT_FULL or VACOPT_ONLY_DATABASE_STATS.\n\nWe could move those sanity checks up into ExecVacuum().\n\nI also noticed that we make the vac_context in vacuum() which says it is\nfor \"cross-transaction storage\". We use it for the buffer access\nstrategy and for the newrels relation list created in vacuum(). 
Then we\ndelete it at the end of vacuum().\n\nAutovacuum workers already make a similar kind of memory context called\nAutovacMemCxt in do_autovacuum() which the comment says is for the list\nof relations to vacuum/analyze across transactions.\n\nWhat if we made ExecVacuum() make its own memory context and both it and\ndo_autovacuum() pass that memory context (along with the buffer access\nstrategy they make) to vacuum(), which then uses the memory context in\nthe same way it does now?\n\nIt simplifies the buffer usage limit patchset and also seems a bit more\nclear than what is there now?\n\n- Melanie\n\n\n", "msg_date": "Wed, 5 Apr 2023 16:17:20 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn 2023-04-05 16:17:20 -0400, Melanie Plageman wrote:\n> On Wed, Apr 5, 2023 at 1:05 PM Andres Freund <andres@anarazel.de> wrote:\n> > Perhaps the best solution for autovac vs interactive vacuum issue would be to\n> > move the allocation of the bstrategy to ExecVacuum()?\n> \n> So, I started looking into allocating the bstrategy in ExecVacuum().\n> \n> While doing so, I was trying to understand if the \"sanity checking\" in\n> vacuum() could possibly apply to autovacuum, and I don't really see how.\n> \n> AFAICT, autovacuum does not ever set VACOPT_DISABLE_PAGE_SKIPPING or\n> VACOPT_FULL or VACOPT_ONLY_DATABASE_STATS.\n> \n> We could move those sanity checks up into ExecVacuum().\n\nWould make sense.\n\nISTM that eventually most of what currently happens in vacuum() should be in\nExecVacuum(). There's a lot of stuff that can't happen for autovacuum. So it\njust seems to make more sense to move those parts to ExecVacuum().\n\n\n> I also noticed that we make the vac_context in vacuum() which says it is\n> for \"cross-transaction storage\". 
We use it for the buffer access\n> strategy and for the newrels relation list created in vacuum(). Then we\n> delete it at the end of vacuum().\n\n> Autovacuum workers already make a similar kind of memory context called\n> AutovacMemCxt in do_autovacuum() which the comment says is for the list\n> of relations to vacuum/analyze across transactions.\n\nAutovacMemCxt seems to be a bit longer lived / cover more than the context\ncreated in vacuum(). It's where all the hash tables etc live that\ndo_autovacuum() uses to determine what to vacuum.\n\nNote that do_autovacuum() also creates:\n\n\t/*\n\t * create a memory context to act as fake PortalContext, so that the\n\t * contexts created in the vacuum code are cleaned up for each table.\n\t */\n\tPortalContext = AllocSetContextCreate(AutovacMemCxt,\n\t\t\t\t\t\t\t\t\t\t \"Autovacuum Portal\",\n\t\t\t\t\t\t\t\t\t\t ALLOCSET_DEFAULT_SIZES);\n\nwhich is then what vacuum() creates the \"Vacuum\" context in.\n\n\n> What if we made ExecVacuum() make its own memory context and both it and\n> do_autovacuum() pass that memory context (along with the buffer access\n> strategy they make) to vacuum(), which then uses the memory context in\n> the same way it does now?\n\nMaybe? It's not clear to me why it'd be a win.\n\n\n> It simplifies the buffer usage limit patchset and also seems a bit more\n> clear than what is there now?\n\nI don't really see what it'd make simpler? 
The context in vacuum() is used for\njust that vacuum - we couldn't just use AutovacMemCxt, as that'd live much\nlonger (for all the tables an autovac worker processes).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 5 Apr 2023 14:15:34 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, 5 Apr 2023 at 16:37, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Tue, Apr 4, 2023 at 8:14 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> > 14. Not related to this patch, but why do we have half the vacuum\n> > related GUCs in vacuum.c and the other half in globals.c? I see\n> > vacuum_freeze_table_age is defined in vacuum.c but is also needed in\n> > autovacuum.c, so that rules out the globals.c ones being for vacuum.c\n> > and autovacuum.c. It seems a bit messy. I'm not really sure where\n> > VacuumBufferUsageLimit should go now.\n>\n> vacuum_freeze_table_age is an abomination, even compared to the rest\n> of these GUCs. It was added around the time the visibility map first\n> went in, and so is quite a bit more recent than\n> autovacuum_freeze_max_age.\n>\n> Before the introduction of the visibility map, we only had\n> autovacuum_freeze_max_age, and it was used to schedule antiwraparound\n> autovacuums -- there was no such thing as aggressive VACUUMs (just\n> antiwraparound autovacuums), and no need for vacuum_freeze_table_age\n> at all. So autovacuum_freeze_max_age was just for autovacuum.c code.\n> There was only one type of VACUUM, and they always advanced\n> relfrozenxid to the same degree.\n>\n> With the introduction of the visibility map, we needed to have\n> autovacuum_freeze_max_age in vacuum.c for the first time, to deal with\n> interpreting the then-new vacuum_freeze_table_age GUC correctly. 
We\n> silently truncate the vacuum_freeze_table_age setting so that it never\n> exceeds 95% of autovacuum_freeze_max_age (the\n> 105%-of-autovacuum_freeze_max_age vacuum_failsafe_age thing that\n> you're discussing is symmetric). So after 2009\n> autovacuum_freeze_max_age actually plays an important role in VACUUM,\n> the command, and not just autovacuum.\n>\n> This is related to the problem of the autovacuum_freeze_max_age\n> reloption being completely broken [1]. If autovacuum_freeze_max_age\n> was still purely just an autovacuum.c scheduling thing, then there\n> would be no problem with having a separate reloption of the same name.\n> There are big problems precisely because vacuum.c doesn't do anything\n> with the autovacuum_freeze_max_age reloption. Though it does okay with\n> the autovacuum_freeze_table_age reloption, which gets passed in. (Yes,\n> it's called autovacuum_freeze_table_age in reloption form -- not\n> vacuum_freeze_table_age, like the GUC).\n>\n> Note that the decision to ignore the reloption version of\n> autovacuum_freeze_max_age in the failsafe's\n> 105%-of-autovacuum_freeze_max_age thing was a deliberate one. The\n> autovacuum_freeze_max_age GUC is authoritative in the sense that it\n> cannot be overridden locally, except in the direction of making\n> aggressive VACUUMs happen more frequently for a given table (so they\n> can't be less frequent via reloption configuration). I suppose you'd\n> have to untangle that mess if you wanted to fix the\n> autovacuum_freeze_max_age reloption issue I go into in [1].\n>\n> [1] https://postgr.es/m/CAH2-Wz=DJAokY_GhKJchgpa8k9t_H_OVOvfPEn97jGNr9W=deg@mail.gmail.com\n\nI read this twice yesterday and again this morning. It looks like\nyou're taking an opportunity to complain/vent about\nvacuum_freeze_table_age and didn't really answer my query about why\nall the vacuum GUCs aren't defined in the one file. 
I'd just picked\nvacuum_freeze_table_age as a random one from vacuum.c to raise the\npoint about the inconsistency about the GUC locations.\n\nI don't think this is the place to raise concerns with existing GUCs,\nbut if you did have a point about the GUC locations, then you might\nneed to phrase it differently as I didn't catch it.\n\nDavid\n\n\n", "msg_date": "Thu, 6 Apr 2023 09:33:25 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Apr 5, 2023 at 2:33 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> I read this twice yesterday and again this morning. It looks like\n> you're taking an opportunity to complain/vent about\n> vacuum_freeze_table_age and didn't really answer my query about why\n> all the vacuum GUCs aren't defined in the one file. I'd just picked\n> vacuum_freeze_table_age as a random one from vacuum.c to raise the\n> point about the inconsistency about the GUC locations.\n\nI thought that the point was obvious. Which is: the current situation\nwith the locations of these GUCs came about because the division\nbetween autovacuum and VACUUM used to be a lot clearer, but that\nchanged. Without the locations of the GUCs also changing. More\ngenerally, the current structure has lots of problems. 
And so it seems\nto me that you're probably not wrong to suspect that it just doesn't\nmake much sense to keep them in different files now.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 5 Apr 2023 15:05:21 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Apr 5, 2023 at 5:15 PM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> On 2023-04-05 16:17:20 -0400, Melanie Plageman wrote:\n> > On Wed, Apr 5, 2023 at 1:05 PM Andres Freund <andres@anarazel.de> wrote:\n> > > Perhaps the best solution for autovac vs interactive vacuum issue would be to\n> > > move the allocation of the bstrategy to ExecVacuum()?\n> >\n> > So, I started looking into allocating the bstrategy in ExecVacuum().\n> >\n> > While doing so, I was trying to understand if the \"sanity checking\" in\n> > vacuum() could possibly apply to autovacuum, and I don't really see how.\n> >\n> > AFAICT, autovacuum does not ever set VACOPT_DISABLE_PAGE_SKIPPING or\n> > VACOPT_FULL or VACOPT_ONLY_DATABASE_STATS.\n> >\n> > We could move those sanity checks up into ExecVacuum().\n>\n> Would make sense.\n>\n> ISTM that eventually most of what currently happens in vacuum() should be in\n> ExecVacuum(). There's a lot of stuff that can't happen for autovacuum. So it\n> just seems to make more sense to move those parts to ExecVacuum().\n\nI've done that in the attached wip patch. It is perhaps too much of a\nchange, I dunno.\n\n> > I also noticed that we make the vac_context in vacuum() which says it is\n> > for \"cross-transaction storage\". We use it for the buffer access\n> > strategy and for the newrels relation list created in vacuum(). 
Then we\n> > delete it at the end of vacuum().\n>\n> > Autovacuum workers already make a similar kind of memory context called\n> > AutovacMemCxt in do_autovacuum() which the comment says is for the list\n> > of relations to vacuum/analyze across transactions.\n>\n> AutovacMemCxt seems to be a bit longer lived / cover more than the context\n> created in vacuum(). It's where all the hash tables etc live that\n> do_autovacuum() uses to determine what to vacuum.\n>\n> Note that do_autovacuum() also creates:\n>\n> /*\n> * create a memory context to act as fake PortalContext, so that the\n> * contexts created in the vacuum code are cleaned up for each table.\n> */\n> PortalContext = AllocSetContextCreate(AutovacMemCxt,\n> \"Autovacuum Portal\",\n> ALLOCSET_DEFAULT_SIZES);\n>\n> which is then what vacuum() creates the \"Vacuum\" context in.\n\nYea, I realized that when writing the patch.\n\n> > What if we made ExecVacuum() make its own memory context and both it and\n> > do_autovacuum() pass that memory context (along with the buffer access\n> > strategy they make) to vacuum(), which then uses the memory context in\n> > the same way it does now?\n>\n> Maybe? It's not clear to me why it'd be a win.\n\nLess that it is a win and more that we need access to that memory\ncontext when allocating the buffer access strategy, so we would have had\nto make it in ExecVacuum(). And if we have already made it, we would\nneed to pass it in to vacuum() for it to use.\n\n> > It simplifies the buffer usage limit patchset and also seems a bit more\n> > clear than what is there now?\n>\n> I don't really see what it'd make simpler? The context in vacuum() is used for\n> just that vacuum - we couldn't just use AutovacMemCxt, as that'd live much\n> longer (for all the tables a autovac worker processes).\n\nAutovacuum already made the BufferAccessStrategy in the AutovacMemCxt,\nso this is the same behavior. 
I simply made autovacuum_do_vac_analyze()\nmake the per table vacuum memory context and pass that to vacuum(). So\nwe have the same amount of memory context granularity as before.\n\nAttached patchset has some kind of isolation test failure due to a hard\ndeadlock that I haven't figured out yet. I thought it was something with\nthe \"in_vacuum\" static variable and having VACUUM or ANALYZE called when\nalready in VACUUM or ANALYZE, but that variable is the same as in\nmaster.\n\nI've mostly shared it because I want to know if this approach is worth\npursuing or not.\n\nAlso, while working on it, I noticed that I made a mistake in the code\nthat was committed in 4830f102 and didn't remember that we should still\nmake a Buffer Access Strategy in the case of VACUUM (FULL, ANALYZE).\n\nChanging this:\n\nif (params->options & (VACOPT_ONLY_DATABASE_STATS | VACOPT_FULL)) == 0)\n\nto this:\n\nif ((params.options & VACOPT_ONLY_DATABASE_STATS) == 0 ||\n (params.options & VACOPT_FULL && (params.options & VACOPT_ANALYZE) == 0)\n\nshould fix it.\n\n- Melanie", "msg_date": "Wed, 5 Apr 2023 18:55:10 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Apr 5, 2023 at 6:55 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Wed, Apr 5, 2023 at 5:15 PM Andres Freund <andres@anarazel.de> wrote:\n> >\n> > Hi,\n> >\n> > On 2023-04-05 16:17:20 -0400, Melanie Plageman wrote:\n> > > On Wed, Apr 5, 2023 at 1:05 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > Perhaps the best solution for autovac vs interactive vacuum issue would be to\n> > > > move the allocation of the bstrategy to ExecVacuum()?\n> > >\n> > > So, I started looking into allocating the bstrategy in ExecVacuum().\n> > >\n> > > While doing so, I was trying to understand if the \"sanity checking\" in\n> > > vacuum() could possibly apply to autovacuum, and I 
don't really see how.\n> > >\n> > > AFAICT, autovacuum does not ever set VACOPT_DISABLE_PAGE_SKIPPING or\n> > > VACOPT_FULL or VACOPT_ONLY_DATABASE_STATS.\n> > >\n> > > We could move those sanity checks up into ExecVacuum().\n> >\n> > Would make sense.\n> >\n> > ISTM that eventually most of what currently happens in vacuum() should be in\n> > ExecVacuum(). There's a lot of stuff that can't happen for autovacuum. So it\n> > just seems to make more sense to move those parts to ExecVacuum().\n>\n> I've done that in the attached wip patch. It is perhaps too much of a\n> change, I dunno.\n>\n> > > I also noticed that we make the vac_context in vacuum() which says it is\n> > > for \"cross-transaction storage\". We use it for the buffer access\n> > > strategy and for the newrels relation list created in vacuum(). Then we\n> > > delete it at the end of vacuum().\n> >\n> > > Autovacuum workers already make a similar kind of memory context called\n> > > AutovacMemCxt in do_autovacuum() which the comment says is for the list\n> > > of relations to vacuum/analyze across transactions.\n> >\n> > AutovacMemCxt seems to be a bit longer lived / cover more than the context\n> > created in vacuum(). 
It's where all the hash tables etc live that\n> > do_autovacuum() uses to determine what to vacuum.\n> >\n> > Note that do_autovacuum() also creates:\n> >\n> > /*\n> > * create a memory context to act as fake PortalContext, so that the\n> > * contexts created in the vacuum code are cleaned up for each table.\n> > */\n> > PortalContext = AllocSetContextCreate(AutovacMemCxt,\n> > \"Autovacuum Portal\",\n> > ALLOCSET_DEFAULT_SIZES);\n> >\n> > which is then what vacuum() creates the \"Vacuum\" context in.\n>\n> Yea, I realized that when writing the patch.\n>\n> > > What if we made ExecVacuum() make its own memory context and both it and\n> > > do_autovacuum() pass that memory context (along with the buffer access\n> > > strategy they make) to vacuum(), which then uses the memory context in\n> > > the same way it does now?\n> >\n> > Maybe? It's not clear to me why it'd be a win.\n>\n> Less that it is a win and more that we need access to that memory\n> context when allocating the buffer access strategy, so we would have had\n> to make it in ExecVacuum(). And if we have already made it, we would\n> need to pass it in to vacuum() for it to use.\n>\n> > > It simplifies the buffer usage limit patchset and also seems a bit more\n> > > clear than what is there now?\n> >\n> > I don't really see what it'd make simpler? The context in vacuum() is used for\n> > just that vacuum - we couldn't just use AutovacMemCxt, as that'd live much\n> > longer (for all the tables a autovac worker processes).\n>\n> Autovacuum already made the BufferAccessStrategy in the AutovacMemCxt,\n> so this is the same behavior. I simply made autovacuum_do_vac_analyze()\n> make the per table vacuum memory context and pass that to vacuum(). So\n> we have the same amount of memory context granularity as before.\n>\n> Attached patchset has some kind of isolation test failure due to a hard\n> deadlock that I haven't figured out yet. 
I thought it was something with\n> the \"in_vacuum\" static variable and having VACUUM or ANALYZE called when\n> already in VACUUM or ANALYZE, but that variable is the same as in\n> master.\n>\n> I've mostly shared it because I want to know if this approach is worth\n> pursuing or not.\n\nFigured out how to fix the issue, though I'm not sure I understand how\nthe issue can occur.\nuse_own_xacts seems like it will always be true for autovacuum when it\ncalls vacuum() and ExecVacuum() only calls vacuum() once, so I thought\nthat I could make use_own_xacts a parameter to vacuum() and push up its\ncalculation for VACUUM and ANALYZE into ExecVacuum().\nThis caused a deadlock, so there must be a way that in_vacuum is false\nbut vacuum() is called in a nested context.\nAnyway, recalculating it every time in vacuum() fixes it.\n\nAttached is a v12 of the whole vacuum_buffer_usage_limit patch set which\nincludes a commit to fix the bug in master and a commit to move relevant\ncode from vacuum() up into ExecVacuum().\n\nThe logic I suggested earlier for fixing the bug was...not right.\nAttached fix should be right?\n\n- Melanie", "msg_date": "Wed, 5 Apr 2023 20:41:48 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Thu, 6 Apr 2023 at 12:42, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> Attached is a v12 of the whole vacuum_buffer_usage_limit patch set which\n> includes a commit to fix the bug in master and a commit to move relevant\n> code from vacuum() up into ExecVacuum().\n\nI'm still playing catch up to the moving of the pre-checks from\nvacuum() to ExecVacuum(). I'm now wondering...\n\nIs it intended that VACUUM t1,t2; now share the same strategy?\nCurrently, in master, we'll allocate a new strategy for t2 after\nvacuuming t1. 
Does this not mean we'll now leave fewer t1 pages in\nshared_buffers because the reuse of the strategy will force them out\nwith t2 pages? I understand there's nothing particularly invalid\nabout that, but it is a change in behaviour that the patch seems to be\nmaking with very little consideration as to if it's better or worse.\n\nDavid\n\n\n", "msg_date": "Thu, 6 Apr 2023 13:14:47 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Apr 5, 2023 at 9:15 PM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Thu, 6 Apr 2023 at 12:42, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > Attached is a v12 of the whole vacuum_buffer_usage_limit patch set which\n> > includes a commit to fix the bug in master and a commit to move relevant\n> > code from vacuum() up into ExecVacuum().\n>\n> I'm still playing catch up to the moving of the pre-checks from\n> vacuum() to ExecVacuum(). I'm now wondering...\n>\n> Is it intended that VACUUM t1,t2; now share the same strategy?\n> Currently, in master, we'll allocate a new strategy for t2 after\n> vacuuming t1. Does this not mean we'll now leave fewer t1 pages in\n> shared_buffers because the reuse of the strategy will force them out\n> with t2 pages? 
I understand there's nothing particularly invalid\n> about that, but it is a change in behaviour that the patch seems to be\n> making with very little consideration as to if it's better or worse.\n\nI'm pretty sure that in master we also reuse the strategy since we make\nit above this loop in vacuum() (and pass it in)\n\n /*\n * Loop to process each selected relation.\n */\n foreach(cur, relations)\n {\n VacuumRelation *vrel = lfirst_node(VacuumRelation, cur);\n if (params->options & VACOPT_VACUUM)\n {\n if (!vacuum_rel(vrel->oid, vrel->relation, params,\nfalse, bstrategy))\n continue;\n }\n\nOn Wed, Apr 5, 2023 at 8:41 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> On Wed, Apr 5, 2023 at 6:55 PM Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > On Wed, Apr 5, 2023 at 5:15 PM Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > Hi,\n> > >\n> > > On 2023-04-05 16:17:20 -0400, Melanie Plageman wrote:\n> > > > On Wed, Apr 5, 2023 at 1:05 PM Andres Freund <andres@anarazel.de> wrote:\n> > > > > Perhaps the best solution for autovac vs interactive vacuum issue would be to\n> > > > > move the allocation of the bstrategy to ExecVacuum()?\n> > > >\n> > > > So, I started looking into allocating the bstrategy in ExecVacuum().\n> > > >\n> > > > While doing so, I was trying to understand if the \"sanity checking\" in\n> > > > vacuum() could possibly apply to autovacuum, and I don't really see how.\n> > > >\n> > > > AFAICT, autovacuum does not ever set VACOPT_DISABLE_PAGE_SKIPPING or\n> > > > VACOPT_FULL or VACOPT_ONLY_DATABASE_STATS.\n> > > >\n> > > > We could move those sanity checks up into ExecVacuum().\n> > >\n> > > Would make sense.\n> > >\n> > > ISTM that eventually most of what currently happens in vacuum() should be in\n> > > ExecVacuum(). There's a lot of stuff that can't happen for autovacuum. So it\n> > > just seems to make more sense to move those parts to ExecVacuum().\n> >\n> > I've done that in the attached wip patch. 
It is perhaps too much of a\n> > change, I dunno.\n> >\n> > > > I also noticed that we make the vac_context in vacuum() which says it is\n> > > > for \"cross-transaction storage\". We use it for the buffer access\n> > > > strategy and for the newrels relation list created in vacuum(). Then we\n> > > > delete it at the end of vacuum().\n> > >\n> > > > Autovacuum workers already make a similar kind of memory context called\n> > > > AutovacMemCxt in do_autovacuum() which the comment says is for the list\n> > > > of relations to vacuum/analyze across transactions.\n> > >\n> > > AutovacMemCxt seems to be a bit longer lived / cover more than the context\n> > > created in vacuum(). It's where all the hash tables etc live that\n> > > do_autovacuum() uses to determine what to vacuum.\n> > >\n> > > Note that do_autovacuum() also creates:\n> > >\n> > > /*\n> > > * create a memory context to act as fake PortalContext, so that the\n> > > * contexts created in the vacuum code are cleaned up for each table.\n> > > */\n> > > PortalContext = AllocSetContextCreate(AutovacMemCxt,\n> > > \"Autovacuum Portal\",\n> > > ALLOCSET_DEFAULT_SIZES);\n> > >\n> > > which is then what vacuum() creates the \"Vacuum\" context in.\n> >\n> > Yea, I realized that when writing the patch.\n> >\n> > > > What if we made ExecVacuum() make its own memory context and both it and\n> > > > do_autovacuum() pass that memory context (along with the buffer access\n> > > > strategy they make) to vacuum(), which then uses the memory context in\n> > > > the same way it does now?\n> > >\n> > > Maybe? It's not clear to me why it'd be a win.\n> >\n> > Less that it is a win and more that we need access to that memory\n> > context when allocating the buffer access strategy, so we would have had\n> > to make it in ExecVacuum(). 
And if we have already made it, we would\n> > need to pass it in to vacuum() for it to use.\n> >\n> > > > It simplifies the buffer usage limit patchset and also seems a bit more\n> > > > clear than what is there now?\n> > >\n> > > I don't really see what it'd make simpler? The context in vacuum() is used for\n> > > just that vacuum - we couldn't just use AutovacMemCxt, as that'd live much\n> > > longer (for all the tables a autovac worker processes).\n> >\n> > Autovacuum already made the BufferAccessStrategy in the AutovacMemCxt,\n> > so this is the same behavior. I simply made autovacuum_do_vac_analyze()\n> > make the per table vacuum memory context and pass that to vacuum(). So\n> > we have the same amount of memory context granularity as before.\n> >\n> > Attached patchset has some kind of isolation test failure due to a hard\n> > deadlock that I haven't figured out yet. I thought it was something with\n> > the \"in_vacuum\" static variable and having VACUUM or ANALYZE called when\n> > already in VACUUM or ANALYZE, but that variable is the same as in\n> > master.\n> >\n> > I've mostly shared it because I want to know if this approach is worth\n> > pursuing or not.\n>\n> Figured out how to fix the issue, though I'm not sure I understand how\n> the issue can occur.\n> use_own_xacts seems like it will always be true for autovacuum when it\n> calls vacuum() and ExecVacuum() only calls vacuum() once, so I thought\n> that I could make use_own_xacts a parameter to vacuum() and push up its\n> calculation for VACUUM and ANALYZE into ExecVacuum().\n> This caused a deadlock, so there must be a way that in_vacuum is false\n> but vacuum() is called in a nested context.\n> Anyway, recalculating it every time in vacuum() fixes it.\n>\n> Attached is a v12 of the whole vacuum_buffer_usage_limit patch set which\n> includes a commit to fix the bug in master and a commit to move relevant\n> code from vacuum() up into ExecVacuum().\n>\n> The logic I suggested earlier for fixing the 
bug was...not right.\n> Attached fix should be right?\n\nDavid had already pushed a fix, so the patchset had merge conflicts.\nAttached v13 should work.\n\n- Melanie", "msg_date": "Wed, 5 Apr 2023 21:24:59 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Thu, 6 Apr 2023 at 13:14, David Rowley <dgrowleyml@gmail.com> wrote:\n> Is it intended that VACUUM t1,t2; now share the same strategy?\n> Currently, in master, we'll allocate a new strategy for t2 after\n> vacuuming t1. Does this not mean we'll now leave fewer t1 pages in\n> shared_buffers because the reuse of the strategy will force them out\n> with t2 pages? I understand there's nothing particularly invalid\n> about that, but it is a change in behaviour that the patch seems to be\n> making with very little consideration as to if it's better or worse.\n\nActually, never mind that. I'm wrong. The same strategy is used for\nboth tables before and after this change.\n\nI stumbled on thinking vacuum() was being called in a loop from\nExecVacuum() rather than it just passing all of the relations to\nvacuum() and the loop being done inside vacuum(), which it does.\n\nDavid", "msg_date": "Thu, 6 Apr 2023 13:25:13 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Attached is v14 which adds back in tests for the BUFFER_USAGE_LIMIT\noption. I haven't included a test for VACUUM (BUFFER_USAGE_LIMIT x,\nPARALLEL x) for the reason I mentioned upthread -- even if we force it\nto actually do the parallel vacuuming, we are exercising the code\nwhere parallel vacuum workers make their own buffer access strategy\nrings but not really adding a test that will fail usefully.
If something\nis wrong with the configurability of the buffer access strategy object,\nI don't see how it will break differently in parallel vacuum workers vs\nregular vacuum.\n\n- Melanie", "msg_date": "Wed, 5 Apr 2023 22:14:42 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "second On Thu, 6 Apr 2023 at 14:14, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n> Attached is v14 which adds back in tests for the BUFFER_USAGE_LIMIT\n> option.\n\nI've spent quite a bit of time looking at this since you sent it. I've\nalso made quite a few changes, mostly cosmetic ones, but there are a\nfew things below which are more fundamental.\n\n1. I don't really think we need to support VACUUM (BUFFER_USAGE_LIMIT\n-1); It's just the same as VACUUM; Removing that makes the\ndocumentation more simple.\n\n2. I don't think we really need to allow vacuum_buffer_usage_limit =\n-1. I think we can just set this to 256 and leave it. If we allow -1\nthen we need to document what -1 means. The more I think about it, the\nmore strange it seems to allow -1. I can't quite imagine work_mem = -1\nmeans 4MB. Why 4MB? 
Changing this means we don't really need to do\nanything special in:\n\n+ if (VacuumBufferUsageLimit > -1)\n+ bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n+ else\n+ bstrategy = GetAccessStrategy(BAS_VACUUM);\n\nThat simply becomes:\n\nbstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n\nThe code inside GetAccessStrategyWithSize() handles returning NULL\nwhen the GUC is zero.\n\nThe equivalent in ExecVacuum() becomes:\n\nif (ring_size > -1)\n bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, ring_size);\nelse\n bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n\ninstead of:\n\nif (ring_size > -1)\n bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, ring_size);\nelse if (VacuumBufferUsageLimit > -1)\n bstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\nelse\n bstrategy = GetAccessStrategy(BAS_VACUUM);\n\n3. There was a bug in GetAccessStrategyBufferCount() (I renamed it to\nthat from StrategyGetBufferCount()) that didn't handle NULL input. The\nproblem was that if you set vacuum_buffer_usage_limit = 0 and then did\na parallel vacuum, GetAccessStrategyWithSize() would return NULL due\nto the 0 buffer input, but GetAccessStrategyBufferCount() couldn't\nhandle NULL. I've adjusted GetAccessStrategyBufferCount() just to\nreturn 0 on NULL input.\n\nMost of the rest is cosmetic. GetAccessStrategyWithSize() ended up\nlooking quite different.
I didn't see the sense in converting the\nshared_buffer size into kilobytes to compare when we could just\nconvert ring_size_kb to buffers slightly sooner and then just do:\n\n/* Cap to 1/8th of shared_buffers */\nring_buffers = Min(NBuffers / 8, ring_buffers);\n\nI renamed nbuffers to ring_buffers as it was a little too confusing to\nhave nbuffers (for ring size) and NBuffers (for shared_buffers).\n\nA few other changes like getting rid of the regression test and code\ncheck for VACUUM (ONLY_DATABASE_STATS, BUFFER_USAGE_LIMIT 0); There is\nalready an if check and ERROR that looks for ONLY_DATABASE_STATS with\nany other option slightly later in the function. I also got rid of\nthe documentation that mentioned that wasn't supported as there's\nalready a mention in the ONLY_DATABASE_STATS which says it's not\nsupported with anything else. No other option seemed to care enough to\nmention it, so I don't think BUFFER_USAGE_LIMIT is an exception.\n\nI've attached v15. I've only glanced at the vacuumdb patch so far.\nI'm not expecting it to be too controversial.\n\nI'm fairly happy with v15 now but would welcome anyone who wants to\nhave a look in the next 8 hours or so, else I plan to push it.\n\nDavid", "msg_date": "Thu, 6 Apr 2023 23:34:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Thu, Apr 06, 2023 at 11:34:44PM +1200, David Rowley wrote:\n> second On Thu, 6 Apr 2023 at 14:14, Melanie Plageman\n> <melanieplageman@gmail.com> wrote:\n> >\n> > Attached is v14 which adds back in tests for the BUFFER_USAGE_LIMIT\n> > option.\n> \n> I've spent quite a bit of time looking at this since you sent it. I've\n> also made quite a few changes, mostly cosmetic ones, but there are a\n> few things below which are more fundamental.\n> \n> 1. 
I don't really think we need to support VACUUM (BUFFER_USAGE_LIMIT\n> -1); It's just the same as VACUUM; Removing that makes the\n> documentation more simple.\n\nAgreed.\n \n> 2. I don't think we really need to allow vacuum_buffer_usage_limit =\n> -1. I think we can just set this to 256 and leave it. If we allow -1\n> then we need to document what -1 means. The more I think about it, the\n> more strange it seems to allow -1. I can't quite imagine work_mem = -1\n> means 4MB. Why 4MB?\n\nAgreed.\n \n> 3. There was a bug in GetAccessStrategyBufferCount() (I renamed it to\n> that from StrategyGetBufferCount()) that didn't handle NULL input. The\n> problem was that if you set vacuum_buffer_usage_limit = 0 then did a\n> parallel vacuum that GetAccessStrategyWithSize() would return NULL due\n> to the 0 buffer input, but GetAccessStrategyBufferCount() couldn't\n> handle NULL. I've adjusted GetAccessStrategyBufferCount() just to\n> return 0 on NULL input.\n\nGood catch.\n \n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index bcc49aec45..c421da348d 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -2001,6 +2001,35 @@ include_dir 'conf.d'\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry id=\"guc-vacuum-buffer-usage-limit\" xreflabel=\"vacuum_buffer_usage_limit\">\n> + <term>\n> + <varname>vacuum_buffer_usage_limit</varname> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>vacuum_buffer_usage_limit</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specifies the size of <varname>shared_buffers</varname> to be reused\n> + for each backend participating in a given invocation of\n> + <command>VACUUM</command> or <command>ANALYZE</command> or in\n> + autovacuum. 
This size is converted to the number of shared buffers\n> + which will be reused as part of a\n> + <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer Access Strategy</glossterm>.\n> + A setting of <literal>0</literal> will allow the operation to use any\n> + number of <varname>shared_buffers</varname>. The maximum size is\n> + <literal>16 GB</literal> and the minimum size is\n> + <literal>128 KB</literal>. If the specified size would exceed 1/8 the\n> + size of <varname>shared_buffers</varname>, it is silently capped to\n> + that value. The default value is <literal>-1</literal>. If this\n\nThis still says that the default value is -1.\n\n> diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml\n> index b6d30b5764..0b02d9faef 100644\n> --- a/doc/src/sgml/ref/vacuum.sgml\n> +++ b/doc/src/sgml/ref/vacuum.sgml\n> @@ -345,6 +346,24 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class=\"paramet\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry>\n> + <term><literal>BUFFER_USAGE_LIMIT</literal></term>\n> + <listitem>\n> + <para>\n> + Specifies the\n> + <glossterm linkend=\"glossary-buffer-access-strategy\">Buffer Access Strategy</glossterm>\n> + ring buffer size for <command>VACUUM</command>. This size is used to\n> + calculate the number of shared buffers which will be reused as part of\n> + this strategy. <literal>0</literal> disables use of a\n> + <literal>Buffer Access Strategy</literal>. If <option>ANALYZE</option>\n> + is also specified, the <option>BUFFER_USAGE_LIMIT</option> value is used\n> + for both the vacuum and analyze stages. This option can't be used with\n> + the <option>FULL</option> option except if <option>ANALYZE</option> is\n> + also specified.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nI noticed you seemed to have removed the reference to the GUC\nvacuum_buffer_usage_limit here. 
Was that intentional?\nWe may not need to mention \"falling back\" as I did before, however, the\nGUC docs mention max/min values and such, which might be useful.\n\n> diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\n> index ea1d8960f4..e92738c7f0 100644\n> --- a/src/backend/commands/vacuum.c\n> +++ b/src/backend/commands/vacuum.c\n> @@ -56,6 +56,7 @@\n> #include \"utils/acl.h\"\n> #include \"utils/fmgroids.h\"\n> #include \"utils/guc.h\"\n> +#include \"utils/guc_hooks.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/pg_rusage.h\"\n> #include \"utils/snapmgr.h\"\n> @@ -95,6 +96,30 @@ static VacOptValue get_vacoptval_from_boolean(DefElem *def);\n> static bool vac_tid_reaped(ItemPointer itemptr, void *state);\n> static int\tvac_cmp_itemptr(const void *left, const void *right);\n> \n> +/*\n> + * GUC check function to ensure GUC value specified is within the allowable\n> + * range.\n> + */\n> +bool\n> +check_vacuum_buffer_usage_limit(int *newval, void **extra,\n> +\t\t\t\t\t\t\t\tGucSource source)\n> +{\n> +\t/* Allow specifying the default or disabling Buffer Access Strategy */\n> +\tif (*newval == -1 || *newval == 0)\n> +\t\treturn true;\n\nThis should not check for -1 since that isn't the default anymore.\nIt should only need to check for 0 I think?\n\n> +\t/* Value upper and lower hard limits are inclusive */\n> +\tif (*newval >= MIN_BAS_VAC_RING_SIZE_KB &&\n> +\t\t*newval <= MAX_BAS_VAC_RING_SIZE_KB)\n> +\t\treturn true;\n> +\n> +\t/* Value does not fall within any allowable range */\n> +\tGUC_check_errdetail(\"\\\"vacuum_buffer_usage_limit\\\" must be -1, 0 or between %d KB and %d KB\",\n> +\t\t\t\t\t\tMIN_BAS_VAC_RING_SIZE_KB, MAX_BAS_VAC_RING_SIZE_KB);\n\nAlso remove -1 here.\n\n> * Primary entry point for manual VACUUM and ANALYZE commands\n> *\n> @@ -114,6 +139,8 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> \tbool\t\tdisable_page_skipping = false;\n> \tbool\t\tprocess_main = true;\n> 
\tbool\t\tprocess_toast = true;\n> +\t/* by default use buffer access strategy with default size */\n> +\tint\t\t\tring_size = -1;\n\nWe need to update this comment to something like, \"use an invalid value\nfor ring_size\" so it is clear whether or not the BUFFER_USAGE_LIMIT was\nspecified when making the access strategy later\". Also, I think just\nremoving the comment would be okay, because this is the normal behavior\nfor initializing values, I think.\n\n> @@ -240,6 +309,17 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> \t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> \t\t\t\t errmsg(\"VACUUM FULL cannot be performed in parallel\")));\n> \n> +\t/*\n> +\t * BUFFER_USAGE_LIMIT does nothing for VACUUM (FULL) so just raise an\n> +\t * ERROR for that case. VACUUM (FULL, ANALYZE) does make use of it, so\n> +\t * we'll permit that.\n> +\t */\n> +\tif ((params.options & VACOPT_FULL) && !(params.options & VACOPT_ANALYZE) &&\n> +\t\tring_size > -1)\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> +\t\t\t\t errmsg(\"BUFFER_USAGE_LIMIT cannot be specified for VACUUM FULL\")));\n> +\n> \t/*\n> \t * Make sure VACOPT_ANALYZE is specified if any column lists are present.\n> \t */\n> @@ -341,7 +421,20 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> \n> \t\tMemoryContext old_context = MemoryContextSwitchTo(vac_context);\n> \n> -\t\tbstrategy = GetAccessStrategy(BAS_VACUUM);\n\nIs it worth moving this assert up above when we do the \"sanity checking\"\nfor VACUUM FULL with BUFFER_USAGE_LIMIT?\n\n> +\t\tAssert(ring_size >= -1);\n\n> +\t\t/*\n> +\t\t * If BUFFER_USAGE_LIMIT was specified by the VACUUM command, that\n> +\t\t * overrides the value of VacuumBufferUsageLimit. Otherwise, use\n> +\t\t * VacuumBufferUsageLimit to define the size, which might be 0. We\n> +\t\t * expliot that calling GetAccessStrategyWithSize with a zero size\n\ns/expliot/exploit\n\nI might rephrase the last sentence(s). 
Since it overrides it, I think it\nis clear that if it is not specified, then the thing it overrides is\nused. Then you could phrase the whole thing like:\n\n \"If BUFFER_USAGE_LIMIT was specified by the VACUUM or ANALYZE command,\n it overrides the value of VacuumBufferUsageLimit. Either value may be\n 0, in which case GetAccessStrategyWithSize() will return NULL, which is\n the expected behavior.\"\n\n> +\t\t * returns NULL.\n> +\t\t */\n> +\t\tif (ring_size > -1)\n> +\t\t\tbstrategy = GetAccessStrategyWithSize(BAS_VACUUM, ring_size);\n> +\t\telse\n> +\t\t\tbstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n> +\n> \t\tMemoryContextSwitchTo(old_context);\n> \t}\n\n> diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\n> index c1e911b1b3..1b5f779384 100644\n> --- a/src/backend/postmaster/autovacuum.c\n> +++ b/src/backend/postmaster/autovacuum.c\n> @@ -2287,11 +2287,21 @@ do_autovacuum(void)\n> \t/*\n> -\t * Create a buffer access strategy object for VACUUM to use. We want to\n> -\t * use the same one across all the vacuum operations we perform, since the\n> -\t * point is for VACUUM not to blow out the shared cache.\n> +\t * Optionally, create a buffer access strategy object for VACUUM to use.\n> +\t * When creating one, we want the same one across all tables being\n> +\t * vacuumed this helps prevent autovacuum from blowing out shared buffers.\n\n\"When creating one\" is a bit awkward. 
I would say something like \"Use\nthe same BufferAccessStrategy object for all tables VACUUMed by this\nworker to prevent autovacuum from blowing out shared buffers.\"\n\n> diff --git a/src/backend/storage/buffer/freelist.c b/src/backend/storage/buffer/freelist.c\n> index f122709fbe..710b05cbc5 100644\n> --- a/src/backend/storage/buffer/freelist.c\n> +++ b/src/backend/storage/buffer/freelist.c\n> +/*\n> + * GetAccessStrategyWithSize -- create a BufferAccessStrategy object with a\n> + *\t\tnumber of buffers equivalent to the passed in size.\n> + *\n> + * If the given ring size is 0, no BufferAccessStrategy will be created and\n> + * the function will return NULL. The ring size may not be negative.\n> + */\n> +BufferAccessStrategy\n> +GetAccessStrategyWithSize(BufferAccessStrategyType btype, int ring_size_kb)\n> +{\n> +\tint\t\t\tring_buffers;\n> +\tBufferAccessStrategy strategy;\n> +\n> +\tAssert(ring_size_kb >= 0);\n> +\n> +\t/* Figure out how many buffers ring_size_kb is */\n> +\tring_buffers = ring_size_kb / (BLCKSZ / 1024);\n\nIs there any BLCKSZ that could end up rounding down to 0 and resulting\nin a divide by 0?\n\n> diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c\n> index 1b1d814254..011ec18015 100644\n> --- a/src/backend/utils/init/globals.c\n> +++ b/src/backend/utils/init/globals.c\n> @@ -139,7 +139,10 @@ int\t\t\tmax_worker_processes = 8;\n> int\t\t\tmax_parallel_workers = 8;\n> int\t\t\tMaxBackends = 0;\n> \n> -int\t\t\tVacuumCostPageHit = 1;\t/* GUC parameters for vacuum */\n> +/* GUC parameters for vacuum */\n> +int\t\t\tVacuumBufferUsageLimit = 256;\n\nSo, I know we agreed to make it camel cased, but I couldn't help\nmentioning the discussion over in [1] in which Sawada-san says:\n\n> In vacuum.c, we use snake case for GUC parameters and camel case for\n> other global variables\n\nOur variable doesn't have a corresponding global that is not a GUC, and\nthe current pattern is hardly consistent. 
But, I know we are discussing\nfollowing this convention, so I thought I would mention it.\n\n> diff --git a/src/include/miscadmin.h b/src/include/miscadmin.h\n> index 06a86f9ac1..d4f9ff8077 100644\n> --- a/src/include/miscadmin.h\n> +++ b/src/include/miscadmin.h\n> @@ -263,6 +263,18 @@ extern PGDLLIMPORT double hash_mem_multiplier;\n> extern PGDLLIMPORT int maintenance_work_mem;\n> extern PGDLLIMPORT int max_parallel_maintenance_workers;\n> \n\nGUC name mentioned in comment is inconsistent with current GUC name.\n\n> +/*\n> + * Upper and lower hard limits for the buffer access strategy ring size\n> + * specified by the vacuum_buffer_usage_limit GUC and BUFFER_USAGE_LIMIT\n> + * option to VACUUM and ANALYZE.\n> + */\n> +#define MIN_BAS_VAC_RING_SIZE_KB 128\n> +#define MAX_BAS_VAC_RING_SIZE_KB (16 * 1024 * 1024)\n\n> diff --git a/src/test/regress/sql/vacuum.sql b/src/test/regress/sql/vacuum.sql\n> index a1fad43657..d23e1a8ced 100644\n> --- a/src/test/regress/sql/vacuum.sql\n> +++ b/src/test/regress/sql/vacuum.sql\n> @@ -272,6 +272,18 @@ SELECT t.relfilenode = :toast_filenode AS is_same_toast_filenode\n> FROM pg_class c, pg_class t\n> WHERE c.reltoastrelid = t.oid AND c.relname = 'vac_option_tab';\n> \n> +-- BUFFER_USAGE_LIMIT option\n> +VACUUM (BUFFER_USAGE_LIMIT '512 kB') vac_option_tab;\n\nIs it worth adding a VACUUM (BUFFER_USAGE_LIMIT 0) vac_option_tab test?\n\n- Melanie\n\n[1] https://www.postgresql.org/message-id/CAD21AoC5aDwARiqsL%2BKwHqnN7phub9AaMkbGkJ9aUCeETx8esw%40mail.gmail.com\n\n\n", "msg_date": "Thu, 6 Apr 2023 13:20:56 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Fri, 7 Apr 2023 at 05:20, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n\n> This still says that the default value is -1.\n\nOops, I had this staged but didn't 
commit and formed the patch with\n\"git diff master..\" instead of \"git diff master\", so missed a few\nstaged changes.\n\n> > diff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml\n> I noticed you seemed to have removed the reference to the GUC\n> vacuum_buffer_usage_limit here. Was that intentional?\n> We may not need to mention \"falling back\" as I did before, however, the\n> GUC docs mention max/min values and such, which might be useful.\n\nUnintentional. I removed it when removing the -1 stuff. It's useful to\nkeep something about the fallback, so I put that part back.\n\n> > + /* Allow specifying the default or disabling Buffer Access Strategy */\n> > + if (*newval == -1 || *newval == 0)\n> > + return true;\n>\n> This should not check for -1 since that isn't the default anymore.\n> It should only need to check for 0 I think?\n\nThanks. That one was one of the staged fixes.\n\n> > + /* Value upper and lower hard limits are inclusive */\n> > + if (*newval >= MIN_BAS_VAC_RING_SIZE_KB &&\n> > + *newval <= MAX_BAS_VAC_RING_SIZE_KB)\n> > + return true;\n> > +\n> > + /* Value does not fall within any allowable range */\n> > + GUC_check_errdetail(\"\\\"vacuum_buffer_usage_limit\\\" must be -1, 0 or between %d KB and %d KB\",\n> > + MIN_BAS_VAC_RING_SIZE_KB, MAX_BAS_VAC_RING_SIZE_KB);\n>\n> Also remove -1 here.\n\nAnd this one.\n\n> > * Primary entry point for manual VACUUM and ANALYZE commands\n> > *\n> > @@ -114,6 +139,8 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> > bool disable_page_skipping = false;\n> > bool process_main = true;\n> > bool process_toast = true;\n> > + /* by default use buffer access strategy with default size */\n> > + int ring_size = -1;\n>\n> We need to update this comment to something like, \"use an invalid value\n> for ring_size\" so it is clear whether or not the BUFFER_USAGE_LIMIT was\n> specified when making the access strategy later\". 
Also, I think just\n> removing the comment would be okay, because this is the normal behavior\n> for initializing values, I think.\n\nYeah, I've moved the assignment away from the declaration and wrote\nsomething along those lines.\n\n> > @@ -240,6 +309,17 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> > (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > errmsg(\"VACUUM FULL cannot be performed in parallel\")));\n> >\n> > + /*\n> > + * BUFFER_USAGE_LIMIT does nothing for VACUUM (FULL) so just raise an\n> > + * ERROR for that case. VACUUM (FULL, ANALYZE) does make use of it, so\n> > + * we'll permit that.\n> > + */\n> > + if ((params.options & VACOPT_FULL) && !(params.options & VACOPT_ANALYZE) &&\n> > + ring_size > -1)\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_FEATURE_NOT_SUPPORTED),\n> > + errmsg(\"BUFFER_USAGE_LIMIT cannot be specified for VACUUM FULL\")));\n> > +\n> > /*\n> > * Make sure VACOPT_ANALYZE is specified if any column lists are present.\n> > */\n> > @@ -341,7 +421,20 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> >\n> > MemoryContext old_context = MemoryContextSwitchTo(vac_context);\n> >\n> > - bstrategy = GetAccessStrategy(BAS_VACUUM);\n>\n> Is it worth moving this assert up above when we do the \"sanity checking\"\n> for VACUUM FULL with BUFFER_USAGE_LIMIT?\n\nI didn't do this, but I did adjust that check to check ring_size != -1\nand put that as the first condition. It's likely more rare to have\nring_size not set to -1, so probably should check that first.\n\n> s/expliot/exploit\n\noops\n\n> I might rephrase the last sentence(s). Since it overrides it, I think it\n> is clear that if it is not specified, then the thing it overrides is\n> used. Then you could phrase the whole thing like:\n>\n> \"If BUFFER_USAGE_LIMIT was specified by the VACUUM or ANALYZE command,\n> it overrides the value of VacuumBufferUsageLimit. 
Either value may be\n> 0, in which case GetAccessStrategyWithSize() will return NULL, which is\n> the expected behavior.\"\n\nFixed.\n\n> > diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c\n> > index c1e911b1b3..1b5f779384 100644\n> > --- a/src/backend/postmaster/autovacuum.c\n> > +++ b/src/backend/postmaster/autovacuum.c\n> > @@ -2287,11 +2287,21 @@ do_autovacuum(void)\n> > /*\n> > - * Create a buffer access strategy object for VACUUM to use. We want to\n> > - * use the same one across all the vacuum operations we perform, since the\n> > - * point is for VACUUM not to blow out the shared cache.\n> > + * Optionally, create a buffer access strategy object for VACUUM to use.\n> > + * When creating one, we want the same one across all tables being\n> > + * vacuumed this helps prevent autovacuum from blowing out shared buffers.\n>\n> \"When creating one\" is a bit awkward. I would say something like \"Use\n> the same BufferAccessStrategy object for all tables VACUUMed by this\n> worker to prevent autovacuum from blowing out shared buffers.\"\n\nAdjusted\n\n> > + /* Figure out how many buffers ring_size_kb is */\n> > + ring_buffers = ring_size_kb / (BLCKSZ / 1024);\n>\n> Is there any BLCKSZ that could end up rounding down to 0 and resulting\n> in a divide by 0?\n\nI removed that Assert() as I found quite a number of other places in\nour code that assume BLCKSZ / 1024 is never 0.\n\n> So, I know we agreed to make it camel cased, but I couldn't help\n> mentioning the discussion over in [1] in which Sawada-san says:\n\nI didn't change anything here. I'm happy to follow any rules about\nthis once they're defined. 
What we have today is horribly\ninconsistent.\n\n> GUC name mentioned in comment is inconsistent with current GUC name.\n>\n> > +/*\n> > + * Upper and lower hard limits for the buffer access strategy ring size\n> > + * specified by the vacuum_buffer_usage_limit GUC and BUFFER_USAGE_LIMIT\n> > + * option to VACUUM and ANALYZE.\n\nI did adjust this. I wasn't sure it was incorrect as I mentioned \"GUC\"\nas in, the user facing setting.\n\n> Is it worth adding a VACUUM (BUFFER_USAGE_LIMIT 0) vac_option_tab test?\n\nI think so.\n\nI've attached v16.\n\nDavid", "msg_date": "Fri, 7 Apr 2023 09:12:32 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "+VACUUM uses a ring like sequential scans, however, the size this ring\n+controlled by the vacuum_buffer_usage_limit GUC. Dirty pages are not removed\n\nshould say: \".. the size OF this ring IS ..\" ?\n\n\n", "msg_date": "Thu, 6 Apr 2023 16:16:47 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Fri, Apr 07, 2023 at 09:12:32AM +1200, David Rowley wrote:\n> On Fri, 7 Apr 2023 at 05:20, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> > GUC name mentioned in comment is inconsistent with current GUC name.\n> >\n> > > +/*\n> > > + * Upper and lower hard limits for the buffer access strategy ring size\n> > > + * specified by the vacuum_buffer_usage_limit GUC and BUFFER_USAGE_LIMIT\n> > > + * option to VACUUM and ANALYZE.\n> \n> I did adjust this. 
I wasn't sure it was incorrect as I mentioned \"GUC\"\n> as in, the user facing setting.\n\nOh, actually maybe you are right then.\n\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index bcc49aec45..220f9ee84c 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -2001,6 +2001,35 @@ include_dir 'conf.d'\n> </listitem>\n> </varlistentry>\n> \n> + <varlistentry id=\"guc-vacuum-buffer-usage-limit\" xreflabel=\"vacuum_buffer_usage_limit\">\n> + <term>\n> + <varname>vacuum_buffer_usage_limit</varname> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>vacuum_buffer_usage_limit</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Specifies the size of <varname>shared_buffers</varname> to be reused\n> + for each backend participating in a given invocation of\n> + <command>VACUUM</command> or <command>ANALYZE</command> or in\n> + autovacuum. \n\nRereading this, I think it is not a good sentence (my fault).\nPerhaps we should use the same language as the BUFFER_USAGE_LIMIT\noptions. 
Something like:\n\nSpecifies the\n<glossterm linkend=\"glossary-buffer-access-strategy\">Buffer Access Strategy</glossterm>\nring buffer size used by each backend participating in a given\ninvocation of <command>VACUUM</command> or <command>ANALYZE</command> or\nin autovacuum.\n\nLast part is admittedly a bit awkward...\n\n> diff --git a/src/backend/commands/vacuum.c b/src/backend/commands/vacuum.c\n> index ea1d8960f4..995b4bd54a 100644\n> --- a/src/backend/commands/vacuum.c\n> +++ b/src/backend/commands/vacuum.c\n> @@ -56,6 +56,7 @@\n> #include \"utils/acl.h\"\n> #include \"utils/fmgroids.h\"\n> #include \"utils/guc.h\"\n> +#include \"utils/guc_hooks.h\"\n> #include \"utils/memutils.h\"\n> #include \"utils/pg_rusage.h\"\n> #include \"utils/snapmgr.h\"\n> @@ -95,6 +96,26 @@ static VacOptValue get_vacoptval_from_boolean(DefElem *def);\n> static bool vac_tid_reaped(ItemPointer itemptr, void *state);\n> static int\tvac_cmp_itemptr(const void *left, const void *right);\n> \n> +/*\n> + * GUC check function to ensure GUC value specified is within the allowable\n> + * range.\n> + */\n> +bool\n> +check_vacuum_buffer_usage_limit(int *newval, void **extra,\n> +\t\t\t\t\t\t\t\tGucSource source)\n> +{\n\nI'm not so hot on this comment. It seems very...generic. Like it could\nbe the comment on any GUC check function. I'm also okay with leaving it\nas is.\n\n> @@ -341,7 +422,19 @@ ExecVacuum(ParseState *pstate, VacuumStmt *vacstmt, bool isTopLevel)\n> \n> \t\tMemoryContext old_context = MemoryContextSwitchTo(vac_context);\n> \n> -\t\tbstrategy = GetAccessStrategy(BAS_VACUUM);\n> +\t\tAssert(ring_size >= -1);\n> +\n> +\t\t/*\n> +\t\t * If BUFFER_USAGE_LIMIT was specified by the VACUUM or ANALYZE\n> +\t\t * command, it overrides the value of VacuumBufferUsageLimit. 
Either\n> +\t\t * value may be 0, in which case GetAccessStrategyWithSize() will\n> +\t\t * return NULL, effectively allowing full use of shared buffers.\n\nMaybe \"unlimited\" is better than \"full\"?\n\n> +\t\t */\n> +\t\tif (ring_size != -1)\n> +\t\t\tbstrategy = GetAccessStrategyWithSize(BAS_VACUUM, ring_size);\n> +\t\telse\n> +\t\t\tbstrategy = GetAccessStrategyWithSize(BAS_VACUUM, VacuumBufferUsageLimit);\n> +\n> \t\tMemoryContextSwitchTo(old_context);\n> \t}\n> \n> diff --git a/src/backend/commands/vacuumparallel.c b/src/backend/commands/vacuumparallel.c\n> @@ -365,6 +371,9 @@ parallel_vacuum_init(Relation rel, Relation *indrels, int nindexes,\n> \t\tmaintenance_work_mem / Min(parallel_workers, nindexes_mwm) :\n> \t\tmaintenance_work_mem;\n> \n> +\t/* Use the same buffer size for all workers */\n\nI would say ring buffer size -- this sounds like it is the size of a\nsingle buffer.\n\n> +\tshared->ring_nbuffers = GetAccessStrategyBufferCount(bstrategy);\n> +\n> \tpg_atomic_init_u32(&(shared->cost_balance), 0);\n> \tpg_atomic_init_u32(&(shared->active_nworkers), 0);\n> \tpg_atomic_init_u32(&(shared->idx), 0);\n\n> + * Upper and lower hard limits for the buffer access strategy ring size\n> + * specified by the VacuumBufferUsageLimit GUC and BUFFER_USAGE_LIMIT option\n\nI agree with your original usage of the actual GUC name, now that I\nrealize why you were doing it and am rereading it.\n\n> + * to VACUUM and ANALYZE.\n> + */\n> +#define MIN_BAS_VAC_RING_SIZE_KB 128\n> +#define MAX_BAS_VAC_RING_SIZE_KB (16 * 1024 * 1024)\n\n\nOtherwise, LGTM.\n\n\n", "msg_date": "Thu, 6 Apr 2023 17:44:31 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Fri, 7 Apr 2023 at 09:44, Melanie Plageman <melanieplageman@gmail.com> wrote:\n> Otherwise, LGTM.\n\nThanks for looking. 
I've also taken Justin's comments about the\nREADME into account and fixed that part.\n\nI've pushed the patch after a little more adjusting. I added some\ntext to the docs that mention larger VACUUM_BUFFER_LIMITs can speed up\nvacuum and also a reason why they might not want to go nuts with it.\n\nI've also just now pushed the vacuumdb patch too. I ended up adjusting\nsome of the ERROR messages in the main patch after the following not\nso nice user experience:\n\n$ vacuumdb --buffer-usage-limit=1TB --analyze postgres\nvacuumdb: vacuuming database \"postgres\"\nSQL: VACUUM (SKIP_DATABASE_STATS, ANALYZE, BUFFER_USAGE_LIMIT '1TB')\nvacuumdb: error: processing of database \"postgres\" failed: ERROR:\nbuffer_usage_limit option must be 0 or between 128 KB and 16777216 KB\n\n$ vacuumdb --buffer-usage-limit=128KB --analyze postgres\nvacuumdb: vacuuming database \"postgres\"\nSQL: VACUUM (SKIP_DATABASE_STATS, ANALYZE, BUFFER_USAGE_LIMIT '128KB')\nvacuumdb: error: processing of database \"postgres\" failed: ERROR:\nvalue: \"128KB\": is invalid for buffer_usage_limit\nHINT: Valid units for this parameter are \"B\", \"kB\", \"MB\", \"GB\", and \"TB\".\n\nDavid\n\n\n", "msg_date": "Fri, 7 Apr 2023 12:52:03 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On 07.04.23 02:52, David Rowley wrote:\n> On Fri, 7 Apr 2023 at 09:44, Melanie Plageman <melanieplageman@gmail.com> wrote:\n>> Otherwise, LGTM.\n> \n> Thanks for looking. I've also taken Justin's comments about the\n> README into account and fixed that part.\n> \n> I've pushed the patch after a little more adjusting. I added some\n> text to the docs that mention larger VACUUM_BUFFER_LIMITs can speed up\n> vacuum and also a reason why they might not want to go nuts with it.\n\nI came across these new options and had a little bit of trouble figuring \nthem out from the documentation. 
Maybe this could be polished a bit.\n\nvacuumdb --help says\n\n --buffer-usage-limit=BUFSIZE\n\nI can guess what a \"SIZE\" might be, but is \"BUFSIZE\" different from a \n\"SIZE\"? Maybe simplify here.\n\nOn the vacuumdb man page, the placeholder is\n\n <replaceable class=\"parameter\">buffer_usage_limit</replaceable>\n\nwhich is yet another way of phrasing it. Maybe also use \"size\" here?\n\nThe VACUUM man page says\n\n BUFFER_USAGE_LIMIT [ <replaceable ...>string</replaceable> ]\n\nwhich had me really confused. The detailed description later doesn't \ngive any further explanation of possible values, except that \n<literal>0</literal> is apparently a possible value, which in my mind is \nnot a string. Then there is a link to guc-vacuum-buffer-usage-limit, \nwhich lifts the mystery that this is really just an integer setting with \npossible memory-size units, but it was really hard to figure that out \nfrom the start!\n\nMoreover, on the VACUUM man page, right below BUFFER_USAGE_LIMIT, it \nexplains the different kinds of accepted values, and \"string\" wasn't \nadded there. Maybe also change this to \"size\" here and add an \nexplanation there what kinds of sizes are possible.\n\nFinally, the locations of the new options in the various documentation \nplaces seems a bit random. The vacuumdb --help output and the man page \nappear to be mostly alphabetical, so --buffer-usage-limit should be \nafter -a/--all. (Also note that right now the option isn't even in the \nsame place in the --help output versus the man page.)\n\nThe order of the options on the VACUUM man page doesn't make any sense \nanymore. 
This isn't really the fault of this patch, but maybe it's time \nto do a fresh reordering there.\n\n\n", "msg_date": "Fri, 14 Apr 2023 09:20:41 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Fri, 14 Apr 2023 at 19:20, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> I came across these new options and had a little bit of trouble figuring\n> them out from the documentation. Maybe this could be polished a bit.\n>\n> vacuumdb --help says\n>\n> --buffer-usage-limit=BUFSIZE\n>\n> I can guess what a \"SIZE\" might be, but is \"BUFSIZE\" different from a\n> \"SIZE\"? Maybe simplify here.\n>\n> On the vacuumdb man page, the placeholder is\n>\n> <replaceable class=\"parameter\">buffer_usage_limit</replaceable>\n>\n> which is yet another way of phrasing it. Maybe also use \"size\" here?\n>\n> The VACUUM man page says\n>\n> BUFFER_USAGE_LIMIT [ <replaceable ...>string</replaceable> ]\n>\n> which had me really confused. The detailed description later doesn't\n> give any further explanation of possible values, except that\n> <literal>0</literal> is apparently a possible value, which in my mind is\n> not a string. Then there is a link to guc-vacuum-buffer-usage-limit,\n> which lifts the mystery that this is really just an integer setting with\n> possible memory-size units, but it was really hard to figure that out\n> from the start!\n>\n> Moreover, on the VACUUM man page, right below BUFFER_USAGE_LIMIT, it\n> explains the different kinds of accepted values, and \"string\" wasn't\n> added there. Maybe also change this to \"size\" here and add an\n> explanation there what kinds of sizes are possible.\n>\n> Finally, the locations of the new options in the various documentation\n> places seems a bit random. 
The vacuumdb --help output and the man page\n> appear to be mostly alphabetical, so --buffer-usage-limit should be\n> after -a/--all. (Also note that right now the option isn't even in the\n> same place in the --help output versus the man page.)\n\nThese are all valid points. I've attached a patch aiming to address\neach of them.\n\n> The order of the options on the VACUUM man page doesn't make any sense\n> anymore. This isn't really the fault of this patch, but maybe it's time\n> to do a fresh reordering there.\n\nAgreed, that likely wasn't a big problem say about 5 years ago when we\nhad far fewer options, but the number has grown quite a bit since\nthen.\n\nRight after I fix the points you've mentioned seems a good time to address that.\n\nDavid", "msg_date": "Sat, 15 Apr 2023 12:59:52 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, 15 Apr 2023 at 12:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> These are all valid points. I've attached a patch aiming to address\n> each of them.\n\nI tweaked this a little further and pushed it.\n\nDavid\n\n\n", "msg_date": "Sun, 16 Apr 2023 12:09:44 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Sat, Apr 15, 2023 at 12:59:52PM +1200, David Rowley wrote:\n> On Fri, 14 Apr 2023 at 19:20, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n> >\n> > I came across these new options and had a little bit of trouble figuring\n> > them out from the documentation. Maybe this could be polished a bit.\n> >\n> > vacuumdb --help says\n> >\n> > --buffer-usage-limit=BUFSIZE\n> >\n> > I can guess what a \"SIZE\" might be, but is \"BUFSIZE\" different from a\n> > \"SIZE\"? 
Maybe simplify here.\n> >\n> > On the vacuumdb man page, the placeholder is\n> >\n> > <replaceable class=\"parameter\">buffer_usage_limit</replaceable>\n> >\n> > which is yet another way of phrasing it. Maybe also use \"size\" here?\n> >\n> > The VACUUM man page says\n> >\n> > BUFFER_USAGE_LIMIT [ <replaceable ...>string</replaceable> ]\n> >\n> > which had me really confused. The detailed description later doesn't\n> > give any further explanation of possible values, except that\n> > <literal>0</literal> is apparently a possible value, which in my mind is\n> > not a string. Then there is a link to guc-vacuum-buffer-usage-limit,\n> > which lifts the mystery that this is really just an integer setting with\n> > possible memory-size units, but it was really hard to figure that out\n> > from the start!\n> >\n> > Moreover, on the VACUUM man page, right below BUFFER_USAGE_LIMIT, it\n> > explains the different kinds of accepted values, and \"string\" wasn't\n> > added there. Maybe also change this to \"size\" here and add an\n> > explanation there what kinds of sizes are possible.\n> >\n> > Finally, the locations of the new options in the various documentation\n> > places seems a bit random. The vacuumdb --help output and the man page\n> > appear to be mostly alphabetical, so --buffer-usage-limit should be\n> > after -a/--all. (Also note that right now the option isn't even in the\n> > same place in the --help output versus the man page.)\n> \n> These are all valid points. I've attached a patch aiming to address\n> each of them.\n\nI like that we are now using \"size\" consistently instead of bufsize etc.\n\n> \n> > The order of the options on the VACUUM man page doesn't make any sense\n> > anymore. 
This isn't really the fault of this patch, but maybe it's time\n> > to do a fresh reordering there.\n> \n> Agreed, that likely wasn't a big problem say about 5 years ago when we\n> had far fewer options, but the number has grown quite a bit since\n> then.\n> \n> Right after I fix the points you've mentioned seems a good time to address that.\n\nAre we still thinking that reordering the VACUUM (and ANALYZE) options\nmakes sense. And, if so, should it be alphabetical within parameter\ncategory? That is, all actual parameters (e.g. FULL and FREEZE) are\nalphabetically organized first followed by all parameter types (e.g.\nboolean and size) alphabetically listed?\n\n- Melanie\n\n\n", "msg_date": "Mon, 17 Apr 2023 17:21:09 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Tue, 18 Apr 2023 at 09:21, Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n> Are we still thinking that reordering the VACUUM (and ANALYZE) options\n> makes sense. And, if so, should it be alphabetical within parameter\n> category? That is, all actual parameters (e.g. FULL and FREEZE) are\n> alphabetically organized first followed by all parameter types (e.g.\n> boolean and size) alphabetically listed?\n\nI've opened a thread for that [1].\n\nDavid\n\n[1] https://postgr.es/m/CAApHDvo1eWbt5PVpk0G=yCbBNgLU7KaRP6dCBHpNbFaBjyGyQA@mail.gmail.com\n\n\n", "msg_date": "Tue, 18 Apr 2023 17:46:53 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "Hi,\n\nOn Sun, Apr 16, 2023 at 9:09 AM David Rowley <dgrowleyml@gmail.com> wrote:\n>\n> On Sat, 15 Apr 2023 at 12:59, David Rowley <dgrowleyml@gmail.com> wrote:\n> > These are all valid points. 
I've attached a patch aiming to address\n> > each of them.\n>\n> I tweaked this a little further and pushed it.\n>\n\nI realized that the value of vacuum_buffer_usage_limit parameter in\npostgresql.conf.sample doesn't have the unit:\n\n#vacuum_buffer_usage_limit = 256 # size of vacuum and analyze buffer\naccess strategy ring.\n # 0 to disable vacuum buffer access strategy\n # range 128kB to 16GB\n\nIt works but I think we might want to add the unit kB for\nunderstandability and consistency with other GUC_UNIT_KB parameters.\nI've attached a small patch that adds the unit and aligns the indent\nof the comments to the perimeter parameters.\n\nRegards,\n\n-- \nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 26 Apr 2023 17:47:43 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, 26 Apr 2023, 8:48 pm Masahiko Sawada, <sawada.mshk@gmail.com> wrote:\n\n> I realized that the value of vacuum_buffer_usage_limit parameter in\n> postgresql.conf.sample doesn't have the unit:\n>\n> #vacuum_buffer_usage_limit = 256 # size of vacuum and analyze buffer\n> access strategy ring.\n> # 0 to disable vacuum buffer access\n> strategy\n> # range 128kB to 16GB\n>\n> It works but I think we might want to add the unit kB for\n> understandability and consistency with other GUC_UNIT_KB parameters.\n> I've attached a small patch that adds the unit and aligns the indent\n> of the comments to the perimeter parameters.\n>\n\nI'm not currently able to check, but if work_mem has a unit in the sample\nconf then I agree that vacuum_buffer_usage_limit should too.\n\nI'm fine for you to go ahead and adjust this, otherwise it'll be Monday\nbefore I can.\n\nDavid\n\n>\n\nOn Wed, 26 Apr 2023, 8:48 pm Masahiko Sawada, <sawada.mshk@gmail.com> wrote:\nI realized that the value of vacuum_buffer_usage_limit parameter 
in\npostgresql.conf.sample doesn't have the unit:\n\n#vacuum_buffer_usage_limit = 256 # size of vacuum and analyze buffer\naccess strategy ring.\n                                 # 0 to disable vacuum buffer access strategy\n                                 # range 128kB to 16GB\n\nIt works but I think we might want to add the unit kB for\nunderstandability and consistency with other GUC_UNIT_KB parameters.\nI've attached a small patch that adds the unit and aligns the indent\nof the comments to the perimeter parameters.I'm not currently able to check, but if work_mem has a unit in the sample conf then I agree that vacuum_buffer_usage_limit should too.I'm fine for you to go ahead and adjust this, otherwise it'll be Monday before I can.David", "msg_date": "Wed, 26 Apr 2023 23:26:43 +1200", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "> On 26 Apr 2023, at 13:26, David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 26 Apr 2023, 8:48 pm Masahiko Sawada, <sawada.mshk@gmail.com <mailto:sawada.mshk@gmail.com>> wrote:\n\n> It works but I think we might want to add the unit kB for\n> understandability and consistency with other GUC_UNIT_KB parameters.\n> I've attached a small patch that adds the unit and aligns the indent\n> of the comments to the perimeter parameters.\n> \n> I'm not currently able to check, but if work_mem has a unit in the sample conf then I agree that vacuum_buffer_usage_limit should too.\n\n+1 work_mem and all other related options in this section has a unit in the\nsample conf so adding this makes sense.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 26 Apr 2023 14:31:38 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Apr 26, 2023 at 8:31 AM Daniel Gustafsson 
<daniel@yesql.se> wrote:\n\n> > On 26 Apr 2023, at 13:26, David Rowley <dgrowleyml@gmail.com> wrote:\n> > On Wed, 26 Apr 2023, 8:48 pm Masahiko Sawada, <sawada.mshk@gmail.com\n> <mailto:sawada.mshk@gmail.com>> wrote:\n>\n> > It works but I think we might want to add the unit kB for\n> > understandability and consistency with other GUC_UNIT_KB parameters.\n> > I've attached a small patch that adds the unit and aligns the indent\n> > of the comments to the perimeter parameters.\n> >\n> > I'm not currently able to check, but if work_mem has a unit in the\n> sample conf then I agree that vacuum_buffer_usage_limit should too.\n>\n> +1 work_mem and all other related options in this section has a unit in the\n> sample conf so adding this makes sense.\n>\n\nAgreed.\nfor the patch, the other GUCs have a tab instead of a space between the\nunit and the \"#\" of the first comment.\n(not the fault of this patch but probably makes sense to fix now).\nOtherwise, LGTM\n\n- Melanie\n\nOn Wed, Apr 26, 2023 at 8:31 AM Daniel Gustafsson <daniel@yesql.se> wrote:> On 26 Apr 2023, at 13:26, David Rowley <dgrowleyml@gmail.com> wrote:\n> On Wed, 26 Apr 2023, 8:48 pm Masahiko Sawada, <sawada.mshk@gmail.com <mailto:sawada.mshk@gmail.com>> wrote:\n\n> It works but I think we might want to add the unit kB for\n> understandability and consistency with other GUC_UNIT_KB parameters.\n> I've attached a small patch that adds the unit and aligns the indent\n> of the comments to the perimeter parameters.\n> \n> I'm not currently able to check, but if work_mem has a unit in the sample conf then I agree that vacuum_buffer_usage_limit should too.\n\n+1 work_mem and all other related options in this section has a unit in the\nsample conf so adding this makes sense.Agreed.for the patch, the other GUCs have a tab instead of a space between the unit and the \"#\" of the first comment.(not the fault of this patch but probably makes sense to fix now).Otherwise, LGTM- Melanie", "msg_date": "Wed, 26 Apr 2023 
08:59:13 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" }, { "msg_contents": "On Wed, Apr 26, 2023 at 9:59 PM Melanie Plageman\n<melanieplageman@gmail.com> wrote:\n>\n>\n> On Wed, Apr 26, 2023 at 8:31 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>>\n>> > On 26 Apr 2023, at 13:26, David Rowley <dgrowleyml@gmail.com> wrote:\n>> > On Wed, 26 Apr 2023, 8:48 pm Masahiko Sawada, <sawada.mshk@gmail.com <mailto:sawada.mshk@gmail.com>> wrote:\n>>\n>> > It works but I think we might want to add the unit kB for\n>> > understandability and consistency with other GUC_UNIT_KB parameters.\n>> > I've attached a small patch that adds the unit and aligns the indent\n>> > of the comments to the perimeter parameters.\n>> >\n>> > I'm not currently able to check, but if work_mem has a unit in the sample conf then I agree that vacuum_buffer_usage_limit should too.\n>>\n>> +1 work_mem and all other related options in this section has a unit in the\n>> sample conf so adding this makes sense.\n>\n>\n> Agreed.\n> for the patch, the other GUCs have a tab instead of a space between the unit and the \"#\" of the first comment.\n> (not the fault of this patch but probably makes sense to fix now).\n> Otherwise, LGTM\n\nThanks for the review! Pushed after incorporating a comment from Melanie.\n\nRegards,\n\n--\nMasahiko Sawada\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 28 Apr 2023 15:44:04 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Option to not use ringbuffer in VACUUM, using it in failsafe mode" } ]
[ { "msg_contents": "I'm developing a module that implements Haskell as a procedural language (\nhttps://www.postgresql.org/about/news/plhaskell-v10-released-2519/)\n\nI'm using a callback function that is called when a memory context is\ndeleted to remove a temporary file. This works fine when the transaction\nends normally or raises an ERROR. However, when a FATAL event happens, the\ncallback is not run. Is this a bug or intended behaviour? I think that this\nis a new behavior and that the callback was called in an earlier version\n(perhaps v14) when I was originally developing this code. I'm running\nv15.1.\n\nIt seems to me that callbacks should be run in the event of a FATAL event\nin order to clean up any lingering issues.\n -Ed", "msg_date": "Wed, 11 Jan 2023 17:47:28 -0500", "msg_from": "Ed Behn <ed@behn.us>", "msg_from_op": true, "msg_subject": "No Callbacks on FATAL" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 17:47:28 -0500, Ed Behn wrote:\n> I'm developing a module that implements Haskell as a procedural language (\n> https://www.postgresql.org/about/news/plhaskell-v10-released-2519/)\n> \n> I'm using a callback function that is called when a memory context is\n> deleted to remove a temporary file. 
This works fine when the transaction\n> ends normally or raises an ERROR. However, when a FATAL event happens, the\n> callback is not run. Is this a bug or intended behaviour? I think that this\n> is a new behavior and that the callback was called in an earlier version\n> (perhaps v14) when I was originally developing this code. I'm running\n> v15.1.\n> \n> It seems to me that callbacks should be run in the event of a FATAL event\n> in order to clean up any lingering issues.\n\nI think you need to provide a bit more details to allow us to analyze this. I\nassume you're talking about a MemoryContextRegisterResetCallback()? Which\nmemory context are you registering the callback on? What FATAL error is\npreventing the cleanup from happening?\n\nEven better would be a way to reproduce this without needing to build an\nexternal extension with its own dependencies. Perhaps you can hack it into one\nof the contrib/ modules?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 15:04:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: No Callbacks on FATAL" }, { "msg_contents": "Ed Behn <ed@behn.us> writes:\n> I'm using a callback function that is called when a memory context is\n> deleted to remove a temporary file. This works fine when the transaction\n> ends normally or raises an ERROR. However, when a FATAL event happens, the\n> callback is not run. Is this a bug or intended behaviour?\n\nIt's intended behavior, and I seriously doubt that it ever worked\ndifferently.\n\n> It seems to me that callbacks should be run in the event of a FATAL event\n> in order to clean up any lingering issues.\n\nThey'd be far more likely to cause issues than cure them. Or at least\nthat's the design assumption. 
If you really need something here, put\nit in an on_proc_exit callback not a memory context callback.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 18:10:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: No Callbacks on FATAL" }, { "msg_contents": "Hi,\n\nOn 2023-01-11 18:10:33 -0500, Tom Lane wrote:\n> Ed Behn <ed@behn.us> writes:\n> > I'm using a callback function that is called when a memory context is\n> > deleted to remove a temporary file. This works fine when the transaction\n> > ends normally or raises an ERROR. However, when a FATAL event happens, the\n> > callback is not run. Is this a bug or intended behaviour?\n>\n> It's intended behavior, and I seriously doubt that it ever worked\n> differently.\n\nHm? MemoryContextDelete() unconditionally calls the\ncallbacks. ShutdownPostgres() calls AbortOutOfAnyTransaction(). So if there's\nan ongoing transaction, we'll call the reset callbacks on TopMemoryContext and\nits children.\n\nOf course that doesn't mean we'll delete all existing contexts...\n\n\n> > It seems to me that callbacks should be run in the event of a FATAL event\n> > in order to clean up any lingering issues.\n>\n> They'd be far more likely to cause issues than cure them. Or at least\n> that's the design assumption. 
If you really need something here, put\n> it in an on_proc_exit callback not a memory context callback.\n\nOr, depending on the use case, a transaction callback.\n\nIt's really hard to know what precisely to suggest, without knowing a good bit\nmore about the intended usecase.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 16:57:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: No Callbacks on FATAL" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-11 18:10:33 -0500, Tom Lane wrote:\n>> It's intended behavior, and I seriously doubt that it ever worked\n>> differently.\n\n> Hm? MemoryContextDelete() unconditionally calls the\n> callbacks. ShutdownPostgres() calls AbortOutOfAnyTransaction(). So if there's\n> an ongoing transaction, we'll call the reset callbacks on TopMemoryContext and\n> its children.\n\nHmm ... I'd forgotten that we'd reach AbortOutOfAnyTransaction in\nthe FATAL code path. It does seem like any memory contexts below\nTopTransactionContext ought to get cleaned up then.\n\nAs you say, we really need more details to see what's happening\nhere.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 11 Jan 2023 20:17:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: No Callbacks on FATAL" }, { "msg_contents": "Hi hackers,\n\n> > Hm? MemoryContextDelete() unconditionally calls the\n> > callbacks. ShutdownPostgres() calls AbortOutOfAnyTransaction(). So if there's\n> > an ongoing transaction, we'll call the reset callbacks on TopMemoryContext and\n> > its children.\n>\n> Hmm ... I'd forgotten that we'd reach AbortOutOfAnyTransaction in\n> the FATAL code path. It does seem like any memory contexts below\n> TopTransactionContext ought to get cleaned up then.\n\nI wonder if this is a desired behavior. FATAL means a critical error\nlocal to a given backend, but not affecting shared memory, right? 
Is\nit generally safe to execute context memory callbacks having a FATAL\nerror?\n\n> As you say, we really need more details to see what's happening here.\n\nYep, minimal steps to reproduce the issue would be much appreciated!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 13 Jan 2023 16:14:11 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: No Callbacks on FATAL" }, { "msg_contents": "Hi,\n\nOn 2023-01-13 16:14:11 +0300, Aleksander Alekseev wrote:\n> > > Hm? MemoryContextDelete() unconditionally calls the\n> > > callbacks. ShutdownPostgres() calls AbortOutOfAnyTransaction(). So if there's\n> > > an ongoing transaction, we'll call the reset callbacks on TopMemoryContext and\n> > > its children.\n> >\n> > Hmm ... I'd forgotten that we'd reach AbortOutOfAnyTransaction in\n> > the FATAL code path. It does seem like any memory contexts below\n> > TopTransactionContext ought to get cleaned up then.\n> \n> I wonder if this is a desired behavior. FATAL means a critical error\n> local to a given backend, but not affecting shared memory, right? Is\n> it generally safe to execute context memory callbacks having a FATAL\n> error?\n\nWe need to roll back the in-progress transaction, otherwise we'd be in\ntrouble. Some resets are part of that. If the error actually corrupted local\nstate badly enough to break the transaction machinery, we'd need to PANIC out.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Jan 2023 09:54:54 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: No Callbacks on FATAL" } ]
[ { "msg_contents": "Motivation\n==========\n\nSECURITY INVOKER is dangerous, particularly for administrators. There\nare numerous ways to put code in a place that's likely to be executed:\ntriggers, views calling functions, logically-replicated tables, casts,\nsearch path and function resolution tricks, etc. If this code is run\nwith the privileges of the invoker, then it provides an easy path to\nprivilege escalation.\n\nWe've addressed some of these risks, i.e. by offering better ways to\ncontrol the search path, and by ignoring SECURITY INVOKER in some\ncontexts (like maintenance commands). But it still leaves a lot of\nrisks for administrators who want to do a SELECT or INSERT. And it\nlimits major use cases, like logical replication (where the\nsubscription owner must trust all table owners).\n\nNote that, in the SQL spec, SECURITY DEFINER is the default, which may\nbe due to some of the dangers of SECURITY INVOKER. (SECURITY DEFINER\ncarries its own risks, of course, especially if the definer is highly\nprivileged.)\n\nPrior work\n==========\n\nhttps://www.postgresql.org/message-id/19327.1533748538%40sss.pgh.pa.us\n\nThe above thread came up with a couple solutions to express a trust\nrelationship between users (via GUC or DDL). I'm happy if that\ndiscussion continues, but it appeared to trail off.\n\nMy new proposal is different (and simpler, I believe) in two ways:\n\nFirst, my proposal is only concerned with SECURITY INVOKER functions\nand executing arbitrary code written by untrusted users. There's still\nthe potential for mischief without using SECURITY INVOKER (e.g. if the\nsearch path isn't properly controlled), but it's a different kind of\nproblem. This narrower problem scope makes my proposal less invasive.\n\nSecond, my proposal doesn't establish a new trust relationship. If the\nSECURITY INVOKER function is owned by a user that can SET ROLE to you,\nyou trust it; otherwise not. 
\n\nProposal\n========\n\nNew boolean GUC check_function_owner_trust, default false.\n\nIf check_function_owner_trust=true, throw an error if you try to\nexecute a function that is SECURITY INVOKER and owned by a user other\nthan you or someone that can SET ROLE to you.\n\nUse Cases\n=========\n\n1. If you are a superuser/admin working on a problem interactively, you\ncan protect yourself against accidentally executing malicious code with\nyour privileges.\n\n2. You can set up logical replication subscriptions into tables owned\nby users you don't trust, as long as triggers (if needed) can be safely\nwritten as SECURITY DEFINER.\n\n3. You can ensure that running an extension script doesn't somehow\nexecute malicious code with superuser privileges.\n\n4. Users can protect themselves from executing malicious code in cases\nwhere:\n a. role membership accurately describes the trust relationship\nalready\n b. triggers, views-calling-UDFs, etc., (if any) can be safely written\nas SECURITY DEFINER\n\nWhile that may not be every conceivable use case, it feels very useful\nto me.\n\nEven if you really don't like SECURITY DEFINER, points 1, 3, and 4(a)\nseem like wins. And there are a lot of cases where the user simply\ndoesn't need triggers (etc.).\n\nExtensions\n==========\n\nSome extensions might create and extension-specific user that owns lots\nof SECURITY INVOKER functions. If this GUC is set, other users wouldn't\nbe able to call those functions.\n\nOur contrib extensions don't seem do that, and all the tests for them\npass without modification (even when the GUC is true).\n\nFor extensions that do create extension-specific users that own\nSECURITY INVOKER functions, this GUC alone won't work. Trying to\ncapture that use case as well could involve more discussion (involving\nextension authors) and may result in an extension-specific trust\nproposal, so I'm considering that out of scope.\n\nLoose Ends\n==========\n\nDo we need to check security-invoker views? 
I don't think it's nearly\nas important, because views can't write data. A security-invoker view\nread from a security definer function uses the privileges of the\nfunction owner, so I don't see an obvious way to abuse a security\ninvoker view, but perhaps I'm not creative enough.\n\nAlso, Noah's patch did things differently from mine in a few places,\nand I need to work out whether I missed something. I may have to add a\ncheck for the range types \"subtype_diff\" function, for instance.\n\nFuture Work\n===========\n\nIn some cases, we should consider defaulting (or even forcing) this GUC\nto be true, such as in a subscription apply worker.\n\nThis proposal may offer a path to allowing non-superusers to create\nevent triggers.\n\nWe may want to provide SECURITY PUBLIC or SECURITY NONE (or even\n\"SECURITY AS <role>\"?), which would execute a function with minimal\nprivileges, and further reduce the need for executing untrusted\nSECURITY INVOKER code.\n\nAnother idea is to have READ ONLY functions which would be another way\nto make SECURITY INVOKER safer.\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS", "msg_date": "Wed, 11 Jan 2023 18:16:32 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Blocking execution of SECURITY INVOKER" }, { "msg_contents": "Hi,\n\n\nOn 2023-01-11 18:16:32 -0800, Jeff Davis wrote:\n> Motivation\n> ==========\n> \n> SECURITY INVOKER is dangerous, particularly for administrators. There\n> are numerous ways to put code in a place that's likely to be executed:\n> triggers, views calling functions, logically-replicated tables, casts,\n> search path and function resolution tricks, etc. If this code is run\n> with the privileges of the invoker, then it provides an easy path to\n> privilege escalation.\n\n> We've addressed some of these risks, i.e. by offering better ways to\n> control the search path, and by ignoring SECURITY INVOKER in some\n> contexts (like maintenance commands). 
But it still leaves a lot of\n> risks for administrators who want to do a SELECT or INSERT. And it\n> limits major use cases, like logical replication (where the\n> subscription owner must trust all table owners).\n\nI'm very skeptical about this framing. On the one hand, you can do a lot of\nmischief with security definer functions if they get privileged information as\nwell. But more importantly, just because a function is security definer,\ndoesn't mean it's safe to be called with attacker controlled input, and the\nprivilege check will be done with the rights of the admin in many of these\ncontexts.\n\nAnd encouraging more security definer functions to be used will cause a lot of\nother security issues.\n\n\nHowever - I think the concept of more strict ownership checks is a good one. I\njust don't think it's right to tie it to SECURITY INVOKER.\n\nI think it'd be quite valuable to have a guc that prevents the execution of\nany code that's not directly controlled by the privileged user. Not just\nchecking function ownership, but also checking ownership of the trigger\ndefinition (i.e. table), check constraints, domain constraints, indexes with\nexpression columns / partial indexes, etc.\n\n\n\n> Use Cases\n> =========\n> \n> 1. If you are a superuser/admin working on a problem interactively, you\n> can protect yourself against accidentally executing malicious code with\n> your privileges.\n\nIn that case I think what's actually desirable is to simply execute no code\ncontrolled by untrusted users. Even a security definer function can mess up\nyour day when called in the wrong situation, e.g. due to getting access to the\ncontent of arguments (e.g. a trigger's row contents) or preventing an admin's\nwrite from taking effect (by returning the relevant values from a trigger).\n\nAnd not ever allowing execution of untrusted code in that situation IME\ndoesn't prevent desirable things.\n\n\n> 2. 
You can set up logical replication subscriptions into tables owned\n> by users you don't trust, as long as triggers (if needed) can be safely\n> written as SECURITY DEFINER.\n\nI think a much more promising path towards that is to add a feature to logical\nreplication that changes the execution context to the table owner while\napplying those changes.\n\nUsing security definer functions for triggers opens up a significant new\nattack surface, lots of code that previously didn't need to be safe against\nany possible privilege escalation, now needs to be. Expanding the scope of\nwhat needs to protect against privesc, is a BAD idea.\n\n\n> 3. You can ensure that running an extension script doesn't somehow\n> execute malicious code with superuser privileges.\n\nIt's not safe to allow executing secdef code in that context either. If a less\nprivileged user manages to get called in that context you don't want to\nexecute the code, even in a secdef, you want to error out, so the problem can\nbe detected and rectified.\n\n\n> 4. Users can protect themselves from executing malicious code in cases\n> where:\n> a. role membership accurately describes the trust relationship\n> already\n> b. triggers, views-calling-UDFs, etc., (if any) can be safely written\n> as SECURITY DEFINER\n\nI don't think b) is true very often. And a) seems very problematic as well, as\nparticularly in \"pseudo superuser\" environments the most feasible way to\nimplement pseudo superusers is to automatically grant group membership to the\npseudo superuser. See also e5b8a4c098a.\n\n\n> While that may not be every conceivable use case, it feels very useful\n> to me.\n> \n> Even if you really don't like SECURITY DEFINER, points 1, 3, and 4(a)\n> seem like wins. 
And there are a lot of cases where the user simply\n> doesn't need triggers (etc.).\n\n4 doesn't seem like a win, it's a requirement?\n\nAnd as I said, for 1 and 3 I think it's way preferrable to error out.\n\n\n\n> Future Work\n> ===========\n> \n> In some cases, we should consider defaulting (or even forcing) this GUC\n> to be true, such as in a subscription apply worker.\n> \n> This proposal may offer a path to allowing non-superusers to create\n> event triggers.\n\nThat'd allow a less-privileged user to completely hobble the admin by erroring\nout on all actions.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 11 Jan 2023 19:33:55 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "On Wed, 2023-01-11 at 19:33 -0800, Andres Freund wrote:\n\n> and the\n> privilege check will be done with the rights of the admin in many of\n> these\n> contexts.\n\nCan you explain?\n\n> And encouraging more security definer functions to be used will cause\n> a lot of\n> other security issues.\n\nMy proposal just gives some user foo a GUC to say \"I am not accepting\nthe risk of eval()ing whatever arbitrary code finds its way in front of\nme with all of my privileges\".\n\nIf user foo sheds this security burden by setting the GUC, user bar may\nthen choose to write a trigger function as SECURITY DEFINER so that foo\ncan access bar's table. But that's the deal the two users struck -- foo\ndeclined the burden, bar accepted it. Why do we want to prevent that\narrangement?\n\nRight now, foo *always* has the burden and no opportunity to decline\nit, and even a paranoid user can't figure out what code they will be\nexecuting with a given command. That doesn't seem reasonable to me.\n\n> However - I think the concept of more strict ownership checks is a\n> good one. 
I\n> just don't think it's right to tie it to SECURITY INVOKER.\n\nConsider a canonical trigger example like:\nhttps://wiki.postgresql.org/wiki/Audit_trigger or \nhttps://github.com/2ndQuadrant/audit-trigger/blob/master/audit.sql\n\nHow can we make that secure for users that insert into the table with\nthe trigger if you don't differentiate between SECURITY INVOKER and\nSECURITY DEFINER? If you allow neither, then it obviously won't work.\nAnd if you allow both, then the owner of the table can change the\nfunction to SECURITY INVOKER and the definition to be malicious a\nmillisecond before you insert a tuple.\n\nI guess we currently say that anyone foolish enough to insert into a\ntable that they don't own deserves what they get. That's a weird thing\nto say when we have such a fine-grained GRANT system and RLS.\n\n> I think it'd be quite valuable to have a guc that prevents the\n> execution of\n> any code that's not directly controlled by the privileged user. Not\n> just\n> checking function ownership, but also checking ownership of the\n> trigger\n> definition (i.e. table), check constraints, domain constraints,\n> indexes with\n> expression columns / partial indexes, etc\n\nThat sounds like a mix of my proposal and Noah's. The way you've\nphrased it seems overly strict though -- do you mean not even execute\nuntrusted expressions? And it seems to cut out maintenance commands,\nwhich means it would be hard for administrators to use.\n\nI'm OK considering these proposals. Anything that offers some safety.\nBut it seems like both your proposal and Noah's cut out huge amounts of\nfunctionality unless you have unqualified trust.\n\n> Even a security definer function can mess up\n> your day when called in the wrong situation, e.g. due to getting\n> access to the\n> content of arguments (e.g. a trigger's row contents)\n\nI don't see that as a problem. 
If you're inserting data in a table,\nyou'd expect the owner of the table to see that data and be able to\nmodify it as they see fit.\n\n> or preventing an admin's\n> write from taking effect (by returning the relevant values from a\n> trigger).\n\nI don't see the problem here either. Even if we force the row to be\ninserted, the table owner could just delete it.\n\n> And not ever allowing execution of untrusted code in that situation\n> IME\n> doesn't prevent desirable things.\n\nI don't understand this statement.\n\n> \n> I think a much more promising path towards that is to add a feature\n> to logical\n> replication that changes the execution context to the table owner\n> while\n> applying those changes.\n\nHow is that different from SECURITY DEFINER?\n\n\n\n> \n> \n> And as I said, for 1 and 3 I think it's way preferrable to error out.\n\nMy proposal does error out for SECURITY INVOKER, so I suppose you're\nsaying we should error out for SECURITY DEFINER as well? In the case of\n1, I think that would prevent regular maintenance by an admin.\n\nBut for use case 3 (extension scripts), I think you're right, erroring\non any non-superuser-owned code is probably good.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 18:40:30 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "Hi,\n\nOn 2023-01-12 18:40:30 -0800, Jeff Davis wrote:\n> On Wed, 2023-01-11 at 19:33 -0800, Andres Freund wrote:\n>\n> > and the\n> > privilege check will be done with the rights of the admin in many of\n> > these\n> > contexts.\n>\n> Can you explain?\n\nIf the less-privileged user does *not* have execution rights to a security\ndefiner function, but somehow can trick the more-privileged user into calling\nthe function for them, e.g. 
by using it as the default expression of a column,\nthe less-privileged user can escalate to the permissions of the security\ndefiner function.\n\nsuperuser:\n# CREATE FUNCTION exec_su(p_sql text) RETURNS text LANGUAGE plpgsql SECURITY DEFINER AS $$BEGIN RAISE NOTICE 'executing %', p_sql; EXECUTE p_sql;RETURN 'p_sql';END;$$;\n# REVOKE ALL ON FUNCTION exec_su FROM PUBLIC ;\n\nunprivileged user:\n$ SELECT exec_su('ALTER USER less_privs SUPERUSER');\nERROR: 42501: permission denied for function exec_su\n$ CREATE TABLE trick_superuser(value text default exec_su('ALTER USER less_privs SUPERUSER'));\n\nsuperuser:\n# INSERT INTO trick_superuser DEFAULT VALUES;\nNOTICE: 00000: executing ALTER USER less_privs SUPERUSER\n\n\nThis case would *not* be prevented by your proposed GUC, unless I miss\nsomething major. The superuser trusts itself and thus the exec_su() function.\n\n\n\n> > And encouraging more security definer functions to be used will cause\n> > a lot of\n> > other security issues.\n>\n> My proposal just gives some user foo a GUC to say \"I am not accepting\n> the risk of eval()ing whatever arbitrary code finds its way in front of\n> me with all of my privileges\".\n>\n> If user foo sheds this security burden by setting the GUC, user bar may\n> then choose to write a trigger function as SECURITY DEFINER so that foo\n> can access bar's table. But that's the deal the two users struck -- foo\n> declined the burden, bar accepted it. 
Why do we want to prevent that\n> arrangement?\n\nBecause it afaict doesn't provide any meaningfully increased security\nguarantees (see above), and opens up new ways of attacking, because while\ngranting execute on a security definer function is low risk, granting execute\non security invoker functions is very high risk, but required for triggers etc\nto work.\n\n\n> Right now, foo *always* has the burden and no opportunity to decline\n> it, and even a paranoid user can't figure out what code they will be\n> executing with a given command. That doesn't seem reasonable to me.\n\nI agree it's not reasonable - I just don't see the proposal moving the bar.\n\n\nThe proposal to not trust any expressions controlled by untrusted users at\nleast allows to prevent execution of code, even if it doesn't provide a way to\nexecute the code in a safe manner. Given that we don't have the former, it\nseems foolish to shoot for the latter.\n\n\n\n> > However - I think the concept of more strict ownership checks is a\n> > good one. I\n> > just don't think it's right to tie it to SECURITY INVOKER.\n>\n> Consider a canonical trigger example like:\n> https://wiki.postgresql.org/wiki/Audit_trigger or\n> https://github.com/2ndQuadrant/audit-trigger/blob/master/audit.sql\n>\n> How can we make that secure for users that insert into the table with\n> the trigger if you don't differentiate between SECURITY INVOKER and\n> SECURITY DEFINER? If you allow neither, then it obviously won't work.\n> And if you allow both, then the owner of the table can change the\n> function to SECURITY INVOKER and the definition to be malicious a\n> millisecond before you insert a tuple.\n\nAs shown above, triggers are simply not a relevant boundary when a more\nprivileged user accesses a table controlled by a less privileged user.\n\nAnd yes, of course an audit function needs to be security definer. 
But that's\nindependent of whether it's safe for a more privileged user modify table\ncontents.\n\n\n> I guess we currently say that anyone foolish enough to insert into a\n> table that they don't own deserves what they get.\n\nI agree that we have a problem that we should address. I just don't think your\nsolution works.\n\n\n> That's a weird thing to say when we have such a fine-grained GRANT system\n> and RLS.\n\nThat's a non-sequitur imo. Particularly when RLS you'd not allow\nless-privileged users to create any objects, with the possible exception of\ntemp tables. The point of the grant system is for a privileged user to safely\nallow a less privileged user to perform a safe subset of actions. That's just\na separate angle than allowing safe access for a more privileged user to to\nobjects controlled by a less privileged user.\n\n\n\n> > I think it'd be quite valuable to have a guc that prevents the\n> > execution of\n> > any code that's not directly controlled by the privileged user. Not\n> > just\n> > checking function ownership, but also checking ownership of the\n> > trigger\n> > definition (i.e. table), check constraints, domain constraints,\n> > indexes with\n> > expression columns / partial indexes, etc\n>\n> That sounds like a mix of my proposal and Noah's. The way you've\n> phrased it seems overly strict though -- do you mean not even execute\n> untrusted expressions? And it seems to cut out maintenance commands,\n> which means it would be hard for administrators to use.\n\nYes, I mean every expression. As show above, as soon as there is *any*\nexpression controlled by a less privileged user is executed, the game is lost.\n\nI don't think that prevents all maintenance btw - for things like reindex we\nswitch to the object owner for evaluation via SetUserIdAndSecContext(). 
After\nchecking whether the current user is allowed to that kind of thing.\n\nBut for things like default expressions, generated columns etc, I just don't\nsee an alternative to erroring out when we'd otherwise evaluate an expression\nthat's controlled by a less privileged user. The admin can alter the table\ndefinition / drop it, if requried.\n\n\n> > Even a security definer function can mess up\n> > your day when called in the wrong situation, e.g. due to getting\n> > access to the\n> > content of arguments (e.g. a trigger's row contents)\n>\n> I don't see that as a problem. If you're inserting data in a table,\n> you'd expect the owner of the table to see that data and be able to\n> modify it as they see fit.\n>\n> > or preventing an admin's\n> > write from taking effect (by returning the relevant values from a\n> > trigger).\n>\n> I don't see the problem here either. Even if we force the row to be\n> inserted, the table owner could just delete it.\n\n\n\n> > And not ever allowing execution of untrusted code in that situation IME\n> > doesn't prevent desirable things.\n>\n> I don't understand this statement.\n\nIt's not a huge problem if a server admin gets an error while evaluating a\nless-privileged-expression, because that's not commonly something that an\nadmin needs to do. And the admin likely can switch into the user context of\nthe less privileged user to perform operations in a safer context.\n\n\n> >\n> > I think a much more promising path towards that is to add a feature to\n> > logical replication that changes the execution context to the table owner\n> > while applying those changes.\n>\n> How is that different from SECURITY DEFINER?\n\nIt protect against vastly more things, see the default expression example.\n\n\n> > And as I said, for 1 and 3 I think it's way preferrable to error out.\n>\n> My proposal does error out for SECURITY INVOKER, so I suppose you're\n> saying we should error out for SECURITY DEFINER as well? 
In the case of\n> 1, I think that would prevent regular maintenance by an admin.\n\nWhat regular maintenance would be prevented? And would it be safe to execute\nsaid code as superuser?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Jan 2023 19:29:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "On 2023-01-12 19:29:43 -0800, Andres Freund wrote:\n> Hi,\n> \n> On 2023-01-12 18:40:30 -0800, Jeff Davis wrote:\n> > On Wed, 2023-01-11 at 19:33 -0800, Andres Freund wrote:\n> >\n> > > and the\n> > > privilege check will be done with the rights of the admin in many of\n> > > these\n> > > contexts.\n> >\n> > Can you explain?\n> \n> If the less-privileged user does *not* have execution rights to a security\n> definer function, but somehow can trick the more-privileged user into calling\n> the function for them, e.g. by using it as the default expression of a column,\n> the less-privileged user can escalate to the permissions of the security\n> definer function.\n> \n> superuser:\n> # CREATE FUNCTION exec_su(p_sql text) RETURNS text LANGUAGE plpgsql SECURITY DEFINER AS $$BEGIN RAISE NOTICE 'executing %', p_sql; EXECUTE p_sql;RETURN 'p_sql';END;$$;\n> # REVOKE ALL ON FUNCTION exec_su FROM PUBLIC ;\n> \n> unprivileged user:\n> $ SELECT exec_su('ALTER USER less_privs SUPERUSER');\n> ERROR: 42501: permission denied for function exec_su\n> $ CREATE TABLE trick_superuser(value text default exec_su('ALTER USER less_privs SUPERUSER'));\n> \n> superuser:\n> # INSERT INTO trick_superuser DEFAULT VALUES;\n> NOTICE: 00000: executing ALTER USER less_privs SUPERUSER\n> \n> \n> This case would *not* be prevented by your proposed GUC, unless I miss\n> something major. 
The superuser trusts itself and thus the exec_su() function.\n\nAnother reason security definer isn't a way to allow safe execution of lesser\nprivileged code:\n\nsuperuser (andres):\n# CREATE FUNCTION bleat_whoami() RETURNS text LANGUAGE plpgsql SECURITY INVOKER AS $$BEGIN RAISE NOTICE 'whoami: %', current_user;RETURN current_user;END;$$;\n# REVOKE ALL ON FUNCTION bleat_whoami FROM PUBLIC;\n\nunprivileged user:\n$ CREATE FUNCTION secdef_with_default(foo text = bleat_whoami()) RETURNS text LANGUAGE plpgsql SECURITY DEFINER AS $$BEGIN RETURN 'secdef_with_default';END;$$;\n$ SELECT secdef_with_default();\nERROR: 42501: permission denied for function bleat_whoami\n\nsuperuser (andres):\n# SELECT secdef_with_default();\nNOTICE: 00000: whoami: andres\nLOCATION: exec_stmt_raise, pl_exec.c:3893\n┌─────────────────────┐\n│ secdef_with_default │\n├─────────────────────┤\n│ secdef_with_default │\n└─────────────────────┘\n(1 row)\n\nI.e. the default arguments were evaluated with the invoker's permissions, not\nthe definer's, despite being controlled by the less privileged user. 
Worsened\nin this case by the fact that this allowed the less privileged user to call a\nfunction they couldn't even call.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Jan 2023 19:38:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "Hi,\n\nOn Thu, 2023-01-12 at 19:29 -0800, Andres Freund wrote:\n> superuser:\n> # CREATE FUNCTION exec_su(p_sql text) RETURNS text LANGUAGE plpgsql\n> SECURITY DEFINER AS $$BEGIN RAISE NOTICE 'executing %', p_sql;\n> EXECUTE p_sql;RETURN 'p_sql';END;$$;\n> # REVOKE ALL ON FUNCTION exec_su FROM PUBLIC ;\n\nThat can be solved by creating the function in a schema where ordinary\nusers don't have USAGE:\n\nCREATE TABLE trick_superuser(value text default admin.exec_su('ALTER\nUSER less_privs SUPERUSER'));\nERROR: permission denied for schema admin\n\nAn interesting case, but it looks more like a gotcha (which is solvable\nwith best practices); not a fundamental problem.\n\n> The point of the grant system is for a privileged user to safely\n> allow a less privileged user to perform a safe subset of actions.\n\nThere is not necessarily a GRANT hierarchy like you describe. The two\nusers can be peers each with comparable privileges that might make\ngrants to each other.\n\n\n> And the admin likely can switch into the user context of\n> the less privileged user to perform operations in a safer context.\n\nHow would the admin do that? 
The malicious UDF can just \"RESET SESSION\nAUTHORIZATION\" to pop back out of the safer context.\n\nIf there's not a good way to do this safely now, then we should\nprobably provide one.\n\n> > \nRegards,\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 23:38:50 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "Hi,\n\nOn 2023-01-12 23:38:50 -0800, Jeff Davis wrote:\n> On Thu, 2023-01-12 at 19:29 -0800, Andres Freund wrote:\n> > superuser:\n> > # CREATE FUNCTION exec_su(p_sql text) RETURNS text LANGUAGE plpgsql\n> > SECURITY DEFINER AS $$BEGIN RAISE NOTICE 'executing %', p_sql;\n> > EXECUTE p_sql;RETURN 'p_sql';END;$$;\n> > # REVOKE ALL ON FUNCTION exec_su FROM PUBLIC ;\n> \n> That can be solved by creating the function in a schema where ordinary\n> users don't have USAGE:\n> \n> CREATE TABLE trick_superuser(value text default admin.exec_su('ALTER\n> USER less_privs SUPERUSER'));\n> ERROR: permission denied for schema admin\n\nDoubtful. Leaving aside the practicalities of using dedicated schemas and\nenforcing their use, there's plenty functions in pg_catalog that a less\nprivileged user can use to do bad things.\n\nJust think of set_config(), pg_read_file(), lo_create(), binary_upgrade_*(),\npg_drop_replication_slot()...\n\nIf the default values get evaluated, this is arbitrary code exec, even if it\nrequires a few contortions. And the same is true for evaluating *any*\nexpression.\n\n\n\n> > And the admin likely can switch into the user context of\n> > the less privileged user to perform operations in a safer context.\n> \n> How would the admin do that? The malicious UDF can just \"RESET SESSION\n> AUTHORIZATION\" to pop back out of the safer context.\n\nI thought we had a reasonably convenient way, but now I am not sure\nanymore. Might have been via a C helper function. 
It can be hacked together,\nbut this is an area that should be as unhacky as possible.\n\n\n> If there's not a good way to do this safely now, then we should\n> probably provide one.\n\nYea, particularly because we do have all the infrastructure for it\n(c.f. SECURITY_LOCAL_USERID_CHANGE / SECURITY_RESTRICTED_OPERATION).\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Jan 2023 00:16:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "On Thu, 2023-01-12 at 19:38 -0800, Andres Freund wrote:\n> I.e. the default arguments where evaluated with the invoker's\n> permissions, not\n> the definer's, despite being controlled by the less privileged user.\n\nThis is a very interesting case. It also involves tricking the\nsuperuser into executing their own function with the attacker's inputs.\nThat part is the same as the other case. What's intriguing here is that\nit shows the function can be SECURITY INVOKER, and that really means it\ncould be any builtin function as long as the types work out.\n\nFor example:\n=> create function trick(l pg_lsn = pg_switch_wal()) returns int\nlanguage plpgsql security definer as $$ begin return 42; end; $$;\n\nIf the superuser executes that, even though it's a SECURITY DEFINER\nfunction owned by an unprivileged user, it will still call\npg_switch_wal().\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 13 Jan 2023 00:19:12 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "On Fri, 2023-01-13 at 00:16 -0800, Andres Freund wrote:\n\n> Just think of set_config(), pg_read_file(), lo_create(),\n> binary_upgrade_*(),\n> pg_drop_replication_slot()...\n\nThank you for walking through the details here. 
I missed it from your\nfirst example because it was an extreme case -- a superuser writing an\neval() security definer function -- so the answer was to lock such a\ndangerous function away. But more mild cases are the real problem,\nbecause there's a lot of stuff in pg_catalog.*.\n\n> If the default values get evaluated, this is arbitrary code exec,\n> even if it\n> requires a few contortions. And the same is true for evaluating *any*\n> expression.\n\nRight.\n\nHowever, the normal use case for expressions (whether in a default\nexpression or check constraint or whatever) is very simple and doesn't\neven involve table access. There must be a way to satisfy those simple\ncases without opening up a vast attack surface area, and if we do, then\nI think my proposal might look useful again.\n\n\n-- \nJeff Davis\nPostgreSQL Contributor Team - AWS\n\n\n\n\n", "msg_date": "Fri, 13 Jan 2023 10:04:19 -0800", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": true, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" }, { "msg_contents": "Hi,\n\nOn 2023-01-13 10:04:19 -0800, Jeff Davis wrote:\n> However, the normal use case for expressions (whether in a default\n> expression or check constraint or whatever) is very simple and doesn't\n> even involve table access. There must be a way to satisfy those simple\n> cases without opening up a vast attack surface area, and if we do, then\n> I think my proposal might look useful again.\n\nI don't see how. I guess we could try to introduce a classification of \"never\ndangerous\" functions (and thus operators). But that seems like a crapton of\nwork and hard to get right. And I think my examples pretty conclusively show\nthat security definer isn't a useful boundary to *reduce* privileges.
So the\nwhole idea of preventing only security invoker functions just seems\nnon-viable.\n\nI think the combination of\na) a setting that restricts evaluation of any non-trusted expressions,\n independent of the origin\nb) an easy way to execute arbitrary statements within\n SECURITY_RESTRICTED_OPERATION\n\nis promising though. In later steps we might be able to use a) to offer the\noption to automatically switch to expression owners in specific places (if the\ncurrent user has the rights to do that).\n\n\nAn alternative to b would be a version of SET ROLE that can't be undone. But I\nthink we'd just miss all the other things that are prevented by\nSECURITY_RESTRICTED_OPERATION.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Jan 2023 10:30:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Blocking execution of SECURITY INVOKER" } ]
[ { "msg_contents": "Hi\n\nI need to resend\nhttps://www.postgresql.org/message-id/CALte62yFXQvRrA47unpedfcn%3DGoE_VyvxcKkqj2NUhenK__qgA%40mail.gmail.com\n\nUnfortunately I didn't get any mail.\n\nRegards\n\nPavel", "msg_date": "Thu, 12 Jan 2023 06:20:16 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "resend from mailing list archive doesn't working" }, { "msg_contents": "On Thu, Jan 12, 2023 at 06:20:16AM +0100, Pavel Stehule wrote:\n> Hi\n> \n> I need to resend\n> https://www.postgresql.org/message-id/CALte62yFXQvRrA47unpedfcn%3DGoE_VyvxcKkqj2NUhenK__qgA%40mail.gmail.com\n> \n> Unfortunately I didn't get any mail.\n\nIt looks like you're already on the \"CC\" list, and so gmail will\nde-duplicate the mail, and you won't see the resent one.\n\nI only know that because it was explained to me here:\nhttps://www.postgresql.org/message-id/CABUevEzDVK5sXWFuTZFKgJb7zvF3M_Y-uxbw1CxeYVbxUu6P6g@mail.gmail.com\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 11 Jan 2023 23:24:27 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: resend from mailing list archive doesn't working" }, { "msg_contents": "On Thu, 12 Jan 2023 at 06:24, Justin Pryzby <pryzby@telsasoft.com>\nwrote:\n\n> On Thu, Jan 12, 2023 at 06:20:16AM +0100, Pavel Stehule wrote:\n> > Hi\n> >\n> > I need to resend\n> >\n> https://www.postgresql.org/message-id/CALte62yFXQvRrA47unpedfcn%3DGoE_VyvxcKkqj2NUhenK__qgA%40mail.gmail.com\n> >\n> > Unfortunately I didn't get any mail.\n>\n> It looks like you're already on the \"CC\" list, and so gmail will\n> de-duplicate the mail, and you won't see the resent one.\n>\n> I only know that because it was explained to me here:\n>\n> https://www.postgresql.org/message-id/CABUevEzDVK5sXWFuTZFKgJb7zvF3M_Y-uxbw1CxeYVbxUu6P6g@mail.gmail.com\n\n\nIt is true, I found this thread in the trash\n\nThank you for info\n\nRegards\n\nPavel\n\n>\n>\n> --\n> Justin\n>\n", "msg_date": "Thu, 12 Jan 2023 06:26:32 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": true, "msg_subject": "Re: resend from mailing list archive doesn't working" } ]
[ { "msg_contents": "Hi, hackers\n\nSome conditions in shm_toc_insert and shm_toc_allocate are bogus, like:\n\n\tif (toc_bytes + nbytes > total_bytes || toc_bytes + nbytes < toc_bytes)\n\nRemove the condition `toc_bytes + nbytes < toc_bytes` and take a sizeof(shm_entry) into account in shm_toc_allocate though\nshm_toc_allocate does that too.\n\n\t/* Check for memory exhaustion and overflow. */\n\t- if (toc_bytes + nbytes > total_bytes || toc_bytes + nbytes < toc_bytes)\n\t+ if (toc_bytes + sizeof(shm_toc_entry) + nbytes > total_bytes)\n\t {\n \tSpinLockRelease(&toc->toc_mutex);\n\nshm_toc_freespace is introduced with shm_toc by original commit 6ddd5137b2, but is not used since then, so remove it.\n\n\nRegards,\nZhang Mingli", "msg_date": "Thu, 12 Jan 2023 14:34:01 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Fix condition in shm_toc and remove unused function\n shm_toc_freespace." }, { "msg_contents": "Hi,\n\nOn Jan 12, 2023, 14:34 +0800, Zhang Mingli <zmlpostgres@gmail.com>, wrote:\n> Hi, hackers\n>\n> Some conditions in shm_toc_insert and shm_toc_allocate are bogus, like:\n> \t\tif (toc_bytes + nbytes > total_bytes || toc_bytes + nbytes < toc_bytes)\n> Remove the condition `toc_bytes + nbytes < toc_bytes` and take a sizeof(shm_entry) into account in shm_toc_allocate though\n> shm_toc_allocate does that too.\n  shm_toc_insert does that too, and  we can report error earlier.\n\nRegards,\nZhang Mingli",
"msg_date": "Thu, 12 Jan 2023 14:50:13 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix condition in shm_toc and remove unused function\n shm_toc_freespace." }, { "msg_contents": "On Thu, Jan 12, 2023 at 2:50 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n\n> On Jan 12, 2023, 14:34 +0800, Zhang Mingli <zmlpostgres@gmail.com>, wrote:\n>\n> Some conditions in shm_toc_insert and shm_toc_allocate are bogus, like:\n> if (toc_bytes + nbytes > total_bytes || toc_bytes + nbytes < toc_bytes)\n> Remove the condition `toc_bytes + nbytes < toc_bytes` and take a\n> sizeof(shm_entry) into account in shm_toc_allocate though\n> shm_toc_allocate does that too.\n>\n> shm_toc_insert does that too, and we can report error earlier.\n>\n\nI don't think we should consider sizeof(shm_toc_entry) in the 'if'\ncondition in shm_toc_allocate, as this function is not in charge of\nallocating a new TOC entry. That's what shm_toc_insert does.\n\nOther parts of this patch look good to me.\n\nThanks\nRichard\n\nOn Thu, Jan 12, 2023 at 2:50 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\nOn Jan 12, 2023, 14:34 +0800, Zhang Mingli <zmlpostgres@gmail.com>, wrote:\nSome conditions in shm_toc_insert and shm_toc_allocate are bogus, like:\n \tif (toc_bytes + nbytes > total_bytes || toc_bytes + nbytes < toc_bytes)\nRemove the condition `toc_bytes + nbytes < toc_bytes` and take a sizeof(shm_entry) into account in shm_toc_allocate though \nshm_toc_allocate does that too.\n  shm_toc_insert does that too, and  we can report error earlier. I don't think we should consider sizeof(shm_toc_entry) in the 'if'condition in shm_toc_allocate, as this function is not in charge ofallocating a new TOC entry.  
That's what shm_toc_insert does.Other parts of this patch look good to me.ThanksRichard", "msg_date": "Thu, 12 Jan 2023 16:54:40 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix condition in shm_toc and remove unused function\n shm_toc_freespace." }, { "msg_contents": "Hi,\n\nRegards,\nZhang Mingli\nOn Jan 12, 2023, 16:54 +0800, Richard Guo <guofenglinux@gmail.com>, wrote:\n>\n> On Thu, Jan 12, 2023 at 2:50 PM Zhang Mingli <zmlpostgres@gmail.com> wrote:\n> > On Jan 12, 2023, 14:34 +0800, Zhang Mingli <zmlpostgres@gmail.com>, wrote:\n> > > Some conditions in shm_toc_insert and shm_toc_allocate are bogus, like:\n> > > \t\tif (toc_bytes + nbytes > total_bytes || toc_bytes + nbytes < toc_bytes)Remove the condition `toc_bytes + nbytes < toc_bytes` and take a sizeof(shm_entry) into account in shm_toc_allocate though\n> > > shm_toc_allocate does that too.\n> >   shm_toc_insert does that too, and  we can report error earlier.\n>\n> I don't think we should consider sizeof(shm_toc_entry) in the 'if'\n> condition in shm_toc_allocate, as this function is not in charge of\n> allocating a new TOC entry.  That's what shm_toc_insert does.\nThanks for review.\nMake sense.\nEven reserve a sizeof(shm_toc_entry) when shm_toc_allocate, it cloud happen that there is no space  when shm_toc_insert\nin case of other processes may take space after that.\nPatch updated.", "msg_date": "Fri, 13 Jan 2023 23:09:30 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix condition in shm_toc and remove unused function\n shm_toc_freespace." } ]
[ { "msg_contents": "Hi,\n\n I'm migrating our existing PG instances from PG11.4 to PG14.3. I\nhave around 5 Million Tables in a single database. When migrating using\npg_upgrade, its taking 3 hours for the process to complete. I'm not sure if\nits the intended behaviour or we're missing something here.\n Most of the tables (90%) in 5 Million are foreign tables. On analysis\nfound that most of the time is spent in pg_dump (~2.3 hours). In pg_dump\ngetTableAttrs(), dumpTable() functions take the most time, approx 1 hour\neach since we're processing table by table. Also, there are no columns with\ndefault values, which if present might take some time. We're using PG14's\npg_upgrade binary for the process.\n Since we have all these tables in one database, parallelism doesn't\nhave any effect here. Can we make binary upgrade for a single database run\nin parallel ?\n Kindly advise us if we have missed anything here and possible\nsolutions for this problem.\nSo we're not sure on what we missed here.\nHave added more info on the process below.\n\nNo. of Tables: 5 Million\nTime Taken: 3 Hours\nCommand Used: $PG14_UPGRADE -Uroot -b $PG11_DIR/bin -B $PG14_DIR/bin -d\n$PG11_DIR/data -D $PG14_DIR/data -k -r -j32\nVersion: PG11.4 to PG14.3\nEnvironment: CentOS machine (32 cores(Intel), 128GB RAM)\n\n\nThanks and Regards,\nVignesh K.", "msg_date": "Thu, 12 Jan 2023 14:45:41 +0530", "msg_from": "Vigneshk Kvignesh <krrvignesh2@gmail.com>", "msg_from_op": true, "msg_subject": "PG11 to PG14 Migration Slowness" }, { "msg_contents": "On Thu, Jan 12, 2023 at 02:45:41PM +0530, Vigneshk Kvignesh wrote:\n> Hi,\n> \n> I'm migrating our existing PG instances from PG11.4 to PG14.3. I\n> have around 5 Million Tables in a single database. When migrating using\n> pg_upgrade, its taking 3 hours for the process to complete. I'm not sure\n> if its the intended behaviour or we're missing something here.\n\n Yes. In fact, you have a good hardware and I would expect longer time\non average.\n\n> Most of the tables (90%) in 5 Million are foreign tables. On analysis\n> found that most of the time is spent in pg_dump (~2.3 hours). In pg_dump\n> getTableAttrs(), dumpTable() functions take the most time, approx 1 hour\n> each since we're processing table by table. Also, there are no columns\n> with default values, which if present might take some time. We're using\n> PG14's pg_upgrade binary for the process.\n> Since we have all these tables in one database, parallelism doesn't\n> have any effect here.
Can we make binary upgrade for a single database run\n> in parallel ?\n> Kindly advise us if we have missed anything here and possible\n> solutions for this problem.\n\n I don't see any problem. Three-hour downtime every three years\nfor such setup... You are lucky you have only that.\n\n But you could try some logical replication to the new server\nversion for upgrading, if you really want to bother. (Well, pg-\nlogical is my preferred on that scale, but the three general op-\ntions are: internal logical replication, pglogical, slony).\n\n\n\n> So we're not sure on what we missed here.\n> Have added more info on the process below.\n> No. of Tables: 5 Million\n> Time Taken: 3 Hours\n> Command Used: $PG14_UPGRADE -Uroot -b $PG11_DIR/bin -B $PG14_DIR/bin -d\n> $PG11_DIR/data -D $PG14_DIR/data -k -r -j32\n> Version: PG11.4 to PG14.3\n> Environment: CentOS machine (32 cores(Intel), 128GB RAM)\n> \n> Thanks and Regards,\n> Vignesh K.\n\n\n", "msg_date": "Thu, 12 Jan 2023 14:07:22 +0300", "msg_from": "Ilya Anfimov <ilan@tzirechnoy.com>", "msg_from_op": false, "msg_subject": "Re: PG11 to PG14 Migration Slowness" }, { "msg_contents": "Vigneshk Kvignesh <krrvignesh2@gmail.com> writes:\n> I'm migrating our existing PG instances from PG11.4 to PG14.3. I\n> have around 5 Million Tables in a single database. When migrating using\n> pg_upgrade, its taking 3 hours for the process to complete. I'm not sure if\n> its the intended behaviour or we're missing something here.\n\nThere was some work done in v15 to make pg_dump deal better with\nzillions of tables. 
Don't know if you can consider retargeting\nto v15, or how much the speedups would help in your particular\nsituation.\n\nWhy are you using 14.3, when the current release is 14.6?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:48:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PG11 to PG14 Migration Slowness" }, { "msg_contents": "Hi Vigneshk,\n\n> I'm migrating our existing PG instances from PG11.4 to PG14.3. I have around 5 Million Tables in a single database. When migrating using pg_upgrade, its taking 3 hours for the process to complete. I'm not sure if its the intended behaviour or we're missing something here.\n\nThanks for reporting this. I would say this is more or less an\nexpected behaviour. This being said I think we could do better than\nthat.\n\nCould you identify the bottleneck or perhaps provide the minimal\nautomated steps (ideally, a script) to reproduce your issue in a clean\nenvironment?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 13 Jan 2023 16:02:00 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: PG11 to PG14 Migration Slowness" }, { "msg_contents": "Hi,\n\nSorry for the delayed response. We have an fdw extension, we started code\nchanges in the extension for PGv14 on 14.3, we just completed code changes,\ntesting and benchmarking. We'll retarget to 14.6\nAlso we'll take a look at the changes for pg_dump in v15 . Thanks for the\nadvice.\n\nThanks and Regards,\nVignesh K.\n\nOn Thu, 12 Jan 2023 at 21:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Vigneshk Kvignesh <krrvignesh2@gmail.com> writes:\n> > I'm migrating our existing PG instances from PG11.4 to PG14.3. I\n> > have around 5 Million Tables in a single database. When migrating using\n> > pg_upgrade, its taking 3 hours for the process to complete. 
I'm not sure\n> if\n> > its the intended behaviour or we're missing something here.\n>\n> There was some work done in v15 to make pg_dump deal better with\n> zillions of tables. Don't know if you can consider retargeting\n> to v15, or how much the speedups would help in your particular\n> situation.\n>\n> Why are you using 14.3, when the current release is 14.6?\n>\n> regards, tom lane\n>\n", "msg_date": "Fri, 27 Jan 2023 17:27:40 +0530", "msg_from": "Vigneshk Kvignesh <krrvignesh2@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PG11 to PG14 Migration Slowness" } ]
[ { "msg_contents": "Technically correct name of this feature would be Readable Names for\nOperators, or Pronounceable Names for Operators. But I'd like to call\nit Named Operators.\n\nWith this patch in place, the users can name the operators as\n:some_pronounceable_name: instead of having to choose from the special\ncharacters like #^&@. For example, users will be able to create and\nuse operators like:\n\n select\n expr1 :distance: expr2,\n expr3 :contains_all: expr4,\n expr5 :contains_any: expr6\n expr7 :contains_exactly_two_of: expr8\n from mytable;\n\ninstead of being forced to use these:\n\n select\n expr1 <#> expr2,\n expr3 ?& expr4,\n expr5 ?| expr6\n expr7 ??!! expr8 -- ¯\\_(ツ)_/¯\n from mytable;\n\n I think Named Operators will significantly improve the readability\nof queries.\n\n After a little trial-an-error, it was easy to develop the scan.l\nrules to implement this feature, without flex barfing. The hard part\nhas been convincing myself that this is a safe implementation, even\nthough there are no regressions in `make check`. I am unsure of this\nimplementation's compatibility with the SQL Spec, and I'm not able to\nenvision problems with its interaction with some current or potential\nfeature of Postgres. So I'd really appreciate feedback from someone\nwho is conversant with the SQL Spec.\n\n If the colon character being used as a delimiter poses a\nchallenge, other good candidates for the delimiter seem to be one of\n~^` Although I haven't tested any of these to see if they cause a\nregression. The colon character is be preferable for the delimiter,\nsince it is already used in the typecast :: operator.\n\n I tried to strip the delimiters/colons from the name right in the\nscanner, primarily because that would allow the identifier part of the\nname to be as long as NAMEDATALEN-1, just like other identifiers\nPostgres allows. 
Added benefit of stripping delimiters was that the\nrest of the code, and catalogs/storage won't have to see the\ndelimiters. But stripping the delimiters made the code brittle; some\nplaces in code now had to be taught different handling depending on\nwhether the operator name was coming from the user command, or from\nthe catalogs. I had to special-case code in pg_dump, as well. To share\ncode with frontends like pg_dump, I had to place code in src/common/.\nI was still not able to address some obvious bugs.\n\n By retaining the delimiters : in the name, the code became much\nsimpler; pg_dump support came for free! The bugs became a non-issue.\nTo see how much code and complexity was reduced, one can see this\ncommit [1]. The downside of retaining the delimiters is that the\nidentifier part of the name can be no more than NAMEDATALEN-3 in\nlength.\n\n Because of the minimal changes to the scanner rules, and no\nchanges in the grammar, I don't think there's any impact on precedence\nand associativity rules of the operators. I'd be happy to learn\notherwise.\n\n Here's a rudimentary test case to demonstrate the feature:\n\ncreate operator :add_point: (function = box_add, leftarg = box,\nrightarg = point);\ncreate table test(a box);\ninsert into test values('((0,0),(1,1))'), ('((0,0),(2,1))');\nselect a as original, a :add_point: '(1,1)' as modified from test;\ndrop operator :add_point:(box, point);\n\n Feedback will be much appreciated!\n\n[1]: Commit: Don't strip the delimiters\nhttps://github.com/gurjeet/postgres/commit/62d11a578e5325c32109bb2a55a624d0d89d4b7e\n\n[2]: Git branch named_operators\nhttps://github.com/gurjeet/postgres/tree/named_operators\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Thu, 12 Jan 2023 01:16:11 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Named Operators" }, { "msg_contents": "Please see attached a slightly updated patch. 
There were some comment\nchanges sitting in uncommitted in Git worktree, that were missed.\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Thu, 12 Jan 2023 01:30:42 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Thu, 12 Jan 2023 at 10:16, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> Technically correct name of this feature would be Readable Names for\n> Operators, or Pronounceable Names for Operators. But I'd like to call\n> it Named Operators.\n>\n> With this patch in place, the users can name the operators as\n> :some_pronounceable_name: instead of having to choose from the special\n> characters like #^&@.\n> [...]\n> I think Named Operators will significantly improve the readability\n> of queries.\n\nCouldn't the user better opt to call the functions that implement the\noperator directly if they want more legible operations? So, from your\nexample, `SELECT box_add(a, b)` instead of `SELECT a :add_point: b`?\n\nI'm -1 on the chosen syntax; :name: shadows common variable\nsubstitution patterns including those of psql.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:48:56 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Thu, Jan 12, 2023 at 1:49 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n>\n> On Thu, 12 Jan 2023 at 10:16, Gurjeet Singh <gurjeet@singh.im> wrote:\n> >\n> > Technically correct name of this feature would be Readable Names for\n> > Operators, or Pronounceable Names for Operators. 
But I'd like to call\n> > it Named Operators.\n> >\n> > With this patch in place, the users can name the operators as\n> > :some_pronounceable_name: instead of having to choose from the special\n> > characters like #^&@.\n> > [...]\n> > I think Named Operators will significantly improve the readability\n> > of queries.\n>\n> Couldn't the user better opt to call the functions that implement the\n> operator directly if they want more legible operations? So, from your\n> example, `SELECT box_add(a, b)` instead of `SELECT a :add_point: b`?\n\nMatter of taste, I guess. But more importantly, defining an operator\ngives you many additional features that the planner can use to\noptimize your query differently, which it can't do with functions. See\nthe COMMUTATOR, HASHES, etc. clause in the CREATE OPERATOR command.\n\nhttps://www.postgresql.org/docs/current/sql-createoperator.html\n\nThis proposal is primarily a replacement for the myriad of\nhard-to-pronounce operators that users have to memorize. For example,\nit'd be nice to have readable names for the PostGIS operators.\n\nhttps://postgis.net/docs/reference.html#Operators\n\nFor someone who's reading/troubleshooting a PostGIS query, when they\nencounter operator <<| — in the query for the first time, they'd have\nto open up the docs. But if the query used the :strictly_below:\noperator, there's no need to switch to docs and lose context.\n\n> I'm -1 on the chosen syntax; :name: shadows common variable\n> substitution patterns including those of psql.\n\nAh, thanks for reminding! Early on when I hadn't written code yet, I\nremember discarding colon : as a delimiter choice, precisely because\nit is used for using variables in psql, and perhaps in some drivers,\nas well. But in the rush of implementing and wrangling code, I forgot\nabout that argument altogether.\n\nI'll consider using one of the other special characters. 
Do you have\nany suggestions?\n\n\nBest regards,\nGurjeet\nhttp://Gurje.et\n\n\n", "msg_date": "Thu, 12 Jan 2023 02:59:04 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Thu, 12 Jan 2023 at 11:59, Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> On Thu, Jan 12, 2023 at 1:49 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n> >\n> > On Thu, 12 Jan 2023 at 10:16, Gurjeet Singh <gurjeet@singh.im> wrote:\n> > >\n> > > Technically correct name of this feature would be Readable Names for\n> > > Operators, or Pronounceable Names for Operators. But I'd like to call\n> > > it Named Operators.\n> > >\n> > > With this patch in place, the users can name the operators as\n> > > :some_pronounceable_name: instead of having to choose from the special\n> > > characters like #^&@.\n> > > [...]\n> > > I think Named Operators will significantly improve the readability\n> > > of queries.\n> >\n> > Couldn't the user better opt to call the functions that implement the\n> > operator directly if they want more legible operations? So, from your\n> > example, `SELECT box_add(a, b)` instead of `SELECT a :add_point: b`?\n>\n> Matter of taste, I guess. But more importantly, defining an operator\n> gives you many additional features that the planner can use to\n> optimize your query differently, which it can't do with functions. See\n> the COMMUTATOR, HASHES, etc. clause in the CREATE OPERATOR command.\n\nI see. Wouldn't it be better then to instead make it possible for the\nplanner to detect the use of the functions used in operators and treat\nthem as aliases of the operator? Or am I missing something w.r.t.\ndifferences between operator and function invocation?\n\nE.g. 
indexes on `int8pl(my_bigint, 1)` does not match queries for\n`my_bigint + 1` (and vice versa), while they should be able to support\nthat, as OPERATOR(pg_catalog.+(int8, int8)) 's function is int8pl.\n\nKind regards,\n\nMatthias van de Meent\n\n\n", "msg_date": "Thu, 12 Jan 2023 14:55:11 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> I'm -1 on the chosen syntax; :name: shadows common variable\n> substitution patterns including those of psql.\n\nYeah, this syntax is DOA because of that. I think almost\nanything you might invent is going to have conflict risks.\n\nWe could probably make it work by allowing the existing OPERATOR\nsyntax to take things that look like names as well as operators,\nlike\n\n\texpr3 OPERATOR(contains_all) expr4\n\nBut that's bulky enough that nobody will care to use it.\n\nOn the whole I don't see this proposal going anywhere.\nThere's too much investment in the existing operator names,\nand too much risk of conflicts if you try to shorten the\nsyntax.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:21:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Thu, Jan 12, 2023 at 3:59 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> On Thu, Jan 12, 2023 at 1:49 AM Matthias van de Meent\n> <boekewurm+postgres@gmail.com> wrote:\n>\n> > I'm -1 on the chosen syntax; :name: shadows common variable\n> > substitution patterns including those of psql.\n>\n> I'll consider using one of the other special characters. Do you have\n> any suggestions?\n>\n>\nThe R language uses %...% to denote custom operators.\n\nThat would be a bit annoying for dynamic SQL using format though...\n\nDo we have to choose? 
There are 15 allowed characters for operator names\npresently (aside from + and -), could we define the rule that an operator\nname can contain any sequence of alphabetic+underscore+space? characters so\nlong as the first and last symbol of the operator name is one of those 15\ncharacters?\n\nAnother appealing option would be the non-matching but complementary pair\n<...> (I'd consider removing these from the 15 choices if we go that route)\n\nSELECT 1 <add> 2;\n\nI would probably avoid requiring back-ticks given their usage as identifier\nquoting in other systems - probably remove it from the 15 choices if we go\nthat route.\n\nDavid J.\n", "msg_date": "Thu, 12 Jan 2023 08:21:57 -0700", "msg_from": "\"David G. 
Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Thu, 12 Jan 2023 at 05:59, Gurjeet Singh <gurjeet@singh.im> wrote:\n\nI'll consider using one of the other special characters. Do you have\n> any suggestions?\n>\n\nWhat about backticks (`)? They are allowed as operator characters but do\nnot otherwise appear in the lexical syntax as far as I can tell:\nhttps://www.postgresql.org/docs/current/sql-syntax-lexical.html\n\nOn Thu, 12 Jan 2023 at 05:59, Gurjeet Singh <gurjeet@singh.im> wrote:\nI'll consider using one of the other special characters. Do you have\nany suggestions?What about backticks (`)? They are allowed as operator characters but do not otherwise appear in the lexical syntax as far as I can tell: https://www.postgresql.org/docs/current/sql-syntax-lexical.html", "msg_date": "Thu, 12 Jan 2023 12:02:37 -0500", "msg_from": "Isaac Morland <isaac.morland@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "Isaac Morland <isaac.morland@gmail.com> writes:\n> What about backticks (`)? 
They are allowed as operator characters but do\n> not otherwise appear in the lexical syntax as far as I can tell:\n> https://www.postgresql.org/docs/current/sql-syntax-lexical.html\n\nSince they're already allowed as operator characters, you can't\nuse them for this purpose without breaking existing use-cases.\n\nEven if they were completely unused, I'd be pretty hesitant to\nadopt them for this purpose because of the potential confusion\nfor users coming from mysql.\n\nPretty much the only available syntax space is curly braces,\nand I don't really want to give those up for this either.\n(One has to assume that the SQL committee has their eyes\non those too.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Jan 2023 12:14:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On 1/12/23 18:14, Tom Lane wrote:\n\n> Pretty much the only available syntax space is curly braces,\n> and I don't really want to give those up for this either.\n> (One has to assume that the SQL committee has their eyes\n> on those too.)\n\nThey are used in row pattern recognition.\n-- \nVik Fearing\n\n\n\n", "msg_date": "Thu, 12 Jan 2023 18:45:09 +0100", "msg_from": "Vik Fearing <vik@postgresfriends.org>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Thu, Jan 12, 2023 at 10:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > What about backticks (`)? They are allowed as operator characters but do\n> > not otherwise appear in the lexical syntax as far as I can tell:\n> > https://www.postgresql.org/docs/current/sql-syntax-lexical.html\n>\n> Since they're already allowed as operator characters, you can't\n> use them for this purpose without breaking existing use-cases.\n>\n>\nIIUC, specifically the fact that an operator is defined to start with one\nof those symbols and end at the first non-symbol. 
We can't change the\nallowed set of non-symbols at this point, without defining something else\nto denote the start of an operator.\n\nDavid J.", "msg_date": "Thu, 12 Jan 2023 11:37:48 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Thu, Jan 12, 2023 at 5:55 AM Matthias van de Meent\n<boekewurm+postgres@gmail.com> wrote:\n> On Thu, 12 Jan 2023 at 11:59, Gurjeet Singh <gurjeet@singh.im> wrote:\n> > ... defining an operator\n> > gives you many additional features that the planner can use to\n> > optimize your query differently, which it can't do with functions. See\n> > the COMMUTATOR, HASHES, etc. clause in the CREATE OPERATOR command.\n>\n> I see. Wouldn't it be better then to instead make it possible for the\n> planner to detect the use of the functions used in operators and treat\n> them as aliases of the operator? Or am I missing something w.r.t.\n> differences between operator and function invocation?\n>\n> E.g. 
indexes on `int8pl(my_bigint, 1)` does not match queries for\n> `my_bigint + 1` (and vice versa), while they should be able to support\n> that, as OPERATOR(pg_catalog.+(int8, int8)) 's function is int8pl.\n\nSuch a feature would be immensely useful in its own right. But it's\nalso going to be at least 2 orders of magnitude (or more) effort to\nimplement, and to get accepted in the community. I'm thinking of\nchanges in planner, catalogs, etc.\n\nOn Thu, Jan 12, 2023 at 7:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Matthias van de Meent <boekewurm+postgres@gmail.com> writes:\n> > I'm -1 on the chosen syntax; :name: shadows common variable\n> > substitution patterns including those of psql.\n>\n> Yeah, this syntax is DOA because of that. I think almost\n> anything you might invent is going to have conflict risks.\n\nI remember discussing this in a meeting with Joe Conway a few weeks\nago, when this was just a proposal in my head and I was just bouncing\nit off him. And I remember pointing out that colons would be a bad\nchoice because of their use in psql; but for life of me I can't think\nof a reason (except temporary memory loss) why I failed to consider\nthe psql conflict when implementing the feature. If only some test in\n`make check` would have pointed out the mistake, I wouldn't have made\nthis obvious mistake.\n\n> We could probably make it work by allowing the existing OPERATOR\n> syntax to take things that look like names as well as operators,\n> like\n>\n> expr3 OPERATOR(contains_all) expr4\n>\n> But that's bulky enough that nobody will care to use it.\n\n+1. Although that'd be better for readers than the all-special-char\nnames, this format is bulky enough that you won't be able to convince\nthe query writers to bother using it. 
But if all other efforts fail,\nI'll take this format over the cryptic ones any day.\n\n> On the whole I don't see this proposal going anywhere.\n> There's too much investment in the existing operator names,\n> and too much risk of conflicts if you try to shorten the\n> syntax.\n\nI wouldn't give up on the idea, yet :-) See new proposal below.\n\nOn Thu, Jan 12, 2023 at 9:14 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Isaac Morland <isaac.morland@gmail.com> writes:\n> > What about backticks (`)?\n>\n> Since they're already allowed as operator characters, you can't\n> use them for this purpose without breaking existing use-cases.\n>\n> Even if they were completely unused, I'd be pretty hesitant to\n> adopt them for this purpose because of the potential confusion\n> for users coming from mysql.\n\nSince when have we started caring for the convenience of users of\nother databases?!! /s\n\n> Pretty much the only available syntax space is curly braces,\n> and I don't really want to give those up for this either.\n> (One has to assume that the SQL committee has their eyes\n> on those too.)\n\nOn Thu, Jan 12, 2023 at 9:45 AM Vik Fearing <vik@postgresfriends.org> wrote:\n> They are used in row pattern recognition.\n\nI was very hopeful of using { }, and hoping that we'd beat the SQL\ncommittee to it, so that they have to choose something else, if we\nrelease this into the wild before them. But it seems that they beat us\nto it long ago. (tangent: Reading some blog posts, I have to say I\nloved the Row Pattern Recognition feature!)\n\nConsidering that there are almost no printable characters left in\n1-255 ASCII range for us to choose from, I had to get creative; and I\nbelieve I have found a way to make it work.\n\nUnless the SQL committee has their eyes on a freestanding backslash \\\ncharacter for something, I believe we can use it as a prefix for Named\nOperators. 
Since the most common use of backslash is for escaping\ncharacters, I believe it would feel natural for the users to use it as\ndescribed below.\n\nNew scheme for the named operators: \\#foo That is, an identifier\nprefixed with \\# would serve as an operator name. psql considers \\ to\nbe the start of its commands, but it wasn't hard to convince psql to\nignore \\# and let it pass through to server.\n\nI agree that an identifier _surrounded_ by the same token (e.g. #foo#)\nor the pairing token (e.g. {foo}) looks better aesthetically, so I am\nokay with any of the following variations of the scheme, as well:\n\n\\#foo\\# (tested; works)\n\\#foo# (not tested; reduces ident length by 1)\n\nWe can choose a different character, instead of #. Perhaps \\{foo} !\n\nAttached is the v2 patch that supports \\#foo style Named Operators.\nFollowing is the SQL snippet to see what the usage looks like.\n\ncreate operator \\#add_point\n (function = box_add, leftarg = box, rightarg = point);\ncreate table test(a box);\ninsert into test values('((0,0),(1,1))'), ('((0,0),(2,1))');\nselect a as original, a \\#add_point '(1,1)' as modified from test;\ndrop operator \\#add_point(box, point);\n\nAlthough we have never done it before, but by using backslash we\nmight be able to define new custom token types as well, if needed.\n\nFor those interested, I have couple of different branches with\nnamed_operators* prefix in my Git fork [1] where I'm trying different\ncombinations.\n\n[1]: https://github.com/gurjeet/postgres/branches\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Sat, 14 Jan 2023 06:14:02 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Sat, Jan 14, 2023 at 6:14 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> I agree that an identifier _surrounded_ by the same token (e.g. #foo#)\n> or the pairing token (e.g. 
{foo}) looks better aesthetically, so I am\n> okay with any of the following variations of the scheme, as well:\n>\n> \\#foo\\# (tested; works)\n> \\#foo# (not tested; reduces ident length by 1)\n>\n> We can choose a different character, instead of #. Perhaps \\{foo} !\n\nPlease find attached the patch that uses \\{foo} styled Named\nOperators. This is in line with Tom's reluctant hint at possibly using\ncurly braces as delimiter characters. Since the curly braces are used\nby the SQL Specification for row pattern recognition, this patch\nproposes escaping the first of the curly braces.\n\nWe can get rid of the leading backslash, if (a) we're confident that\nSQL committee will not use curly braces anywhere else, and (b) if\nwe're confident that if/when Postgres supports Row Pattern Recognition\nfeature, we'll be able to treat curly braces inside the PATTERN clause\nspecially. Since both of those conditions are unlikely, I think we\nmust settle for the escaped-first-curly-brace style for the naming our\noperators.\n\nKeeping with the previous posts, here's a sample SQL script showing\nwhat the proposed syntax will look like in action. 
Personally, I\nprefer the \\#foo style, since the \\# prefix stands out among the text,\nbetter than \\{..} does, and because # character is a better signal of\nan operator than {.\n\ncreate operator \\{add_point}\n (function = box_add, leftarg = box, rightarg = point);\ncreate table test(a box);\ninsert into test values('((0,0),(1,1))'), ('((0,0),(2,1))');\nselect a as original, a \\{add_point} '(1,1)' as modified from test;\ndrop operator \\{add_point}(box, point);\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Fri, 20 Jan 2023 09:16:42 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Fri, Jan 20, 2023 at 9:17 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n\n> On Sat, Jan 14, 2023 at 6:14 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n> >\n> > I agree that an identifier _surrounded_ by the same token (e.g. #foo#)\n> > or the pairing token (e.g. {foo}) looks better aesthetically, so I am\n> > okay with any of the following variations of the scheme, as well:\n> >\n> > \\#foo\\# (tested; works)\n> > \\#foo# (not tested; reduces ident length by 1)\n> >\n> > We can choose a different character, instead of #. Perhaps \\{foo} !\n>\n> Please find attached the patch that uses \\{foo} styled Named\n> Operators. This is in line with Tom's reluctant hint at possibly using\n> curly braces as delimiter characters. Since the curly braces are used\n> by the SQL Specification for row pattern recognition, this patch\n> proposes escaping the first of the curly braces.\n>\n> We can get rid of the leading backslash, if (a) we're confident that\n> SQL committee will not use curly braces anywhere else, and (b) if\n> we're confident that if/when Postgres supports Row Pattern Recognition\n> feature, we'll be able to treat curly braces inside the PATTERN clause\n> specially. 
Since both of those conditions are unlikely, I think we\n> must settle for the escaped-first-curly-brace style for the naming our\n> operators.\n>\n> Keeping with the previous posts, here's a sample SQL script showing\n> what the proposed syntax will look like in action. Personally, I\n> prefer the \\#foo style, since the \\# prefix stands out among the text,\n> better than \\{..} does, and because # character is a better signal of\n> an operator than {.\n>\n> create operator \\{add_point}\n> (function = box_add, leftarg = box, rightarg = point);\n> create table test(a box);\n> insert into test values('((0,0),(1,1))'), ('((0,0),(2,1))');\n> select a as original, a \\{add_point} '(1,1)' as modified from test;\n> drop operator \\{add_point}(box, point);\n>\n> Best regards,\n> Gurjeet\n> http://Gurje.et\n\n\nHi,\nSince `validIdentifier` doesn't modify the contents of `name` string, it\nseems that there is no need to create `tmp` string in `validNamedOperator`.\nYou can pass the start and end offsets into the string (name) as second and\nthird parameters to `validIdentifier`.\n\nCheers\n\nOn Fri, Jan 20, 2023 at 9:17 AM Gurjeet Singh <gurjeet@singh.im> wrote:On Sat, Jan 14, 2023 at 6:14 AM Gurjeet Singh <gurjeet@singh.im> wrote:\n>\n> I agree that an identifier _surrounded_ by the same token (e.g. #foo#)\n> or the pairing token (e.g. {foo}) looks better aesthetically, so I am\n> okay with any of the following variations of the scheme, as well:\n>\n> \\#foo\\#  (tested; works)\n> \\#foo#   (not tested; reduces ident length by 1)\n>\n> We can choose a different character, instead of #. Perhaps \\{foo} !\n\nPlease find attached the patch that uses \\{foo} styled Named\nOperators. This is in line with Tom's reluctant hint at possibly using\ncurly braces as delimiter characters. 
Since the curly braces are used\nby the SQL Specification for row pattern recognition, this patch\nproposes escaping the first of the curly braces.\n\nWe can get rid of the leading backslash, if (a) we're confident that\nSQL committee will not use curly braces anywhere else, and (b) if\nwe're confident that if/when Postgres supports Row Pattern Recognition\nfeature, we'll be able to treat curly braces inside the PATTERN clause\nspecially. Since both of those conditions are unlikely, I think we\nmust settle for the escaped-first-curly-brace style for the naming our\noperators.\n\nKeeping with the previous posts, here's a sample SQL script showing\nwhat the proposed syntax will look like in action. Personally, I\nprefer the \\#foo style, since the \\# prefix stands out among the text,\nbetter than \\{..} does, and because # character is a better signal of\nan operator than {.\n\ncreate operator \\{add_point}\n    (function = box_add, leftarg = box, rightarg = point);\ncreate table test(a box);\ninsert into test values('((0,0),(1,1))'), ('((0,0),(2,1))');\nselect a as original, a \\{add_point} '(1,1)' as modified from test;\ndrop operator \\{add_point}(box, point);\n\nBest regards,\nGurjeet\nhttp://Gurje.etHi,Since `validIdentifier` doesn't modify the contents of `name` string, it seems that there is no need to create `tmp` string in `validNamedOperator`.You can pass the start and end offsets into the string (name) as second and third parameters to `validIdentifier`.Cheers", "msg_date": "Fri, 20 Jan 2023 09:32:10 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Fri, Jan 20, 2023 at 9:32 AM Ted Yu <yuzhihong@gmail.com> wrote:\n>\n> Since `validIdentifier` doesn't modify the contents of `name` string, it seems that there is no need to create `tmp` string in `validNamedOperator`.\n> You can pass the start and end offsets into the string (name) as second and third parameters to 
`validIdentifier`.\n\nThanks for reviewing the patch!\n\nI was making a temporary copy of the string, since I had to modify it\nbefore the validation, whereas the callee expects a `const char*`. I\nagree that the same check can be done with more elegance, while\neliminating the temporary allocation. Please find the updated patch\nattached.\n\nInstead of passing the start and end of region I want to check, as\nsuggested, I'm now passing just the length of the string I want\nvalidated. But I think that's for the better, since it now aligns with\nthe comment that validIdentifier() does not check if the passed string\nis shorter than NAMEDATALEN.\n\nBest regards,\nGurjeet\nhttp://Gurje.et", "msg_date": "Fri, 20 Jan 2023 10:56:00 -0800", "msg_from": "Gurjeet Singh <gurjeet@singh.im>", "msg_from_op": true, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On 12.01.23 14:55, Matthias van de Meent wrote:\n>> Matter of taste, I guess. But more importantly, defining an operator\n>> gives you many additional features that the planner can use to\n>> optimize your query differently, which it can't do with functions. See\n>> the COMMUTATOR, HASHES, etc. clause in the CREATE OPERATOR command.\n> I see. Wouldn't it be better then to instead make it possible for the\n> planner to detect the use of the functions used in operators and treat\n> them as aliases of the operator? Or am I missing something w.r.t.\n> differences between operator and function invocation?\n> \n> E.g. indexes on `int8pl(my_bigint, 1)` does not match queries for\n> `my_bigint + 1` (and vice versa), while they should be able to support\n> that, as OPERATOR(pg_catalog.+(int8, int8)) 's function is int8pl.\n\nI have been thinking about something like this for a long time. \nBasically, we would merge pg_proc and pg_operator internally. 
Then, all \nthe special treatment for operators would also be available to \ntwo-argument functions.\n\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:26:01 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On Fri, 27 Jan 2023 at 16:26, Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n>\n> On 12.01.23 14:55, Matthias van de Meent wrote:\n> >> Matter of taste, I guess. But more importantly, defining an operator\n> >> gives you many additional features that the planner can use to\n> >> optimize your query differently, which it can't do with functions. See\n> >> the COMMUTATOR, HASHES, etc. clause in the CREATE OPERATOR command.\n> > I see. Wouldn't it be better then to instead make it possible for the\n> > planner to detect the use of the functions used in operators and treat\n> > them as aliases of the operator? Or am I missing something w.r.t.\n> > differences between operator and function invocation?\n> >\n> > E.g. indexes on `int8pl(my_bigint, 1)` does not match queries for\n> > `my_bigint + 1` (and vice versa), while they should be able to support\n> > that, as OPERATOR(pg_catalog.+(int8, int8)) 's function is int8pl.\n>\n> I have been thinking about something like this for a long time.\n> Basically, we would merge pg_proc and pg_operator internally. Then, all\n> the special treatment for operators would also be available to\n> two-argument functions.\n\nAnd single-argument functions in case of prefix operators, right?\n\n\n", "msg_date": "Fri, 27 Jan 2023 16:34:52 +0100", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "On 27.01.23 16:34, Matthias van de Meent wrote:\n> On Fri, 27 Jan 2023 at 16:26, Peter Eisentraut\n> <peter.eisentraut@enterprisedb.com> wrote:\n>>\n>> On 12.01.23 14:55, Matthias van de Meent wrote:\n>>>> Matter of taste, I guess. 
But more importantly, defining an operator\n>>>> gives you many additional features that the planner can use to\n>>>> optimize your query differently, which it can't do with functions. See\n>>>> the COMMUTATOR, HASHES, etc. clause in the CREATE OPERATOR command.\n>>> I see. Wouldn't it be better then to instead make it possible for the\n>>> planner to detect the use of the functions used in operators and treat\n>>> them as aliases of the operator? Or am I missing something w.r.t.\n>>> differences between operator and function invocation?\n>>>\n>>> E.g. indexes on `int8pl(my_bigint, 1)` does not match queries for\n>>> `my_bigint + 1` (and vice versa), while they should be able to support\n>>> that, as OPERATOR(pg_catalog.+(int8, int8)) 's function is int8pl.\n>>\n>> I have been thinking about something like this for a long time.\n>> Basically, we would merge pg_proc and pg_operator internally. Then, all\n>> the special treatment for operators would also be available to\n>> two-argument functions.\n> \n> And single-argument functions in case of prefix operators, right?\n\nRight.\n\n(The removal of postfix operators is helpful to remove ambiguity here.)\n\n\n\n", "msg_date": "Tue, 31 Jan 2023 11:21:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> On 12.01.23 14:55, Matthias van de Meent wrote:\n>>> Matter of taste, I guess. But more importantly, defining an operator\n>>> gives you many additional features that the planner can use to\n>>> optimize your query differently, which it can't do with functions. See\n>>> the COMMUTATOR, HASHES, etc. clause in the CREATE OPERATOR command.\n\n>> I see. Wouldn't it be better then to instead make it possible for the\n>> planner to detect the use of the functions used in operators and treat\n>> them as aliases of the operator? 
Or am I missing something w.r.t.\n>> differences between operator and function invocation?\n>> E.g. indexes on `int8pl(my_bigint, 1)` does not match queries for\n>> `my_bigint + 1` (and vice versa), while they should be able to support\n>> that, as OPERATOR(pg_catalog.+(int8, int8)) 's function is int8pl.\n\n> I have been thinking about something like this for a long time. \n> Basically, we would merge pg_proc and pg_operator internally. Then, all \n> the special treatment for operators would also be available to \n> two-argument functions.\n\nI had a thought about this ...\n\nI do not think this proposal is going anywhere as-written.\nThere seems very little chance that we can invent a syntax that\nis concise, non-ugly, and not likely to get blindsided by future\nSQL spec extensions. Even if we were sure that, say, \"{foo}\"\nwas safe from spec interference, the syntax \"a {foo} b\" has\nexactly nothing to recommend it compared to \"foo(a,b)\".\nIt's not shorter, it's not standard, it won't help any pre-existing\nqueries, and it can't use function-call features such as named\narguments.\n\nAs Matthias said, what we actually need is for the planner to be able\nto optimize function calls on the same basis as operators. We should\ntackle that directly rather than inventing new syntax.\n\nWe could go after that by inventing a bunch of new function properties\nto parallel operator properties, but there is a simpler way: just\nteach the planner to look to see if a function call is a call of the\nunderlying function of some operator, and if so treat it like that\noperator. Right now that'd be an expensive lookup, but we could\nremove that objection with an index on pg_operator.oprcode or a\nsingle new field in pg_proc.\n\nThis approach does have a couple of shortcomings:\n\n* You still have to invent an operator name, even if you never\nplan to use it in queries. 
This is just cosmetic though.\nIt's not going to matter if the operator name is long or looks like\nline noise, if you only need to use it a few times in setup DDL.\n\n* We could not extend this to support index functions with more than\ntwo arguments, a request we've heard once in awhile in the past.\nOur answer to that so far has been \"make a function/operator with\none indexed argument and one composite-type argument\", which is a\nbit of an ugly workaround but seems to be serviceable enough.\n\nOn the whole I don't think these shortcomings are big enough\nto justify all the work that would be involved in attaching\noperator-like optimization information directly to functions.\n(To mention just one nontrivial stumbling block: do you really\nwant to invent \"shell functions\" similar to the shell-operator\nhack? If not, how are you going to handle declaration of\ncommutator pairs?)\n\nIn the long run this might lead to thinking of pg_operator as\nan extension of pg_proc in the same way that pg_aggregate is.\nBut we have not unified pg_aggregate into pg_proc, and I don't\nthink anyone wants to, because pg_proc rows are undesirably\nwide already. There's a similar objection to attaching\noptimization fields directly to pg_proc.\n\nYou could imagine some follow-on internal cleanup like trying\nto unify FuncExpr and OpExpr into a single node type (carrying\na function OID and optionally an operator OID). But that need\nnot have any user-visible impact either; it'd mainly be good\nfor eliminating a lot of near-duplicate code.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 10:57:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "I wrote:\n> This approach does have a couple of shortcomings:\n\n> * You still have to invent an operator name, even if you never\n> plan to use it in queries. 
This is just cosmetic though.\n> It's not going to matter if the operator name is long or looks like\n> line noise, if you only need to use it a few times in setup DDL.\n\nOh, one other thought is that we could address that complaint\nby allowing OPERATOR(identifier), so that your DDL could use\na meaningful name for the operator. I see that we don't\nactually support OPERATOR() right now in CREATE OPERATOR or\nALTER OPERATOR:\n\nregression=# create operator operator(+) (function = foo);\nERROR: syntax error at or near \"(\"\nLINE 1: create operator operator(+) (function = foo);\n ^\n\nbut I doubt that'd be hard to fix.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 11:58:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Named Operators" }, { "msg_contents": "> On 8 Feb 2023, at 16:57, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I do not think this proposal is going anywhere as-written.\n\nReading this thread, it seems there is concensus against this proposal in its\ncurrent form, and no updated patch has been presented, so I will mark this as\nReturned with Feedback. Please feel free to resubmit to a future CF when there\nis renewed interest in working on this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 4 Jul 2023 15:11:37 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Named Operators" } ]
[ { "msg_contents": "Hi hackers,\n\nPlease find attached a patch to $SUBJECT.\n\nIt is a preliminary patch for [1].\n\nThe main ideas are: 1) to have consistent naming between the pg_stat_get*() functions\nand their associated counters and 2) to define the new macros in [1] the same way as it\nhas been done in 8018ffbf58 (aka not using the prefixes in the macros).\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n[1]: https://www.postgresql.org/message-id/flat/89606d96-cd94-af74-18f3-c7ab2b684ba2@gmail.com", "msg_date": "Thu, 12 Jan 2023 10:47:11 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "On 2023-Jan-12, Drouvot, Bertrand wrote:\n\n> Please find attached a patch to $SUBJECT.\n> \n> It is a preliminary patch for [1].\n> \n> The main ideas are: 1) to have consistent naming between the pg_stat_get*() functions\n> and their associated counters and 2) to define the new macros in [1] the same way as it\n> has been done in 8018ffbf58 (aka not using the prefixes in the macros).\n\nI don't like this at all. With these prefixes in place, it's much more\nlikely that you'll be able to grep the whole source tree and not run\ninto tons of false positives. If you remove them, that tends to be not\nvery workable. If we use these commits as precedent for expanding this\nsort of renaming across the tree, we'll soon end up with a very\ngrep-unfriendly code base.\n\nThe PGSTAT_ACCUM_DBCOUNT and PG_STAT_GET_DBENTRY macros are just one\nargument away from being able to generate the variable name including\nthe prefix, anyway. 
I don't know why we had to rename everything in\norder to do 8018ffbf5895, and if I had my druthers, we'd undo that.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 12 Jan 2023 18:12:52 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> I don't like this at all. With these prefixes in place, it's much more\n> likely that you'll be able to grep the whole source tree and not run\n> into tons of false positives. If you remove them, that tends to be not\n> very workable. If we use these commits as precedent for expanding this\n> sort of renaming across the tree, we'll soon end up with a very\n> grep-unfriendly code base.\n\n+1, that was my immediate fear as well.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 12 Jan 2023 12:23:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "Hi,\n\nOn 2023-01-12 18:12:52 +0100, Alvaro Herrera wrote:\n> On 2023-Jan-12, Drouvot, Bertrand wrote:\n>\n> > Please find attached a patch to $SUBJECT.\n> >\n> > It is a preliminary patch for [1].\n> >\n> > The main ideas are: 1) to have consistent naming between the pg_stat_get*() functions\n> > and their associated counters and 2) to define the new macros in [1] the same way as it\n> > has been done in 8018ffbf58 (aka not using the prefixes in the macros).\n>\n> I don't like this at all. With these prefixes in place, it's much more\n> likely that you'll be able to grep the whole source tree and not run\n> into tons of false positives. If you remove them, that tends to be not\n> very workable. 
If we use these commits as precedent for expanding this\n> sort of renaming across the tree, we'll soon end up with a very\n> grep-unfriendly code base.\n\nThe problem with that is that the prefixes are used completely inconsistently\n- and have been for a long time. And not just between the different type of\nstats. Compare e.g. PgStat_TableCounts with PgStat_TableXactStatus and\nPgStat_StatTabEntry. Whereas PgStat_FunctionCounts and PgStat_StatFuncEntry\nboth use it. Right now there's no way to remember where to add the t_ prefix,\nand where not.\n\nImo the reason to rename here isn't to abolish prefixes, it's to be halfway\nconsistent within closeby code. And the code overwhelmingly doesn't use the\nprefixes.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:07:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "On Thu, Jan 12, 2023 at 10:07:33AM -0800, Andres Freund wrote:\n> The problem with that is that the prefixes are used completely inconsistently\n> - and have been for a long time. And not just between the different type of\n> stats. Compare e.g. PgStat_TableCounts with PgStat_TableXactStatus and\n> PgStat_StatTabEntry. Whereas PgStat_FunctionCounts and PgStat_StatFuncEntry\n> both use it. Right now there's no way to remember where to add the t_ prefix,\n> and where not.\n>\n> Imo the reason to rename here isn't to abolish prefixes, it's to be halfway\n> consistent within closeby code. 
And the code overwhelmingly doesn't use the\n> prefixes.\n\nReading through the patch, two things are done, basically:\n- Remove the prefix "t_" from the fields related to table stats.\n- Remove the prefix "f_" from the fields related to function stats.\n\nAnd FWIW, with my recent lookups at the pgstat code, I'd like to think\nthat removing the prefixes is actually an improvement in consistency.\nIt will help in refactoring this code to use more macros, reducing its\nsize, as well.\n\nSo, the code paths where the structures are updated are pretty short\nso you know to what they refer to. And that's even more OK because\nnow the objects are split into their own files, so you know what you\nare dealing with even if the individual variable names are more\ncommon. That's for pgstat_relation.c and pgstat_function.c, first.\nThe second part of the changes involves pgstatfuncs.c, where all the\nobjects are grouped in a single file. We don't lose any information\nhere, either, as the code updated deals with a "tabentry" or\n"funcentry". There is a small part in pgstat.h where a few macros\nhave their fields renamed, where we manipulate a "rel", so that looks\nrather clear to me what we are dealing with, IMO.\n\n /* Total time previously charged to function, as of function start */\n- instr_time save_f_total_time;\n+ instr_time save_total_time;\nI have something to say about this one, though.. I find this change a\nbit confusing. 
It may be better kept as it is, or I think that we'd\nbetter rename also "save_total" and "start" to reflect in a better way\nwhat they do, because removing "f_" reduces the meaning of the field\nwith the two others in the same structure.\n--\nMichael", "msg_date": "Mon, 20 Mar 2023 16:32:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "Hi,\n\nOn 3/20/23 8:32 AM, Michael Paquier wrote:\n> \n> /* Total time previously charged to function, as of function start */\n> - instr_time save_f_total_time;\n> + instr_time save_total_time;\n> I have something to say about this one, though.. I find this change a\n> bit confusing. It may be better kept as it is, or I think that we'd\n> better rename also "save_total" and "start" to reflect in a better way\n> what they do, because removing "f_" reduces the meaning of the field\n> with the two others in the same structure.\n\nThanks for looking at it!\n\nGood point and keeping it as it is currently would not\naffect the work that is/will be done in [1].\n\nSo, please find attached V2 attached taking this comment into account.\n\n[1]: https://www.postgresql.org/message-id/flat/89606d96-cd94-af74-18f3-c7ab2b684ba2%40gmail.com\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 20 Mar 2023 10:05:21 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "On Mon, Mar 20, 2023 at 10:05:21AM +0100, Drouvot, Bertrand wrote:\n> Good point and keeping it as it is currently would not\n> affect the work that is/will be done in [1].\n\nI guess that I'm OK with that to get more of pgstatfuncs.c to use\nmacros for the function definitions there.. 
Alvaro, Tom, perhaps you\nstill think that this is unadapted? Based on the file split and the\nreferences to funcentry and tabentry, I think that's OK, but that\nstands just as one opinion among many..\n\n> So, please find attached V2 attached taking this comment into account. \n> [1]: https://www.postgresql.org/message-id/flat/89606d96-cd94-af74-18f3-c7ab2b684ba2%40gmail.com\n\nNice. I am pretty sure that finishing some of that is doable by the\nend of this CF to reduce the size of pgstatfuncs.c overall.\n--\nMichael", "msg_date": "Wed, 22 Mar 2023 10:28:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "On Mon, Mar 20, 2023 at 10:05:21AM +0100, Drouvot, Bertrand wrote:\n> Hi,\n> \n> On 3/20/23 8:32 AM, Michael Paquier wrote:\n> > \n> > /* Total time previously charged to function, as of function start */\n> > - instr_time save_f_total_time;\n> > + instr_time save_total_time;\n> > I have something to say about this one, though.. I find this change a\n> > bit confusing. 
It may be better kept as it is, or I think that we'd\n> > better rename also "save_total" and "start" to reflect in a better way\n> > what they do, because removing "f_" reduces the meaning of the field\n> > with the two others in the same structure.\n> \n> Thanks for looking at it!\n> \n> Good point and keeping it as it is currently would not\n> affect the work that is/will be done in [1].\n> \n> So, please find attached V2 attached taking this comment into account.\n\n> diff --git a/src/backend/utils/adt/pgstatfuncs.c b/src/backend/utils/adt/pgstatfuncs.c\n> index 35c6d46555..4f21fb2dc2 100644\n> --- a/src/backend/utils/adt/pgstatfuncs.c\n> +++ b/src/backend/utils/adt/pgstatfuncs.c\n> @@ -1552,7 +1552,7 @@ pg_stat_get_xact_tuples_inserted(PG_FUNCTION_ARGS)\n> \t\tresult = 0;\n> \telse\n> \t{\n> -\t\tresult = tabentry->t_counts.t_tuples_inserted;\n> +\t\tresult = tabentry->counts.tuples_inserted;\n\nThis comment still has the t_ prefix as does the one for tuples_updated\nand deleted.\n\notherwise, LGTM.\n\n> \t\t/* live subtransactions' counts aren't in t_tuples_inserted yet */\n> \t\tfor (trans = tabentry->trans; trans != NULL; trans = trans->upper)\n> \t\t\tresult += trans->tuples_inserted;\n> @@ -1573,7 +1573,7 @@ pg_stat_get_xact_tuples_updated(PG_FUNCTION_ARGS)\n> \t\tresult = 0;\n> \telse\n> \t{\n> -\t\tresult = tabentry->t_counts.t_tuples_updated;\n> +\t\tresult = tabentry->counts.tuples_updated;\n> \t\t/* live subtransactions' counts aren't in t_tuples_updated yet */\n> \t\tfor (trans = tabentry->trans; trans != NULL; trans = trans->upper)\n> \t\t\tresult += trans->tuples_updated;\n> @@ -1594,7 +1594,7 @@ pg_stat_get_xact_tuples_deleted(PG_FUNCTION_ARGS)\n> \t\tresult = 0;\n> \telse\n> \t{\n> -\t\tresult = tabentry->t_counts.t_tuples_deleted;\n> +\t\tresult = tabentry->counts.tuples_deleted;\n> \t\t/* live subtransactions' counts aren't in t_tuples_deleted yet */\n> \t\tfor (trans = tabentry->trans; trans != NULL; trans = trans->upper)\n> 
\t\t\tresult += trans->tuples_deleted;\n> @@ -1613,7 +1613,7 @@ pg_stat_get_xact_tuples_hot_updated(PG_FUNCTION_ARGS)\n> \tif ((tabentry = find_tabstat_entry(relid)) == NULL)\n> \t\tresult = 0;\n> \telse\n> -\t\tresult = (int64) (tabentry->t_counts.t_tuples_hot_updated);\n> +\t\tresult = (int64) (tabentry->counts.tuples_hot_updated);\n> \n> \tPG_RETURN_INT64(result);\n> }\n\n- Melanie\n\n\n", "msg_date": "Wed, 22 Mar 2023 14:52:23 -0400", "msg_from": "Melanie Plageman <melanieplageman@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "On Wed, Mar 22, 2023 at 02:52:23PM -0400, Melanie Plageman wrote:\n> This comment still has the t_ prefix as does the one for tuples_updated\n> and deleted.\n> \n> otherwise, LGTM.\n\nGood catch. pgstat_count_heap_update() has a t_tuples_hot_updated,\nand pgstat_update_heap_dead_tuples() a t_delta_dead_tuples on top of\nthe three you have just reported.\n\nI have grepped for all the fields renamed, and nothing else stands\nout. However, my eyes don't have a 100% accuracy, either.\n--\nMichael", "msg_date": "Thu, 23 Mar 2023 09:09:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "Hi,\n\nOn 3/23/23 1:09 AM, Michael Paquier wrote:\n> On Wed, Mar 22, 2023 at 02:52:23PM -0400, Melanie Plageman wrote:\n>> This comment still has the t_ prefix as does the one for tuples_updated\n>> and deleted.\n>>\n>> otherwise, LGTM.\n> \n> Good catch. pgstat_count_heap_update() has a t_tuples_hot_updated,\n> and pgstat_update_heap_dead_tuples() a t_delta_dead_tuples on top of\n> the three you have just reported.\n> \n> I have grepped for all the fields renamed, and nothing else stands\n> out. However, my eyes don't have a 100% accuracy, either.\n\nThank you both for your keen eye! 
I just did another check too and did not\nfind more than the ones you've just reported.\n\nPlease find attached V3 getting rid of them.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 23 Mar 2023 07:51:37 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "On Thu, Mar 23, 2023 at 07:51:37AM +0100, Drouvot, Bertrand wrote:\n> Thank you both for your keen eye! I just did another check too and did not\n> find more than the ones you've just reported.\n\nThis matches what I have, thanks!\n--\nMichael", "msg_date": "Thu, 23 Mar 2023 17:24:22 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" }, { "msg_contents": "On Thu, Mar 23, 2023 at 05:24:22PM +0900, Michael Paquier wrote:\n> On Thu, Mar 23, 2023 at 07:51:37AM +0100, Drouvot, Bertrand wrote:\n> > Thank you both for your keen eye! I just did another check too and did not\n> > find more than the ones you've just reported.\n> \n> This matches what I have, thanks!\n\nApplied that, after handling the new t_tuples_newpage_updated.\n--\nMichael", "msg_date": "Fri, 24 Mar 2023 08:56:35 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Remove nonmeaningful prefixes in PgStat_* fields" } ]
[ { "msg_contents": "Dear all,\n\nI think I've found a problem in logical replication that was introduced \nrecently in the patch:\n\nFix calculation of which GENERATED columns need to be updated\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3f7836ff651ad710fef52fa87b248ecdfc6468dc\n\nThere is an assertion which accidentally terminates logical replication \nworker process. The assertion was introduced in the patch. To reproduce \nthe problem Postgres should be compiled with enabled assertions. The \nproblem appears when executing UPDATE operation on a non-empty table \nwith GENERATED columns and a BEFORE UPDATE trigger. The problem seems to \nappear on the latest snapshots of 13 and 14 versions (sorry, I haven't \ntested other versions).\n\nStack:\n------\nTRAP: FailedAssertion(\"relinfo->ri_GeneratedExprs != NULL\", File: \n\"execUtils.c\", Line: 1292)\npostgres: logical replication worker for subscription 16401 \n(ExceptionalCondition+0x89)[0x55838760b902]\npostgres: logical replication worker for subscription 16401 \n(ExecGetExtraUpdatedCols+0x90)[0x558387314bd8]\npostgres: logical replication worker for subscription 16401 \n(ExecGetAllUpdatedCols+0x1c)[0x558387314c20]\npostgres: logical replication worker for subscription 16401 \n(ExecUpdateLockMode+0x19)[0x558387306ce3]\npostgres: logical replication worker for subscription 16401 \n(ExecBRUpdateTriggers+0xc7)[0x5583872debe8]\npostgres: logical replication worker for subscription 16401 \n(ExecSimpleRelationUpdate+0x122)[0x55838730dca7]\npostgres: logical replication worker for subscription 16401 \n(+0x43d32f)[0x55838745632f]\npostgres: logical replication worker for subscription 16401 \n(+0x43e382)[0x558387457382]\npostgres: logical replication worker for subscription 16401 \n(+0x43e5d3)[0x5583874575d3]\npostgres: logical replication worker for subscription 16401 \n(+0x43e76b)[0x55838745776b]\npostgres: logical replication worker for subscription 16401 
\n(ApplyWorkerMain+0x3ac)[0x558387457e8b]\npostgres: logical replication worker for subscription 16401 \n(StartBackgroundWorker+0x253)[0x5583874157ed]\npostgres: logical replication worker for subscription 16401 \n(+0x40e9c9)[0x5583874279c9]\npostgres: logical replication worker for subscription 16401 \n(+0x40eb43)[0x558387427b43]\npostgres: logical replication worker for subscription 16401 \n(+0x40fd28)[0x558387428d28]\n/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7f08cd44b520]\n/lib/x86_64-linux-gnu/libc.so.6(__select+0xbd)[0x7f08cd52474d]\npostgres: logical replication worker for subscription 16401 \n(+0x410ceb)[0x558387429ceb]\npostgres: logical replication worker for subscription 16401 \n(PostmasterMain+0xbf3)[0x55838742ac4d]\npostgres: logical replication worker for subscription 16401 \n(main+0x20c)[0x55838736076d]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f08cd432d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f08cd432e40]\n\nHow to reproduce:\n-----------------\n1. Create master-replica configuration with enabled logical replication. \nThe initial schema is shown below:\n\nCREATE TABLE gtest26 (\n a int PRIMARY KEY,\n b int GENERATED ALWAYS AS (a * 2) STORED\n );\n\nCREATE FUNCTION gtest_trigger_func() RETURNS trigger\n LANGUAGE plpgsql\n AS $$\n BEGIN\n IF tg_op IN ('DELETE', 'UPDATE') THEN\n RAISE INFO '%: %: old = %', TG_NAME, TG_WHEN, OLD;\n END IF;\n IF tg_op IN ('INSERT', 'UPDATE') THEN\n RAISE INFO '%: %: new = %', TG_NAME, TG_WHEN, NEW;\n END IF;\n IF tg_op = 'DELETE' THEN\n RETURN OLD;\n ELSE\n RETURN NEW;\n END IF;\n END\n $$;\n\nCREATE TRIGGER gtest1 BEFORE DELETE OR UPDATE ON gtest26\n FOR EACH ROW\n WHEN (OLD.b < 0) -- ok\n EXECUTE PROCEDURE gtest_trigger_func();\n\nINSERT INTO gtest26(a) values (-2), (0), (3)\n\n2. The problem appears if to execute the following sql on the master \nnode:\n\nUPDATE gtest26 SET a = a + 1;\n\nI'm not sure that this assertion is the proper one and how to properly \nfix the issue. 
That's why I'm asking for some help of the community. \nThank you in advance.\n\nWith best regards,\nVitaly\n\n\n", "msg_date": "Thu, 12 Jan 2023 13:23:57 +0300", "msg_from": "v.davydov@postgrespro.ru", "msg_from_op": true, "msg_subject": "UPDATE operation terminates logical replication receiver process due\n to an assertion" }, { "msg_contents": "On Thu, Jan 12, 2023 at 01:23:57PM +0300, v.davydov@postgrespro.ru wrote:\n> Dear all,\n> \n> I think I've found a problem in logical replication that was introduced\n> recently in the patch:\n> \n> Fix calculation of which GENERATED columns need to be updated\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=3f7836ff651ad710fef52fa87b248ecdfc6468dc\n\n> There is an assertion which accidentally terminates logical replication\n> worker process. The assertion was introduced in the patch. To reproduce the\n> problem Postgres should be compiled with enabled assertions. The problem\n> appears when executing UPDATE operation on a non-empty table with GENERATED\n> columns and a BEFORE UPDATE trigger. The problem seems to appear on the\n> latest snapshots of 13 and 14 versions (sorry, I haven't tested other\n> versions).\n> \n> Stack:\n> ------\n> TRAP: FailedAssertion(\"relinfo->ri_GeneratedExprs != NULL\", File: \"execUtils.c\", Line: 1292)\n\nYeah, confirmed under master branch and v15.\n\nTom ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 15 Jan 2023 10:24:49 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: UPDATE operation terminates logical replication receiver process\n due to an assertion" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Yeah, confirmed under master branch and v15.\n> Tom ?\n\nYeah, sorry, I've been absorbed in $other_stuff. Will look\nat this soon. 
My guess is that this logrep code path is\nmissing the necessary setup operation.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 11:34:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UPDATE operation terminates logical replication receiver process\n due to an assertion" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Jan 12, 2023 at 01:23:57PM +0300, v.davydov@postgrespro.ru wrote:\n>> TRAP: FailedAssertion(\"relinfo->ri_GeneratedExprs != NULL\", File: \"execUtils.c\", Line: 1292)\n\n> Yeah, confirmed under master branch and v15.\n\nv15? 
That assert is from 8bf6ec3ba, which wasn't back-patched.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 13:25:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UPDATE operation terminates logical replication receiver process\n due to an assertion" }, { "msg_contents": "On Sun, Jan 15, 2023 at 01:25:11PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Thu, Jan 12, 2023 at 01:23:57PM +0300, v.davydov@postgrespro.ru wrote:\n> >> TRAP: FailedAssertion(\"relinfo->ri_GeneratedExprs != NULL\", File: \"execUtils.c\", Line: 1292)\n> \n> > Yeah, confirmed under master branch and v15.\n> \n> v15? That assert is from 8bf6ec3ba, which wasn't back-patched.\n\nI misspoke, and had actually reproduced under master and v14:\n\nTRAP: FailedAssertion(\"relinfo->ri_GeneratedExprs != NULL\", File: \"execUtils.c\", Line: 1336, PID: 25692)\n\nThe assert isn't from 8bf6 (Improve handling of inherited GENERATED\nexpressions.), but rather:\n\ncommit 3f7836ff651ad710fef52fa87b248ecdfc6468dc\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Jan 5 14:12:17 2023 -0500\n\n Fix calculation of which GENERATED columns need to be updated.\n\nAnd in v14: \ncommit 8cd190e13a22dab12e86f7f1b59de6b9b128c784\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Thu Jan 5 14:12:17 2023 -0500\n\n Fix calculation of which GENERATED columns need to be updated.\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 15 Jan 2023 12:35:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: UPDATE operation terminates logical replication receiver process\n due to an assertion" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Sun, Jan 15, 2023 at 01:25:11PM -0500, Tom Lane wrote:\n>> v15? That assert is from 8bf6ec3ba, which wasn't back-patched.\n\n> The assert isn't from 8bf6 (Improve handling of inherited GENERATED\n> expressions.), but rather:\n\n> commit 3f7836ff651ad710fef52fa87b248ecdfc6468dc\n\nAh. I jumped to the wrong conclusion after failing to reproduce\non v15, but I must've fat-fingered the test case somehow.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 13:48:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: UPDATE operation terminates logical replication receiver process\n due to an assertion" } ]
[ { "msg_contents": "Hi,\n\nAs discussed [1], here's a patch to beautify pg_walinspect docs\nsimilar to pageinspect docs. The existing pg_walinspect docs calls out\nthe column names explicitly and then also shows them in the function\nexecution examples which is duplicate and too informative. Also \\x\nisn't used so some of the execution outputs are out of indentation.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/Y7+gQy/lOuWk4tFj@paquier.xyz\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 12 Jan 2023 17:29:39 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Beautify pg_walinspect docs a bit" }, { "msg_contents": "On Thu, Jan 12, 2023 at 05:29:39PM +0530, Bharath Rupireddy wrote:\n> As discussed [1], here's a patch to beautify pg_walinspect docs\n> similar to pageinspect docs. The existing pg_walinspect docs calls out\n> the column names explicitly and then also shows them in the function\n> execution examples which is duplicate and too informative. Also \\x\n> isn't used so some of the execution outputs are out of indentation.\n> \n> Thoughts?\n\nThanks, this looked basically fine, so applied. I have tweaked a few\nsentences while reviewing the docs, while on it. 
I have decided to\nremove the example where we specify per_record=true for\npg_get_wal_stats(), as it does not bring much value while bloating the\nwhole, and the parameter is clearly documented.\n--\nMichael", "msg_date": "Fri, 13 Jan 2023 09:33:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Beautify pg_walinspect docs a bit" }, { "msg_contents": "It looks like 58597ed accidentally added an \"end_lsn\" to the docs for\npg_get_wal_stats_till_end_of_wal().\n\ndiff --git a/doc/src/sgml/pgwalinspect.sgml b/doc/src/sgml/pgwalinspect.sgml\nindex 22677e54f2..3d7cdb95cc 100644\n--- a/doc/src/sgml/pgwalinspect.sgml\n+++ b/doc/src/sgml/pgwalinspect.sgml\n@@ -174,7 +174,7 @@ combined_size_percentage | 2.8634072910530795\n <varlistentry id=\"pgwalinspect-funcs-pg-get-wal-stats-till-end-of-wal\">\n <term>\n <function>\n- pg_get_wal_stats_till_end_of_wal(start_lsn pg_lsn, end_lsn pg_lsn, per_record boolean DEFAULT false)\n+ pg_get_wal_stats_till_end_of_wal(start_lsn pg_lsn, per_record boolean DEFAULT false)\n returns setof record\n </function>\n </term>\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 28 Feb 2023 11:57:40 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Beautify pg_walinspect docs a bit" }, { "msg_contents": "On Tue, Feb 28, 2023 at 11:57:40AM -0800, Nathan Bossart wrote:\n> It looks like 58597ed accidentally added an \"end_lsn\" to the docs for\n> pg_get_wal_stats_till_end_of_wal().\n\nIndeed. Fixed, thanks!\n--\nMichael", "msg_date": "Wed, 1 Mar 2023 08:42:48 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Beautify pg_walinspect docs a bit" } ]
[ { "msg_contents": "Hi hackers,\n\nI was running static analyser against PostgreSQL and found there're 2\nreturn statements in PL/Python module which is not safe. Patch is\nattached.\n\n-- \nBest Regards,\nXing", "msg_date": "Thu, 12 Jan 2023 23:19:29 +0800", "msg_from": "Xing Guo <higuoxing@gmail.com>", "msg_from_op": true, "msg_subject": "PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Thu, Jan 12, 2023 at 11:19:29PM +0800, Xing Guo wrote:\n> I was running static analyser against PostgreSQL and found there're 2\n> return statements in PL/Python module which is not safe. Patch is\n> attached.\n\nIs the problem that PG_exception_stack and error_context_stack aren't\nproperly reset?\n\n> @@ -690,12 +690,12 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r\n> \tPyObject *volatile pltdata = NULL;\n> \tchar\t *stroid;\n> \n> +\tpltdata = PyDict_New();\n> +\tif (!pltdata)\n> +\t\treturn NULL;\n> +\n> \tPG_TRY();\n> \t{\n> -\t\tpltdata = PyDict_New();\n> -\t\tif (!pltdata)\n> -\t\t\treturn NULL;\n> -\n> \t\tpltname = PLyUnicode_FromString(tdata->tg_trigger->tgname);\n> \t\tPyDict_SetItemString(pltdata, \"name\", pltname);\n> \t\tPy_DECREF(pltname);\n\nThere's another \"return\" later on in this PG_TRY block. I wonder if it's\npossible to detect this sort of thing at compile time.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 12 Jan 2023 10:44:33 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Thu, Jan 12, 2023 at 10:44:33AM -0800, Nathan Bossart wrote:\n> There's another \"return\" later on in this PG_TRY block. 
I wonder if it's\n> possible to detect this sort of thing at compile time.\n\nNote also:\nsrc/pl/tcl/pltcl.c- * PG_CATCH();\nsrc/pl/tcl/pltcl.c- * {\nsrc/pl/tcl/pltcl.c- * pltcl_subtrans_abort(interp, oldcontext, oldowner);\nsrc/pl/tcl/pltcl.c- * return TCL_ERROR;\nsrc/pl/tcl/pltcl.c- * }\n\nThis is documented once, repeated twice:\nsrc/pl/plpython/plpy_spi.c- * PG_CATCH();\nsrc/pl/plpython/plpy_spi.c- * {\nsrc/pl/plpython/plpy_spi.c- * <do cleanup>\nsrc/pl/plpython/plpy_spi.c- * PLy_spi_subtransaction_abort(oldcontext, oldowner);\nsrc/pl/plpython/plpy_spi.c- * return NULL;\nsrc/pl/plpython/plpy_spi.c- * }\n\nI don't quite get why this would be a sane thing to do here when\naborting a subtransaction in pl/python, but my experience with this\ncode is limited.\n--\nMichael", "msg_date": "Fri, 13 Jan 2023 10:45:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Hi,\n\nOn 2023-01-12 10:44:33 -0800, Nathan Bossart wrote:\n> On Thu, Jan 12, 2023 at 11:19:29PM +0800, Xing Guo wrote:\n> > @@ -690,12 +690,12 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo, PLyProcedure *proc, HeapTuple *r\n> > \tPyObject *volatile pltdata = NULL;\n> > \tchar\t *stroid;\n> > \n> > +\tpltdata = PyDict_New();\n> > +\tif (!pltdata)\n> > +\t\treturn NULL;\n> > +\n> > \tPG_TRY();\n> > \t{\n> > -\t\tpltdata = PyDict_New();\n> > -\t\tif (!pltdata)\n> > -\t\t\treturn NULL;\n> > -\n> > \t\tpltname = PLyUnicode_FromString(tdata->tg_trigger->tgname);\n> > \t\tPyDict_SetItemString(pltdata, \"name\", pltname);\n> > \t\tPy_DECREF(pltname);\n> \n> There's another \"return\" later on in this PG_TRY block. I wonder if it's\n> possible to detect this sort of thing at compile time.\n\nClang provides some annotations that allow to detect this kind of thing. I\nhacked up a test for this, and it finds quite a bit of prolematic\ncode. plpython is, uh, not being good? 
But also in plperl, pltcl.\n\nExample complaints:\n\n[776/1239 42 62%] Compiling C object src/pl/plpython/plpython3.so.p/plpy_exec.c.o\n../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:472:1: warning: no_returns_in_pg_try 'no_returns_handle' is not held on every path through here [-Wthread-safety-analysis]\n}\n^\n../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:417:2: note: no_returns_in_pg_try acquired here\n PG_TRY();\n ^\n../../../../home/andres/src/postgresql/src/include/utils/elog.h:424:7: note: expanded from macro 'PG_TRY'\n no_returns_start(no_returns_handle##__VA_ARGS__)\n ^\n...\n[785/1239 42 63%] Compiling C object src/pl/tcl/pltcl.so.p/pltcl.c.o\n../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1830:1: warning: no_returns_in_pg_try 'no_returns_handle' is not held on every path through here [-Wthread-safety-analysis]\n}\n^\n../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1809:2: note: no_returns_in_pg_try acquired here\n PG_CATCH();\n ^\n../../../../home/andres/src/postgresql/src/include/utils/elog.h:433:7: note: expanded from macro 'PG_CATCH'\n no_returns_start(no_returns_handle##__VA_ARGS__)\n ^\n\nNot perfect digestible, but also not too bad. I pushed the\nno_returns_start()/no_returns_stop() calls into all the PG_TRY related macros,\nbecause that causes the warning to point to block that the problem is\nin. E.g. above the first warning points to PG_TRY, the second to\nPG_CATCH. It'd work to just put it into PG_TRY and PG_END_TRY.\n\n\nClearly this would need a bunch more work, but it seems promising? I think\nthere'd be other uses than this.\n\n\nI briefly tried to use it for spinlocks. Mostly works and detects things like\nreturning with a spinlock held. 
But it does not like dynahash's habit of\nconditionally acquiring and releasing spinlocks.\n\nGreetings,\n\nAndres Freund", "msg_date": "Thu, 12 Jan 2023 21:49:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Hi,\n\nOn 2023-01-12 21:49:00 -0800, Andres Freund wrote:\n> Clearly this would need a bunch more work, but it seems promising? I think\n> there'd be other uses than this.\n> \n> I briefly tried to use it for spinlocks. Mostly works and detects things like\n> returning with a spinlock held. But it does not like dynahash's habit of\n> conditionally acquiring and releasing spinlocks.\n\nOne example is to prevent things like elog()/ereport()/SpinlockAcquire() while\nholding a spinlock\n\nThe "locks_excluded(thing)" attribute (which is just heuristic, doesn't require\nexpansive annotation like requires_capability(!thing)), can quite easily be\nused to trigger warnings about this kind of thing:\n\n../../../../home/andres/src/postgresql/src/backend/access/transam/xlog.c:6771:2: warning: cannot call function 'errstart' while no_nested_spinlock 'in_spinlock' is held [-Wthread-safety-analysis]\n elog(LOG, "logging with spinlock held");\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 12 Jan 2023 23:23:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Hi Nathan.\n\nOn 1/13/23, Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Thu, Jan 12, 2023 at 11:19:29PM +0800, Xing Guo wrote:\n>> I was running static analyser against PostgreSQL and found there're 2\n>> return statements in PL/Python module which is not safe. 
Patch is\n>> attached.\n>\n> Is the problem that PG_exception_stack and error_context_stack aren't\n> properly reset?\n\nYes, it is.\n\n>\n>> @@ -690,12 +690,12 @@ PLy_trigger_build_args(FunctionCallInfo fcinfo,\n>> PLyProcedure *proc, HeapTuple *r\n>> \tPyObject *volatile pltdata = NULL;\n>> \tchar\t *stroid;\n>>\n>> +\tpltdata = PyDict_New();\n>> +\tif (!pltdata)\n>> +\t\treturn NULL;\n>> +\n>> \tPG_TRY();\n>> \t{\n>> -\t\tpltdata = PyDict_New();\n>> -\t\tif (!pltdata)\n>> -\t\t\treturn NULL;\n>> -\n>> \t\tpltname = PLyUnicode_FromString(tdata->tg_trigger->tgname);\n>> \t\tPyDict_SetItemString(pltdata, \"name\", pltname);\n>> \t\tPy_DECREF(pltname);\n>\n> There's another \"return\" later on in this PG_TRY block. I wonder if it's\n> possible to detect this sort of thing at compile time.\n\nThanks for pointing it out! I missed some return statements. Because\nmy checker is treating 'return statement in PG_TRY() block' as errors.\nIt stops compiling when it finds the code pattern and I forget to\ndouble check it.\n\nMy checker is based on AST matcher and it's possible to turn it into a\nclang-tidy plugin or something similar. 
I want to make it easy to\nintegrate with scan-build, so I implement it as a static analyzer\nplugin :)\n\nIf anyone is interested in the checker itself, the source code can be\nfound here[1]:\n\n> Note also:\n> src/pl/tcl/pltcl.c- * PG_CATCH();\n> src/pl/tcl/pltcl.c- * {\n> src/pl/tcl/pltcl.c- * pltcl_subtrans_abort(interp, oldcontext,\n> oldowner);\n> src/pl/tcl/pltcl.c- * return TCL_ERROR;\n> src/pl/tcl/pltcl.c- * }\n>\n> This is documented once, repeated twice:\n> src/pl/plpython/plpy_spi.c- * PG_CATCH();\n> src/pl/plpython/plpy_spi.c- * {\n> src/pl/plpython/plpy_spi.c- * <do cleanup>\n> src/pl/plpython/plpy_spi.c- * PLy_spi_subtransaction_abort(oldcontext,\n> oldowner);\n> src/pl/plpython/plpy_spi.c- * return NULL;\n> src/pl/plpython/plpy_spi.c- * }\n>\n> I don't quite get why this would be a sane thing to do here when\n> aborting a subtransaction in pl/python, but my experience with this\n> code is limited.\n\nHi Michael,\n\nI'll try to understand what's going on in your pasted codes. 
Thanks\nfor pointing it out!\n\n> Example complaints:\n>\n> [776/1239 42 62%] Compiling C object\n> src/pl/plpython/plpython3.so.p/plpy_exec.c.o\n> ../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:472:1:\n> warning: no_returns_in_pg_try 'no_returns_handle' is not held on every path\n> through here [-Wthread-safety-analysis]\n> }\n> ^\n> ../../../../home/andres/src/postgresql/src/pl/plpython/plpy_exec.c:417:2:\n> note: no_returns_in_pg_try acquired here\n> PG_TRY();\n> ^\n> ../../../../home/andres/src/postgresql/src/include/utils/elog.h:424:7: note:\n> expanded from macro 'PG_TRY'\n> no_returns_start(no_returns_handle##__VA_ARGS__)\n> ^\n> ...\n> [785/1239 42 63%] Compiling C object src/pl/tcl/pltcl.so.p/pltcl.c.o\n> ../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1830:1: warning:\n> no_returns_in_pg_try 'no_returns_handle' is not held on every path through\n> here [-Wthread-safety-analysis]\n> }\n> ^\n> ../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1809:2: note:\n> no_returns_in_pg_try acquired here\n> PG_CATCH();\n> ^\n> ../../../../home/andres/src/postgresql/src/include/utils/elog.h:433:7: note:\n> expanded from macro 'PG_CATCH'\n> no_returns_start(no_returns_handle##__VA_ARGS__)\n> ^\n\nHi Andres,\n\nYour patch looks interesting and useful. I will play with it in the\nnext following days. I'm burried with other works recently, so my\nreply may delay.\n\n> Not perfect digestible, but also not too bad. I pushed the\n> no_returns_start()/no_returns_stop() calls into all the PG_TRY related\n> macros,\n> because that causes the warning to point to block that the problem is\n> in. E.g. above the first warning points to PG_TRY, the second to\n> PG_CATCH. It'd work to just put it into PG_TRY and PG_END_TRY.\n>\n>\n> Clearly this would need a bunch more work, but it seems promising? I think\n> there'd be other uses than this.\n>\n>\n> I briefly tried to use it for spinlocks. 
Mostly works and detects things\n> like\n> returning with a spinlock held. But it does not like dynahash's habit of\n> conditionally acquiring and releasing spinlocks.\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n[1] https://github.com/higuoxing/clang-plugins/blob/main/lib/ReturnInPgTryBlockChecker.cpp\n\n-- \nBest Regards,\nXing\n\n\n\n\n\n\n\n\n\nOn 1/13/23, Andres Freund <andres@anarazel.de> wrote:\n> Hi,\n>\n> On 2023-01-12 21:49:00 -0800, Andres Freund wrote:\n>> Clearly this would need a bunch more work, but it seems promising? I\n>> think\n>> there'd be other uses than this.\n>>\n>> I briefly tried to use it for spinlocks. Mostly works and detects things\n>> like\n>> returning with a spinlock held. But it does not like dynahash's habit of\n>> conditionally acquiring and releasing spinlocks.\n>\n> One example is to prevent things like elog()/ereport()/SpinlockAcquire()\n> while\n> holding a spinlock\n>\n> The \"locks_excluded(thing)\" attribute (which is just heuristic, doesn't\n> require\n> expansive annotation like requires_capability(!thing)), can quite easily be\n> used to trigger warnings about this kind of thing:\n>\n> ../../../../home/andres/src/postgresql/src/backend/access/transam/xlog.c:6771:2:\n> warning: cannot call function 'errstart' while no_nested_spinlock\n> 'in_spinlock' is held [-Wthread-safety-analysis]\n> elog(LOG, \"logging with spinlock held\");\n>\n>\n> Greetings,\n>\n> Andres Freund\n>\n\n\n-- \nBest Regards,\nXing\n\n\n", "msg_date": "Fri, 13 Jan 2023 23:00:14 +0800", "msg_from": "Xing Guo <higuoxing@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Thu, Jan 12, 2023 at 09:49:00PM -0800, Andres Freund wrote:\n> On 2023-01-12 10:44:33 -0800, Nathan Bossart wrote:\n>> There's another \"return\" later on in this PG_TRY block. 
I wonder if it's\n>> possible to detect this sort of thing at compile time.\n> \n> Clang provides some annotations that allow to detect this kind of thing. I\n> hacked up a test for this, and it finds quite a bit of prolematic\n> code.\n\nNice!\n\n> plpython is, uh, not being good? But also in plperl, pltcl.\n\nYikes.\n\n> ../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1830:1: warning: no_returns_in_pg_try 'no_returns_handle' is not held on every path through here [-Wthread-safety-analysis]\n> }\n> ^\n> ../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1809:2: note: no_returns_in_pg_try acquired here\n> PG_CATCH();\n> ^\n> ../../../../home/andres/src/postgresql/src/include/utils/elog.h:433:7: note: expanded from macro 'PG_CATCH'\n> no_returns_start(no_returns_handle##__VA_ARGS__)\n> ^\n> \n> Not perfect digestible, but also not too bad. I pushed the\n> no_returns_start()/no_returns_stop() calls into all the PG_TRY related macros,\n> because that causes the warning to point to block that the problem is\n> in. E.g. above the first warning points to PG_TRY, the second to\n> PG_CATCH. It'd work to just put it into PG_TRY and PG_END_TRY.\n\nThis seems roughly as digestible as the pg_prevent_errno_in_scope stuff.\nHowever, on my macOS machine with clang 14.0.0, the messages say \"mutex\"\ninstead of \"no_returns_in_pg_try,\" which is unfortunate since that's the\npart that would clue me into what the problem is. I suppose it'd be easy\nenough to figure out after a grep or two, though.\n\n> Clearly this would need a bunch more work, but it seems promising? I think\n> there'd be other uses than this.\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 13 Jan 2023 10:03:35 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." 
}, { "msg_contents": "Hi,\n\nI revised my patch, added the missing one that Nathan mentioned.\n\nAre there any unsafe codes in pltcl.c? The return statement is in the\nPG_CATCH() block, I think the exception stack has been recovered in\nPG_CATCH block so the return statement in PG_CATCH block should be ok?\n\n```\nPG_TRY();\n{\nUTF_BEGIN;\nereport(level,\n(errcode(ERRCODE_EXTERNAL_ROUTINE_EXCEPTION),\nerrmsg(\"%s\", UTF_U2E(Tcl_GetString(objv[2])))));\nUTF_END;\n}\nPG_CATCH();\n{\nErrorData *edata;\n\n/* Must reset elog.c's state */\nMemoryContextSwitchTo(oldcontext);\nedata = CopyErrorData();\nFlushErrorState();\n\n/* Pass the error data to Tcl */\npltcl_construct_errorCode(interp, edata);\nUTF_BEGIN;\nTcl_SetObjResult(interp, Tcl_NewStringObj(UTF_E2U(edata->message), -1));\nUTF_END;\nFreeErrorData(edata);\n\nreturn TCL_ERROR;\n}\nPG_END_TRY();\n```\n\nBest Regards,\nXing\n\n\n\n\n\n\n\n\nOn Sat, Jan 14, 2023 at 2:03 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Thu, Jan 12, 2023 at 09:49:00PM -0800, Andres Freund wrote:\n> > On 2023-01-12 10:44:33 -0800, Nathan Bossart wrote:\n> >> There's another \"return\" later on in this PG_TRY block. I wonder if\n> it's\n> >> possible to detect this sort of thing at compile time.\n> >\n> > Clang provides some annotations that allow to detect this kind of thing.\n> I\n> > hacked up a test for this, and it finds quite a bit of prolematic\n> > code.\n>\n> Nice!\n>\n> > plpython is, uh, not being good? 
But also in plperl, pltcl.\n>\n> Yikes.\n>\n> > ../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1830:1:\n> warning: no_returns_in_pg_try 'no_returns_handle' is not held on every path\n> through here [-Wthread-safety-analysis]\n> > }\n> > ^\n> > ../../../../home/andres/src/postgresql/src/pl/tcl/pltcl.c:1809:2: note:\n> no_returns_in_pg_try acquired here\n> > PG_CATCH();\n> > ^\n> > ../../../../home/andres/src/postgresql/src/include/utils/elog.h:433:7:\n> note: expanded from macro 'PG_CATCH'\n> > no_returns_start(no_returns_handle##__VA_ARGS__)\n> > ^\n> >\n> > Not perfect digestible, but also not too bad. I pushed the\n> > no_returns_start()/no_returns_stop() calls into all the PG_TRY related\n> macros,\n> > because that causes the warning to point to block that the problem is\n> > in. E.g. above the first warning points to PG_TRY, the second to\n> > PG_CATCH. It'd work to just put it into PG_TRY and PG_END_TRY.\n>\n> This seems roughly as digestible as the pg_prevent_errno_in_scope stuff.\n> However, on my macOS machine with clang 14.0.0, the messages say \"mutex\"\n> instead of \"no_returns_in_pg_try,\" which is unfortunate since that's the\n> part that would clue me into what the problem is. I suppose it'd be easy\n> enough to figure out after a grep or two, though.\n>\n> > Clearly this would need a bunch more work, but it seems promising? I\n> think\n> > there'd be other uses than this.\n>\n> +1\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>", "msg_date": "Mon, 16 Jan 2023 21:03:43 +0800", "msg_from": "Xing Guo <higuoxing@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Xing Guo <higuoxing@gmail.com> writes:\n> Are there any unsafe codes in pltcl.c? 
The return statement is in the\n> PG_CATCH() block, I think the exception stack has been recovered in\n> PG_CATCH block so the return statement in PG_CATCH block should be ok?\n\nYes, the stack has already been unwound at the start of a PG_CATCH\n(or PG_FINALLY) block, so there is no reason to avoid returning\nout of those.\n\nIn principle you could also mess things up with a \"continue\", \"break\",\nor \"goto\" leading out of PG_TRY. That's probably far less likely\nthan \"return\", but I wonder whether Andres' compiler hack will\ncatch that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Jan 2023 10:29:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Mon, Jan 16, 2023 at 11:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Xing Guo <higuoxing@gmail.com> writes:\n> > Are there any unsafe codes in pltcl.c? The return statement is in the\n> > PG_CATCH() block, I think the exception stack has been recovered in\n> > PG_CATCH block so the return statement in PG_CATCH block should be ok?\n>\n> Yes, the stack has already been unwound at the start of a PG_CATCH\n> (or PG_FINALLY) block, so there is no reason to avoid returning\n> out of those.\n>\n> In principle you could also mess things up with a \"continue\", \"break\",\n> or \"goto\" leading out of PG_TRY. That's probably far less likely\n> than \"return\", but I wonder whether Andres' compiler hack will\n> catch that.\n>\n> regards, tom lane\n>\n\nThank you Tom,\n\nBased on your comments, I've refactored my clang checker[1], now it can\nwarn about the following patterns:\n1. return statement in PG_TRY(). We've catched all of them in this thread.\n2. continue statement in PG_TRY() *unless* it's in for/while/do-while\nstatements.\n3. break statement in PG_TRY() *unless* it's in for/while/do-while/switch\nstatements.\n4. 
goto statement in PG_TRY() *unless* the label it points to is in the\nsame PG_TRY block.\n\nGood news is that, there's no patterns (2, 3, 4) in Postgres source tree\nand we've catched all of the return statements in the PG_TRY block in this\nthread.\n\n[1]\nhttps://github.com/higuoxing/clang-plugins/blob/main/lib/ReturnInPgTryBlockChecker.cpp\n\nBest Regards,\nXing", "msg_date": "Wed, 18 Jan 2023 20:31:04 +0800", "msg_from": "Xing Guo <higuoxing@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Hi,\n\nOn 2023-01-16 10:29:03 -0500, Tom Lane wrote:\n> Xing Guo <higuoxing@gmail.com> writes:\n> > Are there any unsafe codes in pltcl.c? The return statement is in the\n> > PG_CATCH() block, I think the exception stack has been recovered in\n> > PG_CATCH block so the return statement in PG_CATCH block should be ok?\n> \n> Yes, the stack has already been unwound at the start of a PG_CATCH\n> (or PG_FINALLY) block, so there is no reason to avoid returning\n> out of those.\n\nIt's probably true for PG_CATCH, but for PG_FINALLY? Won't returning lead us\nto miss rethrowing the error? I guess you can argue that's desired, but then\nwhy would one use PG_FINALLY?\n\n\nI'm somewhat dubious about allowing to return inside PG_CATCH, even if it's\nsafe today.\n\n\n> In principle you could also mess things up with a \"continue\", \"break\",\n> or \"goto\" leading out of PG_TRY. That's probably far less likely\n> than \"return\", but I wonder whether Andres' compiler hack will\n> catch that.\n\nI haven't tested it, but it should - it basically traces every path and sees\nwhether there's any way the \"capability\" isn't released. 
To the point that\nit's very annoying in other contexts, because it doesn't deal well with\nconditional lock acquisition/releases.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 19 Jan 2023 17:07:11 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Thu, Jan 19, 2023 at 05:07:11PM -0800, Andres Freund wrote:\n> I'm somewhat dubious about allowing to return inside PG_CATCH, even if it's\n> safe today.\n\n+1. Unless there are known use-cases, IMHO it'd be better to restrict it\nto prevent future compatibility breaks as PG_TRY evolves.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 11:02:01 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Here's a new version of the patch. Besides adding comments and a commit\nmessage, I made sure to decrement the reference count for pltargs in the\nPG_CATCH block (which means that pltargs likely needs to be volatile). I'm\nnot too wild about moving the chunk of code for pltargs like this, but I\nhaven't thought of a better option. We could error instead of returning\nNULL, but IIUC that would go against d0aa965's stated purpose.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 May 2023 13:21:16 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Here's a new version of the patch. 
Besides adding comments and a commit\n> message, I made sure to decrement the reference count for pltargs in the\n> PG_CATCH block (which means that pltargs likely needs to be volatile).\n\nHmm, actually I think these changes should allow you to *remove* some\nvolatile markers. IIUC, you need volatile for variables that are declared\noutside PG_TRY but modified within it. That is the case for these\npointers as the code stands, but your patch is changing them to the\nnon-risky case where they are assigned once before entering PG_TRY.\n\n(My mental model of this is that without \"volatile\", the compiler\nmay keep the variable in a register, creating the hazard that longjmp\nwill revert the variable's value to what it was at setjmp time thanks\nto the register save/restore that those functions do. But if it hasn't\nchanged value since entering PG_TRY, then that doesn't matter.)\n\n> I'm\n> not too wild about moving the chunk of code for pltargs like this, but I\n> haven't thought of a better option. We could error instead of returning\n> NULL, but IIUC that would go against d0aa965's stated purpose.\n\nAgreed, throwing an error in these situations doesn't improve matters.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 03 May 2023 16:33:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Wed, May 03, 2023 at 04:33:32PM -0400, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> Here's a new version of the patch. Besides adding comments and a commit\n>> message, I made sure to decrement the reference count for pltargs in the\n>> PG_CATCH block (which means that pltargs likely needs to be volatile).\n> \n> Hmm, actually I think these changes should allow you to *remove* some\n> volatile markers. IIUC, you need volatile for variables that are declared\n> outside PG_TRY but modified within it. 
That is the case for these\n> pointers as the code stands, but your patch is changing them to the\n> non-risky case where they are assigned once before entering PG_TRY.\n> \n> (My mental model of this is that without \"volatile\", the compiler\n> may keep the variable in a register, creating the hazard that longjmp\n> will revert the variable's value to what it was at setjmp time thanks\n> to the register save/restore that those functions do. But if it hasn't\n> changed value since entering PG_TRY, then that doesn't matter.)\n\nAh, I think you are right. elog.h states as follows:\n\n * Note: if a local variable of the function containing PG_TRY is modified\n * in the PG_TRY section and used in the PG_CATCH section, that variable\n * must be declared \"volatile\" for POSIX compliance. This is not mere\n * pedantry; we have seen bugs from compilers improperly optimizing code\n * away when such a variable was not marked. Beware that gcc's -Wclobbered\n * warnings are just about entirely useless for catching such oversights.\n\nWith this change, pltdata isn't modified in the PG_TRY section, and the\nonly modification of pltargs happens after all elogs. It might be worth\nkeeping pltargs volatile in case someone decides to add an elog() in the\nfuture, but I see no need to keep it for pltdata.\n \n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 3 May 2023 13:58:38 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Wed, May 03, 2023 at 01:58:38PM -0700, Nathan Bossart wrote:\n> With this change, pltdata isn't modified in the PG_TRY section, and the\n> only modification of pltargs happens after all elogs. 
It might be worth\n> keeping pltargs volatile in case someone decides to add an elog() in the\n> future, but I see no need to keep it for pltdata.\n\nHere's a new patch that removes the volatile marker from pltdata.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 May 2023 21:54:13 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Wed, May 03, 2023 at 09:54:13PM -0700, Nathan Bossart wrote:\n> Here's a new patch that removes the volatile marker from pltdata.\n\nGah, right after I sent that, I realized we can remove one more volatile\nmarker. Sorry for the noise.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 3 May 2023 22:01:08 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> Gah, right after I sent that, I realized we can remove one more volatile\n> marker. Sorry for the noise.\n\nHmm, I'm not sure why PLy_trigger_build_args's pltargs needs to\ngain a \"volatile\" here? LGTM otherwise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 04 May 2023 08:39:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "On Thu, May 04, 2023 at 08:39:03AM -0400, Tom Lane wrote:\n> Hmm, I'm not sure why PLy_trigger_build_args's pltargs needs to\n> gain a \"volatile\" here? LGTM otherwise.\n\nI removed that new \"volatile\" marker before committing. 
I was trying to\nfuture-proof a bit and follow elog.h's recommendation to the letter, but\nfollowing your mental model upthread, it doesn't seem to be strictly\nnecessary, and we'd need to set pltargs to NULL after decrementing its\nreference count in the PG_TRY section for such future-proofing to be\neffective, anyway.\n\nThank you for reviewing!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 4 May 2023 16:42:35 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." }, { "msg_contents": "Sorry for not responding to this thread for a long time and a huge thank\nyou for pushing it forward!\n\nBest Regards,\nXing\n\n\n\n\n\n\n\n\nOn Fri, May 5, 2023 at 7:42 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Thu, May 04, 2023 at 08:39:03AM -0400, Tom Lane wrote:\n> > Hmm, I'm not sure why PLy_trigger_build_args's pltargs needs to\n> > gain a \"volatile\" here? LGTM otherwise.\n>\n> I removed that new \"volatile\" marker before committing. I was trying to\n> future-proof a bit and follow elog.h's recommendation to the letter, but\n> following your mental model upthread, it doesn't seem to be strictly\n> necessary, and we'd need to set pltargs to NULL after decrementing its\n> reference count in the PG_TRY section for such future-proofing to be\n> effective, anyway.\n>\n> Thank you for reviewing!\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n", "msg_date": "Fri, 5 May 2023 09:40:32 +0800", "msg_from": "Xing Guo <higuoxing@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PL/Python: Fix return in the middle of PG_TRY() block." } ]
[ { "msg_contents": "What's the distinction between errdetail and errdetail_log in the ereport interface?\n\n", "msg_date": "Thu, 12 Jan 2023 12:28:39 -0800", "msg_from": "Christophe Pettus <xof@thebuild.com>", "msg_from_op": true, "msg_subject": "errdetail vs errdetail_log?" }, { "msg_contents": "On 2023-01-12 12:28:39 -0800, Christophe Pettus wrote:\n> What's the distinction between errdetail and errdetail_log in the ereport interface?\n\nOnly goes to the server log, not to the client.\n\n\n", "msg_date": "Thu, 12 Jan 2023 12:35:32 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: errdetail vs errdetail_log?" }, { "msg_contents": "\n\n> On Jan 12, 2023, at 12:35, Andres Freund <andres@anarazel.de> wrote:\n> \n> On 2023-01-12 12:28:39 -0800, Christophe Pettus wrote:\n>> What's the distinction between errdetail and errdetail_log in the ereport interface?\n> \n> Only goes to the server log, not to the client.\n\nThanks!\n\n", "msg_date": "Thu, 12 Jan 2023 13:07:04 -0800", "msg_from": "Christophe Pettus <xof@thebuild.com>", "msg_from_op": true, "msg_subject": "Re: errdetail vs errdetail_log?" } ]
[ { "msg_contents": "Hi,\n\nWhile working on the postmaster latch stuff, one of the things I\nlooked into, but de-scoped for now, is how the postmaster code would\nlook if it didn't use global variables to track its sockets, children\nand state (ie now that it's no longer necessary for technical\nreasons). Here's the quick experimental patch I came up with that\nlifts most of the global variables out of postmaster.c and puts them\ninto a struct Postmaster, which is allocated in the postmaster and\nfreed in forked children. Then 'pm' gets passed around to postmaster\nsubroutines and all references to X are replaced with pm->X (so\npm->ListenSockets, pm->WalWriterPid, pm->WalReceiverRequested, etc).\n\nUnfortunately bgworker.c isn't quite so easy to refactor along these\nlines, because its list of background workers, which you might think\nshould be in Postmaster private memory in the Postmaster struct much\nlike pm->BackendList, also needs to be accessible globally for\nextensions to be able to register them in their init hook. Perhaps\nthere should be separate 'running' and 'registered' worker lists.\nThat stopped me in my tracks (decisions are so much harder than\nmechanical changes...), but I thought I'd share this concept patch\nanyway... This is not a proposal for 16, more of a sketch to see what\npeople's appetite is for global variable removal projects, which\n(IMHO) increase clarity about module boundaries, but I guess also have\nback-patching and code churn costs.", "msg_date": "Fri, 13 Jan 2023 12:11:53 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Experimenting with Postmaster variable scope" } ]
[ { "msg_contents": "Hi,\n\nThis is a follow-up for commit c94ae9d8. It's in the spirit of other\nrecent changes to remove noise from ancient pre-standard systems.\n\nThe reason we introduced PG_SETMASK() in the first place was to\nsupport one particular system that was very slow to adopt the POSIX\nsignals stuff: NeXTSTEP 3.x.\n\n From some time in the dark age before our current repo begins until\n'97 we used sigprocmask() freely. Then commit a5494a2d added a\nsigsetmask() fallback for NeXTSTEP (that's a pre-standard function\ninherited from '80s BSD). In 1999 we added the PG_SETMASK() macro to\navoid repeating #ifdef HAVE_SIGPROCMASK to select between them at each\ncall site (commit 47937403676). I have no personal knowledge of those\nsystems; I wonder if it was already effectively quite defunct while we\nwere adding the macro, but I dunno (NS 4.x never shipped?, but its\nliving descendent OSX had already shipped that year).\n\nThen we invented a bogus reason to need the macro for a couple more\ndecades: our Windows simulated signal layer accidentally implemented\nthe old BSD interface instead of the standard one, as complained about\nin commit a65e0864.\n\nThat's all ancient history now, and I think we might as well drop the\nmacro to make our source a tiny bit less weird for new players, with a\nslightly richer interface. Trivial patch attached.", "msg_date": "Fri, 13 Jan 2023 14:00:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "PG_SETMASK() archeology" }, { "msg_contents": "On Fri, Jan 13, 2023 at 02:00:05PM +1300, Thomas Munro wrote:\n> The reason we introduced PG_SETMASK() in the first place was to\n> support one particular system that was very slow to adopt the POSIX\n> signals stuff: NeXTSTEP 3.x.\n> \n> From some time in the dark age before our current repo begins until\n> '97 we used sigprocmask() freely. 
Then commit a5494a2d added a\n> sigsetmask() fallback for NeXTSTEP (that's a pre-standard function\n> inherited from '80s BSD). In 1999 we added the PG_SETMASK() macro to\n> avoid repeating #ifdef HAVE_SIGPROCMASK to select between them at each\n> call site (commit 47937403676). I have no personal knowledge of those\n> systems; I wonder if it was already effectively quite defunct while we\n> were adding the macro, but I dunno (NS 4.x never shipped?, but its\n> living descendent OSX had already shipped that year).\n> \n> Then we invented a bogus reason to need the macro for a couple more\n> decades: our Windows simulated signal layer accidentally implemented\n> the old BSD interface instead of the standard one, as complained about\n> in commit a65e0864.\n\nI found this very interesting. Thanks for sharing.\n\n> That's all ancient history now, and I think we might as well drop the\n> macro to make our source a tiny bit less weird for new players, with a\n> slightly richer interface. Trivial patch attached.\n\n+1, LGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 10:49:10 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: PG_SETMASK() archeology" } ]
[ { "msg_contents": "Hi,\n\nIn CustomScan cost estimator, where PlannerInfo and RelOptInfo are passed,\nI want to get access to the relation stats (for example pg_stat_all_tables)\nby calling pg_stat_fetch_stat_tabentry(). However, I don't have access to\nrelid to pass to this function. For a sample relation, when I hardcode the\nrelid (for example 16385), it works. However, RelOptInfo->relid is always 1\n(for whatever relation the query is scanning). Why this happens and how to\nget access to the correct relid (16385) as in pg_stat_all_tables?\n\nThank you!\n\nHi,In CustomScan cost estimator, where PlannerInfo and RelOptInfo are passed, I want to get access to the relation stats (for example pg_stat_all_tables) by calling pg_stat_fetch_stat_tabentry(). However, I don't have access to relid to pass to this function. For a sample relation, when I hardcode the relid (for example 16385), it works. However, RelOptInfo->relid is always 1 (for whatever relation the query is scanning). Why this happens and how to get access to the correct relid (16385) as in pg_stat_all_tables?Thank you!", "msg_date": "Thu, 12 Jan 2023 17:48:59 -0800", "msg_from": "Amin <amin.fallahi@gmail.com>", "msg_from_op": true, "msg_subject": "Get relid for a relation" }, { "msg_contents": "Amin <amin.fallahi@gmail.com> writes:\n> In CustomScan cost estimator, where PlannerInfo and RelOptInfo are passed,\n> I want to get access to the relation stats (for example pg_stat_all_tables)\n> by calling pg_stat_fetch_stat_tabentry(). However, I don't have access to\n> relid to pass to this function.\n\nSure you do. 
The existing code, eg in selfuncs.c, does it about like\nthis:\n\n RangeTblEntry *rte = planner_rt_fetch(rel->relid, root);\n\n Assert(rte->rtekind == RTE_RELATION);\n relid = rte->relid;\n Assert(relid != InvalidOid);\n ...\n vardata.statsTuple = SearchSysCache3(STATRELATTINH,\n ObjectIdGetDatum(relid),\n Int16GetDatum(colnum),\n BoolGetDatum(rte->inh));\n\nThis is maybe a bit confusing, in that rel->relid is a range\ntable index but rte->relid is an OID.\n\nFWIW, I seriously doubt that the numbers kept by the pg_stat mechanisms\nare what you want for query planning purposes.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 13 Jan 2023 09:45:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Get relid for a relation" } ]
[ { "msg_contents": "Hi,\n\nThe commit 7265dbffad7feac6ea9d373828583b5d3d152e07 has added a script\nin src/backend/utils/misc/check_guc that cross-checks the consistency\nof the GUCs with postgresql.conf.sample, making sure that its format\nis in line with what guc.c has. As per the commit message, the\nparameters which are not listed as NOT_IN_SAMPLE in guc.c should be\npresent in postgresql.conf.sample. But I have observed a test case\nfailure when the parameters which are listed as GUC_NO_SHOW_ALL in\nguc.c and if it is present in postgresql.conf.sample. I feel this\nbehaviour is not expected and this should be fixed. I spent some time\non the analysis and found that query [1] is used to fetch all the\nparameters which are not listed as NOT_IN_SAMPLE. But the pg_settings\nview does not return the parameters listed as GUC_NO_SHOW_ALL. Hence\nthese records will be missed. Please share your thoughts. I would like\nto work on the patch if a fix is required.\n\n\n[1]:\nSELECT name FROM pg_settings WHERE NOT 'NOT_IN_SAMPLE' = ANY\n(pg_settings_get_flags(name)) AND name <> 'config_file' ORDER BY 1;\n\nThanks & Regards,\nNitin Jadhav\n\n\n", "msg_date": "Fri, 13 Jan 2023 18:15:38 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Fri, Jan 13, 2023 at 06:15:38PM +0530, Nitin Jadhav wrote:\n> Hi,\n> \n> The commit 7265dbffad7feac6ea9d373828583b5d3d152e07 has added a script\n> in src/backend/utils/misc/check_guc that cross-checks the consistency\n> of the GUCs with postgresql.conf.sample, making sure that its format\n> is in line with what guc.c has. As per the commit message, the\n> parameters which are not listed as NOT_IN_SAMPLE in guc.c should be\n> present in postgresql.conf.sample. 
But I have observed a test case\n> failure when the parameters which are listed as GUC_NO_SHOW_ALL in\n> guc.c and if it is present in postgresql.conf.sample. I feel this\n> behaviour is not expected and this should be fixed. I spent some time\n> on the analysis and found that query [1] is used to fetch all the\n> parameters which are not listed as NOT_IN_SAMPLE. But the pg_settings\n> view does not return the parameters listed as GUC_NO_SHOW_ALL. Hence\n> these records will be missed. Please share your thoughts. I would like\n> to work on the patch if a fix is required.\n\nLooks like you're right ; show_all_settings() elides settings marked\n\"noshow\".\n\nDo you know how you'd implement a fix ?\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 13 Jan 2023 08:02:21 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "> Looks like you're right ; show_all_settings() elides settings marked\n> \"noshow\".\n>\n> Do you know how you'd implement a fix ?\n\nI could think of the following options.\n\nOption-1 is, expose a function like pg_settings_get_no_show_all()\nwhich just returns the parameters which are just listed as\nGUC_NO_SHOW_ALL (Not in combination with NOT_IN_SAMPLE). We can then\nuse this function in the test file and verify whether there are config\nentries for these.\n\nOption-2 is, if exposing new function and that too to expose\nparameters which are listed as GUC_NO_SHOW_ALL is not recommended,\nthen how about exposing a function like pg_settings_get_count() which\nreturns the count of all parameters including GUC_NO_SHOW_ALL. We can\nthen use this number to verify whether these many are present in the\nsample config file. But we cannot show the name of the parameters if\nit is not matching. 
We can just display an error saying \"Parameter\nwith GUC_NO_SHOW_ALL is missing from postgresql.conf.sample\".\n\nOption-3 is, if exposing both of the above functions is not\nrecommended, then we can use the existing function\npg_settings_get_flags() for each of the parameters while reading the\nsample config file in 003_check_guc.pl. This validates the\nGUC_NO_SHOW_ALL parameter if that is present in the sample config\nfile. It does not validate if it is present in guc.c and missing in\nthe sample config file.\n\nOption-4 is, how about manually adding the parameter name to\n'all_params_array' in 003_check_guc.pl whenever we add such GUCs.\n\nI am not able to choose any of the above options as each has some\ndisadvantages but if no other options exist, then I would like to go\nwith option-3 as it validates more than the one currently doing.\nPlease share if any other better ideas.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Fri, Jan 13, 2023 at 7:32 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> On Fri, Jan 13, 2023 at 06:15:38PM +0530, Nitin Jadhav wrote:\n> > Hi,\n> >\n> > The commit 7265dbffad7feac6ea9d373828583b5d3d152e07 has added a script\n> > in src/backend/utils/misc/check_guc that cross-checks the consistency\n> > of the GUCs with postgresql.conf.sample, making sure that its format\n> > is in line with what guc.c has. As per the commit message, the\n> > parameters which are not listed as NOT_IN_SAMPLE in guc.c should be\n> > present in postgresql.conf.sample. But I have observed a test case\n> > failure when the parameters which are listed as GUC_NO_SHOW_ALL in\n> > guc.c and if it is present in postgresql.conf.sample. I feel this\n> > behaviour is not expected and this should be fixed. I spent some time\n> > on the analysis and found that query [1] is used to fetch all the\n> > parameters which are not listed as NOT_IN_SAMPLE. But the pg_settings\n> > view does not return the parameters listed as GUC_NO_SHOW_ALL. 
Hence\n> > these records will be missed. Please share your thoughts. I would like\n> > to work on the patch if a fix is required.\n>\n> Looks like you're right ; show_all_settings() elides settings marked\n> \"noshow\".\n>\n> Do you know how you'd implement a fix ?\n>\n> --\n> Justin\n\n\n", "msg_date": "Sat, 14 Jan 2023 19:10:55 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Sat, Jan 14, 2023 at 07:10:55PM +0530, Nitin Jadhav wrote:\n> Option-1 is, expose a function like pg_settings_get_no_show_all()\n> which just returns the parameters which are just listed as\n> GUC_NO_SHOW_ALL (Not in combination with NOT_IN_SAMPLE). We can then\n> use this function in the test file and verify whether there are config\n> entries for these.\n> \n> Option-2 is, if exposing new function and that too to expose\n> parameters which are listed as GUC_NO_SHOW_ALL is not recommended,\n> then how about exposing a function like pg_settings_get_count() which\n> returns the count of all parameters including GUC_NO_SHOW_ALL. We can\n> then use this number to verify whether these many are present in the\n> sample config file. But we cannot show the name of the parameters if\n> it is not matching. We can just display an error saying \"Parameter\n> with GUC_NO_SHOW_ALL is missing from postgresql.conf.sample\".\n\nWe would miss the names of the parameters that are marked as NO_SHOW,\nmissing from pg_settings, making debugging harder.\n\n> Option-3 is, if exposing both of the above functions is not\n> recommended, then we can use the existing function\n> pg_settings_get_flags() for each of the parameters while reading the\n> sample config file in 003_check_guc.pl. This validates the\n> GUC_NO_SHOW_ALL parameter if that is present in the sample config\n> file. 
It does not validate if it is present in guc.c and missing in\n> the sample config file.\n\nThis would make the test more costly by forcing one SQL for each\nGUC..\n\n> Option-4 is, how about manually adding the parameter name to\n> 'all_params_array' in 003_check_guc.pl whenever we add such GUCs.\n> \n> I am not able to choose any of the above options as each has some\n> disadvantages but if no other options exist, then I would like to go\n> with option-3 as it validates more than the one currently doing.\n> Please share if any other better ideas.\n\nWe could extend pg_show_all_settings() with a boolean parameter to\nenforce listing all the parameters, even the ones that are marked as\nNOSHOW, but this does not count on GetConfigOptionValues() that could\nforce a parameter to become noshow on a superuser-only GUC depending\non the role that's running the function. At the end, we'd better rely\non a separate superuser-only function to do this job, aka option 1.\n\nHow much do we need to care with the duplication this would involve\nwith show_all_settings(), actually? If you don't use the SRF macros,\nthe code would just be a couple of lines with InitMaterializedSRF()\ndoing a loop on GetConfigOptionValues(). 
Even if that means listing\ntwice the parameters in pg_proc.dat, the chances of adding new\nparameters in pg_settings is rather low so that would be a one-time\nchange?\n--\nMichael", "msg_date": "Mon, 16 Jan 2023 11:28:13 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "> We would miss the names of the parameters that are marked as NO_SHOW,\n> missing from pg_settings, making debugging harder.\n>\n> This would make the test more costly by forcing one SQL for each\n> GUC..\n\nI agree.\n\n\n> We could extend pg_show_all_settings() with a boolean parameter to\n> enforce listing all the parameters, even the ones that are marked as\n> NOSHOW, but this does not count on GetConfigOptionValues() that could\n> force a parameter to become noshow on a superuser-only GUC depending\n> on the role that's running the function. At the end, we'd better rely\n> on a separate superuser-only function to do this job, aka option 1.\n\nI did not get it completely. To understand it better, I just gave a\nthought of adding a boolean parameter to pg_show_all_settings(). Then\nwe should modify GetConfigOptionValues() like below [1]. When we call\npg_show_all_settings(false), it behaves like existing behaviour (with\nsuper user and without super user). When we call\npg_show_all_settings(true) with super user privileges, it returns all\nparameters including GUC_NO_SHOW_ALL as well as GUC_SUPER_USER_ONLY.\nIf we call pg_show_all_settings(true) without super user privileges,\nthen it returns all parameters except GUC_NO_SHOW_ALL and\nGUC_SUPER_USER_ONLY. Can't we do it this way? Please share your\nthoughts.\n\n\n> How much do we need to care with the duplication this would involve\n> with show_all_settings(), actually? If you don't use the SRF macros,\n> the code would just be a couple of lines with InitMaterializedSRF()\n> doing a loop on GetConfigOptionValues(). 
Even if that means listing\n> twice the parameters in pg_proc.dat, the chances of adding new\n> parameters in pg_settings is rather low so that would be a one-time\n> change?\n\nHow about just fetching the parameter name instead of fetching all its\ndetails. This will meet our objective as well as it controls the code\nduplication.\n\n[1]:\nstatic void\nGetConfigOptionValues(struct config_generic *conf, const char **values,\n bool *noshow, bool is_show_all)\n{\n char buffer[256];\n\n if (noshow)\n {\n if (((conf->flags & GUC_NO_SHOW_ALL) && !is_show_all) ||\n ((conf->flags & GUC_NO_SHOW_ALL) &&\n !has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)) ||\n ((conf->flags & GUC_SUPERUSER_ONLY) &&\n !has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n *noshow = true;\n else\n *noshow = false;\n }\n -\n -\n -\n}\n\nOn Mon, Jan 16, 2023 at 7:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jan 14, 2023 at 07:10:55PM +0530, Nitin Jadhav wrote:\n> > Option-1 is, expose a function like pg_settings_get_no_show_all()\n> > which just returns the parameters which are just listed as\n> > GUC_NO_SHOW_ALL (Not in combination with NOT_IN_SAMPLE). We can then\n> > use this function in the test file and verify whether there are config\n> > entries for these.\n> >\n> > Option-2 is, if exposing new function and that too to expose\n> > parameters which are listed as GUC_NO_SHOW_ALL is not recommended,\n> > then how about exposing a function like pg_settings_get_count() which\n> > returns the count of all parameters including GUC_NO_SHOW_ALL. We can\n> > then use this number to verify whether these many are present in the\n> > sample config file. But we cannot show the name of the parameters if\n> > it is not matching. 
We can just display an error saying \"Parameter\n> > with GUC_NO_SHOW_ALL is missing from postgresql.conf.sample\".\n>\n> We would miss the names of the parameters that are marked as NO_SHOW,\n> missing from pg_settings, making debugging harder.\n>\n> > Option-3 is, if exposing both of the above functions is not\n> > recommended, then we can use the existing function\n> > pg_settings_get_flags() for each of the parameters while reading the\n> > sample config file in 003_check_guc.pl. This validates the\n> > GUC_NO_SHOW_ALL parameter if that is present in the sample config\n> > file. It does not validate if it is present in guc.c and missing in\n> > the sample config file.\n>\n> This would make the test more costly by forcing one SQL for each\n> GUC..\n>\n> > Option-4 is, how about manually adding the parameter name to\n> > 'all_params_array' in 003_check_guc.pl whenever we add such GUCs.\n> >\n> > I am not able to choose any of the above options as each has some\n> > disadvantages but if no other options exist, then I would like to go\n> > with option-3 as it validates more than the one currently doing.\n> > Please share if any other better ideas.\n>\n> We could extend pg_show_all_settings() with a boolean parameter to\n> enforce listing all the parameters, even the ones that are marked as\n> NOSHOW, but this does not count on GetConfigOptionValues() that could\n> force a parameter to become noshow on a superuser-only GUC depending\n> on the role that's running the function. At the end, we'd better rely\n> on a separate superuser-only function to do this job, aka option 1.\n>\n> How much do we need to care with the duplication this would involve\n> with show_all_settings(), actually? If you don't use the SRF macros,\n> the code would just be a couple of lines with InitMaterializedSRF()\n> doing a loop on GetConfigOptionValues(). 
Even if that means listing\n> twice the parameters in pg_proc.dat, the chances of adding new\n> parameters in pg_settings is rather low so that would be a one-time\n> change?\n> --\n> Michael\n\n\n", "msg_date": "Wed, 18 Jan 2023 12:31:35 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "> We could extend pg_show_all_settings() with a boolean parameter to\n> enforce listing all the parameters, even the ones that are marked as\n> NOSHOW, but this does not count on GetConfigOptionValues() that could\n> force a parameter to become noshow on a superuser-only GUC depending\n> on the role that's running the function. At the end, we'd better rely\n> on a separate superuser-only function to do this job, aka option 1.\n\nI had started a separate thread [1] to refactor the code around\nGetConfigOptionValues() and the patch is already committed. Now it\nmakes our job simpler to extend pg_show_all_settings() with a boolean\nparameter to enforce listing all the parameters, even the ones that\nare marked as NOSHOW. I have attached the patch for the same. 
Kindly\nlook into it and share your thoughts.\n\n[1]: https://www.postgresql.org/message-id/flat/CALj2ACXZMOGEtjk%2Beh0Zeiz%3D46ETVkr0koYAjWt8SoJUJJUe9g%40mail.gmail.com#04705e421e0dc63b1f0c862ae4929e6f\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Jan 18, 2023 at 12:31 PM Nitin Jadhav\n<nitinjadhavpostgres@gmail.com> wrote:\n>\n> > We would miss the names of the parameters that are marked as NO_SHOW,\n> > missing from pg_settings, making debugging harder.\n> >\n> > This would make the test more costly by forcing one SQL for each\n> > GUC..\n>\n> I agree.\n>\n>\n> > We could extend pg_show_all_settings() with a boolean parameter to\n> > enforce listing all the parameters, even the ones that are marked as\n> > NOSHOW, but this does not count on GetConfigOptionValues() that could\n> > force a parameter to become noshow on a superuser-only GUC depending\n> > on the role that's running the function. At the end, we'd better rely\n> > on a separate superuser-only function to do this job, aka option 1.\n>\n> I did not get it completely. To understand it better, I just gave a\n> thought of adding a boolean parameter to pg_show_all_settings(). Then\n> we should modify GetConfigOptionValues() like below [1]. When we call\n> pg_show_all_settings(false), it behaves like existing behaviour (with\n> super user and without super user). When we call\n> pg_show_all_settings(true) with super user privileges, it returns all\n> parameters including GUC_NO_SHOW_ALL as well as GUC_SUPER_USER_ONLY.\n> If we call pg_show_all_settings(true) without super user privileges,\n> then it returns all parameters except GUC_NO_SHOW_ALL and\n> GUC_SUPER_USER_ONLY. Can't we do it this way? Please share your\n> thoughts.\n>\n>\n> > How much do we need to care with the duplication this would involve\n> > with show_all_settings(), actually? If you don't use the SRF macros,\n> > the code would just be a couple of lines with InitMaterializedSRF()\n> > doing a loop on GetConfigOptionValues(). 
Even if that means listing\n> > twice the parameters in pg_proc.dat, the chances of adding new\n> > parameters in pg_settings is rather low so that would be a one-time\n> > change?\n>\n> How about just fetching the parameter name instead of fetching all its\n> details. This will meet our objective as well as it controls the code\n> duplication.\n>\n> [1]:\n> static void\n> GetConfigOptionValues(struct config_generic *conf, const char **values,\n> bool *noshow, bool is_show_all)\n> {\n> char buffer[256];\n>\n> if (noshow)\n> {\n> if (((conf->flags & GUC_NO_SHOW_ALL) && !is_show_all) ||\n> ((conf->flags & GUC_NO_SHOW_ALL) &&\n> !has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)) ||\n> ((conf->flags & GUC_SUPERUSER_ONLY) &&\n> !has_privs_of_role(GetUserId(), ROLE_PG_READ_ALL_SETTINGS)))\n> *noshow = true;\n> else\n> *noshow = false;\n> }\n> -\n> -\n> -\n> }\n>\n> On Mon, Jan 16, 2023 at 7:58 AM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Sat, Jan 14, 2023 at 07:10:55PM +0530, Nitin Jadhav wrote:\n> > > Option-1 is, expose a function like pg_settings_get_no_show_all()\n> > > which just returns the parameters which are just listed as\n> > > GUC_NO_SHOW_ALL (Not in combination with NOT_IN_SAMPLE). We can then\n> > > use this function in the test file and verify whether there are config\n> > > entries for these.\n> > >\n> > > Option-2 is, if exposing new function and that too to expose\n> > > parameters which are listed as GUC_NO_SHOW_ALL is not recommended,\n> > > then how about exposing a function like pg_settings_get_count() which\n> > > returns the count of all parameters including GUC_NO_SHOW_ALL. We can\n> > > then use this number to verify whether these many are present in the\n> > > sample config file. But we cannot show the name of the parameters if\n> > > it is not matching. 
We can just display an error saying \"Parameter\n> > > with GUC_NO_SHOW_ALL is missing from postgresql.conf.sample\".\n> >\n> > We would miss the names of the parameters that are marked as NO_SHOW,\n> > missing from pg_settings, making debugging harder.\n> >\n> > > Option-3 is, if exposing both of the above functions is not\n> > > recommended, then we can use the existing function\n> > > pg_settings_get_flags() for each of the parameters while reading the\n> > > sample config file in 003_check_guc.pl. This validates the\n> > > GUC_NO_SHOW_ALL parameter if that is present in the sample config\n> > > file. It does not validate if it is present in guc.c and missing in\n> > > the sample config file.\n> >\n> > This would make the test more costly by forcing one SQL for each\n> > GUC..\n> >\n> > > Option-4 is, how about manually adding the parameter name to\n> > > 'all_params_array' in 003_check_guc.pl whenever we add such GUCs.\n> > >\n> > > I am not able to choose any of the above options as each has some\n> > > disadvantages but if no other options exist, then I would like to go\n> > > with option-3 as it validates more than the one currently doing.\n> > > Please share if any other better ideas.\n> >\n> > We could extend pg_show_all_settings() with a boolean parameter to\n> > enforce listing all the parameters, even the ones that are marked as\n> > NOSHOW, but this does not count on GetConfigOptionValues() that could\n> > force a parameter to become noshow on a superuser-only GUC depending\n> > on the role that's running the function. At the end, we'd better rely\n> > on a separate superuser-only function to do this job, aka option 1.\n> >\n> > How much do we need to care with the duplication this would involve\n> > with show_all_settings(), actually? If you don't use the SRF macros,\n> > the code would just be a couple of lines with InitMaterializedSRF()\n> > doing a loop on GetConfigOptionValues(). 
Even if that means listing\n> > twice the parameters in pg_proc.dat, the chances of adding new\n> > parameters in pg_settings is rather low so that would be a one-time\n> > change?\n> > --\n> > Michael", "msg_date": "Sun, 29 Jan 2023 17:22:13 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Sun, Jan 29, 2023 at 05:22:13PM +0530, Nitin Jadhav wrote:\n> > We could extend pg_show_all_settings() with a boolean parameter to\n> > enforce listing all the parameters, even the ones that are marked as\n> > NOSHOW, but this does not count on GetConfigOptionValues() that could\n> > force a parameter to become noshow on a superuser-only GUC depending\n> > on the role that's running the function. At the end, we'd better rely\n> > on a separate superuser-only function to do this job, aka option 1.\n> \n> I had started a separate thread [1] to refactor the code around\n> GetConfigOptionValues() and the patch is already committed. Now it\n> makes our job simpler to extend pg_show_all_settings() with a boolean\n> parameter to enforce listing all the parameters, even the ones that\n> are marked as NOSHOW. I have attached the patch for the same. Kindly\n> look into it and share your thoughts.\n\nSELECT pg_show_all_settings() ought to keep working when called with no\nparameter. 
Tom gave me a hint how to do that for system catalogs here:\nhttps://www.postgresql.org/message-id/17988.1584472261@sss.pgh.pa.us\n\nIn this case, it might be cleaner to add a second entry to pg_proc.dat\nthan to add \"CREATE OR REPLACE FUNCTION\" to system_functions.sql (I\ntried but couldn't get that to work just now).\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 29 Jan 2023 10:44:04 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> SELECT pg_show_all_settings() ought to keep working when called with no\n> parameter. Tom gave me a hint how to do that for system catalogs here:\n> https://www.postgresql.org/message-id/17988.1584472261@sss.pgh.pa.us\n> In this case, it might be cleaner to add a second entry to pg_proc.dat\n> than to add \"CREATE OR REPLACE FUNCTION\" to system_functions.sql (I\n> tried but couldn't get that to work just now).\n\nI kind of think this is a lot of unnecessary work. The case that is\nproblematic is a GUC that's marked GUC_NO_SHOW_ALL but not marked\nGUC_NOT_IN_SAMPLE. There aren't any of those, and I don't think there\nare likely to be any in future, because it doesn't make a lot of sense.\nWhy don't we just make a policy against doing that, and enforce it\nwith an assertion somewhere in GUC initialization?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 29 Jan 2023 13:05:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Sun, Jan 29, 2023 at 01:05:07PM -0500, Tom Lane wrote:\n> I kind of think this is a lot of unnecessary work. The case that is\n> problematic is a GUC that's marked GUC_NO_SHOW_ALL but not marked\n> GUC_NOT_IN_SAMPLE. 
There aren't any of those, and I don't think there\n> are likely to be any in future, because it doesn't make a lot of sense.\n> Why don't we just make a policy against doing that, and enforce it\n> with an assertion somewhere in GUC initialization?\n\n[..thinks..]\n\nLooking at guc.sql, I think that these is a second mistake: the test\nchecks for (no_show_all AND !no_reset_all) but this won't work \nbecause NO_SHOW_ALL GUCs cannot be scanned via SQL. There are two\nparameters that include this combination of flags: default_with_oids\nand ssl_renegotiation_limit, so the check would not do what it\nshould. I think that this part should be removed.\n\nFor the second part to prevent GUCs to be marked as NO_SHOW_ALL &&\n!NOT_IN_SAMPLE, check_GUC_init() looks like the right place to me,\nbecause this routine has been designed exactly for this purpose.\n\nSo, what do you think about the attached?\n--\nMichael", "msg_date": "Mon, 30 Jan 2023 14:06:10 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "> I kind of think this is a lot of unnecessary work. The case that is\n> problematic is a GUC that's marked GUC_NO_SHOW_ALL but not marked\n> GUC_NOT_IN_SAMPLE. There aren't any of those, and I don't think there\n> are likely to be any in future, because it doesn't make a lot of sense.\n> Why don't we just make a policy against doing that, and enforce it\n> with an assertion somewhere in GUC initialization?\n>\n> Looking at guc.sql, I think that these is a second mistake: the test\n> checks for (no_show_all AND !no_reset_all) but this won't work\n> because NO_SHOW_ALL GUCs cannot be scanned via SQL. There are two\n> parameters that include this combination of flags: default_with_oids\n> and ssl_renegotiation_limit, so the check would not do what it\n> should. I think that this part should be removed.\n\nThanks Michael for identifying a new mistake. 
I am a little confused\nhere. I don't understand why GUC_NO_SHOW_ALL depends on other GUCs\nlike GUC_NOT_IN_SAMPLE or GUC_NO_RESET_ALL. Looks like the dependency\nbetween GUC_NO_RESET_ALL and GUC_NO_SHOW_ALL is removed in the above\npatch since we have these combinations now. Similarly why can't we\nhave a GUC marked as GUC_NO_SHOW_ALL but not GUC_NOT_IN_CONFIG. For me\nit makes sense if a GUC is marked as NO_SHOW_ALL and it can be present\nin a sample file. It's up to the author/developer to choose which all\nflags are applicable to the newly introduced GUCs. Please share your\nthoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Mon, Jan 30, 2023 at 10:36 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Jan 29, 2023 at 01:05:07PM -0500, Tom Lane wrote:\n> > I kind of think this is a lot of unnecessary work. The case that is\n> > problematic is a GUC that's marked GUC_NO_SHOW_ALL but not marked\n> > GUC_NOT_IN_SAMPLE. There aren't any of those, and I don't think there\n> > are likely to be any in future, because it doesn't make a lot of sense.\n> > Why don't we just make a policy against doing that, and enforce it\n> > with an assertion somewhere in GUC initialization?\n>\n> [..thinks..]\n>\n> Looking at guc.sql, I think that these is a second mistake: the test\n> checks for (no_show_all AND !no_reset_all) but this won't work\n> because NO_SHOW_ALL GUCs cannot be scanned via SQL. There are two\n> parameters that include this combination of flags: default_with_oids\n> and ssl_renegotiation_limit, so the check would not do what it\n> should. 
I think that this part should be removed.\n>\n> For the second part to prevent GUCs to be marked as NO_SHOW_ALL &&\n> !NOT_IN_SAMPLE, check_GUC_init() looks like the right place to me,\n> because this routine has been designed exactly for this purpose.\n>\n> So, what do you think about the attached?\n> --\n> Michael\n\n\n", "msg_date": "Mon, 30 Jan 2023 17:12:27 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Mon, Jan 30, 2023 at 05:12:27PM +0530, Nitin Jadhav wrote:\n> Thanks Michael for identifying a new mistake. I am a little confused\n> here. I don't understand why GUC_NO_SHOW_ALL depends on other GUCs\n> like GUC_NOT_IN_SAMPLE or GUC_NO_RESET_ALL. Looks like the dependency\n> between GUC_NO_RESET_ALL and GUC_NO_SHOW_ALL is removed in the above\n> patch since we have these combinations now.\n\npg_settings would be unable to show something marked as NO_SHOW_ALL,\nso the SQL check that looked after (NO_SHOW_ALL && !NO_RESET_ALL) is\na no-op. Postgres will likely gain more parameters that are kept\naround for compability reasons, and forcing a NO_RESET_ALL in such\ncases could impact applications using RESET on such GUCs, meaning\npotential compatibility breakages.\n\n> Similarly why can't we\n> have a GUC marked as GUC_NO_SHOW_ALL but not GUC_NOT_IN_CONFIG. For me\n> it makes sense if a GUC is marked as NO_SHOW_ALL and it can be present\n> in a sample file. It's up to the author/developer to choose which all\n> flags are applicable to the newly introduced GUCs. Please share your\n> thoughts.\n\nAs also mentioned upthread by Tom, I am not sure that this combination\nmakes much sense, actually, because I don't see why one would never\nwant to know what is the effective value loaded for a parameter stored\nin a file when he/she has the permission to do so. 
This could be\nchanged as of ALTER SYSTEM, postgresql.conf or even an included file,\nand the value can only be read if permission to see it is given to the\nrole querying SHOW or pg_settings. This combination of flags is not a\npractice to encourage.\n--\nMichael", "msg_date": "Wed, 1 Feb 2023 14:29:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Wed, Feb 01, 2023 at 02:29:23PM +0900, Michael Paquier wrote:\n> As also mentioned upthread by Tom, I am not sure that this combination\n> makes much sense, actually, because I don't see why one would never\n> want to know what is the effective value loaded for a parameter stored\n> in a file when he/she has the permission to do so. This could be\n> changed as of ALTER SYSTEM, postgresql.conf or even an included file,\n> and the value can only be read if permission to see it is given to the\n> role querying SHOW or pg_settings. This combination of flags is not a\n> practice to encourage.\n\nSo, coming back quickly to this one, it seems to me that the tests in\nguc.sql had better be adjusted down v15 where they have been\nintroduced, and that the extra check is worth doing on HEAD. Any\nthoughts?\n--\nMichael", "msg_date": "Thu, 2 Feb 2023 13:45:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "> > Thanks Michael for identifying a new mistake. I am a little confused\n> > here. I don't understand why GUC_NO_SHOW_ALL depends on other GUCs\n> > like GUC_NOT_IN_SAMPLE or GUC_NO_RESET_ALL. 
Looks like the dependency\n> > between GUC_NO_RESET_ALL and GUC_NO_SHOW_ALL is removed in the above\n> > patch since we have these combinations now.\n>\n> pg_settings would be unable to show something marked as NO_SHOW_ALL,\n> so the SQL check that looked after (NO_SHOW_ALL && !NO_RESET_ALL) is\n> a no-op. Postgres will likely gain more parameters that are kept\n> around for compability reasons, and forcing a NO_RESET_ALL in such\n> cases could impact applications using RESET on such GUCs, meaning\n> potential compatibility breakages.\n>\n> > Similarly why can't we\n> > have a GUC marked as GUC_NO_SHOW_ALL but not GUC_NOT_IN_CONFIG. For me\n> > it makes sense if a GUC is marked as NO_SHOW_ALL and it can be present\n> > in a sample file. It's up to the author/developer to choose which all\n> > flags are applicable to the newly introduced GUCs. Please share your\n> > thoughts.\n>\n> As also mentioned upthread by Tom, I am not sure that this combination\n> makes much sense, actually, because I don't see why one would never\n> want to know what is the effective value loaded for a parameter stored\n> in a file when he/she has the permission to do so. This could be\n> changed as of ALTER SYSTEM, postgresql.conf or even an included file,\n> and the value can only be read if permission to see it is given to the\n> role querying SHOW or pg_settings. This combination of flags is not a\n> practice to encourage.\n\nGot it. Makes sense.\n\n\n> For the second part to prevent GUCs to be marked as NO_SHOW_ALL &&\n> !NOT_IN_SAMPLE, check_GUC_init() looks like the right place to me,\n> because this routine has been designed exactly for this purpose.\n>\n> So, what do you think about the attached?\n\nMy concern is if we do this, then we will end up having some policies\n(which can be read from pg_show_all_settings()) in guc.sql and some in\nguc.c. I feel all these should be at one place either at guc.c or\nguc.sql. It is better to move all other policies from guc.sql to\nguc.c. 
Otherwise, how about modifying the function\npg_show_all_settings as done in v1 patch and using this (as true)\nwhile creating the table tab_settings_flags in guc.sq and just remove\n(NO_SHOW_ALL && !NO_RESET_ALL) check from guc.sql. I don't think doing\nthis is a problem as we can retain the support of existing signatures\nof the pg_show_all_settings function as suggested by Justin upthread\nso that it will not cause any compatibility issues. Please share your\nthoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Feb 1, 2023 at 10:59 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jan 30, 2023 at 05:12:27PM +0530, Nitin Jadhav wrote:\n> > Thanks Michael for identifying a new mistake. I am a little confused\n> > here. I don't understand why GUC_NO_SHOW_ALL depends on other GUCs\n> > like GUC_NOT_IN_SAMPLE or GUC_NO_RESET_ALL. Looks like the dependency\n> > between GUC_NO_RESET_ALL and GUC_NO_SHOW_ALL is removed in the above\n> > patch since we have these combinations now.\n>\n> pg_settings would be unable to show something marked as NO_SHOW_ALL,\n> so the SQL check that looked after (NO_SHOW_ALL && !NO_RESET_ALL) is\n> a no-op. Postgres will likely gain more parameters that are kept\n> around for compability reasons, and forcing a NO_RESET_ALL in such\n> cases could impact applications using RESET on such GUCs, meaning\n> potential compatibility breakages.\n>\n> > Similarly why can't we\n> > have a GUC marked as GUC_NO_SHOW_ALL but not GUC_NOT_IN_CONFIG. For me\n> > it makes sense if a GUC is marked as NO_SHOW_ALL and it can be present\n> > in a sample file. It's up to the author/developer to choose which all\n> > flags are applicable to the newly introduced GUCs. 
Please share your\n> > thoughts.\n>\n> As also mentioned upthread by Tom, I am not sure that this combination\n> makes much sense, actually, because I don't see why one would never\n> want to know what is the effective value loaded for a parameter stored\n> in a file when he/she has the permission to do so. This could be\n> changed as of ALTER SYSTEM, postgresql.conf or even an included file,\n> and the value can only be read if permission to see it is given to the\n> role querying SHOW or pg_settings. This combination of flags is not a\n> practice to encourage.\n> --\n> Michael\n\n\n", "msg_date": "Sat, 4 Feb 2023 00:18:04 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> My concern is if we do this, then we will end up having some policies\n> (which can be read from pg_show_all_settings()) in guc.sql and some in\n> guc.c. I feel all these should be at one place either at guc.c or\n> guc.sql.\n\nI don't particularly see why that needs to be the case. Notably,\nif we're interested in enforcing a policy even for extension GUCs,\nguc.sql can't really do that since who knows whether the extension's\nauthor will bother to run that test with the extension loaded.\nOn the other hand, moving *all* those checks into guc.c is probably\nimpractical and certainly will add undesirable startup overhead.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 03 Feb 2023 14:37:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "> I don't particularly see why that needs to be the case. 
Notably,\n> if we're interested in enforcing a policy even for extension GUCs,\n> guc.sql can't really do that since who knows whether the extension's\n> author will bother to run that test with the extension loaded.\n> On the other hand, moving *all* those checks into guc.c is probably\n> impractical and certainly will add undesirable startup overhead.\n\nOk. Understood the other problems. I have attached the v2 patch which\nuses the idea present in Michael's patch. In addition, I have removed\nfetching NO_SHOW_ALL parameters while creating tab_settings_flags\ntable in guc.sql and adjusted the test which checks for (NO_RESET_ALL\nAND NOT NO_SHOW_ALL) as this was misleading the developer who thinks\nthat tab_settings_flags table has NO_SHOW_ALL parameters which is\nincorrect.\n\nPlease review and share your thoughts.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Sat, Feb 4, 2023 at 1:07 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Nitin Jadhav <nitinjadhavpostgres@gmail.com> writes:\n> > My concern is if we do this, then we will end up having some policies\n> > (which can be read from pg_show_all_settings()) in guc.sql and some in\n> > guc.c. I feel all these should be at one place either at guc.c or\n> > guc.sql.\n>\n> I don't particularly see why that needs to be the case. Notably,\n> if we're interested in enforcing a policy even for extension GUCs,\n> guc.sql can't really do that since who knows whether the extension's\n> author will bother to run that test with the extension loaded.\n> On the other hand, moving *all* those checks into guc.c is probably\n> impractical and certainly will add undesirable startup overhead.\n>\n> regards, tom lane", "msg_date": "Sun, 5 Feb 2023 00:56:58 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Sun, Feb 05, 2023 at 12:56:58AM +0530, Nitin Jadhav wrote:\n> Ok. Understood the other problems. 
I have attached the v2 patch which\n> uses the idea present in Michael's patch. In addition, I have removed\n> fetching NO_SHOW_ALL parameters while creating tab_settings_flags\n> table in guc.sql and adjusted the test which checks for (NO_RESET_ALL\n> AND NOT NO_SHOW_ALL) as this was misleading the developer who thinks\n> that tab_settings_flags table has NO_SHOW_ALL parameters which is\n> incorrect.\n\nOkay, the part to add an initialization check for GUC_NO_SHOW_ALL and\nGUC_NOT_IN_SAMPLE looked fine by me, so applied after more comment\npolishing.\n\n+-- NO_RESET_ALL can be specified without NO_SHOW_ALL, like transaction_*.\n+-- tab_settings_flags does not contain NO_SHOW_ALL flags. Just checking for\n+-- NO_RESET_ALL implies NO_RESET_ALL AND NOT NO_SHOW_ALL.\n SELECT name FROM tab_settings_flags\n- WHERE NOT no_show_all AND no_reset_all\n+ WHERE no_reset_all\n ORDER BY 1;\n\nRemoving entirely no_show_all is fine by me, but keeping this SQL has\nlittle sense, then, because it would include any GUCs loaded by an\nexternal source when they define NO_RESET_ALL. I think that 0001\nshould be like the attached, instead, backpatched down to 15 (release\nweek, so it cannot be touched until the next version is stamped),\nwhere we just remove all the checks based on no_show_all.\n\nOn top of that, I have noticed an extra combination that would not\nmake sense and that could be checked with the SQL queries:\nGUC_DISALLOW_IN_FILE implies GUC_NOT_IN_SAMPLE. The opposite may not\nbe true, though, as some developer GUCs are marked as\nGUC_NOT_IN_SAMPLE but they are allowed in a file. The only exception\nto that currently is config_file. It is just a special case whose\nvalue is enforced at startup and it can be passed down as an option\nswitch via the postgres binary, still it seems like it would be better\nto also mark it as GUC_NOT_IN_SAMPLE? 
This is done in 0002, only for\nHEAD, as that would be a new check.\n\nThoughts?\n--\nMichael", "msg_date": "Mon, 6 Feb 2023 16:23:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Mon, Feb 06, 2023 at 04:23:02PM +0900, Michael Paquier wrote:\n> On top of that, I have noticed an extra combination that would not\n> make sense and that could be checked with the SQL queries:\n> GUC_DISALLOW_IN_FILE implies GUC_NOT_IN_SAMPLE. The opposite may not\n> be true, though, as some developer GUCs are marked as\n> GUC_NOT_IN_SAMPLE but they are allowed in a file. The only exception\n> to that currently is config_file. It is just a special case whose\n> value is enforced at startup and it can be passed down as an option\n> switch via the postgres binary, still it seems like it would be better\n> to also mark it as GUC_NOT_IN_SAMPLE? This is done in 0002, only for\n> HEAD, as that would be a new check.\n\n0001 has been applied to clean up the existing situation. Remains\n0002, that I am letting sleep to see if there's interest for it, or\nperhaps more ideas around it.\n--\nMichael", "msg_date": "Wed, 8 Feb 2023 16:59:39 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "> Okay, the part to add an initialization check for GUC_NO_SHOW_ALL and\n> GUC_NOT_IN_SAMPLE looked fine by me, so applied after more comment\n> polishing.\n>\n> 0001 has been applied to clean up the existing situation.\n\nThanks for committing these 2 changes.\n\n\n> On top of that, I have noticed an extra combination that would not\n> make sense and that could be checked with the SQL queries:\n> GUC_DISALLOW_IN_FILE implies GUC_NOT_IN_SAMPLE. 
The opposite may not\n> be true, though, as some developer GUCs are marked as\n> GUC_NOT_IN_SAMPLE but they are allowed in a file. The only exception\n> to that currently is config_file. It is just a special case whose\n> value is enforced at startup and it can be passed down as an option\n> switch via the postgres binary, still it seems like it would be better\n> to also mark it as GUC_NOT_IN_SAMPLE? This is done in 0002, only for\n> HEAD, as that would be a new check.\n>\n> Remains\n> 0002, that I am letting sleep to see if there's interest for it, or\n> perhaps more ideas around it.\n\nMakes sense and the patch looks good to me.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Wed, Feb 8, 2023 at 1:29 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Feb 06, 2023 at 04:23:02PM +0900, Michael Paquier wrote:\n> > On top of that, I have noticed an extra combination that would not\n> > make sense and that could be checked with the SQL queries:\n> > GUC_DISALLOW_IN_FILE implies GUC_NOT_IN_SAMPLE. The opposite may not\n> > be true, though, as some developer GUCs are marked as\n> > GUC_NOT_IN_SAMPLE but they are allowed in a file. The only exception\n> > to that currently is config_file. It is just a special case whose\n> > value is enforced at startup and it can be passed down as an option\n> > switch via the postgres binary, still it seems like it would be better\n> > to also mark it as GUC_NOT_IN_SAMPLE? This is done in 0002, only for\n> > HEAD, as that would be a new check.\n>\n> 0001 has been applied to clean up the existing situation. 
Remains\n> 0002, that I am letting sleep to see if there's interest for it, or\n> perhaps more ideas around it.\n> --\n> Michael\n\n\n", "msg_date": "Wed, 8 Feb 2023 16:21:57 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Wed, Feb 08, 2023 at 04:21:57PM +0530, Nitin Jadhav wrote:\n> Makes sense and the patch looks good to me.\n\nAh, OK. Thanks for the feedback!\n\nI am wondering.. Did people notice that this adds GUC_NOT_IN_SAMPLE\nto config_file in guc_tables.c? This makes sense in the long run\nbased on what this parameter is by design, still there may be an\nobjection to doing that?\n--\nMichael", "msg_date": "Thu, 9 Feb 2023 10:28:14 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Thu, Feb 09, 2023 at 10:28:14AM +0900, Michael Paquier wrote:\n> On Wed, Feb 08, 2023 at 04:21:57PM +0530, Nitin Jadhav wrote:\n> > Makes sense and the patch looks good to me.\n> \n> Ah, OK. Thanks for the feedback!\n> \n> I am wondering.. Did people notice that this adds GUC_NOT_IN_SAMPLE\n> to config_file in guc_tables.c? 
This makes sense in the long run\n> based on what this parameter is by design, still there may be an\n> objection to doing that?\n\nI think it's fine to add the flag.\n\nSee also:\n\nhttps://www.postgresql.org/message-id/flat/20211129030833.GJ17618@telsasoft.com\n|Since GUC_DISALLOW_IN_FILE effectively implies GUC_NOT_IN_SAMPLE in\n|src/backend/utils/misc/help_config.c:displayStruct(), many of the\n|redundant GUC_NOT_IN_SAMPLE could be removed.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 8 Feb 2023 20:32:12 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Thu, Feb 09, 2023 at 10:28:14AM +0900, Michael Paquier wrote:\n>> I am wondering.. Did people notice that this adds GUC_NOT_IN_SAMPLE\n>> to config_file in guc_tables.c? This makes sense in the long run\n>> based on what this parameter is by design, still there may be an\n>> objection to doing that?\n\n> I think it's fine to add the flag.\n\nHm. On the one hand, if it is in fact not in postgresql.conf.sample,\nthen that flag should be set for sure. OTOH I see that that flag\nisn't purely documentation: help_config.c thinks it should hide\nGUCs that are marked that way. Do we really want that behavior?\nNot sure. I can see an argument that you might want --describe-config\nto tell you that, but there are a lot of other GUC_NOT_IN_SAMPLE\nGUCs that maybe do indeed deserve to be left out.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 08 Feb 2023 21:42:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Wed, Feb 08, 2023 at 09:42:13PM -0500, Tom Lane wrote:\n> Hm. On the one hand, if it is in fact not in postgresql.conf.sample,\n> then that flag should be set for sure. 
OTOH I see that that flag\n> isn't purely documentation: help_config.c thinks it should hide\n> GUCs that are marked that way. Do we really want that behavior?\n> Not sure. I can see an argument that you might want --describe-config\n> to tell you that, but there are a lot of other GUC_NOT_IN_SAMPLE\n> GUCs that maybe do indeed deserve to be left out.\n\nI am not sure to follow. help_config() won't show something that's\neither marked NO_SHOW_ALL, NOT_IN_SAMPLE or DISALLOW_IN_FILE, hence\nconfig_file does not show up already in what postgres\n--describe-config prints, because it has DISALLOW_IN_FILE, so adding\nNOT_IN_SAMPLE changes nothing.\n--\nMichael", "msg_date": "Fri, 10 Feb 2023 16:43:15 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "At Fri, 10 Feb 2023 16:43:15 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Feb 08, 2023 at 09:42:13PM -0500, Tom Lane wrote:\n> > Hm. On the one hand, if it is in fact not in postgresql.conf.sample,\n> > then that flag should be set for sure. OTOH I see that that flag\n> > isn't purely documentation: help_config.c thinks it should hide\n> > GUCs that are marked that way. Do we really want that behavior?\n> > Not sure. I can see an argument that you might want --describe-config\n> > to tell you that, but there are a lot of other GUC_NOT_IN_SAMPLE\n> > GUCs that maybe do indeed deserve to be left out.\n> \n> I am not sure to follow. help_config() won't show something that's\n> either marked NO_SHOW_ALL, NOT_IN_SAMPLE or DISALLOW_IN_FILE, hence\n> config_file does not show up already in what postgres\n> --describe-config prints, because it has DISALLOW_IN_FILE, so adding\n> NOT_IN_SAMPLE changes nothing.\n\nI think currently the output by --describe-config can be used only for\nconsulting while editing a (possiblly broken) config file. 
Thus I\nthink it's no use showing GUC_DISALLOW_IN_FILE items there unless we\nuse help_config() for an on-session use.\n\nOn the other hand, don't we need to remove the condition\nGUC_NOT_IN_SAMPLE from displayStruct? I think that help_config()\nshould show a value if it is marked as !GUC_DISALLOW_IN_FILE even if\nit is GUC_NOT_IN_SAMPLE. I'm not sure whether there's any variable\nthat is marked that way, though.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 13 Feb 2023 11:27:58 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On Mon, Feb 13, 2023 at 11:27:58AM +0900, Kyotaro Horiguchi wrote:\n> I think currently the output by --describe-config can be used only for\n> consulting while editing a (possibly broken) config file. Thus I\n> think it's no use showing GUC_DISALLOW_IN_FILE items there unless we\n> use help_config() for an on-session use.\n> \n> On the other hand, don't we need to remove the condition\n> GUC_NOT_IN_SAMPLE from displayStruct? I think that help_config()\n> should show a value if it is marked as !GUC_DISALLOW_IN_FILE even if\n> it is GUC_NOT_IN_SAMPLE. I'm not sure whether there's any variable\n> that is marked that way, though.\n\nAs in marked with GUC_NOT_IN_SAMPLE but not GUC_DISALLOW_IN_FILE?\nThere are quite a lot, developer GUCs being one (say\nignore_invalid_pages). We don't want to list them in the sample file\nso as common users don't play with them, still they make sense if\nlisted in a file.\n\nIf you add a check meaning that GUC_DISALLOW_IN_FILE implies\nGUC_NOT_IN_SAMPLE, where one change would need to be applied to\nconfig_file as all the other GUC_DISALLOW_IN_FILE GUCs already do\nthat, you could remove GUC_DISALLOW_IN_FILE. 
However,\nGUC_NOT_IN_SAMPLE should be around to not expose options, we don't\nwant common users to know too much about.\n\nThe question about how much people rely on --describe-config these\ndays is a good one, so perhaps there could be an argument in removing\nGUC_NOT_IN_SAMPLE from the set. TBH, I would be really surprised that\nanybody able to use a developer option writes a configuration file in\nan incorrect format and needs to use this option, though :)\n--\nMichael", "msg_date": "Mon, 13 Feb 2023 12:18:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "At Mon, 13 Feb 2023 12:18:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Mon, Feb 13, 2023 at 11:27:58AM +0900, Kyotaro Horiguchi wrote:\n> > I think currently the output by --describe-config can be used only for\n> > consulting while editing a (possibly broken) config file. Thus I\n> > think it's no use showing GUC_DISALLOW_IN_FILE items there unless we\n> > use help_config() for an on-session use.\n> > \n> > On the other hand, don't we need to remove the condition\n> > GUC_NOT_IN_SAMPLE from displayStruct? I think that help_config()\n> > should show a value if it is marked as !GUC_DISALLOW_IN_FILE even if\n> > it is GUC_NOT_IN_SAMPLE. I'm not sure whether there's any variable\n> > that is marked that way, though.\n> \n> As in marked with GUC_NOT_IN_SAMPLE but not GUC_DISALLOW_IN_FILE?\n> There are quite a lot, developer GUCs being one (say\n> ignore_invalid_pages). We don't want to list them in the sample file\n> so as common users don't play with them, still they make sense if\n> listed in a file.\n\nAh, right. 
I think I faintly had them in my mind.\n\n> If you add a check meaning that GUC_DISALLOW_IN_FILE implies\n> GUC_NOT_IN_SAMPLE, where one change would need to be applied to\n> config_file as all the other GUC_DISALLOW_IN_FILE GUCs already do\n> that, you could remove GUC_DISALLOW_IN_FILE. However,\n> GUC_NOT_IN_SAMPLE should be around to not expose options, we don't\n> want common users to know too much about.\n\nOkay, I thought that \"postgres --help-config\" was a sort of developer\noption, but your explanation above makes sense.\n\n> The question about how much people rely on --describe-config these\n> days is a good one, so perhaps there could be an argument in removing\n\nYeah, that the reason for my thought it was a developer option...\n\n> GUC_NOT_IN_SAMPLE from the set. TBH, I would be really surprised that\n> anybody able to use a developer option writes an configuration file in\n> an incorrect format and needs to use this option, though :)\n\nHmm. I didn't directly link GUC_NOT_IN_SAMPLE to being a developer\noption. But on second thought, it seems that it is. So, the current\ncode looks good for me now. Thanks for the explanation.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 14 Feb 2023 10:42:34 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" }, { "msg_contents": "On 14/02/2023 03:42, Kyotaro Horiguchi wrote:\n> At Mon, 13 Feb 2023 12:18:07 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n>> On Mon, Feb 13, 2023 at 11:27:58AM +0900, Kyotaro Horiguchi wrote:\n>>> I think currently the output by --describe-config can be used only for\n>>> consulting while editing a (possiblly broken) config file. 
Thus I\n>>> think it's no use showing GIC_DISALLOW_IN_FILE items there unless we\n>>> use help_config() for an on-session use.\n>>>\n>>> On the other hand, don't we need to remove the condition\n>>> GUC_NOT_IN_SAMPLE from displayStruct? I think that help_config()\n>>> should show a value if it is marked as !GUC_DISALLOW_IN_FILE even if\n>>> it is GUC_NOT_IN_SAMPLE. I'm not sure whether there's any variable\n>>> that are marked that way, though.\n>>\n>> As in marked with GUC_NOT_IN_SAMPLE but not GUC_DISALLOW_IN_FILE?\n>> There are quite a lot, developer GUCs being one (say\n>> ignore_invalid_pages). We don't want to list them in the sample file\n>> so as common users don't play with them, still they make sense if\n>> listed in a file.\n> \n> Ah, right. I think I faintly had them in my mind.\n> \n>> If you add a check meaning that GUC_DISALLOW_IN_FILE implies\n>> GUC_NOT_IN_SAMPLE, where one change would need to be applied to\n>> config_file as all the other GUC_DISALLOW_IN_FILE GUCs already do\n>> that, you could remove GUC_DISALLOW_IN_FILE. However,\n>> GUC_NOT_IN_SAMPLE should be around to not expose options, we don't\n>> want common users to know too much about.\n> \n> Okay, I thought that \"postgres --help-config\" was a sort of developer\n> option, but your explanation above makes sense.\n> \n>> The question about how much people rely on --describe-config these\n>> days is a good one, so perhaps there could be an argument in removing\n> \n> Yeah, that the reason for my thought it was a developer option...\n> \n>> GUC_NOT_IN_SAMPLE from the set. TBH, I would be really surprised that\n>> anybody able to use a developer option writes an configuration file in\n>> an incorrect format and needs to use this option, though :)\n> \n> Hmm. I didn't directly link GUC_NOT_IN_SAMPLE to being a developer\n> option. But on second thought, it seems that it is. So, the current\n> code looks good for me now. 
Thanks for the explanation.\n\nThe first patch was committed, and there's not much enthusiasm for \ndisallowing (GUC_DISALLOW_IN_FILE && !GUC_NOT_IN_SAMPLE), so I am \nmarking this as Committed in the commitfest app. Thanks!\n\n- Heikki\n\n\n\n", "msg_date": "Wed, 22 Feb 2023 11:08:49 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Fix GUC_NO_SHOW_ALL test scenario in 003_check_guc.pl" } ]
[ { "msg_contents": "Hi,\n\nI noticed one BF failure[1] when monitoring the BF for some other commit.\n\n# Failed test 'authentication success for method password, connstr user=scram_role: log matches'\n# at t/001_password.pl line 120.\n# '2023-01-13 07:33:46.741 EST [243628:5] LOG: received SIGHUP, reloading configuration files\n# 2023-01-13 07:33:46.742 EST [243662:1] [unknown] LOG: connection received: host=[local]\n# 2023-01-13 07:33:46.744 EST [243662:2] [unknown] LOG: connection authorized: user=scram_role database=postgres application_name=001_password.pl\n# 2023-01-13 07:33:46.748 EST [243662:3] 001_password.pl LOG: statement: SELECT $$connected with user=scram_role$$\n# '\n# doesn't match '(?^:connection authenticated: identity=\"scram_role\" method=password)'\n# Looks like you failed 1 test of 79.\n[08:33:47] t/001_password.pl ........ \n\nAfter checking the test and log, I can see the test failed at the following code:\n----\n# For plain \"password\" method, all users should also be able to connect.\nreset_pg_hba($node, 'all', 'all', 'password');\ntest_conn($node, 'user=scram_role', 'password', 0,\n\tlog_like =>\n\t [qr/connection authenticated: identity=\"scram_role\" method=password/]);\n----\n\n From the log, the expected \"xxx method=password \" log was not output, a simple\n\"connection authorized: user=scram_role database=postgres \" was output Instead.\nSo it seems the connection happens before pg_ident.conf is actually reloaded ?\nNot sure if we need to do something make sure the reload happen, because it's\nlooks like very rare failure which hasn't happen in last 90 days.\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=malleefowl&dt=2023-01-13%2009%3A54%3A51\n\nBest regards,\nHou zhijie\n\n\n\n", "msg_date": "Fri, 13 Jan 2023 13:55:31 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "BF animal malleefowl reported an failure in 001_password.pl" }, { "msg_contents": 
"\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com> writes:\n> I noticed one BF failure[1] when monitoring the BF for some other commit.\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=malleefowl&dt=2023-01-13%2009%3A54%3A51\n> ...\n> So it seems the connection happens before pg_ident.conf is actually reloaded ?\n> Not sure if we need to do something make sure the reload happen, because it's\n> looks like very rare failure which hasn't happen in last 90 days.\n\nThat does look like a race condition between config reloading and\nnew-backend launching. However, I can't help being suspicious about\nthe fact that we haven't seen this symptom before and now here it is\nbarely a day after 7389aad63 (Use WaitEventSet API for postmaster's\nevent loop). It seems fairly plausible that that did something that\ncauses the postmaster to preferentially process connection-accept ahead\nof SIGHUP. I took a quick look through the code and did not see a\nsmoking gun, but I'm way too tired to be sure I didn't miss something.\n\nIn general, use of WaitEventSet instead of signals will tend to slot\nthe postmaster into non-temporally-ordered event responses in two\nways: (1) the latch.c code will report events happening at more-or-less\nthe same time in a specific order, and (2) the postmaster.c code will\nreact to signal-handler-set flags in a specific order. AFAICS, both\nof those code layers will prioritize latch events ahead of\nconnection-accept events, but did I misread it?\n\nAlso it seems like the various platform-specific code paths in latch.c\ncould diverge as to the priority order of events, which could cause\nannoying platform-specific behavior. 
Not sure there's much to be\ndone there other than to be sensitive to not letting such divergence\nhappen.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 14 Jan 2023 02:55:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BF animal malleefowl reported an failure in 001_password.pl" }, { "msg_contents": "On Sat, Jan 14, 2023 at 8:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com> writes:\n> > I noticed one BF failure[1] when monitoring the BF for some other commit.\n> > [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=malleefowl&dt=2023-01-13%2009%3A54%3A51\n> > ...\n> > So it seems the connection happens before pg_ident.conf is actually reloaded ?\n> > Not sure if we need to do something make sure the reload happen, because it's\n> > looks like very rare failure which hasn't happen in last 90 days.\n>\n> That does look like a race condition between config reloading and\n> new-backend launching. However, I can't help being suspicious about\n> the fact that we haven't seen this symptom before and now here it is\n> barely a day after 7389aad63 (Use WaitEventSet API for postmaster's\n> event loop). It seems fairly plausible that that did something that\n> causes the postmaster to preferentially process connection-accept ahead\n> of SIGHUP. I took a quick look through the code and did not see a\n> smoking gun, but I'm way too tired to be sure I didn't miss something.\n\nYeah, I guess the scheduling might go something like this:\n\n1. kill() runs and sets SIGHUP as pending in the postmaster process;\nthe postmaster is now runnable but not yet running.\n2. Meanwhile connect() starts.\n3. postmaster starts running, sees the pending signal and immediately\nruns the handler, which previously did the actual reload (before doing\nanything else) but now just sets our reload-pending flag and does\nkill(self, SIGURG), and then returns, so epoll_wait() is unblocked.\n4. 
epoll_wait() returns, reporting two events: signalfd ready to read\n(or self-pipe, or EVFILT_SIGNAL), AND connection ready to accept.\n5. Connection happens to be reported first so we accept/fork the\nconnection and reload later.\n\nI think epoll will report fds in the order they became ready\n(undocumented, but apparently well known that it's a kind of FIFO\nlinked list), but that itself is indeterminate, as 2 and 3 race. It\nlooks like melleefowl is slow/overloaded (often ~3 hours to run the\ntests, sometimes ~half and hour and sometimes ~4 hours). Now that I\nthink about it, it's surprising I haven't seen this before though,\nimplying that 3 always beats 2, so maybe I'm missing something else...\n\nBut if that's the general idea, I suppose there would be two ways to\ngive higher priority to signals/latches that arrive in the same set of\nevents: (1) scan the events array twice (for latches then\nconnections), or (2) check our pending flags every time through the\noutput events loop, at the top, even for connection events (ie just\nmove some lines up a bit). 
Probably 2 is the way to go (see also\ndiscussion about whether we should do that anyway, to give priority to\na shutdown request if it arrives while the server is looping over 64\nserver sockets that are all ready to accept).\n\n\n", "msg_date": "Sat, 14 Jan 2023 22:29:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BF animal malleefowl reported an failure in 001_password.pl" }, { "msg_contents": "On Sat, Jan 14, 2023 at 10:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> But if that's the general idea, I suppose there would be two ways to\n> give higher priority to signals/latches that arrive in the same set of\n> events: (1) scan the events array twice (for latches then\n> connections), or (2) check our pending flags every time through the\n> output events loop, at the top, even for connection events (ie just\n> move some lines up a bit).\n\nHere's a sketch of the first idea. I also coded up the second idea\n(basically: when nevents > 1, behave as though the latch has been set\nevery time through the loop, and then also check for\nWL_SOCKET_ACCEPT), but I'm not sure I like it (it's less clear to\nread, harder to explain, and I'm also interested in exploring\nalternative ways to receive signals other than with handlers that set\nthese flags so I'm not sure I like baking in the assumption that we\ncan test the flags without having received a corresponding event).\nI'm going to be AFK for a day or so and I'd like to see if we can\ncollect some more evidence about this and maybe repro first so I'll\nwait for a bit.", "msg_date": "Sun, 15 Jan 2023 00:35:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BF animal malleefowl reported an failure in 001_password.pl" }, { "msg_contents": "On Sun, Jan 15, 2023 at 12:35 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Here's a sketch of the first idea.\n\nTo hit the problem case, the signal needs to 
arrive in between the\nlatch->is_set check and the epoll_wait() call, and the handler needs\nto take a while to get started. (If it arrives before the\nlatch->is_set check we report WL_LATCH_SET immediately, and if it\narrives after the epoll_wait() call begins, we get EINTR and go back\naround to the latch->is_set check.) With some carefully placed sleeps\nto simulate a CPU-starved system (see attached) I managed to get a\nkill-then-connect sequence to produce:\n\n2023-01-17 10:48:32.508 NZDT [555849] LOG: nevents = 2\n2023-01-17 10:48:32.508 NZDT [555849] LOG: events[0] = WL_SOCKET_ACCEPT\n2023-01-17 10:48:32.508 NZDT [555849] LOG: events[1] = WL_LATCH_SET\n2023-01-17 10:48:32.508 NZDT [555849] LOG: received SIGHUP, reloading\nconfiguration files\n\nWith the patch I posted, we process that in the order we want:\n\n2023-01-17 11:06:31.340 NZDT [562262] LOG: nevents = 2\n2023-01-17 11:06:31.340 NZDT [562262] LOG: events[1] = WL_LATCH_SET\n2023-01-17 11:06:31.340 NZDT [562262] LOG: received SIGHUP, reloading\nconfiguration files\n2023-01-17 11:06:31.344 NZDT [562262] LOG: events[0] = WL_SOCKET_ACCEPT\n\nOther thoughts:\n\nAnother idea would be to teach the latch infrastructure itself to\nmagically swap latch events to position 0. 
Latches are usually\nprioritised; it's only in this rare race case that they are not.\n\nOr going the other way, I realise that we're lacking a \"wait for\nreload\" mechanism as discussed in other threads (usually people want\nthis if they care about its effects on backends other than the\npostmaster, where all bets are off and Andres once suggested the\nProcSignalBarrier hammer), and if we ever got something like that it\nmight be another solution to this particular problem.", "msg_date": "Tue, 17 Jan 2023 11:24:23 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BF animal malleefowl reported an failure in 001_password.pl" }, { "msg_contents": "On Tue, Jan 17, 2023 at 11:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Another idea would be to teach the latch infrastructure itself to\n> magically swap latch events to position 0. Latches are usually\n> prioritised; it's only in this rare race case that they are not.\n\nI liked that idea for a while, but I suspect it is not really possible\nto solve the problem completely this way, because it won't work on\nWindows (see below) and the race I described earlier is probably not\nthe only one. I think it must also be possible for poll() to ignore a\nsignal that becomes pending just as the system call begins and return\na socket fd that has also just become ready, without waiting (thus not\ncausing EINTR). Then the handler would run after we return to\nuserspace, we'd see only the socket event, and a later call would see\nthe latch event.\n\nSo I think we probably need something like the attached, which I was\noriginally trying to avoid.\n\nLooking into all that made me notice a related problem on Windows.\nThere's an interesting difference between the implementation of\nselect() in src/backend/port/win32/socket.c and the Windows\nimplementation of WaitEventSetBlock() in latch.c. 
The latch.c code\nonly reports one event at a time, in event array order, because that's\nWaitForMultipleObjects()'s contract and we expose that fairly\ndirectly. The older socket.c code uses that only for wakeup, and then\nit polls *all* sockets to be able to report more than one at a time.\nI was careful to use a large array of output events to preserve the\nexisting round-robin servicing of multiple server sockets, but I see\nnow that that only works on Unix. On Windows, I suspect that one\nsocket receiving a fast enough stream of new connections could prevent\na second socket from ever being serviced. I think we might want to do\nsomething about that.", "msg_date": "Fri, 20 Jan 2023 07:32:39 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BF animal malleefowl reported an failure in 001_password.pl" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> So I think we probably need something like the attached, which I was\n> originally trying to avoid.\n\nYeah, something like that. 
I also wonder if you don't need to think\n> a bit harder about the ordering of the flag checks, in particular\n> it seems like servicing reload_request before child_exit might be\n> a good idea (remembering that child_exit might cause launching of\n> new children, so we want to be up to speed on relevant settings).\n\nAgreed, and done.\n\n\n", "msg_date": "Wed, 25 Jan 2023 15:09:39 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BF animal malleefowl reported an failure in 001_password.pl" } ]
[ { "msg_contents": "Hi,\nI was looking at commit b7ae03953690a1dee455ba3823cc8f71a72cbe1d .\n\nIn `pg_get_publication_tables`, attnums is allocated with size\n`desc->natts`. However, since some columns may be dropped, this size may be\nlarger than necessary.\nWhen `nattnums > 0` is false, there is no need to allocate the `attnums`\narray. In the current form, `attnums` should be freed in this scenario.\n\nPlease take a look at the patch which moves the allocation to inside the\n`if (nattnums > 0)` block.\n\nThanks", "msg_date": "Fri, 13 Jan 2023 07:37:29 -0800", "msg_from": "Ted Yu <yuzhihong@gmail.com>", "msg_from_op": true, "msg_subject": "properly sizing attnums in pg_get_publication_tables" }, { "msg_contents": "On Fri, Jan 13, 2023 at 07:37:29AM -0800, Ted Yu wrote:\n> Hi,\n> I was looking at commit b7ae03953690a1dee455ba3823cc8f71a72cbe1d .\n> \n> In `pg_get_publication_tables`, attnums is allocated with size\n> `desc->natts`. However, since some columns may be dropped, this size may be\n> larger than necessary.\n> When `nattnums > 0` is false, there is no need to allocate the `attnums`\n> array. In the current form, `attnums` should be freed in this scenario.\n> \n> Please take a look at the patch which moves the allocation to inside the\n> `if (nattnums > 0)` block.\n> \n> Thanks\n\nIt doesn't seem worth the bother of changing it or adding 10 lines of\ncode, or keeping track of whether \"attnums\" is initialized or not.\n\nAfter all, it wasn't worth pfree()ing the array (which seems to work as\nintended). The array can't be larger than ~3200 bytes, and I doubt\nanybody is going to be excited about saving a couple bytes per dropped\ncolumn.\n\n-- \nJustin", "msg_date": "Fri, 13 Jan 2023 13:26:26 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: properly sizing attnums in pg_get_publication_tables" } ]
[ { "msg_contents": "Over at [1] there was some discussion of moving knowledge of what's\nrequired to be fixed from old branch repos to be able to upgrade them\ninto the core code, instead of having it reside in a buildfarm client\nmodule.\n\nHere's a piece of WIP for that, in the form of a perl module that\nprovides a function that takes an old version number / tag and provides\nthe set of sql statements that need to be run to make the old repo\nupgradeable. It still needs a good deal of polish, but it's a start.\n\nThe advantage is that it makes it far less likely that the buildfarm\nmaintainer (i.e. me for now) is a bottleneck in fixing issues that arise\nfrom development. This is by far the biggest area where we have seen\nbuildfarm breakage for cross version upgrade testing.\n\n\ncheers\n\n\nandrew\n\n\n[1]  https://postgr.es/m/951602.1673535249@sss.pgh.pa.us\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Fri, 13 Jan 2023 17:20:41 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Fixes required for cross version update testing" }, { "msg_contents": "On Fri, Jan 13, 2023 at 05:20:41PM -0500, Andrew Dunstan wrote:\n> Over at [1] there was some discussion of moving knowledge of what's\n> required to be fixed from old branch repos to be able to upgrade them\n> into the core code, instead of having it reside in a buildfarm client\n> module.\n\nIs this instead of the idea for the buildfarm to use the same SQL script\nas the TAP test (upgrade_adapt.sql) ?\n\nDiscussed various places:\n\nhttps://www.postgresql.org/message-id/flat/1575064.1615060903@sss.pgh.pa.us\n\nhttps://github.com/PGBuildFarm/client-code/pull/23\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0df9641d39057f431655b92b8a490b89c508a0b3\n| The long-term plan is to make the buildfarm code re-use this new SQL\n| file, so as committers are able to fix any compatibility issues in the\n| tests of pg_upgrade with a 
refresh of the core code, without having to\n| poke at the buildfarm client. Note that this is only able to handle the\n| main regression test suite, and that nothing is done yet for contrib\n| modules yet (these have more issues like their database names).\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9814ff550046f825b751803191b29a2fbbc79283\n\n-- \nJustin", "msg_date": "Fri, 13 Jan 2023 18:33:38 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Fixes required for cross version update testing" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Here's a piece of WIP for that, in the form of a perl module that\n> provides a function that takes an old version number / tag and provides\n> the set of sql statements that need to be run to make the old repo\n> upgradeable. It still needs a good deal of polish, but it's a start.\n\nOh! I've been hacking on exactly the same idea all day ...\n\nhttps://www.postgresql.org/message-id/891521.1673657296%40sss.pgh.pa.us\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 13 Jan 2023 19:49:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fixes required for cross version update testing" }, { "msg_contents": "\nOn 2023-01-13 Fr 19:49, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Here's a piece of WIP for that, in the form of a perl module that\n>> provides a function that takes an old version number / tag and provides\n>> the set of sql statements that need to be run to make the old repo\n>> upgradeable. It still needs a good deal of polish, but it's a start.\n> Oh! I've been hacking on exactly the same idea all day ...\n>\n> https://www.postgresql.org/message-id/891521.1673657296%40sss.pgh.pa.us\n>\n> \t\t\t\n\n\n\nOh, sorry if I have wasted some of your time. 
I posted my outline idea\nand then it itched so I scratched.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 13 Jan 2023 19:57:25 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Fixes required for cross version update testing" }, { "msg_contents": "\nOn 2023-01-13 Fr 19:33, Justin Pryzby wrote:\n> On Fri, Jan 13, 2023 at 05:20:41PM -0500, Andrew Dunstan wrote:\n>> Over at [1] there was some discussion of moving knowledge of what's\n>> required to be fixed from old branch repos to be able to upgrade them\n>> into the core code, instead of having it reside in a buildfarm client\n>> module.\n> Is this instead of the idea for the buildfarm to use the same SQL script\n> as the TAP test (upgrade_adapt.sql) ?\n>\n> Discussed various places:\n>\n> https://www.postgresql.org/message-id/flat/1575064.1615060903@sss.pgh.pa.us\n>\n> https://github.com/PGBuildFarm/client-code/pull/23\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=0df9641d39057f431655b92b8a490b89c508a0b3\n> | The long-term plan is to make the buildfarm code re-use this new SQL\n> | file, so as committers are able to fix any compatibility issues in the\n> | tests of pg_upgrade with a refresh of the core code, without having to\n> | poke at the buildfarm client. 
Note that this is only able to handle the\n> | main regression test suite, and that nothing is done yet for contrib\n> | modules yet (these have more issues like their database names).\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=9814ff550046f825b751803191b29a2fbbc79283\n>\n\nI didn't adopt the PR precisely because it didn't do enough, unlike the\nmodule I posted, which supports upgrades all the way from 9.2 forward,\nand for more databases than just regression.\n\nI frankly think this is a better approach.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 13 Jan 2023 20:03:59 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Re: Fixes required for cross version update testing" } ]
[ { "msg_contents": "This is a followup to the discussion at [1], in which we agreed that\nit's time to fix the buildfarm client so that knowledge about\ncross-version discrepancies in pg_dump output can be moved into\nthe community git tree, making it feasible for people other than\nAndrew to fix problems when we change things of that sort.\nThe idea is to create helper files that live in the git tree and\nare used by the BF client to perform the activities that are likely\nto need tweaking.\n\nAttached are two patches, one for PG git and one for the buildfarm\nclient, that create a working POC for this approach. I've only\ncarried this as far as making a helper file for HEAD, but I believe\nthat helper files for the back branches would mostly just need to\nbe cut-down versions of this one. I've tested it successfully with\ncross-version upgrade tests down to 9.3. (9.2 would need some more\nwork, and I'm not sure if it's worth the trouble --- are we going to\nretire 9.2 soon?)\n\nI'm a very mediocre Perl programmer, so I'm sure there are stylistic\nand other problems, but I'm encouraged that this seems feasible.\n\nAlso, I wonder if we can't get rid of\nsrc/bin/pg_upgrade/upgrade_adapt.sql in favor of using this code.\nI tried to write adjust_database_contents() in such a way that it\ncould be pointed at a database by some other Perl code that's\nnot the buildfarm client.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/951602.1673535249%40sss.pgh.pa.us", "msg_date": "Fri, 13 Jan 2023 19:48:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-13 Fr 19:48, Tom Lane wrote:\n> This is a followup to the discussion at [1], in which we agreed that\n> it's time to fix the buildfarm client so that knowledge about\n> cross-version discrepancies in pg_dump output can be moved into\n> the community git tree, making 
it feasible for people other than\n> Andrew to fix problems when we change things of that sort.\n> The idea is to create helper files that live in the git tree and\n> are used by the BF client to perform the activities that are likely\n> to need tweaking.\n>\n> Attached are two patches, one for PG git and one for the buildfarm\n> client, that create a working POC for this approach. I've only\n> carried this as far as making a helper file for HEAD, but I believe\n> that helper files for the back branches would mostly just need to\n> be cut-down versions of this one. I've tested it successfully with\n> cross-version upgrade tests down to 9.3. (9.2 would need some more\n> work, and I'm not sure if it's worth the trouble --- are we going to\n> retire 9.2 soon?)\n>\n> I'm a very mediocre Perl programmer, so I'm sure there are stylistic\n> and other problems, but I'm encouraged that this seems feasible.\n>\n> Also, I wonder if we can't get rid of\n> src/bin/pg_upgrade/upgrade_adapt.sql in favor of using this code.\n> I tried to write adjust_database_contents() in such a way that it\n> could be pointed at a database by some other Perl code that's\n> not the buildfarm client.\n>\n> \t\t\tregards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/951602.1673535249%40sss.pgh.pa.us\n\n\nOK, we've been on parallel tracks (sorry about that). Let's run with\nyours, as it covers more ground.\n\nOne thing I would change is that your adjust_database_contents tries to\nmake the adjustments rather than passing back a set of statements. We\ncould make that work, although your attempt won't really work for the\nbuildfarm, but I would just make actually performing the adjustments the\nclient's responsibility. That would make for much less disturbance in\nthe buildfarm code.\n\nI also tried to remove a lot of the ugly release tag processing,\nleveraging our PostgreSQL::Version gadget. 
I think that's worthwhile too.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sat, 14 Jan 2023 09:49:12 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-13 Fr 19:48, Tom Lane wrote:\n>> Attached are two patches, one for PG git and one for the buildfarm\n>> client, that create a working POC for this approach.\n\n> OK, we've been on parallel tracks (sorry about that). Let's run with\n> yours, as it covers more ground.\n\nCool.\n\n> One thing I would change is that your adjust_database_contents tries to\n> make the adjustments rather than passing back a set of statements.\n\nAgreed. I'd thought maybe adjust_database_contents would need to\nactually interact with the target DB; but experience so far says\nthat IF EXISTS conditionality is sufficient, so we can just build\na static list of statements to issue. It's definitely a simpler\nAPI that way.\n\n> I also tried to remove a lot of the ugly release tag processing,\n> leveraging our PostgreSQL::Version gadget. I think that's worthwhile too.\n\nOK, I'll take a look at that and make a new draft.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 14 Jan 2023 10:47:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "I wrote:\n> OK, I'll take a look at that and make a new draft.\n\nHere's version 2, incorporating your suggestions and with some\nfurther work to make it handle 9.2 fully. 
I think this could\nbe committable so far as HEAD is concerned, though I still\nneed to make versions of AdjustUpgrade.pm for the back branches.\n\nI tried to use this to replace upgrade_adapt.sql, but failed so\nfar because I couldn't figure out exactly how you're supposed\nto use 002_pg_upgrade.pl with an old source installation.\nIt's not terribly well documented. In any case I think we\nneed a bit more thought about that, because it looks like\n002_pg_upgrade.pl thinks that you can supply any random dump\nfile to serve as the initial state of the old installation;\nbut neither what I have here nor any likely contents of\nupgrade_adapt.sql or the \"custom filter\" rules are going to\nwork on databases that aren't just the standard regression\ndatabase(s) of the old version.\n\nI assume we should plan on reverting 9814ff550 (Add custom filtering\nrules to the TAP tests of pg_upgrade)? Does that have any\nplausible use that's not superseded by this patchset?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 14 Jan 2023 15:06:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-14 Sa 15:06, Tom Lane wrote:\n> I wrote:\n>> OK, I'll take a look at that and make a new draft.\n> Here's version 2, incorporating your suggestions and with some\n> further work to make it handle 9.2 fully. I think this could\n> be committable so far as HEAD is concerned, though I still\n> need to make versions of AdjustUpgrade.pm for the back branches.\n\n\nThis looks pretty good to me.\n\nI'll probably change this line\n\n   my $adjust_cmds = adjust_database_contents($oversion, %dbnames);\n\nso it's only called if the old and new versions are different. 
Is there\nany case where a repo shouldn't be upgradeable to its own version\nwithout adjustment?\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 15 Jan 2023 08:21:21 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-14 Sa 15:06, Tom Lane wrote:\n>> Here's version 2, incorporating your suggestions and with some\n>> further work to make it handle 9.2 fully.\n\n> This looks pretty good to me.\n\nGreat! I'll work on making back-branch versions.\n\n> I'll probably change this line\n>    my $adjust_cmds = adjust_database_contents($oversion, %dbnames);\n> so it's only called if the old and new versions are different. Is there\n> any case where a repo shouldn't be upgradeable to its own version\n> without adjustment?\n\nMakes sense. I'd keep the check for $oversion eq 'HEAD' in the\nsubroutines, but that's mostly just to protect the version\nconversion code below it.\n\nAnother thing I was just thinking about was not bothering to run\n\"diff\" if the fixed dump strings are equal in-memory. You could\ntake that even further and not write out the fixed files at all,\nbut that seems like a bad idea for debuggability of the adjustment\nsubroutines. 
However, I don't see why we need to write an\nempty diff file, nor parse it.\n\nOne other question before I continue --- do the adjustment\nsubroutines need to worry about Windows newlines in the strings?\nIt's not clear to me whether Perl will automatically make \"\\n\"\nin a pattern match \"\\r\\n\", or whether it's not a problem because\nsomething upstream will have stripped \\r's.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 15 Jan 2023 11:01:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On Sat, Jan 14, 2023 at 03:06:06PM -0500, Tom Lane wrote:\n> I tried to use this to replace upgrade_adapt.sql, but failed so\n> far because I couldn't figure out exactly how you're supposed\n> to use 002_pg_upgrade.pl with an old source installation.\n> It's not terribly well documented.\n\nAs in pg_upgrade's TESTING or the comments of the tests?\n\n> In any case I think we\n> need a bit more thought about that, because it looks like\n> 002_pg_upgrade.pl thinks that you can supply any random dump\n> file to serve as the initial state of the old installation;\n> but neither what I have here nor any likely contents of\n> upgrade_adapt.sql or the \"custom filter\" rules are going to\n> work on databases that aren't just the standard regression\n> database(s) of the old version.\n\nYeah, this code needs an extra push that I have not been able to\nfigure out yet, as we could recommend the creation of a dump with\ninstallcheck-world and USE_MODULE_DB=1. Perhaps a module is just\nbetter at the end.\n\n> I assume we should plan on reverting 9814ff550 (Add custom filtering\n> rules to the TAP tests of pg_upgrade)? Does that have any\n> plausible use that's not superseded by this patchset?\n\nNope, this could just be removed if we finish by adding a module that\ndoes exactly the same work. 
If you are planning on committing the\nmodule you have, I'd be happy to take care of a revert for this part.\n\n+ # Can't upgrade aclitem in user tables from pre 16 to 16+.\n+ _add_st($result, 'regression',\n+ 'alter table public.tab_core_types drop column aclitem');\nCould you just use a DO block here to detect tables with such\nattributes, like in upgrade_adapt.sql, rather than dropping the table\nfrom the core tests? That's more consistent with the treatment of\nWITH OIDS.\n\nIs this module pluggable with 002_pg_upgrade.pl?\n--\nMichael", "msg_date": "Mon, 16 Jan 2023 07:51:07 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Jan 14, 2023 at 03:06:06PM -0500, Tom Lane wrote:\n> + # Can't upgrade aclitem in user tables from pre 16 to 16+.\n> + _add_st($result, 'regression',\n> + 'alter table public.tab_core_types drop column aclitem');\n\n> Could you just use a DO block here to detect tables with such\n> attributes, like in upgrade_adapt.sql, rather than dropping the table\n> from the core tests? That's more consistent with the treatment of\n> WITH OIDS.\n\nI guess, but it seems like make-work as long as there's just the one\ncolumn.\n\n> Is this module pluggable with 002_pg_upgrade.pl?\n\nI did find that 002_pg_upgrade.pl could load it. I got stuck at\nthe point of trying to test things, because I didn't understand\nwhat the test process is supposed to be for an upgrade from a\nback branch. 
For some reason I thought that 002_pg_upgrade.pl\ncould automatically create the old regression database, but\nnow I see that's not implemented.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 15 Jan 2023 18:02:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-15 Su 11:01, Tom Lane wrote:\n> Another thing I was just thinking about was not bothering to run\n> \"diff\" if the fixed dump strings are equal in-memory. You could\n> take that even further and not write out the fixed files at all,\n> but that seems like a bad idea for debuggability of the adjustment\n> subroutines. However, I don't see why we need to write an\n> empty diff file, nor parse it.\n\n\nYeah, that makes sense.\n\n> One other question before I continue --- do the adjustment\n> subroutines need to worry about Windows newlines in the strings?\n> It's not clear to me whether Perl will automatically make \"\\n\"\n> in a pattern match \"\\r\\n\", or whether it's not a problem because\n> something upstream will have stripped \\r's.\n>\n> \t\t\t\n\n\nI don't think we need to worry about them, but I will have a closer\nlook. Those replacement lines are very difficult to read. I think use of\nextended regexes and some multi-part replacements would help. I'll have\na go at that tomorrow.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Sun, 15 Jan 2023 18:12:18 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> Those replacement lines are very difficult to read. I think use of\n> extended regexes and some multi-part replacements would help. 
I'll have\n> a go at that tomorrow.\n\nYeah, after I wrote that code I remembered about \\Q ... \\E, which would\neliminate the need for most of the backslashes and probably make things\nbetter that way. I didn't get around to improving it yet though, so\nfeel free to have a go.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 15 Jan 2023 18:37:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On Sun, Jan 15, 2023 at 06:02:07PM -0500, Tom Lane wrote:\n> I guess, but it seems like make-work as long as there's just the one\n> column.\n\nWell, the query is already written, so I would use that, FWIW.\n\n> I did find that 002_pg_upgrade.pl could load it. I got stuck at\n> the point of trying to test things, because I didn't understand\n> what the test process is supposed to be for an upgrade from a\n> back branch. For some reason I thought that 002_pg_upgrade.pl\n> could automatically create the old regression database, but\n> now I see that's not implemented.\n\ntest.sh did that, until I noticed that we need to worry about\npg_regress from the past branches to be compatible in the script\nitself because we need to run it in the old source tree. This makes\nthe whole much more complicated to maintain, especially with the\nrecent removal of input/ and output/ folders in the regression tests\n:/ \n--\nMichael", "msg_date": "Mon, 16 Jan 2023 08:38:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-01-15 Su 18:37, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> Those replacement lines are very difficult to read. I think use of\n>> extended regexes and some multi-part replacements would help. 
I'll have\n>> a go at that tomorrow.\n> Yeah, after I wrote that code I remembered about \\Q ... \\E, which would\n> eliminate the need for most of the backslashes and probably make things\n> better that way. I didn't get around to improving it yet though, so\n> feel free to have a go.\n>\n> \t\t\t\n\n\nOK, here's my version. It tests clean against all of crake's dump files\nback to 9.2.\n\nTo some extent it's a matter of taste, but I hate very long regex lines\n- it makes it very hard to see what's actually changing, so I broke up\nmost of those.\n\nGiven that we are looking at newlines in some places I decided that\nafter all it was important to convert CRLF to LF.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 16 Jan 2023 14:08:38 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> OK, here's my version. 
It tests clean against all of crake's dump files\n> back to 9.2.\n> To some extent it's a matter of taste, but I hate very long regex lines\n> - it makes it very hard to see what's actually changing, so I broke up\n> most of those.\n\nI don't mind breaking things up, but I'm not terribly excited about\nmaking the patterns looser, as you've done in some places like\n\n \tif ($old_version < 14)\n \t{\n \t\t# Remove mentions of extended hash functions.\n-\t\t$dump =~\n-\t\t s/^(\\s+OPERATOR 1 =\\(integer,integer\\)) ,\\n\\s+FUNCTION 2 \\(integer, integer\\) public\\.part_hashint4_noop\\(integer,bigint\\);/$1;/mg;\n-\t\t$dump =~\n-\t\t s/^(\\s+OPERATOR 1 =\\(text,text\\)) ,\\n\\s+FUNCTION 2 \\(text, text\\) public\\.part_hashtext_length\\(text,bigint\\);/$1;/mg;\n+\t\t$dump =~ s {(^\\s+OPERATOR\\s1\\s=\\((?:integer,integer|text,text)\\))\\s,\\n\n+ \\s+FUNCTION\\s2\\s.*?public.part_hash.*?;}\n+\t\t\t\t {$1;}mxg;\n \t}\n\nI don't think that's any easier to read, and it risks masking\ndiffs that we'd wish to know about.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 16 Jan 2023 14:34:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-16 Mo 14:34, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> OK, here's my version. 
It tests clean against all of crake's dump files\n>> back to 9.2.\n>> To some extent it's a matter of taste, but I hate very long regex lines\n>> - it makes it very hard to see what's actually changing, so I broke up\n>> most of those.\n> I don't mind breaking things up, but I'm not terribly excited about\n> making the patterns looser, as you've done in some places like\n>\n> \tif ($old_version < 14)\n> \t{\n> \t\t# Remove mentions of extended hash functions.\n> -\t\t$dump =~\n> -\t\t s/^(\\s+OPERATOR 1 =\\(integer,integer\\)) ,\\n\\s+FUNCTION 2 \\(integer, integer\\) public\\.part_hashint4_noop\\(integer,bigint\\);/$1;/mg;\n> -\t\t$dump =~\n> -\t\t s/^(\\s+OPERATOR 1 =\\(text,text\\)) ,\\n\\s+FUNCTION 2 \\(text, text\\) public\\.part_hashtext_length\\(text,bigint\\);/$1;/mg;\n> +\t\t$dump =~ s {(^\\s+OPERATOR\\s1\\s=\\((?:integer,integer|text,text)\\))\\s,\\n\n> + \\s+FUNCTION\\s2\\s.*?public.part_hash.*?;}\n> +\t\t\t\t {$1;}mxg;\n> \t}\n>\n> I don't think that's any easier to read, and it risks masking\n> diffs that we'd wish to know about.\n\n\n\nOK, I'll make another pass and tighten things up.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 16 Jan 2023 15:59:37 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-16 Mo 14:34, Tom Lane wrote:\n>> I don't think that's any easier to read, and it risks masking\n>> diffs that we'd wish to know about.\n\n> OK, I'll make another pass and tighten things up.\n\nDon't sweat it, I'm just working the bugs out of a new version.\nI'll have something to post shortly, I hope.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Jan 2023 16:00:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade 
knowledge from buildfarm client" }, { "msg_contents": "OK, here's a v4:\n\n* It works with 002_pg_upgrade.pl now. The only substantive change\nI had to make for that was to define the $old_version arguments as\nbeing always PostgreSQL::Version objects not strings, because\notherwise I got complaints like\n\nArgument \"HEAD\" isn't numeric in numeric comparison (<=>) at /home/postgres/pgsql/src/bin/pg_upgrade/../../../src/test/perl/PostgreSQL/Version.pm line 130.\n\nSo now TestUpgradeXversion.pm is responsible for performing that\nconversion, and also for not doing any conversions on HEAD (which\nAndrew wanted anyway).\n\n* I improved pg_upgrade's TESTING directions after figuring out how\nto get it to work for contrib modules.\n\n* Incorporated (most of) Andrew's stylistic improvements.\n\n* Simplified TestUpgradeXversion.pm's use of diff, as discussed.\n\nI think we're about ready to go, except for cutting down\nAdjustUpgrade.pm to make versions to put in the back branches.\n\nI'm slightly tempted to back-patch 002_pg_upgrade.pl so that there\nis an in-tree way to verify back-branch AdjustUpgrade.pm files.\nOn the other hand, it's hard to believe that testing that in\nHEAD won't be sufficient; I doubt the back-branch copies will\nneed to change much.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 16 Jan 2023 16:48:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "I wrote:\n> I think we're about ready to go, except for cutting down\n> AdjustUpgrade.pm to make versions to put in the back branches.\n\nHmmm ... 
so upon trying to test in the back branches, I soon\ndiscovered that PostgreSQL/Version.pm isn't there before v15.\n\nI don't see a good reason why we couldn't back-patch it, though.\nAny objections?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 16 Jan 2023 18:11:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-16 Mo 18:11, Tom Lane wrote:\n> I wrote:\n>> I think we're about ready to go, except for cutting down\n>> AdjustUpgrade.pm to make versions to put in the back branches.\n> Hmmm ... so upon trying to test in the back branches, I soon\n> discovered that PostgreSQL/Version.pm isn't there before v15.\n>\n> I don't see a good reason why we couldn't back-patch it, though.\n> Any objections?\n>\n> \t\t\t\n\n\nNo, that seems perfectly reasonable.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 16 Jan 2023 19:46:04 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "I've pushed the per-branch AdjustUpgrade.pm files and tested by performing\na fresh round of buildfarm runs with the patched TestUpgradeXversion.pm\nfile. I think we're in good shape with this project.\n\nI dunno if we want to stretch buildfarm owners' patience with yet\nanother BF client release right now. 
On the other hand, I'm antsy\n> to see if we can un-revert 1b4d280ea after doing a little more\n> work in AdjustUpgrade.pm.\n>\n> \t\t\t\n\n\nIt looks like the only animals doing the cross version tests are crake,\ndrongo and fairywren. 
These are all mine, so I don't think we need to do\na new release for this.\n\nI think the next step is to push the buildfarm client changes, and\nupdate those three animals to use it, and make sure nothing breaks. I'll\ngo and do those things now. Then you should be able to try your unrevert.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 17 Jan 2023 08:35:02 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-16 Mo 21:58, Tom Lane wrote:\n>> I dunno if we want to stretch buildfarm owners' patience with yet\n>> another BF client release right now. On the other hand, I'm antsy\n>> to see if we can un-revert 1b4d280ea after doing a little more\n>> work in AdjustUpgrade.pm.\n\n> It looks like the only animals doing the cross version tests crake,\n> drongo and fairywren. These are all mine, so I don't think we need to do\n> a new release for this.\n\ncopperhead, kittiwake, snapper, and tadarida were running them\nuntil fairly recently.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Jan 2023 10:18:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-17 Tu 10:18, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2023-01-16 Mo 21:58, Tom Lane wrote:\n>>> I dunno if we want to stretch buildfarm owners' patience with yet\n>>> another BF client release right now. On the other hand, I'm antsy\n>>> to see if we can un-revert 1b4d280ea after doing a little more\n>>> work in AdjustUpgrade.pm.\n>> It looks like the only animals doing the cross version tests crake,\n>> drongo and fairywren. 
These are all mine, so I don't think we need to do\n>> a new release for this.\n> copperhead, kittiwake, snapper, and tadarida were running them\n> until fairly recently.\n>\n> \t\t\t\n\n\nAh, yes, true, I didn't look far enough back.\n\nThe new file can be downloaded from\n<https://raw.githubusercontent.com/PGBuildFarm/client-code/75efff0fbd70ca89b097593824911ab6ccbd258f/PGBuild/Modules/TestUpgradeXversion.pm>\n- it's a dropin replacement.\n\nFYI crake has just passed the test with flying colours.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 17 Jan 2023 11:04:47 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> FYI crake has just passed the test with flying colours.\n\nCool. I await the Windows machines' results with interest.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Jan 2023 11:30:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-16 Mo 21:58, Tom Lane wrote:\n>> I dunno if we want to stretch buildfarm owners' patience with yet\n>> another BF client release right now. On the other hand, I'm antsy\n>> to see if we can un-revert 1b4d280ea after doing a little more\n>> work in AdjustUpgrade.pm.\n\n> I think the next step is to push the buildfarm client changes, and\n> update those three animals to use it, and make sure nothing breaks. I'll\n> go and do those things now. Then you should be able to try your unrevert.\n\nIt looks like unrevert will require ~130 lines in AdjustUpgrade.pm,\nwhich is not great but not awful either. 
I think this is ready to\ngo once you've vetted your remaining buildfarm animals.\n\n\t\t\tregards, tom lane", "msg_date": "Tue, 17 Jan 2023 16:12:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-17 Tu 11:30, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> FYI crake has just passed the test with flying colours.\n> Cool. I await the Windows machines' results with interest.\n\n\nfairywren and drongo are clean except for fairywren upgrading 9.6 to 11.\nThis appears to be a longstanding issue that the fuzz processing was\ncausing us to ignore. See for example\n<https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2022-09-01%2018%3A27%3A28&stg=xversion-upgrade-REL_10_STABLE-REL_11_STABLE>\n\nIt's somewhat interesting that this doesn't appear to be an issue with\nthe MSVC builds on drongo. And it disappears when upgrading to release\n12 or later where we use the extra-float-digits=0 hack.\n\nI propose to add this to just the release 11 AdjustUpgrade.pm:\n\n\n    # float4 values in this table on Msys can have precision differences\n    # in representation between old and new versions\n    if ($old_version < 10 && $dbnames{contrib_regression_btree_gist} &&\n        $^O eq 'msys')\n    {\n        _add_st($result, 'contrib_regression_btree_gist',\n                'drop table if exists float4tmp');\n    }\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 07:36:37 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> fairywren and drongo are clean except for fairywren upgrading 9.6 to 11.\n> This appears to be a longstanding issue that the 
fuzz processing was\n> causing us to ignore. See for example\n> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2022-09-01%2018%3A27%3A28&stg=xversion-upgrade-REL_10_STABLE-REL_11_STABLE>\n\nInteresting. I suspected that removing the fuzz allowance would teach\nus some things we hadn't known about.\n\n> I propose to add this to just the release 11 AdjustUpgrade.pm:\n>     # float4 values in this table on Msys can have precision differences\n>     # in representation between old and new versions\n>     if ($old_version < 10 && $dbnames{contrib_regression_btree_gist} &&\n>         $^O eq 'msys')\n>     {\n>         _add_st($result, 'contrib_regression_btree_gist',\n>                 'drop table if exists float4tmp');\n>     }\n\nSeems reasonable (but I wonder if you don't need \"$old_version < 11\").\nA nicer answer would be to apply --extra-float-digits=0 across the\nboard, but pre-v12 pg_dump lacks that switch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Jan 2023 10:33:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "One more thing before we move on from this topic. I'd been testing\nmodified versions of the AdjustUpgrade.pm logic by building from a\n--from-source source tree, which seemed way easier than dealing\nwith a private git repo. As it stands, TestUpgradeXversion.pm\nrefuses to run under $from_source, but I just diked that check out\nand it seemed to work fine for my purposes. 
Now, that's going to be\na regular need going forward, so I'd like to not need a hacked version\nof the BF client code to do it.\n\nAlso, your committed version of TestUpgradeXversion.pm breaks that\nuse-case because you did\n\n- unshift(@INC, \"$self->{pgsql}/src/test/perl\");\n+ unshift(@INC, \"$self->{buildroot}/$this_branch/pgsql/src/test/perl\");\n\nwhich AFAICS is an empty directory in a $from_source run.\n\nI suppose that the reason for not running under $from_source is to\navoid corrupting the saved installations with unofficial versions.\nHowever, couldn't we skip the \"save\" step and still run the upgrade\ntests against whatever we have saved? (Maybe skip the same-version\ntest, as it's not quite reflecting any real case then.)\n\nHere's a quick draft patch showing what I have in mind. There may\nwell be a better way to deal with the wheres-the-source issue than\nwhat is in hunk 2. Also, I didn't reindent the unchanged code in\nsub installcheck, and I didn't add anything about skipping\nsame-version tests.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 18 Jan 2023 14:32:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-18 We 14:32, Tom Lane wrote:\n> One more thing before we move on from this topic. I'd been testing\n> modified versions of the AdjustUpgrade.pm logic by building from a\n> --from-source source tree, which seemed way easier than dealing\n> with a private git repo. As it stands, TestUpgradeXversion.pm\n> refuses to run under $from_source, but I just diked that check out\n> and it seemed to work fine for my purposes. 
Now, that's going to be\n> a regular need going forward, so I'd like to not need a hacked version\n> of the BF client code to do it.\n>\n> Also, your committed version of TestUpgradeXversion.pm breaks that\n> use-case because you did\n>\n> - unshift(@INC, \"$self->{pgsql}/src/test/perl\");\n> + unshift(@INC, \"$self->{buildroot}/$this_branch/pgsql/src/test/perl\");\n>\n> which AFAICS is an empty directory in a $from_source run.\n>\n> I suppose that the reason for not running under $from_source is to\n> avoid corrupting the saved installations with unofficial versions.\n> However, couldn't we skip the \"save\" step and still run the upgrade\n> tests against whatever we have saved? (Maybe skip the same-version\n> test, as it's not quite reflecting any real case then.)\n>\n> Here's a quick draft patch showing what I have in mind. There may\n> well be a better way to deal with the wheres-the-source issue than\n> what is in hunk 2. Also, I didn't reindent the unchanged code in\n> sub installcheck, and I didn't add anything about skipping\n> same-version tests.\n\n\nNo that won't work if we're using vpath builds (which was why I changed\nit from what you had). 
$self->{pgsql} is always the build directory.\n\nSomething like this should do it:\n\n\nmy $source_tree = $from_source || \"$self->{buildroot}/$this_branch/pgsql\";\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 16:05:51 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> On 2023-01-18 We 14:32, Tom Lane wrote:\n>> I suppose that the reason for not running under $from_source is to\n>> avoid corrupting the saved installations with unofficial versions.\n>> However, couldn't we skip the \"save\" step and still run the upgrade\n>> tests against whatever we have saved? (Maybe skip the same-version\n>> test, as it's not quite reflecting any real case then.)\n\n> Something like this should do it:\n> my $source_tree = $from_source || \"$self->{buildroot}/$this_branch/pgsql\";\n\nAh, I didn't understand that $from_source is a path not just a bool.\n\nWhat do you think about the above questions? Is this $from_source\nexclusion for the reason I guessed, or some other one?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Jan 2023 16:14:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-18 We 16:14, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> On 2023-01-18 We 14:32, Tom Lane wrote:\n>>> I suppose that the reason for not running under $from_source is to\n>>> avoid corrupting the saved installations with unofficial versions.\n>>> However, couldn't we skip the \"save\" step and still run the upgrade\n>>> tests against whatever we have saved? 
(Maybe skip the same-version\n>>> test, as it's not quite reflecting any real case then.)\n>> Something like this should do it:\n>> my $source_tree = $from_source || \"$self->{buildroot}/$this_branch/pgsql\";\n> Ah, I didn't understand that $from_source is a path not just a bool.\n>\n> What do you think about the above questions? Is this $from_source\n> exclusion for the reason I guessed, or some other one?\n>\n> \t\t\t\n\nYes, the reason is that, unlike almost everything else in the buildfarm,\ncross version upgrade testing requires saved state (binaries and data\ndirectory), and we don't want from-source builds corrupting that state.\n\nI think we can do what you want but it's a bit harder than what you've\ndone. If we're not going to save the current run's product then we need\nto run the upgrade test from a different directory (probably directly in\n\"$buildroot/$this_branch/inst\"). Otherwise we'll be testing upgrade to\nthe saved product of a previous run of this branch. I'll take a stab at\nit tomorrow if you like.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 17:08:28 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> I think we can do what you want but it's a bit harder than what you've\n> done. If we're not going to save the current run's product then we need\n> to run the upgrade test from a different directory (probably directly in\n> \"$buildroot/$this_branch/inst\"). 
Otherwise we'll be testing upgrade to\n> the saved product of a previous run of this branch.\n\nHmm, maybe that explains some inconsistent results I remember getting.\n\n> I'll take a stab at it tomorrow if you like.\n\nPlease do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 18 Jan 2023 17:14:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-18 We 17:14, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> I think we can do what you want but it's a bit harder than what you've\n>> done. If we're not going to save the current run's product then we need\n>> to run the upgrade test from a different directory (probably directly in\n>> \"$buildroot/$this_branch/inst\"). Otherwise we'll be testing upgrade to\n>> the saved product of a previous run of this branch.\n> Hmm, maybe that explains some inconsistent results I remember getting.\n>\n>> I'll take a stab at it tomorrow if you like.\n> Please do.\n>\n> \t\t\t\n\n\nSee\n<https://github.com/PGBuildFarm/client-code/commit/9415e1bd415e8c12ad009296eefc4c609ed9f533>\n\n\nI tested it and it seems to be doing the right thing.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 14:31:25 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "Andrew Dunstan <andrew@dunslane.net> writes:\n> See\n> <https://github.com/PGBuildFarm/client-code/commit/9415e1bd415e8c12ad009296eefc4c609ed9f533>\n> I tested it and it seems to be doing the right thing.\n\nYeah, seems to do what I want. 
Thanks!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 16:38:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "\nOn 2023-01-18 We 10:33, Tom Lane wrote:\n> Andrew Dunstan <andrew@dunslane.net> writes:\n>> fairwren and drongo are clean except for fairywren upgrading 9.6 to 11.\n>> This appears to be a longstanding issue that the fuzz processing was\n>> causing us to ignore. See for example\n>> <https://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=fairywren&dt=2022-09-01%2018%3A27%3A28&stg=xversion-upgrade-REL_10_STABLE-REL_11_STABLE>\n> Interesting. I suspected that removing the fuzz allowance would teach\n> us some things we hadn't known about.\n>\n>> I propose to add this to just the release 11 AdjustUpgrade.pm:\n>>     # float4 values in this table on Msys can have precision differences\n>>     # in representation between old and new versions\n>>     if ($old_version < 10 && $dbnames{contrib_regression_btree_gist} &&\n>>         $^O eq 'msys')\n>>     {\n>>         _add_st($result, 'contrib_regression_btree_gist',\n>>                 'drop table if exists float4tmp');\n>>     }\n> Seems reasonable (but I wonder if you don't need \"$old_version < 11\").\n> A nicer answer would be to apply --extra-float-digits=0 across the\n> board, but pre-v12 pg_dump lacks that switch.\n>\n> \t\t\t\n\n\nIt turns out this was due to the fact that fairywren's setup changed\nsome time after the EOL of 9.6. 
I have rebuilt 9.6 and earlier\nbackbranches and there should now be no need for this adjustment.\n\nThere is still a Windows issue with MSVC builds <= 9.4 that I'm trying\nto track down.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 16:49:56 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "I just hit a snag testing this. It turns out that the\nPostgreSQL::Version comparison stuff believes that 16beta2 < 16, which\nsounds reasonable. However, because of that, the AdjustUpgrade.pm\nstanza that tries to drop tables public.gtest_normal_child{2} in\nversions earlier than 16 fails, because by 16 these tables are dropped\nin the test itself rather than left to linger, as was the case in\nversions 15 and earlier.\n\nSo, if you try to run the pg_upgrade test with a dump created by\n16beta2, it will fail to drop these tables (because they don't exist)\nand the whole test fails. Why hasn't the buildfarm detected this\nproblem? I see that Drongo is happy, but I don't understand why.\nApparently, the AdjustUpgrade.pm stuff leaves no trace.\n\nI can fix this either by using DROP IF EXISTS in that stanza, or by\nmaking AdjustUpgrade use 'version <= 15'. 
Any opinions on which to\nprefer?\n\n\n(Well, except that the tests added by c66a7d75e65 a few days ago fail\nfor some different reason -- the tests want pg_upgrade to fail, but it\ndoesn't fail for me.)\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n<Schwern> It does it in a really, really complicated way\n<crab> why does it need to be complicated?\n<Schwern> Because it's MakeMaker.\n\n\n", "msg_date": "Wed, 19 Jul 2023 13:05:04 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-07-19 We 07:05, Alvaro Herrera wrote:\n> I just hit a snag testing this. It turns out that the\n> PostgreSQL::Version comparison stuff believes that 16beta2 < 16, which\n> sounds reasonable. However, because of that, the AdjustUpgrade.pm\n> stanza that tries to drop tables public.gtest_normal_child{2} in\n> versions earlier than 16 fails, because by 16 these tables are dropped\n> in the test itself rather than left to linger, as was the case in\n> versions 15 and earlier.\n>\n> So, if you try to run the pg_upgrade test with a dump created by\n> 16beta2, it will fail to drop these tables (because they don't exist)\n> and the whole test fails. Why hasn't the buildfarm detected this\n> problem? I see that Drongo is happy, but I don't understand why.\n> Apparently, the AdjustUpgrade.pm stuff leaves no trace.\n\n\nThe buildfarm module assumes that no adjustments are necessary if the \nold and new versions are the same (e.g. HEAD to HEAD). And it never \npasses in a version like '16beta2'. It extracts the version number from \nthe branch name, e.g. REL_16_STABLE => 16.\n\n\n>\n> I can fix this either by using DROP IF EXISTS in that stanza, or by\n> making AdjustUpgrade use 'version <= 15'. 
Any opinions on which to\n> prefer?\n>\n\nThe trouble is this could well break the next time someone puts in a test\nlike this.\n\n\nMaybe we need to make AdjustUpgrade just look at the major version, \nsomething like:\n\n\n    $old_version = PostgreSQL::Version->new($old_version->major);\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 19 Jul 2023 09:07:56 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-Jul-19, Andrew Dunstan wrote:\n\n> \n> On 2023-07-19 We 07:05, Alvaro Herrera wrote:\n> > I just hit a snag testing this. It turns out that the\n> > PostgreSQL::Version comparison stuff believes that 16beta2 < 16, which\n> > sounds reasonable. However, because of that, the AdjustUpgrade.pm\n> > stanza that tries to drop tables public.gtest_normal_child{2} in\n> > versions earlier than 16 fails, because by 16 these tables are dropped\n> > in the test itself rather than left to linger, as was the case in\n> > versions 15 and earlier.\n\n> The buildfarm module assumes that no adjustments are necessary if the old\n> and new versions are the same (e.g. HEAD to HEAD). And it never passes in a\n> version like '16beta2'. It extracts the version number from the branch name,\n> e.g. REL_16_STABLE => 16.\n\nHmm, OK, but I'm not testing the same versions -- I'm testing 16beta2 to\n17devel.\n\n> > I can fix this either by using DROP IF EXISTS in that stanza, or by\n> > making AdjustUpgrade use 'version <= 15'. 
Any opinions on which to\n> > prefer?\n> \n> The trouble is this could well break the next time someone puts in a test\n> like this.\n\nHmm, I don't understand what you mean.\n\n> Maybe we need to make AdjustUpgrade just look at the major version,\n> something like:\n> \n>    $old_version = PostgreSQL::Version->new($old_version->major);\n\nIt seems like that does work, but if we do that, then we also need to\nchange this line:\n\n\tif ($old_version lt '9.5')\nto\n\tif ($old_version < '9.5')\n\notherwise you get some really mysterious failures about trying to drop\npublic.=>, which is in fact no longer accepted syntax since 9.5; and the\nstringwise comparison returns the wrong value here.\n\nTBH I'm getting a sense of discomfort with the idea of having developed\na Postgres-version-number Perl module, and in the only place where we\ncan use it, have to settle for numeric comparison instead.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n¡Ay, ay, ay! Con lo mucho que yo lo quería (bis)\nse fue de mi vera ... se fue para siempre, pa toíta ... pa toíta la vida\n¡Ay Camarón! ¡Ay Camarón! (Paco de Lucía)", "msg_date": "Wed, 19 Jul 2023 18:05:02 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-07-19 We 12:05, Alvaro Herrera wrote:\n> On 2023-Jul-19, Andrew Dunstan wrote:\n>\n>> On 2023-07-19 We 07:05, Alvaro Herrera wrote:\n>>> I just hit a snag testing this. It turns out that the\n>>> PostgreSQL::Version comparison stuff believes that 16beta2 < 16, which\n>>> sounds reasonable. 
However, because of that, the AdjustUpgrade.pm\n>>> stanza that tries to drop tables public.gtest_normal_child{2} in\n>>> versions earlier than 16 fails, because by 16 these tables are dropped\n>>> in the test itself rather than left to linger, as was the case in\n>>> versions 15 and earlier.\n>> The buildfarm module assumes that no adjustments are necessary if the old\n>> and new versions are the same (e.g. HEAD to HEAD). And it never passes in a\n>> version like '16beta2'. It extracts the version number from the branch name,\n>> e.g. REL_16_STABLE => 16.\n> Hmm, OK, but I'm not testing the same versions -- I'm testing 16beta2 to\n> 17devel.\n\n\nYeah, but you asked why the buildfarm didn't see this effect, and the \nanswer is that it never uses version arguments like '16beta2'.\n\n\n>\n>>> I can fix this either by using DROP IF EXISTS in that stanza, or by\n>>> making AdjustUpgrade use 'version <= 15'. Any opinions on which to\n>>> prefer?\n>> The trouble is this could well break the next time someone puts in a test\n>> like this.\n> Hmm, I don't understand what you mean.\n\n\nI want to prevent things like this from happening in the future if \nsomeone puts a test in the development branch with  \"if ($oldversion < nn)\".\n\n\n>\n>> Maybe we need to make AdjustUpgrade just look at the major version,\n>> something like:\n>>\n>>    $old_version = PostgreSQL::Version->new($old_version->major);\n> It seems like that does work, but if we do that, then we also need to\n> change this line:\n>\n> \tif ($old_version lt '9.5')\n> to\n> \tif ($old_version < '9.5')\n>\n> otherwise you get some really mysterious failures about trying to drop\n> public.=>, which is in fact no longer accepted syntax since 9.5; and the\n> stringwise comparison returns the wrong value here.\n\n\nThat seems odd. String comparison like that is supposed to work. 
I will \ndo some tests.\n\n\n>\n> TBH I'm getting a sense of discomfort with the idea of having developed\n> a Postgres-version-number Perl module, and in the only place where we\n> can use it, have to settle for numeric comparison instead.\n\n\nThese comparisons only look like that. They are overloaded in \nPostgreSQL::Version.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 19 Jul 2023 15:20:22 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-07-19 We 15:20, Andrew Dunstan wrote:\n>\n>\n> On 2023-07-19 We 12:05, Alvaro Herrera wrote:\n>\n>\n>>> Maybe we need to make AdjustUpgrade just look at the major version,\n>>> something like:\n>>>\n>>>    $old_version = PostgreSQL::Version->new($old_version->major);\n>> It seems like that does work, but if we do that, then we also need to\n>> change this line:\n>>\n>> \tif ($old_version lt '9.5')\n>> to\n>> \tif ($old_version < '9.5')\n>>\n>> otherwise you get some really mysterious failures about trying to drop\n>> public.=>, which is in fact no longer accepted syntax since 9.5; and the\n>> stringwise comparison returns the wrong value here.\n>\n>\n> That seems odd. String comparison like that is supposed to work. I \n> will do some tests.\n>\n>\n>> TBH I'm getting a sense of discomfort with the idea of having developed\n>> a Postgres-version-number Perl module, and in the only place where we\n>> can use it, have to settle for numeric comparison instead.\n>\n>\n> These comparisons only look like that. They are overloaded in \n> PostgreSQL::Version.\n>\n\nThe result you report suggest to me that somehow the old version is no \nlonger a PostgreSQL::Version object.  
Here's the patch I suggest:\n\n\ndiff --git a/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm \nb/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\nindex a241d2ceff..d7a7383deb 100644\n--- a/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n+++ b/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n@@ -74,6 +74,11 @@ values are arrayrefs to lists of statements to be run \nin those databases.\n  sub adjust_database_contents\n  {\n     my ($old_version, %dbnames) = @_;\n+\n+   die \"wrong type for \\$old_version\\n\"\n+     unless $old_version->isa(\"PostgreSQL::Version\");\n+   $old_version = PostgreSQL::Version->new($old_version->major);\n+\n     my $result = {};\n\n     # remove dbs of modules known to cause pg_upgrade to fail\n\n\nDo you still see errors with that?\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 19 Jul 2023 16:44:15 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-07-19 We 16:44, Andrew Dunstan wrote:\n>\n>\n> On 2023-07-19 We 15:20, Andrew Dunstan wrote:\n>>\n>>\n>> On 2023-07-19 We 12:05, Alvaro Herrera wrote:\n>>\n>>\n>>>> Maybe we need to make AdjustUpgrade just look at the major version,\n>>>> something like:\n>>>>\n>>>>    $old_version = PostgreSQL::Version->new($old_version->major);\n>>> It seems like that does work, but if we do that, then we also need to\n>>> change this line:\n>>>\n>>> \tif ($old_version lt '9.5')\n>>> to\n>>> \tif ($old_version < '9.5')\n>>>\n>>> otherwise you get some really mysterious failures about trying to drop\n>>> public.=>, which is in fact no longer accepted syntax since 9.5; and the\n>>> stringwise comparison returns the wrong value 
here.\n>>\n>>\n>> That seems odd. String comparison like that is supposed to work. I \n>> will do some tests.\n>>\n>>\n>>> TBH I'm getting a sense of discomfort with the idea of having developed\n>>> a Postgres-version-number Perl module, and in the only place where we\n>>> can use it, have to settle for numeric comparison instead.\n>>\n>>\n>> These comparisons only look like that. They are overloaded in \n>> PostgreSQL::Version.\n>>\n>\n> The result you report suggest to me that somehow the old version is no \n> longer a PostgreSQL::Version object.  Here's the patch I suggest:\n>\n>\n> diff --git a/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm \n> b/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n> index a241d2ceff..d7a7383deb 100644\n> --- a/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n> +++ b/src/test/perl/PostgreSQL/Test/AdjustUpgrade.pm\n> @@ -74,6 +74,11 @@ values are arrayrefs to lists of statements to be \n> run in those databases.\n>  sub adjust_database_contents\n>  {\n>     my ($old_version, %dbnames) = @_;\n> +\n> +   die \"wrong type for \\$old_version\\n\"\n> +     unless $old_version->isa(\"PostgreSQL::Version\");\n> +   $old_version = PostgreSQL::Version->new($old_version->major);\n> +\n>     my $result = {};\n>\n>     # remove dbs of modules known to cause pg_upgrade to fail\n>\n>\n> Do you still see errors with that?\n>\n>\n>\n\nJust realized it would need to be applied in all three exported routines.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Wed, 19 Jul 2023 16:47:28 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-Jul-19, Andrew Dunstan wrote:\n\n> The 
result you report suggest to me that somehow the old version is no\n> longer a PostgreSQL::Version object.  Here's the patch I suggest:\n\nAhh, okay, that makes more sense; and yes, it does work.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Thu, 20 Jul 2023 11:52:11 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-07-20 Th 05:52, Alvaro Herrera wrote:\n> On 2023-Jul-19, Andrew Dunstan wrote:\n>\n>> The result you report suggest to me that somehow the old version is no\n>> longer a PostgreSQL::Version object.  Here's the patch I suggest:\n> Ahh, okay, that makes more sense; and yes, it does work.\n\n\nYour patch LGTM\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB:https://www.enterprisedb.com", "msg_date": "Thu, 20 Jul 2023 08:29:53 -0400", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" }, { "msg_contents": "On 2023-Jul-20, Andrew Dunstan wrote:\n\n> On 2023-07-20 Th 05:52, Alvaro Herrera wrote:\n> > On 2023-Jul-19, Andrew Dunstan wrote:\n> > \n> > > The result you report suggest to me that somehow the old version is no\n> > > longer a PostgreSQL::Version object.  Here's the patch I suggest:\n> > Ahh, okay, that makes more sense; and yes, it does work.\n> \n> Your patch LGTM\n\nThanks for looking. I pushed it to 16 and master. 
I considered\napplying all the way down to 9.2, but I decided it'd be pointless.\nWe can backpatch later if we find there's need.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nBob [Floyd] used to say that he was planning to get a Ph.D. by the \"green\nstamp method,\" namely by saving envelopes addressed to him as 'Dr. Floyd'.\nAfter collecting 500 such letters, he mused, a university somewhere in\nArizona would probably grant him a degree. (Don Knuth)\n\n\n", "msg_date": "Mon, 24 Jul 2023 17:24:49 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Extracting cross-version-upgrade knowledge from buildfarm client" } ]
[ { "msg_contents": "Hot on the heels of Release 15 comes Release 16.\n\nThis release deals with some issues that have been discovered with the\ncheck for update feature of Release 15 and the |force_every| and\n|trigger_exclude| features, so that it now works correctly with those\nfeatures.\n\nIt also features these items:\n\n * a new |--check-for-work| mode of run_branches.pl\n This mode doesn't do any work but exits with a zero status if there\n is work to do and 1 if there is not. It is intended for use as an\n ExecCondition in |systemd| units\n * up to date filtering now works with an explicit list of branches, as\n well as with key words like |ALL|\n * reduce the verbosity of |\"Another process holds the lock\"| messages.\n These are now only emitted if the |verbose| setting is greater than 1\n * |update_personality| now has options to change the owner name and\n owner email\n This was in Release 15 but was accidentally omitted from the release\n notes. Up to now the only way to change these was by action from\n the administrators.\n * improve collection of logs in cross version upgrade testing\n\nThe release can be downloaded from\n\n<https://github.com/PGBuildFarm/client-code/releases/tag/REL_16> or\n<https://buildfarm.postgresql.org/downloads>\n\ncheers\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Fri, 13 Jan 2023 20:12:39 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Announcing Release 16 of the PostgreSQL Buildfarm client" } ]
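The release notes above say the new `--check-for-work` mode of run_branches.pl is intended for use as an ExecCondition in systemd units: systemd runs the condition command first and starts the main command only if it exits 0, i.e. only when run_branches.pl reports there is work to do. A sketch of such a unit fragment follows; the install paths, user name, and the `--run-all` invocation are illustrative assumptions, not something the release notes prescribe:

```ini
# Hypothetical buildfarm service fragment (paths and user invented)
[Service]
Type=oneshot
User=buildfarm
ExecCondition=/usr/bin/perl /opt/buildfarm/run_branches.pl --check-for-work --run-all
ExecStart=/usr/bin/perl /opt/buildfarm/run_branches.pl --run-all
```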
[ { "msg_contents": "Hi,\n\nBefore scanning a relation, in the planner stage, I want to make a call to\nretrieve information about how many pages will be a hit for a specific\nrelation. The module pg_buffercache seems to be doing a similar thing.\nAlso, pg_statio_all_tables seems to be having that information, but it is\nupdated after execution. However, I want the information before execution.\nAlso not sure how pg_statio_all_tables is created and how I can access it\nin the code.\n\nThank you!", "msg_date": "Fri, 13 Jan 2023 17:28:31 -0800", "msg_from": "Amin <amin.fallahi@gmail.com>", "msg_from_op": true, "msg_subject": "How to find the number of cached pages for a relation?" }, { "msg_contents": "Hi,\n\nOn 2023-01-13 17:28:31 -0800, Amin wrote:\n> Before scanning a relation, in the planner stage, I want to make a call to\n> retrieve information about how many pages will be a hit for a specific\n> relation. The module pg_buffercache seems to be doing a similar thing.\n> Also, pg_statio_all_tables seems to be having that information, but it is\n> updated after execution. However, I want the information before execution.\n> Also not sure how pg_statio_all_tables is created and how I can access it\n> in the code.\n\nThere's no cheap way to do that. Currently the only ways are to:\n\na) Do one probe of the buffer mapping table for each block of the\n   relation. Thus O(#relation blocks).\n\nb) Scan all of buffer headers, check which are for the relation. 
Thus\n O(#NBuffers)\n\nNeither of which are a good idea during planning.\n\n\nIt might be a bit more realistic to get very rough estimates:\n\nYou could compute the table's historic cache hit ratio from pgstats (i.e. use\nthe data backing pg_statio_all_tables). Of course that's not going to be\nspecific to your query (for index scans etc), and might have changed more\nrecently. It'd also be completely wrong after a restart.\n\nIf we had information about *recent* cache hit patterns for the relation, it'd\nbe a lot better, but we don't have the infrastructure for that, and\nintroducing it would increase the size of the stats entries noticably.\n\nAnother way could be to probe the buffer mapping table for a small subset of\nthe locks and infer the likelihood of other blocks being in shared buffers\nthat way.\n\nA third way could be to track the cache hit for relations in backend local\nmemory, likely in the relache entry. The big disadvantage would be that query\nplans would differ between connections and that connections would need to\n\"warm up\" to have good plans. But it'd handle restarts nicely.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 13 Jan 2023 18:27:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: How to find the number of cached pages for a relation?" }, { "msg_contents": "Thank you Andres.\n\nIf I want to do \"a\" ( Do one probe of the buffer mapping table for each\nblock of the relation. Thus O(#relation blocks)) what function calls can I\nuse, assuming I only have access to the relation id? How can I access and\nscan the buffer mapping table?\n\nOn Fri, Jan 13, 2023 at 6:27 PM Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2023-01-13 17:28:31 -0800, Amin wrote:\n> > Before scanning a relation, in the planner stage, I want to make a call\n> to\n> > retrieve information about how many pages will be a hit for a specific\n> > relation. 
The module pg_buffercache seems to be doing a similar thing.\n> > Also, pg_statio_all_tables seems to be having that information, but it is\n> > updated after execution. However, I want the information before\n> execution.\n> > Also not sure how pg_statio_all_tables is created and how I can access it\n> > in the code.\n>\n> There's no cheap way to do that. Currently the only ways are to:\n>\n> a) Do one probe of the buffer mapping table for each block of the\n> relation. Thus O(#relation blocks).\n>\n> b) Scan all of buffer headers, check which are for the relation. Thus\n> O(#NBuffers)\n>\n> Neither of which are a good idea during planning.\n>\n>\n> It might be a bit more realistic to get very rough estimates:\n>\n> You could compute the table's historic cache hit ratio from pgstats (i.e.\n> use\n> the data backing pg_statio_all_tables). Of course that's not going to be\n> specific to your query (for index scans etc), and might have changed more\n> recently. It'd also be completely wrong after a restart.\n>\n> If we had information about *recent* cache hit patterns for the relation,\n> it'd\n> be a lot better, but we don't have the infrastructure for that, and\n> introducing it would increase the size of the stats entries noticably.\n>\n> Another way could be to probe the buffer mapping table for a small subset\n> of\n> the locks and infer the likelihood of other blocks being in shared buffers\n> that way.\n>\n> A third way could be to track the cache hit for relations in backend local\n> memory, likely in the relache entry. The big disadvantage would be that\n> query\n> plans would differ between connections and that connections would need to\n> \"warm up\" to have good plans. But it'd handle restarts nicely.\n>\n> Greetings,\n>\n> Andres Freund\n>", "msg_date": "Fri, 27 Jan 2023 16:11:26 -0800", "msg_from": "Amin <amin.fallahi@gmail.com>", "msg_from_op": true, "msg_subject": "Re: How to find the number of cached pages for a relation?" } ]
[ { "msg_contents": "There seem to be a small typo in backup.sgml\n(<varname>archive_command</varname> is unnecessarily\nrepeated). Attached is a patch to fix that.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp", "msg_date": "Sat, 14 Jan 2023 11:02:34 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "backup.sgml typo" }, { "msg_contents": "On Sat, Jan 14, 2023 at 7:32 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>\n> There seem to be a small typo in backup.sgml\n> (<varname>archive_command</varname> is unnecessarily\n> repeated). Attached is a patch to fix that.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Sat, 14 Jan 2023 12:04:33 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: backup.sgml typo" }, { "msg_contents": "> On Sat, Jan 14, 2023 at 7:32 AM Tatsuo Ishii <ishii@sraoss.co.jp> wrote:\n>>\n>> There seem to be a small typo in backup.sgml\n>> (<varname>archive_command</varname> is unnecessarily\n>> repeated). Attached is a patch to fix that.\n>>\n> \n> LGTM.\n\nFix pushed. Thanks.\n\nBest reagards,\n--\nTatsuo Ishii\nSRA OSS LLC\nEnglish: http://www.sraoss.co.jp/index_en/\nJapanese:http://www.sraoss.co.jp\n\n\n", "msg_date": "Sat, 14 Jan 2023 18:19:10 +0900 (JST)", "msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: backup.sgml typo" } ]
[ { "msg_contents": "I've attached a patch for $SUBJECT, which allows us to remove a use of the\nunconstify macro in basic_archive. This is just a pet peeve, but maybe it\nbothers others, too.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 14 Jan 2023 15:11:26 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "constify arguments of copy_file() and copydir()" }, { "msg_contents": "\nOn 2023-01-14 Sa 18:11, Nathan Bossart wrote:\n> I've attached a patch for $SUBJECT, which allows us to remove a use of the\n> unconstify macro in basic_archive. This is just a pet peeve, but maybe it\n> bothers others, too.\n\n\nI don't like using unconstify where it can be avoided, so this looks\nreasonable to me.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 15 Jan 2023 08:23:13 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: constify arguments of copy_file() and copydir()" }, { "msg_contents": "On Sun, Jan 15, 2023 at 08:23:13AM -0500, Andrew Dunstan wrote:\n> On 2023-01-14 Sa 18:11, Nathan Bossart wrote:\n>> I've attached a patch for $SUBJECT, which allows us to remove a use of the\n>> unconstify macro in basic_archive. This is just a pet peeve, but maybe it\n>> bothers others, too.\n> \n> I don't like using unconstify where it can be avoided, so this looks\n> reasonable to me.\n\nThanks. 
Added to the next commitfest:\n\n\thttps://commitfest.postgresql.org/42/4126/\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 15 Jan 2023 08:13:28 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: constify arguments of copy_file() and copydir()" }, { "msg_contents": "On Sun, Jan 15, 2023 at 08:13:28AM -0800, Nathan Bossart wrote:\n> On Sun, Jan 15, 2023 at 08:23:13AM -0500, Andrew Dunstan wrote:\n>> On 2023-01-14 Sa 18:11, Nathan Bossart wrote:\n>>> I've attached a patch for $SUBJECT, which allows us to remove a use of the\n>>> unconstify macro in basic_archive. This is just a pet peeve, but maybe it\n>>> bothers others, too.\n>> \n>> I don't like using unconstify where it can be avoided, so this looks\n>> reasonable to me.\n\n+1.\n--\nMichael", "msg_date": "Mon, 16 Jan 2023 10:53:40 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: constify arguments of copy_file() and copydir()" }, { "msg_contents": "On Mon, Jan 16, 2023 at 10:53:40AM +0900, Michael Paquier wrote:\n> +1.\n\nWhile I don't forget about this thread.. Any objections if I were to\napply that?\n--\nMichael", "msg_date": "Tue, 17 Jan 2023 15:24:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: constify arguments of copy_file() and copydir()" }, { "msg_contents": "On 17.01.23 07:24, Michael Paquier wrote:\n> On Mon, Jan 16, 2023 at 10:53:40AM +0900, Michael Paquier wrote:\n>> +1.\n> \n> While I don't forget about this thread.. 
Any objections if I were to\n> apply that?\n\nLooks good to me.\n\n\n\n", "msg_date": "Tue, 17 Jan 2023 20:23:49 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: constify arguments of copy_file() and copydir()" }, { "msg_contents": "On Tue, Jan 17, 2023 at 08:23:49PM +0100, Peter Eisentraut wrote:\n> Looks good to me.\n\nThanks, done.\n--\nMichael", "msg_date": "Wed, 18 Jan 2023 09:05:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: constify arguments of copy_file() and copydir()" }, { "msg_contents": "On Wed, Jan 18, 2023 at 09:05:52AM +0900, Michael Paquier wrote:\n> Thanks, done.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 16:08:02 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": true, "msg_subject": "Re: constify arguments of copy_file() and copydir()" } ]
[ { "msg_contents": "Without this patch:\n\n$ mkdir 000; chmod 000 ./000\n$ strace -fe open,stat ./tmp_install/usr/local/pgsql/bin/pg_restore -vvv -l ./000\n...\npg_restore: allocating AH for ./000, format 0\npg_restore: attempting to ascertain archive format\nstat(\"./000\", {st_mode=S_IFDIR|000, st_size=4096, ...}) = 0\nstat(\"./000/toc.dat\", 0x7ffc679fb3a0) = -1 EACCES (Permission denied)\nstat(\"./000/toc.dat.gz\", 0x7ffc679fb3a0) = -1 EACCES (Permission denied)\npg_restore: error: directory \"./000\" does not appear to be a valid archive (\"toc.dat\" does not exist)\n+++ exited with 1 +++\n\nWith:\nstat(\"./000/toc.dat\", 0x7ffc29ad0eb0) = -1 EACCES (Permission denied)\npg_restore: error: could not open input file \"./000\": Permission denied\n\nI \"learned\" some time ago to infer what the error message should have\nsaid, and finally wrote this so it does what it should. I'd consider\nthis a backpatchable fix to save people the trouble of diagnosing the\nmeaning of the error message.\n\ncommit 45722852d98b8e9c2702aabe2745d7df200da124\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Sat Jan 14 19:51:31 2023 -0600\n\n pg_restore: use strerror for errors other than ENOENT\n\ndiff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c\nindex 7f7a0f1ce7b..e36ee7af157 100644\n--- a/src/bin/pg_dump/pg_backup_archiver.c\n+++ b/src/bin/pg_dump/pg_backup_archiver.c\n@@ -2107,7 +2107,12 @@ _discoverArchiveFormat(ArchiveHandle *AH)\n \t\t\tif (snprintf(buf, MAXPGPATH, \"%s/toc.dat\", AH->fSpec) >= MAXPGPATH)\n \t\t\t\tpg_fatal(\"directory name too long: \\\"%s\\\"\",\n \t\t\t\t\t\t AH->fSpec);\n-\t\t\tif (stat(buf, &st) == 0 && S_ISREG(st.st_mode))\n+\t\t\tif (stat(buf, &st) != 0)\n+\t\t\t{\n+\t\t\t\tif (errno != ENOENT)\n+\t\t\t\t\tpg_fatal(\"could not open input file \\\"%s\\\": %m\", AH->fSpec);\n+\t\t\t}\n+\t\t\telse if (S_ISREG(st.st_mode))\n \t\t\t{\n \t\t\t\tAH->format = archDirectory;\n \t\t\t\treturn AH->format;\n@@ -2117,7 
+2122,12 @@ _discoverArchiveFormat(ArchiveHandle *AH)\n \t\t\tif (snprintf(buf, MAXPGPATH, \"%s/toc.dat.gz\", AH->fSpec) >= MAXPGPATH)\n \t\t\t\tpg_fatal(\"directory name too long: \\\"%s\\\"\",\n \t\t\t\t\t\t AH->fSpec);\n-\t\t\tif (stat(buf, &st) == 0 && S_ISREG(st.st_mode))\n+\t\t\tif (stat(buf, &st) != 0)\n+\t\t\t{\n+\t\t\t\tif (errno != ENOENT)\n+\t\t\t\t\tpg_fatal(\"could not open input file \\\"%s\\\": %m\", AH->fSpec);\n+\t\t\t}\n+\t\t\telse if (S_ISREG(st.st_mode))\n \t\t\t{\n \t\t\t\tAH->format = archDirectory;\n \t\t\t\treturn AH->format;\n\n\n", "msg_date": "Sat, 14 Jan 2023 20:16:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "[PATCH] pg_restore: use strerror for errors other than ENOENT" } ]
[ { "msg_contents": "Hello,\n\nLogical replication sometimes gets stuck with\n ERROR: int2vector has too many elements\n\nI can't find the exact circumstances that cause it but it has something \nto do with many columns (or adding many columns) in combination with \nperhaps generated columns.\n\nThis replication test, in a slightly different form, used to work. This \nis also suggested by the fact that the attached runs without errors in \nREL_15_STABLE but gets stuck in HEAD.\n\nWhat it does: it initdbs and runs two instances, primary and replica. In \nthe primary 'pgbench -is1' done, and many columns, including 1 generated \ncolumn, are added to all 4 pgbench tables. This is then \npg_dump/pg_restored to the replica, and a short pgbench is run. The \nresult tables on primary and replica are compared for the final result. \n(To run it will need some tweaks to directory and connection parms)\n\nI ran it on both v15 and v16 for 25 runs: with the parameters as given \n15 has no problem while 16 always got stuck with the int2vector error. \n(15 can actually be pushed up to the max of 1600 columns per table \nwithout errors)\n\nBoth REL_15_STABLE and 16devel built from recent master on Debian 10, \ngcc 12.2.0.\n\nI hope someone understands what's going wrong.\n\nThanks,\n\nErik Rijkers", "msg_date": "Sun, 15 Jan 2023 10:35:24 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "On 2023-Jan-15, Erik Rijkers wrote:\n\n> Hello,\n> \n> Logical replication sometimes gets stuck with\n> ERROR: int2vector has too many elements\n\nWeird. This error comes from int2vectorin which amusingly only wants to\nread up to FUNC_MAX_ARGS values in the array (100 in the default config,\nbut it can be changed in pg_config_manual.h). I wonder how come we\nhaven't noticed this before ... 
surely we use int2vector's for other\nthings than function argument lists nowadays.\n\nAt the same time, I don't understand why it fails in 16 but not in 15.\nMaybe something changed in the way we process the column lists in 16?\n\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Update: super-fast reaction on the Postgres bugs mailing list. The report\nwas acknowledged [...], and a fix is under discussion.\nThe wonders of open-source !\"\n https://twitter.com/gunnarmorling/status/1596080409259003906\n\n\n", "msg_date": "Sun, 15 Jan 2023 12:33:40 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "On 1/15/23 12:33, Alvaro Herrera wrote:\n> On 2023-Jan-15, Erik Rijkers wrote:\n> \n>> Hello,\n>>\n>> Logical replication sometimes gets stuck with\n>> ERROR: int2vector has too many elements\n> \n> Weird. This error comes from int2vectorin which amusingly only wants to\n> read up to FUNC_MAX_ARGS values in the array (100 in the default config,\n> but it can be changed in pg_config_manual.h). I wonder how come we\n> haven't noticed this before ... surely we use int2vector's for other\n> things than function argument lists nowadays.\n> \n> At the same time, I don't understand why it fails in 16 but not in 15.\n> Maybe something changed in the way we process the column lists in 16?\n\nI wrote as comment in the script, but that's maybe vague so let me be \nmore explicit: 16 also accepts many columns, up to 1600, without error, \nas long as that is not combined with generated column(s) such as in the \nscript. It seems the combination becomes quickly problematic. 
Although \nadding just 50 columns + a generated column is still ok, 100 is already \ntoo high (see the ADD_COLUMNS variable in my script).\n\nWeird indeed.\n\n\nErik\n\n\n", "msg_date": "Sun, 15 Jan 2023 13:17:35 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "On Sunday, January 15, 2023 5:35 PM Erik Rijkers <er@xs4all.nl> wrote:\r\n> \r\n> I can't find the exact circumstances that cause it but it has something to do with\r\n> many columns (or adding many columns) in combination with perhaps\r\n> generated columns.\r\n> \r\n> This replication test, in a slightly different form, used to work. This is also\r\n> suggested by the fact that the attached runs without errors in REL_15_STABLE but\r\n> gets stuck in HEAD.\r\n> \r\n> What it does: it initdbs and runs two instances, primary and replica. In the\r\n> primary 'pgbench -is1' done, and many columns, including 1 generated column,\r\n> are added to all 4 pgbench tables. This is then pg_dump/pg_restored to the\r\n> replica, and a short pgbench is run. 
The result tables on primary and replica are\r\n> compared for the final result.\r\n> (To run it will need some tweaks to directory and connection parms)\r\n> \r\n> I ran it on both v15 and v16 for 25 runs: with the parameters as given\r\n> 15 has no problem while 16 always got stuck with the int2vector error.\r\n> (15 can actually be pushed up to the max of 1600 columns per table without\r\n> errors)\r\n> \r\n> Both REL_15_STABLE and 16devel built from recent master on Debian 10, gcc\r\n> 12.2.0.\r\n> \r\n> I hope someone understands what's going wrong.\r\n\r\nThanks for reporting.\r\n\r\nI think the basic problem is that we try to fetch the column list as a int2vector\r\nwhen doing table sync, and then if the number of columns is larger than 100, we\r\nwill get an ERROR like the $subject.\r\n\r\nWe can also hit this ERROR by manually specifying a long(>100) column\r\nlist in the publication Like:\r\n\r\ncreate publication pub for table test(a1,a2,a3... a200);\r\ncreate subscription xxx.\r\n\r\nThe script didn't reproduce this in PG15, because we didn't filter out\r\ngenerated column when fetching the column list, so it assumes all columns are\r\nreplicated and will return NULL for the column list(int2vector) value. But in\r\nPG16 (b7ae039), we started to filter out generated column(because generated columns are\r\nnot replicated in logical replication), so we get a valid int2vector and get\r\nthe ERROR. \r\nI will think and work on a fix for this.\r\n\r\nBest regards,\r\nHou zj\r\n", "msg_date": "Sun, 15 Jan 2023 14:46:42 +0000", "msg_from": "\"houzj.fnst@fujitsu.com\" <houzj.fnst@fujitsu.com>", "msg_from_op": false, "msg_subject": "RE: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2023-Jan-15, Erik Rijkers wrote:\n>> Logical replication sometimes gets stuck with\n>> ERROR: int2vector has too many elements\n\n> Weird. 
This error comes from int2vectorin which amusingly only wants to\n> read up to FUNC_MAX_ARGS values in the array (100 in the default config,\n> but it can be changed in pg_config_manual.h). I wonder how come we\n> haven't noticed this before ... surely we use int2vector's for other\n> things than function argument lists nowadays.\n\nYeah. So remove the limit in int2vectorin (probably also oidvectorin),\nor change logrep to use int2[] not int2vector, or both.\n\n> At the same time, I don't understand why it fails in 16 but not in 15.\n> Maybe something changed in the way we process the column lists in 16?\n\nProbably didn't have a dependency on int2vector before.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 11:06:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "I wrote:\n> Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n>> At the same time, I don't understand why it fails in 16 but not in 15.\n>> Maybe something changed in the way we process the column lists in 16?\n\n> Probably didn't have a dependency on int2vector before.\n\nIt looks like the proximate cause is that fd0b9dceb started fetching\nthe remote's pg_get_publication_tables() result as-is rather than\nunnesting it, so that the on-the-wire representation is now int2vector\nnot a series of int2. However, that just begs the question of who\nthought that making pg_publication_rel.prattrs be int2vector instead\nof int2[] was a good idea. Quite aside from this issue, int2vector\nisn't toastable, which'll lead to bloat in pg_publication_rel.\n\nBut I suppose we are stuck with that, seeing that this datatype choice\nis effectively part of the logrep protocol now. I think the only\nreasonable solution is to get rid of the FUNC_MAX_ARGS restriction\nin int2vectorin. 
We probably ought to back-patch that as far as\npg_publication_rel.prattrs exists, too.\n\nBTW, fd0b9dceb is in v15, so are you sure this doesn't fail in 15?\nIt looks like the code path is only taken if the remote is also >= 15,\nso maybe your test case didn't expose it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 14:39:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "Hi,\n\nOn 2023-01-15 14:39:41 -0500, Tom Lane wrote:\n> It looks like the proximate cause is that fd0b9dceb started fetching\n> the remote's pg_get_publication_tables() result as-is rather than\n> unnesting it, so that the on-the-wire representation is now int2vector\n> not a series of int2. However, that just begs the question of who\n> thought that making pg_publication_rel.prattrs be int2vector instead\n> of int2[] was a good idea. Quite aside from this issue, int2vector\n> isn't toastable, which'll lead to bloat in pg_publication_rel.\n\nThere's no easily visible comments about these restrictions of int2vector. And\nthere's plenty other places using it, where it's not immediatelly obvious that\nthe number of entries is very constrained, even if they are\n(e.g. pg_trigger.tgattr).\n\n\n> But I suppose we are stuck with that, seeing that this datatype choice\n> is effectively part of the logrep protocol now. I think the only\n> reasonable solution is to get rid of the FUNC_MAX_ARGS restriction\n> in int2vectorin. We probably ought to back-patch that as far as\n> pg_publication_rel.prattrs exists, too.\n\nAre you thinking of introducing another, or just \"rely\" on too long arrays to\ntrigger errors when forming tuples?\n\nI guess we'll have to process the input twice? 
Pre-allocating an int2vector\nfor 100 elements is one thing, for 1600 another.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 15 Jan 2023 11:56:39 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2023-01-15 14:39:41 -0500, Tom Lane wrote:\n>> But I suppose we are stuck with that, seeing that this datatype choice\n>> is effectively part of the logrep protocol now. I think the only\n>> reasonable solution is to get rid of the FUNC_MAX_ARGS restriction\n>> in int2vectorin. We probably ought to back-patch that as far as\n>> pg_publication_rel.prattrs exists, too.\n\n> Are you thinking of introducing another, or just \"rely\" on too long arrays to\n> trigger errors when forming tuples?\n\nThere's enough protections already, eg repalloc will complain if you\ntry to go past 1GB. I'm thinking of the attached for HEAD (it'll\ntake minor mods to back-patch).\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 15 Jan 2023 15:17:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "Hi,\n\nOn 2023-01-15 15:17:16 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2023-01-15 14:39:41 -0500, Tom Lane wrote:\n> >> But I suppose we are stuck with that, seeing that this datatype choice\n> >> is effectively part of the logrep protocol now. I think the only\n> >> reasonable solution is to get rid of the FUNC_MAX_ARGS restriction\n> >> in int2vectorin. 
We probably ought to back-patch that as far as\n> >> pg_publication_rel.prattrs exists, too.\n> \n> > Are you thinking of introducing another, or just \"rely\" on too long arrays to\n> > trigger errors when forming tuples?\n> \n> There's enough protections already, eg repalloc will complain if you\n> try to go past 1GB.\n\nWe'll practically error out at a much lower limit than that, due to, due to\nreaching the max length of a row and int2vector not being toastable. So I'm\njust wondering if we want to set the limit to something that'll commonly avoid\nerroring out out with something like\n ERROR: 54000: row is too big: size 10048, maximum size 8160\n\nFor the purpose here a limit of MaxTupleAttributeNumber or such instead of\nFUNC_MAX_ARGS would do the trick, I think?\n\n\n> diff --git a/src/backend/utils/adt/int.c b/src/backend/utils/adt/int.c\n> index e47c15a54f..44d1c7ad0c 100644\n> --- a/src/backend/utils/adt/int.c\n> +++ b/src/backend/utils/adt/int.c\n> @@ -143,11 +143,13 @@ int2vectorin(PG_FUNCTION_ARGS)\n> \tchar\t *intString = PG_GETARG_CSTRING(0);\n> \tNode\t *escontext = fcinfo->context;\n> \tint2vector *result;\n> +\tint\t\t\tnalloc;\n> \tint\t\t\tn;\n> \n> -\tresult = (int2vector *) palloc0(Int2VectorSize(FUNC_MAX_ARGS));\n> +\tnalloc = 32;\t\t\t\t/* arbitrary initial size guess */\n> +\tresult = (int2vector *) palloc0(Int2VectorSize(nalloc));\n> \n> -\tfor (n = 0; n < FUNC_MAX_ARGS; n++)\n> +\tfor (n = 0;; n++)\n> \t{\n> \t\tlong\t\tl;\n> \t\tchar\t *endp;\n> @@ -157,6 +159,12 @@ int2vectorin(PG_FUNCTION_ARGS)\n> \t\tif (*intString == '\\0')\n> \t\t\tbreak;\n> \n> +\t\tif (n >= nalloc)\n> +\t\t{\n> +\t\t\tnalloc *= 2;\n> +\t\t\tresult = (int2vector *) repalloc(result, Int2VectorSize(nalloc));\n> +\t\t}\n\nShould this be repalloc0? 
I don't know if the palloc0 above was just used with\nthe goal of initializing the \"header\" fields, or also to avoid trailing\nuninitialized bytes?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 15 Jan 2023 12:36:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "I wrote:\n> BTW, fd0b9dceb is in v15, so are you sure this doesn't fail in 15?\n\nAh-hah: simple test cases only fail since b7ae03953. Before\nthat, the default situation was that pg_publication_rel.prattrs\nwas null and that would be passed on as the transmitted value.\nb7ae03953 decided it'd be cool if pg_get_publication_tables()\nexpanded that to show all the actually-transmitted columns,\nand as of that point we can get an overflow in int2vectorin\ngiven the default case where all columns are published.\n\nTo break it in v15, you need a test case that publishes more than\n100 columns, but not all of them (else the optimization in\nfetch_remote_table_info's query prevents the issue).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 15:48:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> For the purpose here a limit of MaxTupleAttributeNumber or such instead of\n> FUNC_MAX_ARGS would do the trick, I think?\n\nAs long as we have to change the code, we might as well remove the\narbitrary restriction.\n\n> Should this be repalloc0? I don't know if the palloc0 above was just used with\n> the goal of initializing the \"header\" fields, or also to avoid trailing\n> uninitialized bytes?\n\nI think probably the palloc0 was mostly about belt-and-suspenders\nprogramming. 
But yeah, its only real value is to ensure that all\nthe header fields are zero, so I don't think we need repalloc0\nwhen enlarging. After we set the array size at the end of the\nloop, it'd be a programming bug to touch any bytes beyond that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 15:53:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "Hi,\n\nOn 2023-01-15 15:53:09 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > For the purpose here a limit of MaxTupleAttributeNumber or such instead of\n> > FUNC_MAX_ARGS would do the trick, I think?\n> \n> As long as we have to change the code, we might as well remove the\n> arbitrary restriction.\n\nWFM, just wanted to be sure we thought about the errors it could cause. I'm\nnot sure we've exercised cases of tuples being too wide due to variable-width\nplain storage types exhaustively. There's only a small number of these types:\nint2vector, oidvector, gtsvector, tsquery\n\nWhat's behind using plain for these types? Is it just because we want to use\nit in tables that don't have a toast table (namely pg_index)? Obviously we\ncan't change the storage in existing releases...\n\n\n> > Should this be repalloc0? I don't know if the palloc0 above was just used with\n> > the goal of initializing the \"header\" fields, or also to avoid trailing\n> > uninitialized bytes?\n> \n> I think probably the palloc0 was mostly about belt-and-suspenders\n> programming. But yeah, its only real value is to ensure that all\n> the header fields are zero, so I don't think we need repalloc0\n> when enlarging. 
After we set the array size at the end of the\n> loop, it'd be a programming bug to touch any bytes beyond that.\n\nAgreed.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 15 Jan 2023 14:06:27 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> WFM, just wanted to be sure we thought about the errors it could cause. I'm\n> not sure we've exercised cases of tuples being too wide due to variable-width\n> plain storage types exhaustively. There's only a small number of these types:\n> int2vector, oidvector, gtsvector, tsquery\n\n> What's behind using plain for these types? Is it just because we want to use\n> it in tables that don't have a toast table (namely pg_index)? Obviously we\n> can't change the storage in existing releases...\n\nFor int2vector and oidvector, I think it boils down to wanting to access\ncolumns like pg_proc.proargtypes without detoasting. We could fix that,\nbut it'd likely be invasive and not a net positive.\n\nIt seems a bit broken that tsquery is marked that way, though; I doubt\nwe are getting any notational benefit from it.\n\nDunno about gtsvector.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 15 Jan 2023 17:18:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: logrep stuck with 'ERROR: int2vector has too many elements'" } ]
[ { "msg_contents": "Hi, hackers\n\nFound  some functions in dsa.c are not used anymore.\n\ndsa_create\ndsa_attach\ndsa_get_handle\ndsa_trim\ndsa_dump\n\nWe once used dsa_create to create DSA and  it ’s all replaced by dsa_create_in_place since commit 31ae1638ce.\ndsa_attach and dsa_get_handle cooperate with dsa_create.\ndsa_trim and dsa_dump are introduced by DSA original commit 13df76a537 , but not used since then.\n\nSo, they are all dead codes, provide a patch to remove them.\n\nRegards,\nZhang Mingli", "msg_date": "Sun, 15 Jan 2023 23:43:49 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Code review in dsa.c" }, { "msg_contents": "HI,\n\nOn Jan 15, 2023, 23:43 +0800, Zhang Mingli <zmlpostgres@gmail.com>, wrote:\n> Hi, hackers\n>\n> Found  some functions in dsa.c are not used anymore.\n>\n> dsa_create\n> dsa_attach\n> dsa_get_handle\n> dsa_trim\n> dsa_dump\n>\n> We once used dsa_create to create DSA and  it ’s all replaced by dsa_create_in_place since commit 31ae1638ce.\n> dsa_attach and dsa_get_handle cooperate with dsa_create.\n> dsa_trim and dsa_dump are introduced by DSA original commit 13df76a537 , but not used since then.\n>\n> So, they are all dead codes, provide a patch to remove them.\n\nPatch updated.\nForget to remove dsa_unpin in dsa.h, dsa_unpin is also not used since commit 13df76a537.\nThe gemel function dsa_pin is only used in pg_stat. Seems reasonable that we don’t need to call dsa_unpin in pg_stat.\n\nRegards,\nZhang Mingli", "msg_date": "Mon, 16 Jan 2023 00:04:56 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Code review in dsa.c" }, { "msg_contents": "On Mon, Jan 16, 2023 at 12:04:56AM +0800, Zhang Mingli wrote:\n> So, they are all dead codes, provide a patch to remove them.\n\nI am proposing a new use of dsa_create, dsa_attach, and dsa_get_handle in\nhttps://commitfest.postgresql.org/41/4020/. 
These might also be useful for\nextensions, so IMHO we should keep this stuff.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 15 Jan 2023 08:10:54 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Code review in dsa.c" }, { "msg_contents": "HI,\n\nOn Jan 16, 2023, 00:10 +0800, Nathan Bossart <nathandbossart@gmail.com>, wrote:\n> On Mon, Jan 16, 2023 at 12:04:56AM +0800, Zhang Mingli wrote:\n> > So, they are all dead codes, provide a patch to remove them.\n>\n> I am proposing a new use of dsa_create, dsa_attach, and dsa_get_handle in\n> https://commitfest.postgresql.org/41/4020/. These might also be useful for\n> extensions, so IMHO we should keep this stuff.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\nOK, thanks.\n\nRegards,\nZhang Mingli", "msg_date": "Mon, 16 Jan 2023 14:57:55 +0800", "msg_from": "Zhang Mingli <zmlpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Code review in dsa.c" } ]
[ { "msg_contents": "Hi,\n\nI find if there are more than one functions in different schemas,\n\nand the functions have the same name and the same arguments,\n\n\\df[+] only display the function that schema earlier appeared in the search_path.\n\nAnd SELECT pg_function_is_visible(funoid) returns f.\n\n\n\nBecause in FunctionIsVisible(Oid funcid) function, only use proname to see if the function can be found by FuncnameGetCandidates.\nI think \\df[+] should display all the functions, and in FunctionIsVisible(Oid funcid) function, should use pronamespace and proname\nto see if the function can be found by FuncnameGetCandidates.\n\n\nNext is my test cases. The PostgreSQL version is 15.1.\n\n\n\nCREATE OR REPLACE FUNCTION fun1(arg1 INT, arg2 OUT int, arg3 IN OUT int)\n\nRETURNS RECORD\n\nAS\n\n$$\n\nBEGIN\n\n arg3 := arg1 + arg2;\n\nEND;\n\n$$ LANGUAGE plpgsql;\n\n\n\n\nCREATE OR REPLACE PROCEDURE proc1(arg1 INT, arg2 IN OUT INT, arg3 OUT INT)\n\nAS\n\n$$\n\nBEGIN\n\n arg3 := arg1 + arg2;\n\nEND;\n\n$$ LANGUAGE plpgsql;\n\n\n\n\n\n\n\npostgres=# \\df\n\n List of functions\n\n Schema | Name | Result data type | Argument data types | Type \n\n--------+-------+------------------+-------------------------------------------------------+------\n\n public | fun1 | record | arg1 integer, OUT arg2 integer, INOUT arg3 integer | func\n\n public | proc1 | | IN arg1 integer, INOUT arg2 integer, OUT arg3 integer | proc\n\n(2 rows)\n\n\n\n\n\n\n\nset search_path=\"$user\", public, s1;\n\n\n\n\nCREATE SCHEMA s1;\n\n\n\n\nCREATE OR REPLACE FUNCTION s1.fun1(arg1 INT, arg2 OUT int, arg3 IN OUT int)\n\nRETURNS RECORD\n\nAS\n\n$$\n\nBEGIN\n\n arg3 := arg1 + arg2;\n\nEND;\n\n$$ LANGUAGE plpgsql;\n\n\n\n\nCREATE OR REPLACE PROCEDURE s1.proc1(arg1 INT, arg2 IN OUT INT, arg3 OUT INT)\n\nAS\n\n$$\n\nBEGIN\n\n arg3 := arg1 + arg2;\n\nEND;\n\n$$ LANGUAGE plpgsql;\n\n\n\n\npostgres=# \\df\n\n List of functions\n\n Schema | Name | Result data type | Argument data types | Type 
\n\n--------+-------+------------------+-------------------------------------------------------+------\n\n public | fun1 | record | arg1 integer, OUT arg2 integer, INOUT arg3 integer | func\n\n public | proc1 | | IN arg1 integer, INOUT arg2 integer, OUT arg3 integer | proc\n\n(2 rows)\n\n\n\n\n\n\n\npostgres=# \\df fun1\n\n List of functions\n\n Schema | Name | Result data type | Argument data types | Type \n\n--------+------+------------------+----------------------------------------------------+------\n\n public | fun1 | record | arg1 integer, OUT arg2 integer, INOUT arg3 integer | func\n\n(1 row)\n\n\npostgres=# select * from pg_proc where proname like 'fun1';\n oid | proname | pronamespace | proowner | prolang | procost | prorows | provariadic | prosupport | prokind | prosecdef | proleakproof | proisstrict | proretset | provolatile | proparallel | pronargs | pronargdefa\nults | prorettype | proargtypes | proallargtypes | proargmodes | proargnames | proargdefaults | protrftypes | prosrc | probin | prosqlbody | proconfig | proacl \n-------+---------+--------------+----------+---------+---------+---------+-------------+------------+---------+-----------+--------------+-------------+-----------+-------------+-------------+----------+------------\n-----+------------+-------------+----------------+-------------+------------------+----------------+-------------+------------------------+--------+------------+-----------+--------\n 16386 | fun1 | 2200 | 10 | 13677 | 100 | 0 | 0 | - | f | f | f | f | f | v | u | 2 | \n 0 | 2249 | 23 23 | {23,23,23} | {i,o,b} | {arg1,arg2,arg3} | | | +| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | BEGIN +| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | arg3 := arg1 + arg2;+| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | END; +| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | | | | | \n 16389 | fun1 | 16388 | 10 | 13677 | 100 | 0 | 0 | - | f | f | f | f | f | v | u | 2 | \n 0 | 
2249 | 23 23 | {23,23,23} | {i,o,b} | {arg1,arg2,arg3} | | | +| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | BEGIN +| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | arg3 := arg1 + arg2;+| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | END; +| | | | \n | | | | | | | | | | | | | | | | | \n | | | | | | | | | | | | \n(2 rows)\n\n\n\n\n\n\npostgres=# SELECT pg_function_is_visible(16386);\n pg_function_is_visible \n------------------------\n t\n(1 row)\n\n\npostgres=# SELECT pg_function_is_visible(16389); --Should display t?\n pg_function_is_visible \n------------------------\n f\n(1 row)\n\n\n\n\nHi,I find if there are more than one functions in different schemas,and the functions have the same name and the same arguments,\\df[+] only display the function that schema earlier appeared in the search_path.And SELECT pg_function_is_visible(funoid) returns f.Because in FunctionIsVisible(Oid funcid) function,  only use proname to see if the function can be found by FuncnameGetCandidates.I think \\df[+] should display all the functions, and in FunctionIsVisible(Oid funcid) function, should use pronamespace and pronameto see if the function can be found by FuncnameGetCandidates.Next is my test cases. 
      |         |         |             |            |         |           |              |             |           |             |             |          |                 |            |             |                |             |                  |                |             | END;                  +|        |            |           |        |         |              |          |         |         |         |             |            |         |           |              |             |           |             |             |          |                 |            |             |                |             |                  |                |             |                        |        |            |           |  16389 | fun1    |        16388 |       10 |   13677 |     100 |       0 |           0 | -          | f       | f         | f            | f           | f         | v           | u           |        2 |               0 |       2249 | 23 23       | {23,23,23}     | {i,o,b}     | {arg1,arg2,arg3} |                |             |                       +|        |            |           |        |         |              |          |         |         |         |             |            |         |           |              |             |           |             |             |          |                 |            |             |                |             |                  |                |             | BEGIN                 +|        |            |           |        |         |              |          |         |         |         |             |            |         |           |              |             |           |             |             |          |                 |            |             |                |             |                  |                |             |   arg3 := arg1 + arg2;+|        |            |           |        |         |              |          |         |         |         |             |            | 
        |           |              |             |           |             |             |          |                 |            |             |                |             |                  |                |             | END;                  +|        |            |           |        |         |              |          |         |         |         |             |            |         |           |              |             |           |             |             |          |                 |            |             |                |             |                  |                |             |                        |        |            |           | (2 rows)postgres=# SELECT pg_function_is_visible(16386); pg_function_is_visible ------------------------ t(1 row)postgres=# SELECT pg_function_is_visible(16389);  --Should display t? pg_function_is_visible ------------------------ f(1 row)", "msg_date": "Mon, 16 Jan 2023 14:21:26 +0800 (CST)", "msg_from": "=?GBK?B?vfA=?= <jinbinge@126.com>", "msg_from_op": true, "msg_subject": "If there are more than two functions in different schemas, the\n functions have the same name and same arguments, \\df[+] only display the\n function that schema first appeared in the search_path." 
}, { "msg_contents": "On Sunday, January 15, 2023, 金 <jinbinge@126.com> wrote:\n\n>\n> postgres=# \\\\df fun1\n>\n> List of functions\n>\n> Schema | Name | Result data type | Argument data types\n> | Type\n>\n> --------+------+------------------+-------------------------\n> ---------------------------+------\n>\n> public | fun1 | record | arg1 integer, OUT arg2 integer, INOUT\n> arg3 integer | func\n> (1 row)\n>\n\nWorking as documented.\n\n\n>\n> postgres=# SELECT pg_function_is_visible(16386);\n> pg_function_is_visible\n> ------------------------\n> t\n> (1 row)\n>\n>\n\n\n> postgres=# SELECT pg_function_is_visible(16389); --Should display t?\n> pg_function_is_visible\n> ------------------------\n> f\n> (1 row)\n>\n\nNo, visible means is the one found by looking through the current\nsearch_path, and the one in s1 is not visible for the very reason that the\none in public is visible.\n\nYou need to read the Patterns section of the psql documentation where the\nbehavior to expect is described.\n\nDavid J.", "msg_date": "Sun, 15 Jan 2023 23:33:56 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: If there are more than two functions in different schemas, the\n functions have the same name and same arguments, \\df[+] only display the\n function that schema first appeared in the search_path." } ]
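A minimal sketch of the first-match-wins lookup David describes above: `pg_function_is_visible()` reports true only for the entry that a walk of `search_path` reaches first, so `s1.fun1` is shadowed by `public.fun1`. This is illustrative Python, not PostgreSQL source code, and the function and catalog names are hypothetical.

```python
def first_visible(search_path, catalog, name):
    """Return the schema whose entry a search_path walk finds first, or None.

    Mirrors the rule discussed in this thread: a function is "visible"
    only if it is the one reached first along search_path.
    """
    for schema in search_path:
        if (schema, name) in catalog:
            return schema
    return None


catalog = {("public", "fun1"), ("s1", "fun1")}
search_path = ["public", "s1"]  # "$user" omitted for brevity

print(first_visible(search_path, catalog, "fun1"))  # -> public
# The s1 copy exists but is shadowed, matching pg_function_is_visible() = f.
print(first_visible(search_path, catalog, "fun1") == "s1")  # -> False
```

Reversing the order of schemas in `search_path` makes the `s1` copy the visible one instead, which is why `\df fun1` ever shows only a single row.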
[ { "msg_contents": "Hi.\n\nWe've run regress isolation tests on partitioned tables and found \ninteresting VACUUM behavior. I'm not sure, if it's intended.\n\nIn the following example, partitioned tables and regular tables behave \ndifferently:\n\nCREATE TABLE vacuum_tab (a int) PARTITION BY HASH (a);\nCREATE TABLE vacuum_tab_1 PARTITION OF vacuum_tab FOR VALUES WITH \n(MODULUS 2, REMAINDER 0);\nCREATE TABLE vacuum_tab_2 PARTITION OF vacuum_tab FOR VALUES WITH \n(MODULUS 2, REMAINDER 1);\nCREATE ROLE regress_vacuum_conflict;\n\nIn the first session:\n\nbegin;\n LOCK vacuum_tab IN SHARE UPDATE EXCLUSIVE MODE;\n\nIn the second:\nSET ROLE regress_vacuum_conflict;\n VACUUM vacuum_tab;\n WARNING: permission denied to vacuum \"vacuum_tab\", skipping it <---- \nhangs here, trying to lock vacuum_tab_1\n\nIn non-partitioned case second session exits after emitting warning. In \npartitioned case, it hangs, trying to get locks.\nThis is due to the fact that in expand_vacuum_rel() we skip parent table \nif vacuum_is_permitted_for_relation(), but don't perform such check for \nits child.\nThe check will be performed later in vacuum_rel(), but after \nvacuum_open_relation(), which leads to hang in the second session.\n\nIs it intended? 
Why don't we perform vacuum_is_permitted_for_relation() \ncheck for inheritors in expand_vacuum_rel()?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 16 Jan 2023 11:18:08 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Inconsistency in vacuum behavior" }, { "msg_contents": "Hi!\n\nI've checked this expand_vacuum_rel() and made a quick fix for this.Here's\nthe result of the test:\n\npostgres@postgres=# set role regress_vacuum_conflict;\nSET\nTime: 0.369 ms\npostgres@postgres=> vacuum vacuum_tab;\nWARNING: permission denied to vacuum \"vacuum_tab\", skipping it\nWARNING: permission denied to vacuum \"vacuum_tab_1\", skipping it\nWARNING: permission denied to vacuum \"vacuum_tab_2\", skipping it\nVACUUM\nTime: 0.936 ms\npostgres@postgres=>\n\nLooks like it's a subject for a patch.\n\nOn Mon, Jan 16, 2023 at 11:18 AM Alexander Pyhalov <a.pyhalov@postgrespro.ru>\nwrote:\n\n> Hi.\n>\n> We've run regress isolation tests on partitioned tables and found\n> interesting VACUUM behavior. I'm not sure, if it's intended.\n>\n> In the following example, partitioned tables and regular tables behave\n> differently:\n>\n> CREATE TABLE vacuum_tab (a int) PARTITION BY HASH (a);\n> CREATE TABLE vacuum_tab_1 PARTITION OF vacuum_tab FOR VALUES WITH\n> (MODULUS 2, REMAINDER 0);\n> CREATE TABLE vacuum_tab_2 PARTITION OF vacuum_tab FOR VALUES WITH\n> (MODULUS 2, REMAINDER 1);\n> CREATE ROLE regress_vacuum_conflict;\n>\n> In the first session:\n>\n> begin;\n> LOCK vacuum_tab IN SHARE UPDATE EXCLUSIVE MODE;\n>\n> In the second:\n> SET ROLE regress_vacuum_conflict;\n> VACUUM vacuum_tab;\n> WARNING: permission denied to vacuum \"vacuum_tab\", skipping it <----\n> hangs here, trying to lock vacuum_tab_1\n>\n> In non-partitioned case second session exits after emitting warning. 
In\n> partitioned case, it hangs, trying to get locks.\n> This is due to the fact that in expand_vacuum_rel() we skip parent table\n> if vacuum_is_permitted_for_relation(), but don't perform such check for\n> its child.\n> The check will be performed later in vacuum_rel(), but after\n> vacuum_open_relation(), which leads to hang in the second session.\n>\n> Is it intended? Why don't we perform vacuum_is_permitted_for_relation()\n> check for inheritors in expand_vacuum_rel()?\n>\n> --\n> Best regards,\n> Alexander Pyhalov,\n> Postgres Professional\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!I've checked this expand_vacuum_rel() and made a quick fix for this.Here's the result of the test:postgres@postgres=# set role regress_vacuum_conflict;SETTime: 0.369 mspostgres@postgres=> vacuum vacuum_tab;WARNING:  permission denied to vacuum \"vacuum_tab\", skipping itWARNING:  permission denied to vacuum \"vacuum_tab_1\", skipping itWARNING:  permission denied to vacuum \"vacuum_tab_2\", skipping itVACUUMTime: 0.936 mspostgres@postgres=>Looks like it's a subject for a patch.On Mon, Jan 16, 2023 at 11:18 AM Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:Hi.\n\nWe've run regress isolation tests on partitioned tables and found \ninteresting VACUUM behavior. 
I'm not sure, if it's intended.\n\nIn the following example, partitioned tables and regular tables behave \ndifferently:\n\nCREATE TABLE vacuum_tab (a int) PARTITION BY HASH (a);\nCREATE TABLE vacuum_tab_1 PARTITION OF vacuum_tab FOR VALUES WITH \n(MODULUS 2, REMAINDER 0);\nCREATE TABLE vacuum_tab_2 PARTITION OF vacuum_tab FOR VALUES WITH \n(MODULUS 2, REMAINDER 1);\nCREATE ROLE regress_vacuum_conflict;\n\nIn the first session:\n\nbegin;\n  LOCK vacuum_tab IN SHARE UPDATE EXCLUSIVE MODE;\n\nIn the second:\nSET ROLE regress_vacuum_conflict;\n  VACUUM vacuum_tab;\n  WARNING:  permission denied to vacuum \"vacuum_tab\", skipping it <---- \nhangs here, trying to lock vacuum_tab_1\n\nIn non-partitioned case second session exits after emitting warning. In \npartitioned case, it hangs, trying to get locks.\nThis is due to the fact that in expand_vacuum_rel() we skip parent table \nif vacuum_is_permitted_for_relation(), but don't perform such check for \nits child.\nThe check will be performed later in vacuum_rel(), but after \nvacuum_open_relation(), which leads to hang in the second session.\n\nIs it intended? 
Why don't we perform vacuum_is_permitted_for_relation() \ncheck for inheritors in expand_vacuum_rel()?\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Mon, 16 Jan 2023 16:48:12 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Hi!\n\nHere's the patch that fixes this case, please check it out.\nThe patch adds vacuum_is_permitted_for_relation() check before adding\npartition relation to the vacuum list, and if permission is denied the\nrelation\nis not added, so it is not passed to vacuum_rel() and there are no try to\nacquire the lock.\n\nCheers!\n\nOn Mon, Jan 16, 2023 at 4:48 PM Nikita Malakhov <hukutoc@gmail.com> wrote:\n\n> Hi!\n>\n> I've checked this expand_vacuum_rel() and made a quick fix for this.Here's\n> the result of the test:\n>\n> postgres@postgres=# set role regress_vacuum_conflict;\n> SET\n> Time: 0.369 ms\n> postgres@postgres=> vacuum vacuum_tab;\n> WARNING: permission denied to vacuum \"vacuum_tab\", skipping it\n> WARNING: permission denied to vacuum \"vacuum_tab_1\", skipping it\n> WARNING: permission denied to vacuum \"vacuum_tab_2\", skipping it\n> VACUUM\n> Time: 0.936 ms\n> postgres@postgres=>\n>\n> Looks like it's a subject for a patch.\n>\n> On Mon, Jan 16, 2023 at 11:18 AM Alexander Pyhalov <\n> a.pyhalov@postgrespro.ru> wrote:\n>\n>> Hi.\n>>\n>> We've run regress isolation tests on partitioned tables and found\n>> interesting VACUUM behavior. 
I'm not sure, if it's intended.\n>>\n>> In the following example, partitioned tables and regular tables behave\n>> differently:\n>>\n>> CREATE TABLE vacuum_tab (a int) PARTITION BY HASH (a);\n>> CREATE TABLE vacuum_tab_1 PARTITION OF vacuum_tab FOR VALUES WITH\n>> (MODULUS 2, REMAINDER 0);\n>> CREATE TABLE vacuum_tab_2 PARTITION OF vacuum_tab FOR VALUES WITH\n>> (MODULUS 2, REMAINDER 1);\n>> CREATE ROLE regress_vacuum_conflict;\n>>\n>> In the first session:\n>>\n>> begin;\n>> LOCK vacuum_tab IN SHARE UPDATE EXCLUSIVE MODE;\n>>\n>> In the second:\n>> SET ROLE regress_vacuum_conflict;\n>> VACUUM vacuum_tab;\n>> WARNING: permission denied to vacuum \"vacuum_tab\", skipping it <----\n>> hangs here, trying to lock vacuum_tab_1\n>>\n>> In non-partitioned case second session exits after emitting warning. In\n>> partitioned case, it hangs, trying to get locks.\n>> This is due to the fact that in expand_vacuum_rel() we skip parent table\n>> if vacuum_is_permitted_for_relation(), but don't perform such check for\n>> its child.\n>> The check will be performed later in vacuum_rel(), but after\n>> vacuum_open_relation(), which leads to hang in the second session.\n>>\n>> Is it intended? 
Why don't we perform vacuum_is_permitted_for_relation()\n>> check for inheritors in expand_vacuum_rel()?\n>>\n>> --\n>> Best regards,\n>> Alexander Pyhalov,\n>> Postgres Professional\n>>\n>>\n>>\n>\n> --\n> Regards,\n> Nikita Malakhov\n> Postgres Professional\n> https://postgrespro.ru/\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/", "msg_date": "Mon, 16 Jan 2023 17:26:20 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Nikita Malakhov писал 2023-01-16 17:26:\n> Hi!\n> \n> Here's the patch that fixes this case, please check it out.\n> The patch adds vacuum_is_permitted_for_relation() check before adding\n> partition relation to the vacuum list, and if permission is denied the\n> relation\n> is not added, so it is not passed to vacuum_rel() and there are no try\n> to\n> acquire the lock.\n> \n> Cheers!\n\nHi.\n\nThe patch seems to solve the issue.\nTwo minor questions I have:\n1) should we error out if HeapTupleIsValid(part_tuple) is false?\n2) comment \"Check partition relations for vacuum permit\" seems to be \nbroken in some way.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 16 Jan 2023 19:46:04 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Hi,\n\nCurrently there is no error in this case, so additional thrown error would\nrequire a new test.\nBesides, throwing an error here does not make sense - it is just a check\nfor a vacuum\npermission, I think the right way is to just skip a relation that is not\nsuitable for vacuum.\nAny thoughts or objections?\n\nOn Mon, Jan 16, 2023 at 7:46 PM Alexander Pyhalov <a.pyhalov@postgrespro.ru>\nwrote:\n\n> Nikita Malakhov писал 2023-01-16 17:26:\n> > Hi!\n> >\n> > Here's the patch that fixes this case, please check it 
out.\n> > The patch adds vacuum_is_permitted_for_relation() check before adding\n> > partition relation to the vacuum list, and if permission is denied the\n> > relation\n> > is not added, so it is not passed to vacuum_rel() and there are no try\n> > to\n> > acquire the lock.\n> >\n> > Cheers!\n>\n> Hi.\n>\n> The patch seems to solve the issue.\n> Two minor questions I have:\n> 1) should we error out if HeapTupleIsValid(part_tuple) is false?\n> 2) comment \"Check partition relations for vacuum permit\" seems to be\n> broken in some way.\n>\n> --\n> Best regards,\n> Alexander Pyhalov,\n> Postgres Professional\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi,Currently there is no error in this case, so additional thrown error would require a new test.Besides, throwing an error here does not make sense - it is just a check for a vacuumpermission, I think the right way is to just skip a relation that is not suitable for vacuum.Any thoughts or objections?On Mon, Jan 16, 2023 at 7:46 PM Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:Nikita Malakhov писал 2023-01-16 17:26:\n> Hi!\n> \n> Here's the patch that fixes this case, please check it out.\n> The patch adds vacuum_is_permitted_for_relation() check before adding\n> partition relation to the vacuum list, and if permission is denied the\n> relation\n> is not added, so it is not passed to vacuum_rel() and there are no try\n> to\n> acquire the lock.\n> \n> Cheers!\n\nHi.\n\nThe patch seems to solve the issue.\nTwo minor questions I have:\n1) should we error out if HeapTupleIsValid(part_tuple) is false?\n2) comment \"Check partition relations for vacuum permit\" seems to be \nbroken in some way.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Mon, 16 Jan 2023 20:12:18 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: 
Inconsistency in vacuum behavior" }, { "msg_contents": "Nikita Malakhov писал 2023-01-16 20:12:\n> Hi,\n> \n> Currently there is no error in this case, so additional thrown error\n> would require a new test.\n> Besides, throwing an error here does not make sense - it is just a\n> check for a vacuum\n> permission, I think the right way is to just skip a relation that is\n> not suitable for vacuum.\n> Any thoughts or objections?\n> \n\nNo objections for not throwing an error.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Mon, 16 Jan 2023 20:15:24 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Hi hackers!\n\nAlexander found a very good issue.\nPlease check the solution above. Any objections? It's a production case,\nplease review,\nany thoughts and objections are welcome.\n\nOn Mon, Jan 16, 2023 at 8:15 PM Alexander Pyhalov <a.pyhalov@postgrespro.ru>\nwrote:\n\n> Nikita Malakhov писал 2023-01-16 20:12:\n> > Hi,\n> >\n> > Currently there is no error in this case, so additional thrown error\n> > would require a new test.\n> > Besides, throwing an error here does not make sense - it is just a\n> > check for a vacuum\n> > permission, I think the right way is to just skip a relation that is\n> > not suitable for vacuum.\n> > Any thoughts or objections?\n> >\n>\n> No objections for not throwing an error.\n>\n> --\n> Best regards,\n> Alexander Pyhalov,\n> Postgres Professional\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi hackers!Alexander found a very good issue.Please check the solution above. Any objections? 
It's a production case, please review,any thoughts and objections are welcome.On Mon, Jan 16, 2023 at 8:15 PM Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:Nikita Malakhov писал 2023-01-16 20:12:\n> Hi,\n> \n> Currently there is no error in this case, so additional thrown error\n> would require a new test.\n> Besides, throwing an error here does not make sense - it is just a\n> check for a vacuum\n> permission, I think the right way is to just skip a relation that is\n> not suitable for vacuum.\n> Any thoughts or objections?\n> \n\nNo objections for not throwing an error.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Wed, 18 Jan 2023 11:27:19 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "On Mon, Jan 16, 2023 at 08:12:18PM +0300, Nikita Malakhov wrote:\n> Hi,\n> \n> Currently there is no error in this case, so additional thrown error would\n> require a new test.\n> Besides, throwing an error here does not make sense - it is just a check\n> for a vacuum\n> permission, I think the right way is to just skip a relation that is not\n> suitable for vacuum.\n> Any thoughts or objections?\n\nCould you check if this is consistent between the behavior of VACUUM\nFULL and CLUSTER ? 
See also Nathan's patches.\n\n-- \nJustin\n\n\n", "msg_date": "Wed, 18 Jan 2023 19:49:15 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Hi!\n\nI've found the discussion you'd mentioned before, checking now.\n\nOn Thu, Jan 19, 2023 at 4:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Mon, Jan 16, 2023 at 08:12:18PM +0300, Nikita Malakhov wrote:\n> > Hi,\n> >\n> > Currently there is no error in this case, so additional thrown error\n> would\n> > require a new test.\n> > Besides, throwing an error here does not make sense - it is just a check\n> > for a vacuum\n> > permission, I think the right way is to just skip a relation that is not\n> > suitable for vacuum.\n> > Any thoughts or objections?\n>\n> Could you check if this is consistent between the behavior of VACUUM\n> FULL and CLUSTER ? See also Nathan's patches.\n>\n> --\n> Justin\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!I've found the discussion you'd mentioned before, checking now.On Thu, Jan 19, 2023 at 4:49 AM Justin Pryzby <pryzby@telsasoft.com> wrote:On Mon, Jan 16, 2023 at 08:12:18PM +0300, Nikita Malakhov wrote:\n> Hi,\n> \n> Currently there is no error in this case, so additional thrown error would\n> require a new test.\n> Besides, throwing an error here does not make sense - it is just a check\n> for a vacuum\n> permission, I think the right way is to just skip a relation that is not\n> suitable for vacuum.\n> Any thoughts or objections?\n\nCould you check if this is consistent between the behavior of VACUUM\nFULL and CLUSTER ?  
See also Nathan's patches.\n\n-- \nJustin\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Thu, 19 Jan 2023 10:34:38 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Justin Pryzby писал 2023-01-19 04:49:\n> On Mon, Jan 16, 2023 at 08:12:18PM +0300, Nikita Malakhov wrote:\n>> Hi,\n>> \n>> Currently there is no error in this case, so additional thrown error \n>> would\n>> require a new test.\n>> Besides, throwing an error here does not make sense - it is just a \n>> check\n>> for a vacuum\n>> permission, I think the right way is to just skip a relation that is \n>> not\n>> suitable for vacuum.\n>> Any thoughts or objections?\n> \n> Could you check if this is consistent between the behavior of VACUUM\n> FULL and CLUSTER ? See also Nathan's patches.\n\nHi.\n\nCluster behaves in a different way - it errors out immediately if \nrelation is not owned by user. 
For partitioned rel it would anyway raise \nerror later.\nVACUUM and VACUUM FULL behave consistently after applying Nikita's patch \n(for partitioned and regular tables) - issue warning \"skipping \nTABLE_NAME --- only table or database owner can vacuum it\" and return \nsuccess status.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:37:27 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Hi!\n\nFor the test Alexander described in the beginning of the discussion - the\nresults are\npostgres@postgres=# set role regress_vacuum_conflict;\nSET\nTime: 0.324 ms\npostgres@postgres=> vacuum vacuum_tab;\nWARNING: permission denied to vacuum \"vacuum_tab\", skipping it\nWARNING: permission denied to vacuum \"vacuum_tab_1\", skipping it\nWARNING: permission denied to vacuum \"vacuum_tab_2\", skipping it\nVACUUM\nTime: 1.782 ms\npostgres@postgres=> vacuum full;\nWARNING: permission denied to vacuum \"pg_statistic\", skipping it\nWARNING: permission denied to vacuum \"vacuum_tab\", skipping it\n...\npostgres@postgres=> cluster vacuum_tab;\nERROR: must be owner of table vacuum_tab\nTime: 0.384 ms\n\nFor the standard role \"Postgres\" the behavior is the same as before patch.\n\nOn Thu, Jan 19, 2023 at 10:37 AM Alexander Pyhalov <a.pyhalov@postgrespro.ru>\nwrote:\n\n> Justin Pryzby писал 2023-01-19 04:49:\n> > On Mon, Jan 16, 2023 at 08:12:18PM +0300, Nikita Malakhov wrote:\n> >> Hi,\n> >>\n> >> Currently there is no error in this case, so additional thrown error\n> >> would\n> >> require a new test.\n> >> Besides, throwing an error here does not make sense - it is just a\n> >> check\n> >> for a vacuum\n> >> permission, I think the right way is to just skip a relation that is\n> >> not\n> >> suitable for vacuum.\n> >> Any thoughts or objections?\n> >\n> > Could you check if this is consistent between the 
behavior of VACUUM\n> > FULL and CLUSTER ? See also Nathan's patches.\n>\n> Hi.\n>\n> Cluster behaves in a different way - it errors out immediately if\n> relation is not owned by user. For partitioned rel it would anyway raise\n> error later.\n> VACUUM and VACUUM FULL behave consistently after applying Nikita's patch\n> (for partitioned and regular tables) - issue warning \"skipping\n> TABLE_NAME --- only table or database owner can vacuum it\" and return\n> success status.\n>\n> --\n> Best regards,\n> Alexander Pyhalov,\n> Postgres Professional\n>\n\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!For the test Alexander described in the beginning of the discussion - the results arepostgres@postgres=# set role regress_vacuum_conflict;SETTime: 0.324 mspostgres@postgres=> vacuum vacuum_tab;WARNING:  permission denied to vacuum \"vacuum_tab\", skipping itWARNING:  permission denied to vacuum \"vacuum_tab_1\", skipping itWARNING:  permission denied to vacuum \"vacuum_tab_2\", skipping itVACUUMTime: 1.782 mspostgres@postgres=> vacuum full;WARNING:  permission denied to vacuum \"pg_statistic\", skipping itWARNING:  permission denied to vacuum \"vacuum_tab\", skipping it...postgres@postgres=> cluster vacuum_tab;ERROR:  must be owner of table vacuum_tabTime: 0.384 msFor the standard role \"Postgres\" the behavior is the same as before patch.On Thu, Jan 19, 2023 at 10:37 AM Alexander Pyhalov <a.pyhalov@postgrespro.ru> wrote:Justin Pryzby писал 2023-01-19 04:49:\n> On Mon, Jan 16, 2023 at 08:12:18PM +0300, Nikita Malakhov wrote:\n>> Hi,\n>> \n>> Currently there is no error in this case, so additional thrown error \n>> would\n>> require a new test.\n>> Besides, throwing an error here does not make sense - it is just a \n>> check\n>> for a vacuum\n>> permission, I think the right way is to just skip a relation that is \n>> not\n>> suitable for vacuum.\n>> Any thoughts or objections?\n> \n> Could you check if this is consistent between 
the behavior of VACUUM\n> FULL and CLUSTER ?  See also Nathan's patches.\n\nHi.\n\nCluster behaves in a different way - it errors out immediately if \nrelation is not owned by user. For partitioned rel it would anyway raise \nerror later.\nVACUUM and VACUUM FULL behave consistently after applying Nikita's patch \n(for partitioned and regular tables) - issue warning \"skipping \nTABLE_NAME --- only table or database owner can vacuum it\" and return \nsuccess status.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Thu, 19 Jan 2023 14:56:03 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "On Mon, Jan 16, 2023 at 11:18:08AM +0300, Alexander Pyhalov wrote:\n> Is it intended? Why don't we perform vacuum_is_permitted_for_relation()\n> check for inheritors in expand_vacuum_rel()?\n\nSince no lock is held on the partition, the calls to functions like\nobject_ownercheck() and pg_class_aclcheck() in\nvacuum_is_permitted_for_relation() will produce cache lookup ERRORs if the\nrelation is concurrently dropped.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 17:12:35 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" }, { "msg_contents": "Hi!\n\nYes, I've checked that. What would be desirable behavior in the case above?\nAnyway, waiting for table unlock seems to be not quite right.\n\nOn Sat, Jan 21, 2023 at 4:12 AM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> On Mon, Jan 16, 2023 at 11:18:08AM +0300, Alexander Pyhalov wrote:\n> > Is it intended? 
Why don't we perform vacuum_is_permitted_for_relation()\n> > check for inheritors in expand_vacuum_rel()?\n>\n> Since no lock is held on the partition, the calls to functions like\n> object_ownercheck() and pg_class_aclcheck() in\n> vacuum_is_permitted_for_relation() will produce cache lookup ERRORs if the\n> relation is concurrently dropped.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n>\n>\n>\n\n-- \nRegards,\nNikita Malakhov\nPostgres Professional\nhttps://postgrespro.ru/\n\nHi!Yes, I've checked that. What would be desirable behavior in the case above?Anyway, waiting for table unlock seems to be not quite right.On Sat, Jan 21, 2023 at 4:12 AM Nathan Bossart <nathandbossart@gmail.com> wrote:On Mon, Jan 16, 2023 at 11:18:08AM +0300, Alexander Pyhalov wrote:\n> Is it intended? Why don't we perform vacuum_is_permitted_for_relation()\n> check for inheritors in expand_vacuum_rel()?\n\nSince no lock is held on the partition, the calls to functions like\nobject_ownercheck() and pg_class_aclcheck() in\nvacuum_is_permitted_for_relation() will produce cache lookup ERRORs if the\nrelation is concurrently dropped.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n-- Regards,Nikita MalakhovPostgres Professional https://postgrespro.ru/", "msg_date": "Thu, 26 Jan 2023 16:08:11 +0300", "msg_from": "Nikita Malakhov <hukutoc@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Inconsistency in vacuum behavior" } ]
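A simplified toy model of the ordering problem debated in this thread (hypothetical Python, not the server code): if `expand_vacuum_rel()` does not filter partitions, `vacuum_rel()` reaches `vacuum_open_relation()` — i.e. takes a lock that may block — before the partition's permission check runs; checking permission first skips the partition without ever touching the lock. All names below are illustrative.

```python
def expand_and_vacuum(rels, is_permitted, acquire_lock, check_first):
    """Return (lock attempts, skipped rels) for a VACUUM over parent + partitions."""
    locks_tried = 0
    skipped = []
    for rel in rels:
        if check_first and not is_permitted(rel):
            skipped.append(rel)   # emit WARNING and skip; no lock taken
            continue
        locks_tried += 1          # vacuum_open_relation(): may block on a lock
        acquire_lock(rel)
        if not is_permitted(rel):
            skipped.append(rel)   # permission checked only after locking
    return locks_tried, skipped


rels = ["vacuum_tab", "vacuum_tab_1", "vacuum_tab_2"]
denied = lambda rel: False        # the role owns none of these
no_op_lock = lambda rel: None

# Check-after-lock: every relation is locked first, so a concurrent
# SHARE UPDATE EXCLUSIVE lock on a partition makes the VACUUM hang.
print(expand_and_vacuum(rels, denied, no_op_lock, check_first=False)[0])  # -> 3
# Check-before-lock: nothing is locked; all three are skipped with warnings.
print(expand_and_vacuum(rels, denied, no_op_lock, check_first=True)[0])   # -> 0
```

This is only a model of the control flow; as Nathan notes, the real fix also has to cope with partitions being dropped concurrently while unlocked.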
[ { "msg_contents": "I happened to notice we have the case in memoize.sql that tests for\nmemoize node with LATERAL joins, which is\n\n-- Try with LATERAL joins\nSELECT explain_memoize('\nSELECT COUNT(*),AVG(t2.unique1) FROM tenk1 t1,\nLATERAL (SELECT t2.unique1 FROM tenk1 t2 WHERE t1.twenty = t2.unique1) t2\nWHERE t1.unique1 < 1000;', false);\n\nISTM this is not the right query for the test. After the subquery being\npulled up into the parent query, there will be no lateral references any\nmore. I'm thinking maybe we can add an ORDER BY clause in the subquery\nto prevent it from being pulled up.\n\n-- Try with LATERAL joins\nSELECT explain_memoize('\nSELECT COUNT(*),AVG(t2.unique1) FROM tenk1 t1,\nLATERAL (SELECT t2.unique1 FROM tenk1 t2 WHERE t1.twenty = t2.unique1 ORDER\nBY 1) t2\nWHERE t1.unique1 < 1000;', false);\n\nAttach a trivial patch for the change.\n\nThanks\nRichard", "msg_date": "Mon, 16 Jan 2023 17:27:07 +0800", "msg_from": "Richard Guo <guofenglinux@gmail.com>", "msg_from_op": true, "msg_subject": "Improve LATERAL join case in test memoize.sql" }, { "msg_contents": "On Mon, 16 Jan 2023 at 22:27, Richard Guo <guofenglinux@gmail.com> wrote:\n> I happened to notice we have the case in memoize.sql that tests for\n> memoize node with LATERAL joins, which is\n>\n> -- Try with LATERAL joins\n> SELECT explain_memoize('\n> SELECT COUNT(*),AVG(t2.unique1) FROM tenk1 t1,\n> LATERAL (SELECT t2.unique1 FROM tenk1 t2 WHERE t1.twenty = t2.unique1) t2\n> WHERE t1.unique1 < 1000;', false);\n>\n> ISTM this is not the right query for the test.\n\n> Attach a trivial patch for the change.\n\nGood catch. I've applied this back to v14.\n\nDavid\n\n\n", "msg_date": "Tue, 24 Jan 2023 12:31:59 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Improve LATERAL join case in test memoize.sql" } ]
[ { "msg_contents": "src/tools/darwin_sysroot (previously in src/template/darwin) contains this:\n\n# [...] Using a version-specific sysroot seems\n# desirable, so if the path is a non-version-specific symlink, expand\n# it.\n\nOn my system, the non-version-specific symlink is\n\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk\n\nand this script turns that into\n\n/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX13.1.sdk\n\nNow, every time the minor version of macOS is updated (e.g., 13.1 -> \n13.2), the sysroot path is no longer there and the build fails. The fix \nis to reconfigure and rebuild.\n\nMaybe in the past these minor versions were rare, but at the moment it \nlooks like there is one about every two months. So every two months I \nhave to reconfigure and rebuild all my Postgres checkouts, of which I \nhave about 20 to 30, so this is getting a bit insane.\n\nThis code has been whacked around quite a bit, so it's hard to find the \norigin of this. But I'm going to submit a vote for \"seems *not* desirable\".\n\n(There is a workaround by setting PG_SYSROOT in the environment or \nsetting a meson option. But the default shouldn't be so fragile.)\n\n\n", "msg_date": "Mon, 16 Jan 2023 11:15:27 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "macOS versioned sysroot" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> src/tools/darwin_sysroot (previously in src/template/darwin) contains this:\n> # [...] Using a version-specific sysroot seems\n> # desirable, so if the path is a non-version-specific symlink, expand\n> # it.\n\n> This code has been whacked around quite a bit, so it's hard to find the \n> origin of this. But I'm going to submit a vote for \"seems *not* desirable\".\n\nThe reasoning for this is in the commit log for 4823621db:\n\n    Also, \"xcrun --show-sdk-path\" may give a path that is valid but lacks\n    any OS version identifier. We don't really want that, since most\n    of the motivation for wiring -isysroot into the build flags at all\n    is to ensure that all parts of a PG installation are built against\n    the same SDK, even when considering extensions built later and/or on\n    a different machine. Insist on finding \"N.N\" in the directory name\n    before accepting the result. (Adding \"--sdk macosx\" to the xcrun\n    call seems to produce the same answer as xcodebuild, but usually\n    more quickly because it's cached, so we also try that as a fallback.)\n\n    The core reason why we don't want to use Xcode's default SDK in cases\n    like this is that Apple's technology for introducing new syscalls\n    does not play nice with Autoconf: for example, configure will think\n    that preadv/pwritev exist when using a Big Sur SDK, even when building\n    on an older macOS version where they don't exist.
[ { "msg_contents": "Hi hackers,\n\nWhile playing with 64-bit XIDs [1] my attention was drawn by the\nfollowing statement in the docs [2]:\n\n\"\"\"\nIf these warnings are ignored, the system will shut down and refuse to\nstart any new transactions once there are fewer than three million\ntransactions left until wraparound.\n\"\"\"\n\nI decided to check this.\n\nUnfortunately it can't be done easily e.g. by modifying\nShmemVariableCache->nextXid in gdb, because the system will PANIC with\nsomething like \"could not access status of transaction 12345\".\nHopefully [3] will change the situation someday.\n\nMeanwhile I choose the hard way. In one session I did:\n\n```\nCREATE TABLE phonebook(\n \"id\" SERIAL PRIMARY KEY NOT NULL,\n \"name\" NAME NOT NULL,\n \"phone\" INT NOT NULL);\n\nBEGIN;\nINSERT INTO phonebook VALUES (1, 'Alex', 123);\n\n-- don't commit!\n\n```\n\nThen I did the following:\n\n```\necho \"SELECT pg_current_xact_id();\" > t.sql\npgbench -j 8 -c 8 -f t.sql -T 86400 eax\n```\n\nAfter 20-24 hours on the typical hardware (perhaps faster if only I\ndidn't forget to use `synchronous_commit = off`) pgbench will use up\nthe XID pool. The old tuples can't be frozen because the transaction\nwe created in the beginning is still in progress. So now we can\nobserve what actually happens when the system reaches xidStopLimit.\n\nFirstly, the system doesn't shutdown as the documentation says.\nSecondly, it executes new transactions just fine as long as these\ntransactions don't allocate new XIDs.\n\nXIDs are allocated not for every transaction but rather lazily, when\nneeded (see backend_xid in pg_stat_activity). A transaction doesn't\nneed an assigned XID for checking the visibility of the tuples. Rather\nit uses xmin horizon, and only when using an isolation level above\nREAD COMMITTED, see backend_xmin in pg_stat_activity. 
Assigning a xmin\nhorizon doesn't increase nextXid.\n\nAs a result, PostgreSQL can still execute read-only transactions even\nafter reaching xidStopLimit. Similarly to how it can do this on hot\nstandby replicas without having conflicts with the leader server.\n\nThirdly, if there was a transaction created before reaching\nxidStopLimit, it will continue to execute after reaching xidStopLimit,\nand it can be successfully committed.\n\nAll in all, the actual behavior is far from \"system shutdown\" and\n\"refusing to start any new transactions\". It's closer to entering\nread-only mode, similarly to what hot standbys allow to do.\n\nThe proposed patchset changes the documentation and the error messages\naccordingly, making them less misleading. 0001 corrects the\ndocumentation but doesn't touch the code. 0002 and 0003 correct the\nmessages shown when approaching xidWrapLimit and xidWarnLimit\naccordingly.\n\nThoughts?\n\n[1]: https://commitfest.postgresql.org/41/3594/\n[2]: https://www.postgresql.org/docs/current/routine-vacuuming.html#VACUUM-FOR-WRAPAROUND\n[3]: https://commitfest.postgresql.org/41/3729/\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 16 Jan 2023 13:35:39 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi hackers,\n\n> The proposed patchset changes the documentation and the error messages\n> accordingly, making them less misleading. 0001 corrects the\n> documentation but doesn't touch the code. 0002 and 0003 correct the\n> messages shown when approaching xidWrapLimit and xidWarnLimit\n> accordingly.\n\nA colleague of mine, Oleksii Kliukin, pointed out that the\nrecommendation about running VACUUM in a single-user mode is also\noutdated, as it was previously reported in [1]. I didn't believe it at\nfirst and decided to double-check:\n\n```\n=# select * from phonebook;\n id | name | phone\n----+---------+-------\n 1 | Alex | 123\n 5 | Charlie | 789\n 2 | Bob | 456\n 6 | Ololo | 789\n(4 rows)\n\n=# insert into phonebook values (7, 'Trololo', 987);\nERROR: database is not accepting commands to avoid wraparound data\nloss in database \"template1\"\nHINT: Stop the postmaster and vacuum that database in single-user mode.\nYou might also need to commit or roll back old prepared transactions,\nor drop stale replication slots.\n\n=# VACUUM FREEZE;\nVACUUM\n\n=# insert into phonebook values (7, 'Trololo', 987);\nINSERT 0 1\n\n=# SELECT current_setting('wal_level');\n current_setting\n-----------------\n logical\n```\n\nUnfortunately the [1] discussion went nowhere. So I figured it would\nbe appropriate to add corresponding changes to the proposed patchset\nsince it's relevant and is registered in the CF app already. PFA\npatchset v2 which now also includes 0004.\n\n[1]: https://www.postgresql.org/message-id/flat/CAMT0RQTmRj_Egtmre6fbiMA9E2hM3BsLULiV8W00stwa3URvzA%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 16 Jan 2023 15:50:57 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, Jan 16, 2023 at 03:50:57PM +0300, Aleksander Alekseev wrote:\n> Hi hackers,\n> \n> > The proposed patchset changes the documentation and the error messages\n> > accordingly, making them less misleading. 0001 corrects the\n> > documentation but doesn't touch the code. 0002 and 0003 correct the\n> > messages shown when approaching xidWrapLimit and xidWarnLimit\n> > accordingly.\n> \n> A colleague of mine, Oleksii Kliukin, pointed out that the\n> recommendation about running VACUUM in a single-user mode is also\n> outdated, as it was previously reported in [1]. I didn't believe it at\n> first and decided to double-check:\n\nand again at:\nhttps://www.postgresql.org/message-id/flat/CA%2BTgmoYPfofQmRtUan%3DA3aWE9wFsJaOFr%2BW_ys2pPkNPr-2FZw%40mail.gmail.com#e7dd25fdcd171c5775f3f9e3f86b2082\n\n> Unfortunately the [1] discussion went nowhere. \n\nlikewise...\n\n> So I figured it would be appropriate to add corresponding changes to\n> the proposed patchset since it's relevant and is registered in the CF\n> app already. PFA patchset v2 which now also includes 0004.\n> \n> [1]:\n> https://www.postgresql.org/message-id/flat/CAMT0RQTmRj_Egtmre6fbiMA9E2hM3BsLULiV8W00stwa3URvzA%40mail.gmail.com\n\nI suggest to resend this with a title like the 2021 thread [1] (I was\nunable to find this just now when I looked)\n| doc: stop telling users to \"vacuum that database in single-user mode\"\n\nAnd copy the participants of the previous two iterations of this thread.\n\n-- \nJustin\", "msg_date": "Wed, 25 Jan 2023 17:28:43 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound (stop telling users to \"vacuum that database in single-user\n mode\")" }, { "msg_contents": "Thanks for picking up this badly-needed topic again! I was irresponsible\nlast year and let it fall off my radar, but I'm looking at the patches, as\nwell as revisiting discussions from the last four (!?) years that didn't\nlead to action.\n\n0001:\n\n+ In this condition the system can still execute read-only transactions.\n+ The active transactions will continue to execute and will be able to\n+ commit.\n\nThis is ambiguous.
I'd first say that any transactions already started can\ncontinue, and then say that only new read-only transactions can be started.\n\n0004:\n\n-HINT: Stop the postmaster and vacuum that database in single-user mode.\n+HINT: VACUUM or VACUUM FREEZE that database.\n\nVACUUM FREEZE is worse and should not be mentioned, since it does\nunnecessary work. Emergency vacuum is not school -- you don't get extra\ncredit for doing unnecessary work.\n\nAlso, we may consider adding a boxed NOTE warning specifically against\nsingle-user mode, especially if this recommendation will change in at least\nsome minor releases so people may not hear about it. See also [1].\n\n- * If we're past xidStopLimit, refuse to execute transactions, unless\n- * we are running in single-user mode (which gives an escape hatch\n- * to the DBA who somehow got past the earlier defenses).\n+ * If we're past xidStopLimit, refuse to allocate new XIDs.\n\nThis patch doesn't completely get rid of the need for single-user mode, so\nit should keep all information about it. If a DBA wanted to e.g. drop or\ntruncate a table to save vacuum time, it is still possible to do it in\nsingle-user mode, so the escape hatch is still useful.\n\nIn swapping this topic back in my head, I also saw [2] where Robert\nsuggested\n\n\"that old prepared transactions and stale replication\nslots should be emphasized more prominently. Maybe something like:\n\nHINT: Commit or roll back old prepared transactions, drop stale\nreplication slots, or kill long-running sessions.\nEnsure that autovacuum is progressing, or run a manual database-wide\nVACUUM.\"\n\nThat sounds like a good direction to me. There is more we could do here to\nmake the message more specific [3][4][5], but the patches here are in the\nright direction.\n\nNote for possible backpatching: It seems straightforward to go back to\nPG14, which has the failsafe, but we should have better testing in place\nfirst. 
There is a patch in this CF to make it easier to get close to\nwraparound, so I'll look at what it does as well.\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoadjx%2Br8_gGbbnNifL6vEyjZntiQRPzyixrUihvtZ5jdQ%40mail.gmail.com\n[2]\nhttps://www.postgresql.org/message-id/CA+Tgmob1QCMJrHwRBK8HZtGsr+6cJANRQw2mEgJ9e=D+z7cOsw@mail.gmail.com\n[3]\nhttps://www.postgresql.org/message-id/20190504023015.5mgpbl27tld4irw5%40alap3.anarazel.de\n[4]\nhttps://www.postgresql.org/message-id/20220204013539.qdegpqzvayq3d4y2%40alap3.anarazel.de\n[5]\nhttps://www.postgresql.org/message-id/20220220045757.GA3733812%40rfd.leadboat.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nThanks for picking up this badly-needed topic again! I was irresponsible last year and let it fall off my radar, but I'm looking at the patches, as well as revisiting discussions from the last four (!?) years that didn't lead to action.0001:+    In this condition the system can still execute read-only transactions.+    The active transactions will continue to execute and will be able to+    commit.This is ambiguous. I'd first say that any transactions already started can continue, and then say that only new read-only transactions can be started.0004:-HINT:  Stop the postmaster and vacuum that database in single-user mode.+HINT:  VACUUM or VACUUM FREEZE that database.VACUUM FREEZE is worse and should not be mentioned, since it does unnecessary work. Emergency vacuum is not school -- you don't get extra credit for doing unnecessary work.Also, we may consider adding a boxed NOTE warning specifically against single-user mode, especially if this recommendation will change in at least some minor releases so people may not hear about it. 
See also [1].-\t * If we're past xidStopLimit, refuse to execute transactions, unless-\t * we are running in single-user mode (which gives an escape hatch-\t * to the DBA who somehow got past the earlier defenses).+\t * If we're past xidStopLimit, refuse to allocate new XIDs.This patch doesn't completely get rid of the need for single-user mode, so it should keep all information about it. If a DBA wanted to e.g. drop or truncate a table to save vacuum time, it is still possible to do it in single-user mode, so the escape hatch is still useful.In swapping this topic back in my head, I also saw [2] where Robert suggested\"that old prepared transactions and stale replicationslots should be emphasized more prominently.  Maybe something like:HINT:  Commit or roll back old prepared transactions, drop stalereplication slots, or kill long-running sessions.Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\"That sounds like a good direction to me. There is more we could do here to make the message more specific [3][4][5], but the patches here are in the right direction.Note for possible backpatching: It seems straightforward to go back to PG14, which has the failsafe, but we should have better testing in place first.
There is a patch in this CF to make it easier to get close to wraparound, so I'll look at what it does as well.[1] https://www.postgresql.org/message-id/CA%2BTgmoadjx%2Br8_gGbbnNifL6vEyjZntiQRPzyixrUihvtZ5jdQ%40mail.gmail.com[2] https://www.postgresql.org/message-id/CA+Tgmob1QCMJrHwRBK8HZtGsr+6cJANRQw2mEgJ9e=D+z7cOsw@mail.gmail.com[3] https://www.postgresql.org/message-id/20190504023015.5mgpbl27tld4irw5%40alap3.anarazel.de[4] https://www.postgresql.org/message-id/20220204013539.qdegpqzvayq3d4y2%40alap3.anarazel.de[5] https://www.postgresql.org/message-id/20220220045757.GA3733812%40rfd.leadboat.com--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Sat, 18 Mar 2023 17:33:29 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi John,\n\n> Thanks for picking up this badly-needed topic again!\n\nMany thanks for the review!\n\n> 0001:\n>\n> + In this condition the system can still execute read-only transactions.\n> + The active transactions will continue to execute and will be able to\n> + commit.\n>\n> This is ambiguous. I'd first say that any transactions already started can continue, and then say that only new read-only transactions can be started.\n\nFixed.\n\n> 0004:\n>\n> -HINT: Stop the postmaster and vacuum that database in single-user mode.\n> +HINT: VACUUM or VACUUM FREEZE that database.\n>\n> VACUUM FREEZE is worse and should not be mentioned, since it does unnecessary work. Emergency vacuum is not school -- you don't get extra credit for doing unnecessary work.\n\nFixed.\n\n> Also, we may consider adding a boxed NOTE warning specifically against single-user mode, especially if this recommendation will change in at least some minor releases so people may not hear about it. 
See also [1].\n\nDone.\n\n> - * If we're past xidStopLimit, refuse to execute transactions, unless\n> - * we are running in single-user mode (which gives an escape hatch\n> - * to the DBA who somehow got past the earlier defenses).\n> + * If we're past xidStopLimit, refuse to allocate new XIDs.\n>\n> This patch doesn't completely get rid of the need for single-user mode, so it should keep all information about it. If a DBA wanted to e.g. drop or truncate a table to save vacuum time, it is still possible to do it in single-user mode, so the escape hatch is still useful.\n\nFixed.\n\n> In swapping this topic back in my head, I also saw [2] where Robert suggested\n>\n> \"that old prepared transactions and stale replication\n> slots should be emphasized more prominently. Maybe something like:\n>\n> HINT: Commit or roll back old prepared transactions, drop stale\n> replication slots, or kill long-running sessions.\n> Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\"\n\nIt looks like the hint regarding replication slots was added at some\npoint. Currently we have:\n\n```\nerrhint( [...]\n \"You might also need to commit or roll back old prepared\ntransactions, or drop stale replication slots.\")));\n```\n\nSo I choose to keep it as is for now. Please let me know if you think\nwe should also add a suggestion to kill long-running sessions, etc.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 21 Mar 2023 14:44:33 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Tue, Mar 21, 2023 at 6:44 PM Aleksander Alekseev <\naleksander@timescale.com> wrote:\n\nOkay, the changes look good. To go further, I think we need to combine into\ntwo patches, one with 0001-0003 and one with 0004:\n\n1. Correct false statements about \"shutdown\" etc. 
This should contain\nchanges that can safely be patched all the way to v11.\n2. Change bad advice (single-user mode) into good advice. We can target\nhead first, and then try to adopt as far back as we safely can (and test).\n\n(...and future work, so not part of the CF here) 3. Tell the user what\ncaused the problem, instead of saying \"go figure it out yourself\".\n\n> > In swapping this topic back in my head, I also saw [2] where Robert\nsuggested\n> >\n> > \"that old prepared transactions and stale replication\n> > slots should be emphasized more prominently. Maybe something like:\n> >\n> > HINT: Commit or roll back old prepared transactions, drop stale\n> > replication slots, or kill long-running sessions.\n> > Ensure that autovacuum is progressing, or run a manual database-wide\nVACUUM.\"\n>\n> It looks like the hint regarding replication slots was added at some\n> point. Currently we have:\n>\n> ```\n> errhint( [...]\n> \"You might also need to commit or roll back old prepared\n> transactions, or drop stale replication slots.\")));\n> ```\n\nYes, the exact same text as it appeared in the [2] thread above, which\nprompted Robert's comment I quoted. I take the point to mean: All of these\nthings need to be taken care of *first*, before vacuuming, so the hint\nshould order things so that it is clear.\n\n> Please let me know if you think\n> we should also add a suggestion to kill long-running sessions, etc.\n\n+1 for also adding that.\n\n- errmsg(\"database is not accepting commands to avoid wraparound data loss\nin database \\\"%s\\\"\",\n+ errmsg(\"database is not accepting commands that generate new XIDs to\navoid wraparound data loss in database \\\"%s\\\"\",\n\nI'm not quite on board with the new message, but welcome additional\nopinions. For one, it's a bit longer and now ambiguous. I also bet that\n\"generate XIDs\" doesn't really communicate anything useful. 
The people who\nunderstand exactly what that means, and what the consequences are, are\nunlikely to let the system get near wraparound in the first place, and\nmight even know enough to ignore the hint.\n\nI'm thinking we might need to convey something about \"writes\". While it's\nless technically correct, I imagine it's more useful. Remember, many users\nhave it drilled in their heads that they need to drop immediately to\nsingle-user mode. I'd like to give some idea of what they can and cannot do.\n\n+ Previously it was required to stop the postmaster and VACUUM the\ndatabase\n+ in a single-user mode. There is no need to use a single-user mode\nanymore.\n\nI think we need to go further and actively warn against it: It's slow,\nimpossible to monitor, disables replication and disables safeguards against\nwraparound. (Other bad things too, but these are easily understandable for\nusers)\n\nMaybe mention also that it's main use in wraparound situations is for a way\nto perform DROPs and TRUNCATEs if that would help speed up resolution.\n\nI propose for discussion that 0004 should show in the docs all the queries\nfor finding prepared xacts, repl slots etc. If we ever show the info at\nruntime, we can dispense with the queries, but there seems to be no urgency\nfor that...\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Tue, Mar 21, 2023 at 6:44 PM Aleksander Alekseev <aleksander@timescale.com> wrote:Okay, the changes look good. To go further, I think we need to combine into two patches, one with 0001-0003 and one with 0004:1. Correct false statements about \"shutdown\" etc. This should contain changes that can safely be patched all the way to v11.2. Change bad advice (single-user mode) into good advice. We can target head first, and then try to adopt as far back as we safely can (and test).(...and future work, so not part of the CF here) 3. 
Tell the user what caused the problem, instead of saying \"go figure it out yourself\".> > In swapping this topic back in my head, I also saw [2] where Robert suggested> >> > \"that old prepared transactions and stale replication> > slots should be emphasized more prominently.  Maybe something like:> >> > HINT:  Commit or roll back old prepared transactions, drop stale> > replication slots, or kill long-running sessions.> > Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\">> It looks like the hint regarding replication slots was added at some> point. Currently we have:>> ```> errhint( [...]>     \"You might also need to commit or roll back old prepared> transactions, or drop stale replication slots.\")));> ```Yes, the exact same text as it appeared in the [2] thread above, which prompted Robert's comment I quoted. I take the point to mean: All of these things need to be taken care of *first*, before vacuuming, so the hint should order things so that it is clear.> Please let me know if you think> we should also add a suggestion to kill long-running sessions, etc.+1 for also adding that. - errmsg(\"database is not accepting commands to avoid wraparound data loss in database \\\"%s\\\"\",+ errmsg(\"database is not accepting commands that generate new XIDs to avoid wraparound data loss in database \\\"%s\\\"\",I'm not quite on board with the new message, but welcome additional opinions. For one, it's a bit longer and now ambiguous. I also bet that \"generate XIDs\" doesn't  really communicate anything useful. The people who understand exactly what that means, and what the consequences are, are unlikely to let the system get near wraparound in the first place, and might even know enough to ignore the hint.I'm thinking we might need to convey something about \"writes\". While it's less technically correct, I imagine it's more useful. Remember, many users have it drilled in their heads that they need to drop immediately to single-user mode. 
I'd like to give some idea of what they can and cannot do.+     Previously it was required to stop the postmaster and VACUUM the database+     in a single-user mode. There is no need to use a single-user mode anymore.I think we need to go further and actively warn against it: It's slow, impossible to monitor, disables replication and disables safeguards against wraparound. (Other bad things too, but these are easily understandable for users)Maybe mention also that it's main use in wraparound situations is for a way to perform DROPs and TRUNCATEs if that would help speed up resolution.I propose for discussion that 0004 should show in the docs all the queries for finding prepared xacts, repl slots etc. If we ever show the info at runtime, we can dispense with the queries, but there seems to be no urgency for that...--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Fri, 31 Mar 2023 14:38:55 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi John,\n\nMany thanks for all the great feedback!\n\n> Okay, the changes look good. To go further, I think we need to combine into two patches, one with 0001-0003 and one with 0004:\n>\n> 1. Correct false statements about \"shutdown\" etc. This should contain changes that can safely be patched all the way to v11.\n> 2. Change bad advice (single-user mode) into good advice. We can target head first, and then try to adopt as far back as we safely can (and test).\n\nDone.\n\n> > > In swapping this topic back in my head, I also saw [2] where Robert suggested\n> > >\n> > > \"that old prepared transactions and stale replication\n> > > slots should be emphasized more prominently. 
Maybe something like:\n> > >\n> > > HINT: Commit or roll back old prepared transactions, drop stale\n> > > replication slots, or kill long-running sessions.\n> > > Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\"\n> >\n> > It looks like the hint regarding replication slots was added at some\n> > point. Currently we have:\n> >\n> > ```\n> > errhint( [...]\n> > \"You might also need to commit or roll back old prepared\n> > transactions, or drop stale replication slots.\")));\n> > ```\n>\n> Yes, the exact same text as it appeared in the [2] thread above, which prompted Robert's comment I quoted. I take the point to mean: All of these things need to be taken care of *first*, before vacuuming, so the hint should order things so that it is clear.\n>\n> > Please let me know if you think\n> > we should also add a suggestion to kill long-running sessions, etc.\n>\n> +1 for also adding that.\n\nOK, done. I included this change as a separate patch. It can be\nsquashed with another one if necessary.\n\nWhile on it, I noticed that multixact.c still talks about a\n\"shutdown\". I made corresponding changes in 0001.\n\n> - errmsg(\"database is not accepting commands to avoid wraparound data loss in database \\\"%s\\\"\",\n> + errmsg(\"database is not accepting commands that generate new XIDs to avoid wraparound data loss in database \\\"%s\\\"\",\n>\n> I'm not quite on board with the new message, but welcome additional opinions. For one, it's a bit longer and now ambiguous. I also bet that \"generate XIDs\" doesn't really communicate anything useful. The people who understand exactly what that means, and what the consequences are, are unlikely to let the system get near wraparound in the first place, and might even know enough to ignore the hint.\n>\n> I'm thinking we might need to convey something about \"writes\". While it's less technically correct, I imagine it's more useful. 
Remember, many users have it drilled in their heads that they need to drop immediately to single-user mode. I'd like to give some idea of what they can and cannot do.\n\nThis particular wording was chosen for consistency with multixact.c:\n\n```\nerrmsg(\"database is not accepting commands that generate new\nMultiXactIds to avoid wraparound data loss in database \\\"%s\\\"\",\n```\n\nThe idea of using \"writes\" is sort of OK, but note that the same\nmessage will appear for a query like:\n\n```\nSELECT pg_current_xact_id();\n```\n\n... which doesn't do writes. The message will be misleading in this case.\n\nOn top of that, although a PostgreSQL user may not be aware of\nMultiXactIds, arguably there are many users that are aware of XIDs.\nNot to mention the fact that XIDs are well documented.\n\nI didn't make this change in v4 since it seems to be controversial and\nprobably not the highest priority at the moment. I suggest we discuss\nit separately.\n\n> I propose for discussion that 0004 should show in the docs all the queries for finding prepared xacts, repl slots etc. If we ever show the info at runtime, we can dispense with the queries, but there seems to be no urgency for that...\n\nGood idea.\n\n> + Previously it was required to stop the postmaster and VACUUM the database\n> + in a single-user mode. There is no need to use a single-user mode anymore.\n>\n> I think we need to go further and actively warn against it: It's slow, impossible to monitor, disables replication and disables safeguards against wraparound. 
(Other bad things too, but these are easily understandable for users)\n>\n> Maybe mention also that it's main use in wraparound situations is for a way to perform DROPs and TRUNCATEs if that would help speed up resolution.\n\nFixed.\n\n\n--\nBest regards,\nAleksander Alekseev", "msg_date": "Mon, 3 Apr 2023 15:33:28 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, Apr 3, 2023 at 7:33 PM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n\n> > Yes, the exact same text as it appeared in the [2] thread above, which\nprompted Robert's comment I quoted. I take the point to mean: All of these\nthings need to be taken care of *first*, before vacuuming, so the hint\nshould order things so that it is clear.\n> >\n> > > Please let me know if you think\n> > > we should also add a suggestion to kill long-running sessions, etc.\n> >\n> > +1 for also adding that.\n>\n> OK, done. I included this change as a separate patch. It can be\n> squashed with another one if necessary.\n\nOkay, great. For v4-0003:\n\nEach hint mentions vacuum twice, because it's adding the vacuum message at\nthe end, but not removing it from the beginning. The vacuum string should\nbe on its own line, since we will have to modify that for back branches\n(skip indexes and truncation).\n\nI'm also having second thoughts about \"Ensure that autovacuum is\nprogressing\" in the hint. That might be better in the docs, if we decide to\ngo ahead with adding a specific checklist there.\n\nIn vacuum.c:\n\n errhint(\"Close open transactions soon to avoid wraparound problems.\\n\"\n- \"You might also need to commit or roll back old prepared transactions, or\ndrop stale replication slots.\")));\n+ \"You might also need to commit or roll back old prepared transactions,\ndrop stale replication slots, or kill long-running sessions. 
Ensure that\nautovacuum is progressing, or run a manual database-wide VACUUM.\")));\n\nThis appears in vacuum_get_cutoffs(), which is called by vacuum and\ncluster, and the open transactions were already mentioned, so this is not\nthe place for this change.\n\n> This particular wording was chosen for consistency with multixact.c:\n>\n> ```\n> errmsg(\"database is not accepting commands that generate new\n> MultiXactIds to avoid wraparound data loss in database \\\"%s\\\"\",\n> ```\n\nOkay, I didn't look into that -- seems like a good precedent.\n\nv4-0002:\n\n- errhint(\"Stop the postmaster and vacuum that database in single-user\nmode.\\n\"\n+ errhint(\"VACUUM that database.\\n\"\n\nFurther in the spirit of consistency, the mulixact path already has\n\"Execute a database-wide VACUUM in that database.\\n\", and that seems like\nbetter wording.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\nOn Mon, Apr 3, 2023 at 7:33 PM Aleksander Alekseev <aleksander@timescale.com> wrote:> > Yes, the exact same text as it appeared in the [2] thread above, which prompted Robert's comment I quoted. I take the point to mean: All of these things need to be taken care of *first*, before vacuuming, so the hint should order things so that it is clear.> >> > > Please let me know if you think> > > we should also add a suggestion to kill long-running sessions, etc.> >> > +1 for also adding that.>> OK, done. I included this change as a separate patch. It can be> squashed with another one if necessary.Okay, great. For v4-0003:Each hint mentions vacuum twice, because it's adding the vacuum message at the end, but not removing it from the beginning. The vacuum string should be on its own line, since we will have to modify that for back branches (skip indexes and truncation).I'm also having second thoughts about \"Ensure that autovacuum is progressing\" in the hint. 
That might be better in the docs, if we decide to go ahead with adding a specific checklist there.In vacuum.c: errhint(\"Close open transactions soon to avoid wraparound problems.\\n\"- \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));+ \"You might also need to commit or roll back old prepared transactions, drop stale replication slots, or kill long-running sessions. Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\")));This appears in vacuum_get_cutoffs(), which is called by vacuum and cluster, and the open transactions were already mentioned, so this is not the place for this change.> This particular wording was chosen for consistency with multixact.c:>> ```> errmsg(\"database is not accepting commands that generate new> MultiXactIds to avoid wraparound data loss in database \\\"%s\\\"\",> ```Okay, I didn't look into that -- seems like a good precedent.v4-0002:- errhint(\"Stop the postmaster and vacuum that database in single-user mode.\\n\"+ errhint(\"VACUUM that database.\\n\"Further in the spirit of consistency, the mulixact path already has \"Execute a database-wide VACUUM in that database.\\n\", and that seems like better wording.--John NaylorEDB: http://www.enterprisedb.com", "msg_date": "Tue, 4 Apr 2023 15:12:10 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi!\n\nI've looked into the patches v4.\nFor 0001:\nI think long \"not accepting commands that generate\" is equivalent to\nmore concise \"can't generate\".\nFor 0003:\nI think double mentioning of Vacuum at each errhist i.e.: \"Execute a\ndatabase-wide VACUUM in that database\" and \"...or run a manual\ndatabase-wide VACUUM.\" are redundant. 
The advice \"Ensure that\nautovacuum is progressing,...\" is also not needed after the advice to\n\"Execute a database-wide VACUUM in that database\".\n\nFor all:\nIn an errhint's list of what _might_ be done I think AND is a little bit\nbetter than OR, as the word _might_ means that each of the proposals in\nthe list is a probable, not a sure one.\n\nThe proposed changes are in patchset v5.\n\nKind regards,\nPavel Borisov,\nSupabase.", "msg_date": "Tue, 4 Apr 2023 13:57:45 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi,\n\n> The proposed changes are in patchset v5.\n\nPavel, John, thanks for your feedback.\n\n> I've looked into the patches v4.\n> For 0001:\n> I think long \"not accepting commands that generate\" is equivalent to\n> more concise \"can't generate\".\n\nFrankly I don't think this is a good change for this particular CF\nentry and it violates the consistency with multixact.c. Additionally\nthe new message is not accurate. The DBMS _can_ generate new XIDs,\nquite a few of them actually. It merely refuses to do so.\n\n> For all:\n> In an errhint's list of what _might_ be done I think AND is a little bit\n> better than OR, as the word _might_ means that each of the proposals in\n> the list is a probable, not a sure one.\n\nWell, that's debatable... IMO \"or\" makes a bit more sense since the\nuser knows better whether he/she needs to kill a long-running\ntransaction, or run VACUUM, or maybe do both. \"And\" would imply that\nthe user needs to do all of it, which is not necessarily true. 
Since\noriginally it was \"or\" I suggest we keep it as is for now.\n\nAll in all I would prefer keeping the focus on the particular problem\nthe patch tries to address.\n\n> For 0003:\n> I think double mentioning of Vacuum at each errhist i.e.: \"Execute a\n> database-wide VACUUM in that database\" and \"...or run a manual\n> database-wide VACUUM.\" are redundant. The advice \"Ensure that\n> autovacuum is progressing,...\" is also not needed after advice to\n> \"Execute a database-wide VACUUM in that database\".\n> [...]\n\n> Okay, great. For v4-0003:\n>\n> Each hint mentions vacuum twice, because it's adding the vacuum message at the end, but not removing it from the beginning. The vacuum string should be on its own line, since we will have to modify that for back branches (skip indexes and truncation).\n\nMy bad. Fixed.\n\n> I'm also having second thoughts about \"Ensure that autovacuum is progressing\" in the hint. That might be better in the docs, if we decide to go ahead with adding a specific checklist there.\n\nOK, done.\n\n> In vacuum.c:\n>\n> errhint(\"Close open transactions soon to avoid wraparound problems.\\n\"\n> - \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n> + \"You might also need to commit or roll back old prepared transactions, drop stale replication slots, or kill long-running sessions. Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\")));\n>\n> This appears in vacuum_get_cutoffs(), which is called by vacuum and cluster, and the open transactions were already mentioned, so this is not the place for this change.\n\nFixed.\n\n> v4-0002:\n>\n> - errhint(\"Stop the postmaster and vacuum that database in single-user mode.\\n\"\n> + errhint(\"VACUUM that database.\\n\"\n>\n> Further in the spirit of consistency, the mulixact path already has \"Execute a database-wide VACUUM in that database.\\n\", and that seems like better wording.\n\nAgree. 
Fixed.\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 4 Apr 2023 16:08:14 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Tue, 4 Apr 2023 at 17:08, Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n>\n> Hi,\n>\n> > The proposed changes are in patchset v5.\n>\n> Pavel, John, thanks for your feedback.\n>\n> > I've looked into the patches v4.\n> > For 0001:\n> > I think long \"not accepting commands that generate\" is equivalent to\n> > more concise \"can't generate\".\n>\n> Frankly I don't think this is a good change for this particular CF\n> entry and it violates the consistency with multixact.c. Additionally\n> the new message is not accurate. The DBMS _can_ generate new XIDs,\n> quite a few of them actually. It merely refuses to do so.\n>\n> > For all:\n> > In a errhint's list what _might_ be done I think AND is a little bit\n> > better that OR as the word _might_ means that each of the proposals in\n> > the list is a probable, not a sure one.\n>\n> Well, that's debatable... IMO \"or\" makes a bit more sense since the\n> user knows better whether he/she needs to kill a long-running\n> transaction, or run VACUUM, or maybe do both. \"And\" would imply that\n> the user needs to do all of it, which is not necessarily true. Since\n> originally it was \"or\" I suggest we keep it as is for now.\n>\n> All in all I would prefer keeping the focus on the particular problem\n> the patch tries to address.\n>\n> > For 0003:\n> > I think double mentioning of Vacuum at each errhist i.e.: \"Execute a\n> > database-wide VACUUM in that database\" and \"...or run a manual\n> > database-wide VACUUM.\" are redundant. The advice \"Ensure that\n> > autovacuum is progressing,...\" is also not needed after advice to\n> > \"Execute a database-wide VACUUM in that database\".\n> > [...]\n>\n> > Okay, great. 
For v4-0003:\n> >\n> > Each hint mentions vacuum twice, because it's adding the vacuum message at the end, but not removing it from the beginning. The vacuum string should be on its own line, since we will have to modify that for back branches (skip indexes and truncation).\n>\n> My bad. Fixed.\n>\n> > I'm also having second thoughts about \"Ensure that autovacuum is progressing\" in the hint. That might be better in the docs, if we decide to go ahead with adding a specific checklist there.\n>\n> OK, done.\n>\n> > In vacuum.c:\n> >\n> > errhint(\"Close open transactions soon to avoid wraparound problems.\\n\"\n> > - \"You might also need to commit or roll back old prepared transactions, or drop stale replication slots.\")));\n> > + \"You might also need to commit or roll back old prepared transactions, drop stale replication slots, or kill long-running sessions. Ensure that autovacuum is progressing, or run a manual database-wide VACUUM.\")));\n> >\n> > This appears in vacuum_get_cutoffs(), which is called by vacuum and cluster, and the open transactions were already mentioned, so this is not the place for this change.\n>\n> Fixed.\n>\n> > v4-0002:\n> >\n> > - errhint(\"Stop the postmaster and vacuum that database in single-user mode.\\n\"\n> > + errhint(\"VACUUM that database.\\n\"\n> >\n> > Further in the spirit of consistency, the mulixact path already has \"Execute a database-wide VACUUM in that database.\\n\", and that seems like better wording.\n>\n> Agree. Fixed.\n\nAlexander,\nOk, nice! I think it could be moved to committer then.\n\nPavel.\n\n\n", "msg_date": "Tue, 4 Apr 2023 18:52:58 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Tue, Apr 4, 2023 at 8:08 PM Aleksander Alekseev <aleksander@timescale.com>\nwrote:\n> [v6]\n\n0001:\n\nLooks good to me. 
I've just made some small edits for v7 and wrote a commit\nmessage to explain how we got here. This can be backpatched all the way, as\nit's simply a correction. I do want to test on v11 first just for\ncompleteness. (The reality has already been tested by others back to 9.6\nbut there's no substitute for trying it yourself). I hope to commit soon\nafter that.\n\n0002:\n\nI've been testing wraparound using the v3 convenience function in [1] to\nreach xidStopLimit:\n\n-- reduce log spam\nalter system set log_min_messages = error;\nalter system set client_min_messages = error;\n-- restart\n\n-- no actual replication, just for testing dropping it\nselect pg_create_physical_replication_slot('foo', true, false);\n\ncreate table t (i int);\n\nBEGIN;\ninsert into t values(1);\nPREPARE TRANSACTION 'trx_id_pin';\n\n-- get to xidStopLimit\nselect consume_xids(1*1000*1000*1000);\ninsert into t values(1);\nselect consume_xids(1*1000*1000*1000);\ninsert into t values(1);\nselect consume_xids( 140*1000*1000);\ninsert into t values(1);\nselect consume_xids( 10*1000*1000);\n\nSELECT datname, age(datfrozenxid) FROM pg_database;\n\n-- works just fine\nselect pg_drop_replication_slot('foo');\n\nCOMMIT PREPARED 'trx_id_pin';\n\n-- watch autovacuum take care of it automatically:\nSELECT datname, age(datfrozenxid) FROM pg_database;\n\nThe consume_xids function builds easily on PG14, but before that it would\nneed a bit of work because data types changed. That coincidentally was the\nfirst version to include the failsafe, which is convenient in this\nscenario. I'd like to do testing on PG12/13 before commit, which would\nrequire turning off truncation in the command (and can also be made faster\nby turning off index cleanup), but I'm also okay with going ahead with just\ngoing back to PG14 at first. That is also safest.\n\nI made some small changes and wrote a suitably comprehensive commit\nmessage. 
I separated the possible uses for single-user mode into a separate\nparagraph in the \"Note:\" , moved the justification for the 3-million xid\nmargin there, and restored the link to how to run it (I already mentioned\nwe still need this info, but didn't notice this part didn't make it back\nin).\n\n0003:\n\nIt really needs a more comprehensive change, and just making a long hint\neven longer doesn't seem worth doing. I'd like to set that aside and come\nback to it. I've left it out of the attached set.\n\n[1]\nhttps://www.postgresql.org/message-id/CAD21AoBZ3t%2BfRtVamQTA%2BwBJaapFUY1dfP08-rxsQ%2BfouPvgKg%40mail.gmail.com\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 29 Apr 2023 15:09:13 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Sat, Apr 29, 2023 at 1:09 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Looks good to me.\n\nI'm strongly in favor of this. It's most unfortunate that it took this long.\n\n> I've just made some small edits for v7 and wrote a commit message to explain how we got here. This can be backpatched all the way, as it's simply a correction.\n\n+1 to backpatching at least back until v14. After all, it wouldn't\nmake any sense for us to not backpatch to 14; the whole justification\nfor doing this was in no way influenced by anything that happened\nsince the failsafe went in.\n\nI'm also in favor of backpatching to 11, 12, and 13 -- though I\nacknowledge that that's more of a judgement call. As you note in\ncomments in the draft patch, the story with these earlier releases\n(especially 11) is a little more complicated for users.\n\n> I made some small changes and wrote a suitably comprehensive commit message. 
I separated the possible uses for single-user mode into a separate paragraph in the \"Note:\" , moved the justification for the 3-million xid margin there, and restored the link to how to run it (I already mentioned we still need this info, but didn't notice this part didn't make it back in).\n\nI notice that you've called xidStopLimit \"read-only mode\" in the docs.\nI think that it makes sense that you wouldn't use the term\nxidStopLimit here, but I'm not sure about this terminology, either. It\nseems to me that it should be something quite specific, since we're\ntalking about a very specific mechanism. Whatever it is, it shouldn't\ncontain the word \"wraparound\".\n\nSeparately, there is a need to update a couple of other places to use\nthis new terminology. The documentation for vacuum_failsafe_age and\nvacuum_multixact_failsafe_age refers to \"system-wide transaction ID\nwraparound failure\", which seems less than ideal, even without your\npatch.\n\nDo we need two new names? One for xidStopLimit, another for\nmultiStopLimit? The latter really can't be called read-only mode.\n\n> 0003:\n>\n> It really needs a more comprehensive change, and just making a long hint even longer doesn't seem worth doing. I'd like to set that aside and come back to it. I've left it out of the attached set.\n\nYeah, 0003 can be treated as independent work IMV.\n\n-- \nPeter Geoghegan", "msg_date": "Sat, 29 Apr 2023 14:14:35 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Sun, Apr 30, 2023 at 4:15 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sat, Apr 29, 2023 at 1:09 AM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n\n> > I made some small changes and wrote a suitably comprehensive commit\nmessage. 
I separated the possible uses for single-user mode into a separate\nparagraph in the \"Note:\" , moved the justification for the 3-million xid\nmargin there, and restored the link to how to run it (I already mentioned\nwe still need this info, but didn't notice this part didn't make it back\nin).\n>\n> I notice that you've called xidStopLimit \"read-only mode\" in the docs.\n> I think that it makes sense that you wouldn't use the term\n> xidStopLimit here, but I'm not sure about this terminology, either. It\n> seems to me that it should be something quite specific, since we're\n> talking about a very specific mechanism. Whatever it is, It shouldn't\n> contain the word \"wraparound\".\n\nHow about\n\n-HINT: To avoid a database shutdown, [...]\n+HINT: To prevent XID exhaustion, [...]\n\n...and \"MXID\", respectively? We could explain in the docs that vacuum and\nread-only queries still work \"when XIDs have been exhausted\", etc.\n\n(I should probably also add in the commit message that the \"shutdown\" in\nthe message was carried over to MXIDs when they arrived also in 2005).\n\n> Separately, there is a need to update a couple of other places to use\n> this new terminology. The documentation for vacuum_sailsafe_age and\n> vacuum_multixact_failsafe_age refer to \"system-wide transaction ID\n> wraparound failure\", which seems less than ideal, even without your\n> patch.\n\nRight, I'll have a look.\n\n> Do we need two new names? One for xidStopLimit, another for\n> multiStopLimit? The latter really can't be called read-only mode.\n\nThanks for that correction.\n\nSomewhat related to the now-postponed 0003: I think the docs would do well\nto have ordered steps for recovering from both XID and MXID exhaustion. The\nprevious practice of shutting down had the side-effect of e.g. rolling back\nall in-progress transactions whose XID appeared in a MXID but if you remain\nin normal mode there is a bit more to check. 
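For illustration, the kinds of checks I have in mind could look something like the queries below. These are untested sketches on my part (column sets and NULL behavior vary a bit by branch), so treat them as assumptions rather than final doc text:

```
-- prepared transactions holding back the oldest XID
SELECT gid, prepared, age(transaction) AS xid_age
FROM pg_prepared_xacts
ORDER BY age(transaction) DESC;

-- replication slots pinning an old xmin
SELECT slot_name, age(xmin) AS xmin_age
FROM pg_replication_slots
ORDER BY age(xmin) DESC NULLS LAST;

-- long-running sessions holding back the horizon
SELECT pid, age(backend_xmin) AS xmin_age, state
FROM pg_stat_activity
WHERE backend_xmin IS NOT NULL
ORDER BY age(backend_xmin) DESC;
```

(This is just a sketch of the idea floated earlier in the thread, that the docs should show the queries for finding prepared xacts, repl slots, etc.)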
Manual VACUUM will warn about\n\"cutoff for removing and freezing tuples is far in the past\", but the docs\nshould be clear on this.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com
", "msg_date": "Sun, 30 Apr 2023 09:29:51 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Sat, Apr 29, 2023 at 7:30 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> How about\n>\n> -HINT: To avoid a database shutdown, [...]\n> +HINT: To prevent XID exhaustion, [...]\n>\n> ...and \"MXID\", respectively? We could explain in the docs that vacuum and read-only queries still work \"when XIDs have been exhausted\", etc.\n\nI think that that particular wording works in this example -- we *are*\navoiding XID exhaustion. But it still doesn't really address my\nconcern -- at least not on its own. I think that we need a term for\nxidStopLimit mode (and perhaps multiStopLimit) itself. This is a\ndiscrete state/mode that is associated with a specific mechanism. I'd\nlike to emphasize the purpose of xidStopLimit (over when xidStopLimit\nhappens) in choosing this user-facing name.\n\nAs you know, the point of xidStopLimit mode is to give autovacuum the\nchance to catch up with managing the XID space through freezing: the\navailable supply of XIDs doesn't meet present demand, and hasn't for\nsome time, so it finally came to this. 
Even if we had true 64-bit XIDs\nwe'd probably still need something similar -- there would still have\nto be *some* point that allowing the \"freezing deficit\" to continue to\ngrow just wasn't tenable. If a person consistently spends more than\nthey take in, their \"initial bankroll\" isn't necessarily relevant. If\nour ~2.1 billion XID \"bankroll\" wasn't enough to avoid xidStopLimit,\nwhy would we expect 8 billion or 20 billion XIDs to have been enough?\n\nI'm thinking of a user-facing name for xidStopLimit along the lines of\n\"emergency XID allocation restoration mode\" (admittedly that's quite a\nmouthful). Something that carries the implication of \"imbalance\". The\nsystem was configured in a way that turned out to be unsustainable.\nThe system was therefore forced to \"restore sustainability\" using the\nonly tool that remained. This is closely related to the failsafe.\n\nAs bad as xidStopLimit is, it won't always be the end of the world --\nmuch depends on individual application requirements.\n\n> (I should probably also add in the commit message that the \"shutdown\" in the message was carried over to MXIDs when they arrived also in 2005).\n>\n> > Separately, there is a need to update a couple of other places to use\n> > this new terminology. The documentation for vacuum_sailsafe_age and\n> > vacuum_multixact_failsafe_age refer to \"system-wide transaction ID\n> > wraparound failure\", which seems less than ideal, even without your\n> > patch.\n>\n> Right, I'll have a look.\n\nAs you know, there is a more general problem with the use of the term\n\"wraparound\" in the docs, and in the system itself (in places like\npg_stat_activity). Even the very basic terminology in this area is\nneedlessly scary. 
Terms like \"VACUUM (to prevent wraparound)\" are\nuncomfortably close to \"a race against time to avoid data corruption\".\nThe system isn't ever supposed to corrupt data, even if misconfigured\n(unless the misconfiguration is very low-level, such as \"fsync=off\").\nUsers should be able to take that much for granted.\n\nI don't expect either of us to address that problem in the short term\n-- the term \"wraparound\" is too baked-in for it to be okay to just\nremove it overnight. But, it could still make sense for your patch (or\nmy own) to fully own the fact that \"wraparound\" is actually a\nmisnomer. At least when used in contexts like \"to prevent wraparound\"\n(xidStopLimit actually \"prevents wraparound\", though we shouldn't say\nanything about it in a place of prominence). Let's reassure users that\nthey should continue to take \"we won't corrupt your data for no good\nreason\" for granted.\n\n> I think the docs would do well to have ordered steps for recovering from both XID and MXID exhaustion.\n\nI had planned to address this with my ongoing work on the \"Routine\nVacuuming\" docs, but I think that you're right about the necessity of\naddressing it as part of this patch.\n\nThese extra steps will be required whenever the problem is a leaked\nprepared transaction, or something along those lines. That is\nincreasingly likely to turn out to be the underlying cause of entering\nxidStopLimit, given the work we've done on VACUUM over the years. I\nstill think that \"imbalance\" is the right way to frame discussion of\nxidStopLimit. After all, autovacuum/VACUUM will still spin its wheels\nin a futile effort to \"restore balance\". 
So it's kinda still about\nrestoring imbalance IMV.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 30 Apr 2023 12:30:03 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, May 1, 2023 at 2:30 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sat, Apr 29, 2023 at 7:30 PM John Naylor\n> <john.naylor@enterprisedb.com> wrote:\n> > How about\n> >\n> > -HINT: To avoid a database shutdown, [...]\n> > +HINT: To prevent XID exhaustion, [...]\n> >\n> > ...and \"MXID\", respectively? We could explain in the docs that vacuum\nand read-only queries still work \"when XIDs have been exhausted\", etc.\n>\n> I think that that particular wording works in this example -- we *are*\n> avoiding XID exhaustion. But it still doesn't really address my\n> concern -- at least not on its own. I think that we need a term for\n> xidStopLimit mode (and perhaps multiStopLimit) itself. This is a\n> discrete state/mode that is associated with a specific mechanism.\n\nWell, since you have a placeholder \"xidStopLimit mode\" in your other patch,\nI'll confine my response to there. Inventing \"modes\" seems like an awkward\nthing to backpatch, not to mention moving the goalposts. My modest goal\nhere is quite limited: to stop lying to our users about \"not accepting\ncommands\", and change tragically awful advice into sensible advice.\n\nHere's my new idea:\n\n-HINT: To avoid a database shutdown, [...]\n+HINT: To prevent XID generation failure, [...]\n\nActually, I like \"allocation\" better, but the v8 patch now has \"generation\"\nsimply because one MXID message already has \"generate\" and I did it that\nway before thinking too hard. 
I'd be okay with either one as long as it's\nconsistent.\n\n> > (I should probably also add in the commit message that the \"shutdown\"\nin the message was carried over to MXIDs when they arrived also in 2005).\n\nDone\n\n> > > Separately, there is a need to update a couple of other places to use\n> > > this new terminology. The documentation for vacuum_sailsafe_age and\n> > > vacuum_multixact_failsafe_age refer to \"system-wide transaction ID\n> > > wraparound failure\", which seems less than ideal, even without your\n> > > patch.\n> >\n> > Right, I'll have a look.\n\nLooking now, I'm even less inclined to invent new terminology in back\nbranches.\n\n> As you know, there is a more general problem with the use of the term\n> \"wraparound\" in the docs, and in the system itself (in places like\n> pg_stat_activity). Even the very basic terminology in this area is\n> needlessly scary. Terms like \"VACUUM (to prevent wraparound)\" are\n> uncomfortably close to \"a race against time to avoid data corruption\".\n> The system isn't ever supposed to corrupt data, even if misconfigured\n> (unless the misconfiguration is very low-level, such as \"fsync=off\").\n> Users should be able to take that much for granted.\n\nGranted. Whatever form your rewrite ends up in, it could make a lot of\nsense to then backpatch a few localized corrections. I wouldn't even object\nto including a few substitutions of s/wraparound failure/allocation\nfailure/ where appropriate. Let's see how that shakes out first.\n\n> > I think the docs would do well to have ordered steps for recovering\nfrom both XID and MXID exhaustion.\n>\n> I had planned to address this with my ongoing work on the \"Routine\n> Vacuuming\" docs, but I think that you're right about the necessity of\n> addressing it as part of this patch.\n\n0003 is now a quick-and-dirty attempt at that, only in the docs. The MXID\npart is mostly copy-pasted from the XID part, just to get something\ntogether. 
I'd like to abbreviate that somehow.\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 1 May 2023 19:33:52 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, May 1, 2023 at 5:34 AM John Naylor <john.naylor@enterprisedb.com> wrote:\n> Well, since you have a placeholder \"xidStopLimit mode\" in your other patch, I'll confine my response to there. Inventing \"modes\" seems like an awkward thing to backpatch, not to mention moving the goalposts. My modest goal here is quite limited: to stop lying to our users about \"not accepting commands\", and change tragically awful advice into sensible advice.\n\nI can't argue with that.\n\n> Here's my new idea:\n>\n> -HINT: To avoid a database shutdown, [...]\n> +HINT: To prevent XID generation failure, [...]\n>\n> Actually, I like \"allocation\" better, but the v8 patch now has \"generation\" simply because one MXID message already has \"generate\" and I did it that way before thinking too hard. I'd be okay with either one as long as it's consistent.\n\nWFM.\n\n> Granted. Whatever form your rewrite ends up in, it could make a lot of sense to then backpatch a few localized corrections. I wouldn't even object to including a few substitutions of s/wraparound failure/allocation failure/ where appropriate. Let's see how that shakes out first.\n\nMakes sense.\n\n> > > I think the docs would do well to have ordered steps for recovering from both XID and MXID exhaustion.\n> >\n> > I had planned to address this with my ongoing work on the \"Routine\n> > Vacuuming\" docs, but I think that you're right about the necessity of\n> > addressing it as part of this patch.\n>\n> 0003 is now a quick-and-dirty attempt at that, only in the docs. The MXID part is mostly copy-pasted from the XID part, just to get something together. 
I'd like to abbreviate that somehow.\n\nYeah, the need to abbreviate statements about MultiXact IDs by saying\nthat they work analogously to XIDs in some particular respect\nis...another thing that makes this tricky.\n\nI don't think that Multis are fundamentally different to XIDs. I\nbelieve that the process through which VACUUM establishes its\nOldestMXact cutoff can be pessimistic compared to OldestXmin, but I\ndon't think that it changes the guidance you'll need to give here.\nVACUUM should always be able to advance relminmxid right up until\nOldestMXact, if that's what the user insists on. For example, VACUUM\nFREEZE sometimes allocates new Multis, just to be able to do that.\n\nObviously there are certain things that can hold back OldestMXact by a\nwildly excessive amount. But I don't think that there is anything that\ncan hold back OldestMXact by a wildly excessive amount that won't more\nor less do the same thing to OldestXmin.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 19:55:08 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, May 1, 2023 at 7:55 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Obviously there are certain things that can hold back OldestMXact by a\n> wildly excessive amount. 
But I don't think that there is anything that\n> can hold back OldestMXact by a wildly excessive amount that won't more\n> or less do the same thing to OldestXmin.\n\nActually, it's probably possible for a transaction that only has a\nvirtual transaction ID to call MultiXactIdSetOldestVisible(), which\nwill then have the effect of holding back OldestMXact without also\nholding back OldestXmin (in READ COMMITTED mode).\n\nWill have to check to make sure, but that won't happen today.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Mon, 1 May 2023 20:03:47 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Tue, May 2, 2023 at 9:55 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, May 1, 2023 at 5:34 AM John Naylor <john.naylor@enterprisedb.com>\nwrote:\n\n> > 0003 is now a quick-and-dirty attempt at that, only in the docs. The\nMXID part is mostly copy-pasted from the XID part, just to get something\ntogether. I'd like to abbreviate that somehow.\n>\n> Yeah, the need to abbreviate statements about MultiXact IDs by saying\n> that they work analogously to XIDs in some particular respect\n> is...another thing that makes this tricky.\n\nThen it sounds like they should stay separate. A direct copy-paste is not\ngood for style, so I will add things like:\n\n- If for some reason autovacuum fails to clear old MXIDs from a table, the\n+ As in the case with XIDs, it is possible for autovacuum to fail to [...]\n\nIt might least be good for readability to gloss over the warning and only\nquote the MXID limit error message, but we'll have to see how it looks.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 May 2023 08:46:15 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" },
{ "msg_contents": "On Tue, May 2, 2023 at 6:46 PM John Naylor <john.naylor@enterprisedb.com> wrote:\n> It might least be good for readability to gloss over the warning and only quote the MXID limit error message, but we'll have to see how it looks.\n\nApparently you expect me to join you in pretending that you didn't\nlambast my review on this thread less than 24 hours ago [1]. I happen\nto believe that this particular patch is of great strategic\nimportance, so I'll admit that I thought about it for a second. But\njust a second -- I have more self-respect than that.\n\nThat's not the only reason, though. I also genuinely don't have the\nfoggiest notion what was behind what you said.
In particular, this\npart still makes zero sense to me:\n\n\"Claim that others are holding you back, and then try to move the\ngoalposts in their work\"\n\nLet me get this straight: \"Moving the goalposts of their work\" refers\nto something *I* did to *you*, on *this* thread...right?\n\nIf anything, I'm biased in *favor* of this patch. The fact that you\nseem to think that I was being obstructionist just doesn't make any\nsense to me, at all. I really don't know where to go from there. I'm\nnot so much upset as baffled.\n\n[1] https://postgr.es/m/CAFBsxsGJMp43QO2cLAh0==ueYVL35pbbEHeXZ0cnZkU=q8sFkg@mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 2 May 2023 20:04:01 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Wed, May 3, 2023 at 10:04 AM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> That's not the only reason, though. I also genuinely don't have the\n> foggiest notion what was behind what you said. In particular, this\n> part still makes zero sense to me:\n>\n> \"Claim that others are holding you back, and then try to move the\n> goalposts in their work\"\n\nI went to go find the phrase that I thought I was reacted to, and ...\nnothing. I am also baffled. My comment was inexcusable.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 3 May 2023 14:30:13 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" },
{ "msg_contents": "On Wed, May 3, 2023 at 12:30 AM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> I went to go find the phrase that I thought I was reacted to, and ... nothing. I am also baffled. My comment was inexcusable.\n\nI'd quite like to drop this topic, and get on with the work at hand.\nBut before I do that, I ask you to consider one thing: if you were\nmistaken about my words (or their intent) on this occasion, isn't it\nalso possible that it wasn't the first time?\n\nI never had the opportunity to sit down to talk with you face to face\nbefore now. If things had been different (if we managed to talk at one\nof the PGCons before COVID, say), then maybe this incident would have\nhappened in just the same way. I can't help but think that some face\ntime would have prevented the whole episode, though.\n\nYou have every right to dislike me on a personal level, of course, but\nif you do then I'd very much prefer that it be due to one of my actual\nflaws. I'm not a petty man -- I don't resent the success of others.\nI've always thought that you do rather good work.
Plus I'm just not in\nthe habit of obstructing things that I directly benefit from.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 3 May 2023 10:58:53 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" },
{ "msg_contents": "On Thu, May 4, 2023 at 12:59 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'd quite like to drop this topic.\n\nI'd be grateful, and the other points you made are, of course, valid.\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 5 May 2023 09:44:57 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" },
{ "msg_contents": "Attached is v9, which is mostly editing the steps for restoring normal\noperation, which are in 0003 now but will be squashed into 0002. Comments\nto polish the wording welcome.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com", "msg_date": "Sat, 13 May 2023 11:13:50 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Fri, May 12, 2023 at 9:14 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n> Attached is v9, which is mostly editing the steps for restoring normal operation, which are in 0003 now but will be squashed into 0002.
Comments to polish the wording welcome.\n\nI'll try to get you more feedback on this soon.\n\nBTW, Google cloud already just instruct their users to ignore the\nxidStopLimit HINT about single user mode:\n\nhttps://cloud.google.com/sql/docs/postgres/txid-wraparound\n\nI checked with archive.org. This directive to just ignore the HINT\nappeared for the first time no later than December 2021. Fixing this\nin Postgres is long overdue.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 14 May 2023 18:06:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Sun, May 14, 2023 at 6:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> BTW, Google cloud already just instruct their users to ignore the\n> xidStopLimit HINT about single user mode:\n>\n> https://cloud.google.com/sql/docs/postgres/txid-wraparound\n\nI read this just today, and was reminded of this thread:\n\nhttps://cloud.google.com/blog/products/databases/alloydb-for-postgresql-under-the-hood-adaptive-autovacuum\n\nIt reads:\n\n\"1. Transaction ID wraparound: PostgreSQL transaction IDs or XIDs are\n32-bit unsigned integers that are assigned to each transaction and\nalso get incremented. 
When they reach their maximum value, it would\nwrap around to zero (similar to a ring buffer) and can lead to data\ncorruption.\"\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 19 Sep 2023 20:41:07 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On 20.09.23 05:41, Peter Geoghegan wrote:\n> On Sun, May 14, 2023 at 6:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> BTW, Google cloud already just instruct their users to ignore the\n>> xidStopLimit HINT about single user mode:\n>>\n>> https://cloud.google.com/sql/docs/postgres/txid-wraparound\n> \n> I read this just today, and was reminded of this thread:\n> \n> https://cloud.google.com/blog/products/databases/alloydb-for-postgresql-under-the-hood-adaptive-autovacuum\n> \n> It reads:\n> \n> \"1. Transaction ID wraparound: PostgreSQL transaction IDs or XIDs are\n> 32-bit unsigned integers that are assigned to each transaction and\n> also get incremented. When they reach their maximum value, it would\n> wrap around to zero (similar to a ring buffer) and can lead to data\n> corruption.\"\n\nWhat is the status of this patch discussion now? It had been set as \nReady for Committer for some months. Do these recent discussions \ninvalidate that? Does it need more discussion?\n\n\n\n", "msg_date": "Sun, 1 Oct 2023 20:46:02 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Sun, Oct 1, 2023 at 11:46 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> What is the status of this patch discussion now? It had been set as\n> Ready for Committer for some months. Do these recent discussions\n> invalidate that? Does it need more discussion?\n\nI don't think that recent discussion invalidated anything. 
I meant to\nfollow-up on investigating the extent to which anything could hold up\nOldestMXact without also holding up OldestXmin/removable cutoff, but\nthat doesn't seem essential.\n\nThis patch does indeed seem \"ready for committer\". John?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Sun, 1 Oct 2023 16:33:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi!\n\nOn Mon, 2 Oct 2023 at 03:34, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Sun, Oct 1, 2023 at 11:46 AM Peter Eisentraut <peter@eisentraut.org> wrote:\n> > What is the status of this patch discussion now? It had been set as\n> > Ready for Committer for some months. Do these recent discussions\n> > invalidate that? Does it need more discussion?\n>\n> I don't think that recent discussion invalidated anything. I meant to\n> follow-up on investigating the extent to which anything could hold up\n> OldestMXact without also holding up OldestXmin/removable cutoff, but\n> that doesn't seem essential.\n>\n> This patch does indeed seem \"ready for committer\". John?\n>\n> --\n> Peter Geoghegan\n\nFWIW I think the patch is still in good shape and worth to be committed.\n\nRegards,\nPavel Borisov\nSupabase\n\n\n", "msg_date": "Mon, 2 Oct 2023 13:15:09 +0400", "msg_from": "Pavel Borisov <pashkin.elfe@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, Oct 2, 2023 at 11:52 AM Pavel Borisov <pashkin.elfe@gmail.com> wrote:\n> FWIW I think the patch is still in good shape and worth to be committed.\n\nI'm also pretty happy with these patches and would like to see at\nleast 0001 and 0002 committed, and probably 0003 as well. I am,\nhowever, -1 on back-patching. Perhaps that is overly cautious, but I\ndon't like changing existing messages in back-branches. 
It will break\ntranslations, and potentially monitoring scripts, etc.\n\nIf John's not available to take this forward, I can volunteer as\nsubstitute committer, unless Peter or Peter would like to handle it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 2 Oct 2023 13:24:48 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, Oct 2, 2023 at 1:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I'm also pretty happy with these patches and would like to see at\n> least 0001 and 0002 committed, and probably 0003 as well. I am,\n> however, -1 on back-patching. Perhaps that is overly cautious, but I\n> don't like changing existing messages in back-branches. It will break\n> translations, and potentially monitoring scripts, etc.\n>\n> If John's not available to take this forward, I can volunteer as\n> substitute committer, unless Peter or Peter would like to handle it.\n\nIf you're willing to take over as committer here, I'll let the issue\nof backpatching go.\n\nI only ask that you note why you've not backpatched in the commit message.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Wed, 4 Oct 2023 08:07:23 -0400", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Wed, Oct 4, 2023 at 8:07 AM Peter Geoghegan <pg@bowt.ie> wrote:\n> If you're willing to take over as committer here, I'll let the issue\n> of backpatching go.\n>\n> I only ask that you note why you've not backpatched in the commit message.\n\nWill do, but see also the last point below.\n\nI have looked over these patches in some detail and here are my thoughts:\n\n- I find the use of the word \"generate\" in error messages slightly\nodd. 
I think it's reasonable given the existing precedent, but the\nword I would have picked is \"assign\", which I see is what Aleksander\nactually had in v1. How would people feel about changing the two\nexisting messages that say \"database is not accepting commands that\ngenerate new MultiXactIds to avoid wraparound data loss ...\" to use\n\"assign\" instead, and then make the new messages match that?\n\n- I think that 0002 needs a bit of wordsmithing. I will work on that.\nIn particular, I don't like this sentence: \"It increases downtime,\nmakes monitoring impossible, disables replication, bypasses safeguards\nagainst wraparound, etc.\" While there's nothing untrue there, it feels\nmore like a sentence from a pgsql-hackers email where most people\nparticipating in the discussion understand the general contours of the\nproblem already than like polished documentation that really lays\nthings out methodically.\n\n- I'm somewhat inclined to have a go at restructuring these patches a\nbit so that some of the documentation changes can potentially be\nback-patched without back-patching the message changes. Even if we\neventually decide to back-patch everything or nothing, there are\nwording adjustments spread across all 3 patches that seem somewhat\nindependent of the changes to the server messages. I think it would be\nclearer to have one patch that is mostly about documentation wording\nchanges, and a second one that is about changing the server messages\nand then making documentation changes that are directly dependent on\nthose message changes. 
And I might also be inclined to back-patch the\nformer patch as far as it makes sense to do so, while leaving the\nlatter one master-only.\n\nComments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 12 Oct 2023 11:54:05 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Thu, Oct 12, 2023 at 8:54 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> - I find the use of the word \"generate\" in error messages slightly\n> odd. I think it's reasonable given the existing precedent, but the\n> word I would have picked is \"assign\", which I see is what Aleksander\n> actually had in v1. How would people feel about changing the two\n> existing messages that say \"database is not accepting commands that\n> generate new MultiXactIds to avoid wraparound data loss ...\" to use\n> \"assign\" instead, and then make the new messages match that?\n\nWFM.\n\n> - I think that 0002 needs a bit of wordsmithing. I will work on that.\n> In particular, I don't like this sentence: \"It increases downtime,\n> makes monitoring impossible, disables replication, bypasses safeguards\n> against wraparound, etc.\" While there's nothing untrue there, it feels\n> more like a sentence from a pgsql-hackers email where most people\n> participating in the discussion understand the general contours of the\n> problem already than like polished documentation that really lays\n> things out methodically.\n\nI agree.\n\n> - I'm somewhat inclined to have a go at restructuring these patches a\n> bit so that some of the documentation changes can potentially be\n> back-patched without back-patching the message changes. Even if we\n> eventually decide to back-patch everything or nothing, there are\n> wording adjustments spread across all 3 patches that seem somewhat\n> independent of the changes to the server messages. 
I think it would be\n> clearer to have one patch that is mostly about documentation wording\n> changes, and a second one that is about changing the server messages\n> and then making documentation changes that are directly dependent on\n> those message changes. And I might also be inclined to back-patch the\n> former patch as far as it makes sense to do so, while leaving the\n> latter one master-only.\n\nNo objections from me.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 12 Oct 2023 09:00:29 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Thu, Oct 12, 2023 at 12:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> No objections from me.\n\nHere is a doc-only patch that I think could be back-patched as far as\nemergency mode exists. It combines all of the wording changes to the\ndocumentation from v1-v3 of the previous version, but without changing\nthe message text that is quoted in the documentation, and without\nadding more instances of similar message texts to the documentation,\nand with a bunch of additional hacking by me. Some things I changed:\n\n- I made it so that the MXID section refers back to the XID section\ninstead of duplicating it, but with a short list of differences.\n- I weakened the existing claim that says you must be a superuser or\nVACUUM definitely won't fix it to say instead that you SHOULD run\nVACUUM as the superuser, because the former is false and the latter is\ntrue.\n- I made the list of steps for recovering more explicit.\n- I split out the bit about running autovacuum in the affected\ndatabase into a separate step to be performed after VACUUM for\ncontinued good operation, rather than a necessary ingredient in\nrecovery, because it isn't.\n- A bit of other minor rejiggering.\n\nI'm not forgetting about the rest of the proposed patch set, or the\nchange I proposed earlier. 
I'm just posting this much now because this\nis how far I got today, and it would be useful to get comments before\nI go further. I think the residual portion of the patch set not\nincluded in this documentation patch will be quite small, and I think\nthat's a good thing, but again, I don't intend to blow that off.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 12 Oct 2023 16:10:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Thu, Oct 12, 2023 at 1:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Thu, Oct 12, 2023 at 12:01 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > No objections from me.\n>\n> Here is a doc-only patch that I think could be back-patched as far as\n> emergency mode exists. It combines all of the wording changes to the\n> documentation from v1-v3 of the previous version, but without changing\n> the message text that is quoted in the documentation, and without\n> adding more instances of similar message texts to the documentation,\n> and with a bunch of additional hacking by me.\n\nIt's a bit weird that we're effectively saying \"pay no attention to\nthat terrible HINT\"...but I get it. The simple fact is that the docs\nwere written in a way that allowed misinformation to catch on -- the\ndamage that needs to be undone isn't exactly limited to the docs\nthemselves.\n\nYour choice to not backpatch the changes to the log messages makes a\nlot more sense, now that I see that I see the wider context built by\nthis preparatory patch. Arguably, it would be counterproductive to\npretend that we didn't make this mistake on the backbranches. 
Better\nto own the mistake.\n\n> Some things I changed:\n>\n> - I made it so that the MXID section refers back to the XID section\n> instead of duplicating it, but with a short list of differences.\n> - I weakened the existing claim that says you must be a superuser or\n> VACUUM definitely won't fix it to say instead that you SHOULD run\n> VACUUM as the superuser, because the former is false and the latter is\n> true.\n> - I made the list of steps for recovering more explicit.\n> - I split out the bit about running autovacuum in the affected\n> database into a separate step to be performed after VACUUM for\n> continued good operation, rather than a necessary ingredient in\n> recovery, because it isn't.\n> - A bit of other minor rejiggering.\n\nThose all make sense to me.\n\n> I'm not forgetting about the rest of the proposed patch set, or the\n> change I proposed earlier. I'm just posting this much now because this\n> is how far I got today, and it would be useful to get comments before\n> I go further. I think the residual portion of the patch set not\n> included in this documentation patch will be quite small, and I think\n> that's a good thing, but again, I don't intend to blow that off.\n\nOf course. Your general approach seems wise.\n\nThanks for working on this. I will be relieved once this is finally\ntaken care of.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 12 Oct 2023 14:52:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi,\n\n> Those all make sense to me.\n>\n> > [...]\n>\n> Of course. Your general approach seems wise.\n>\n> Thanks for working on this. 
I will be relieved once this is finally\n> taken care of.\n\n+1, and many thanks for your attention to the patchset and all the details!\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Fri, 13 Oct 2023 12:03:42 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Fri, Oct 13, 2023 at 5:03 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> > Thanks for working on this. I will be relieved once this is finally\n> > taken care of.\n>\n> +1, and many thanks for your attention to the patchset and all the details!\n\nCool. I committed that and back-patched to v14, and here's the rest.\n0001 makes the terminology change that I proposed earlier, and then\n0002 is the remainder of what was in the previous patch set that\nwasn't covered by what I committed already, with a few adjustments.\n\nIn particular, I preferred to stick with \"avoid\" rather than changing\nto \"prevent,\" and I thought it was clearer to refer to \"failures\"\nplural rather than \"failure\" collective. These are arguable decisions,\nthough.\n\nI propose to commit these changes only to master. I have included a\nfairly long paragraph about that in the commit message for 0002.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Mon, 16 Oct 2023 14:06:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, Oct 16, 2023 at 11:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I propose to commit these changes only to master. 
I have included a\n> fairly long paragraph about that in the commit message for 0002.\n\nLGTM, except for one small detail: I noticed that you misspelled\n\"translations\" in the commit message.\n\nThanks for getting this over the line\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Mon, 16 Oct 2023 12:45:54 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Mon, Oct 16, 2023 at 3:46 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Mon, Oct 16, 2023 at 11:06 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I propose to commit these changes only to master. I have included a\n> > fairly long paragraph about that in the commit message for 0002.\n>\n> LGTM, except for one small detail: I noticed that you misspelled\n> \"translations\" in the commit message.\n\nOops. Fixed locally.\n\n> Thanks for getting this over the line\n\nSure thing. I'm glad we're finally doing something about it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Oct 2023 16:00:49 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hi,\n\n> > LGTM, except for one small detail: I noticed that you misspelled\n> > \"translations\" in the commit message.\n>\n> Oops. Fixed locally.\n\nv11-0001 and v11-0002 LGTM too. 
IMO \"to assign a XID\" sounds better\nthan \"to generate a XID\".\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 17 Oct 2023 11:57:33 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Tue, Oct 17, 2023 at 4:57 AM Aleksander Alekseev\n<aleksander@timescale.com> wrote:\n> v11-0001 and v11-0002 LGTM too.\n\nCool. Seems we are all in agreement, so committed these. Thanks!\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Oct 2023 10:39:40 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "On Tue, Oct 17, 2023 at 9:39 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> Cool. Seems we are all in agreement, so committed these. Thanks!\n\nThank you for getting this across the finish line!\n\n\n", "msg_date": "Wed, 25 Oct 2023 11:08:59 +0700", "msg_from": "John Naylor <johncnaylorls@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" }, { "msg_contents": "Hello Robert,\n\n17.10.2023 17:39, Robert Haas wrote:\n> On Tue, Oct 17, 2023 at 4:57 AM Aleksander Alekseev\n> <aleksander@timescale.com> wrote:\n>> v11-0001 and v11-0002 LGTM too.\n> Cool. Seems we are all in agreement, so committed these. 
Thanks!\n\nPlease look at the following sentence added by the commit:\n        ...\n        to issue manual <command>VACUUM</command> commands on the tables where\n        <structfield>relminxid</structfield> is oldest.\n\nIsn't relminxid a typo there?\n\nBest regards,\nAlexander\n\n\n", "msg_date": "Thu, 2 Nov 2023 08:00:00 +0300", "msg_from": "Alexander Lakhin <exclusion@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Clarify the behavior of the system when approaching XID\n wraparound" } ]
[ { "msg_contents": "Hi,\n\nAs far as I read the manual below, auto_explain.log_verbose should \nrecord logs equivalent to VERBOSE option of EXPLAIN.\n\n> -- https://www.postgresql.org/docs/devel/auto-explain.html\n> auto_explain.log_verbose controls whether verbose details are printed \n> when an execution plan is logged; it's equivalent to the VERBOSE option \n> of EXPLAIN.\n\nHowever, when compute_query_id is on, query identifiers are only printed \nwhen using VERBOSE option of EXPLAIN.\n\nEXPLAIN VERBOSE:\n```\n=# show auto_explain.log_verbose;\n auto_explain.log_verbose\n--------------------------\n on\n(1 row)\n\n=# show compute_query_id;\n compute_query_id\n------------------\n on\n(1 row)\n\n=# explain verbose select 1;\n QUERY PLAN\n------------------------------------------\n Result (cost=0.00..0.01 rows=1 width=4)\n Output: 1\n Query Identifier: -1801652217649936326\n(3 rows)\n```\n\nauto_explain:\n```\nLOG: 00000: duration: 0.000 ms plan:\n Query Text: explain verbose select 1;\n Result (cost=0.00..0.01 rows=1 width=4)\n Output: 1\n```\n\nAttached patch makes auto_explain also print query identifiers.\n\nWhat do you think?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Mon, 16 Jan 2023 21:36:59 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "Hi,\n\nOn Mon, Jan 16, 2023 at 09:36:59PM +0900, torikoshia wrote:\n>\n> As far as I read the manual below, auto_explain.log_verbose should record\n> logs equivalent to VERBOSE option of EXPLAIN.\n\nAh good catch, that's clearly an oversight!\n\n> Attached patch makes auto_explain also print query identifiers.\n>\n> What do you think?\n\n@@ -407,6 +408,9 @@ explain_ExecutorEnd(QueryDesc *queryDesc)\n \t\t\t\tExplainPrintTriggers(es, queryDesc);\n \t\t\tif (es->costs)\n \t\t\t\tExplainPrintJITSummary(es, queryDesc);\n+\t\t\tif (es->verbose && 
queryDesc->plannedstmt->queryId != UINT64CONST(0))\n+\t\t\t\tExplainPropertyInteger(\"Query Identifier\", NULL, (int64)\n+\t\t\t\t\t\t\t\t\t queryDesc->plannedstmt->queryId, es);\n\nFor interactive EXPLAIN the query identifier is printed just after the plan,\nbefore the triggers and the JIT summary so auto_explain should do the same.\n\nOther than that looks good to me.\n\n\n", "msg_date": "Mon, 16 Jan 2023 21:07:29 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On Mon, Jan 16, 2023 at 09:36:59PM +0900, torikoshia wrote:\n> Attached patch makes auto_explain also print query identifiers.\n\nThis was asked during the initial patch; does your patch address the\nissues here ?\n\nhttps://www.postgresql.org/message-id/20200308142644.vlihk7djpwqjkp7w%40nol\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 16 Jan 2023 12:53:13 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On 2023-01-16 22:07, Julien Rouhaud wrote:\n> Hi,\n> \n> On Mon, Jan 16, 2023 at 09:36:59PM +0900, torikoshia wrote:\n>> \n>> As far as I read the manual below, auto_explain.log_verbose should \n>> record\n>> logs equivalent to VERBOSE option of EXPLAIN.\n> \n> Ah good catch, that's clearly an oversight!\n> \n>> Attached patch makes auto_explain also print query identifiers.\n>> \n>> What do you think?\n> \n> @@ -407,6 +408,9 @@ explain_ExecutorEnd(QueryDesc *queryDesc)\n> \t\t\t\tExplainPrintTriggers(es, queryDesc);\n> \t\t\tif (es->costs)\n> \t\t\t\tExplainPrintJITSummary(es, queryDesc);\n> +\t\t\tif (es->verbose && queryDesc->plannedstmt->queryId != \n> UINT64CONST(0))\n> +\t\t\t\tExplainPropertyInteger(\"Query Identifier\", NULL, (int64)\n> +\t\t\t\t\t\t\t\t\t queryDesc->plannedstmt->queryId, es);\n> \n> For interactive EXPLAIN the query identifier is 
printed just after the \n> plan,\n> before the triggers and the JIT summary so auto_explain should do the \n> same.\nThanks for the comment!\nAgreed and updated the patch.\n\n\nOn 2023-01-17 03:53, Justin Pryzby wrote:\n> On Mon, Jan 16, 2023 at 09:36:59PM +0900, torikoshia wrote:\n>> Attached patch makes auto_explain also print query identifiers.\n> \n> This was asked during the initial patch; does your patch address the\n> issues here ?\n> \n> https://www.postgresql.org/message-id/20200308142644.vlihk7djpwqjkp7w%40nol\n\nThanks!\nI may misunderstand something, but it seems that the issue occurred \nsince queryid was calculated in pgss_post_parse_analyze() at that time.\n\n```\n--- queryid_exposure-v6.diff, which is the patch just before the \ndiscussion\n@@ -792,16 +801,34 @@ pgss_post_parse_analyze(ParseState *pstate, Query \n*query)\n..snip..\n\n if (query->utilityStmt)\n {\n- query->queryId = UINT64CONST(0);\n+ if (pgss_track_utility && \nPGSS_HANDLED_UTILITY(query->utilityStmt)\n+ && pstate->p_sourcetext)\n+ {\n+ const char *querytext = pstate->p_sourcetext;\n+ int query_location = query->stmt_location;\n+ int query_len = query->stmt_len;\n+\n+ /*\n+ * Confine our attention to the relevant part of the string, \nif the\n+ * query is a portion of a multi-statement source string.\n+ */\n+ querytext = pgss_clean_querytext(pstate->p_sourcetext,\n+ &query_location,\n+ &query_len);\n+\n+ query->queryId = pgss_compute_utility_queryid(querytext, \nquery_len);\n```\n\nCurrently queryId is not calculated in pgss_post_parse_analyze() and the \nissue does not occur, doesn't it?\nI confirmed the attached patch logged queryid for some utility \nstatements.\n\n```\nLOG: 00000: duration: 0.017 ms plan:\n Query Text: prepare p1 as select 1;\n Result (cost=0.00..0.01 rows=1 width=4)\n Output: 1\n Query Identifier: -1801652217649936326\n```\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 17 Jan 2023 22:53:23 +0900", "msg_from": "torikoshia 
<torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "Hi,\n\nOn Tue, Jan 17, 2023 at 10:53:23PM +0900, torikoshia wrote:\n> >\n> > For interactive EXPLAIN the query identifier is printed just after the\n> > plan,\n> > before the triggers and the JIT summary so auto_explain should do the\n> > same.\n> Thanks for the comment!\n> Agreed and updated the patch.\n\nThanks!\n\n> On 2023-01-17 03:53, Justin Pryzby wrote:\n> > On Mon, Jan 16, 2023 at 09:36:59PM +0900, torikoshia wrote:\n> > > Attached patch makes auto_explain also print query identifiers.\n> >\n> > This was asked during the initial patch; does your patch address the\n> > issues here ?\n> >\n> > https://www.postgresql.org/message-id/20200308142644.vlihk7djpwqjkp7w%40nol\n>\n> Thanks!\n> I may misunderstand something, but it seems that the issue occurred since\n> queryid was calculated in pgss_post_parse_analyze() at that time.\n>\n> ```\n> --- queryid_exposure-v6.diff, which is the patch just before the discussion\n> @@ -792,16 +801,34 @@ pgss_post_parse_analyze(ParseState *pstate, Query\n> *query)\n> ..snip..\n>\n> if (query->utilityStmt)\n> {\n> - query->queryId = UINT64CONST(0);\n> + if (pgss_track_utility && PGSS_HANDLED_UTILITY(query->utilityStmt)\n> + && pstate->p_sourcetext)\n> + {\n> + const char *querytext = pstate->p_sourcetext;\n> + int query_location = query->stmt_location;\n> + int query_len = query->stmt_len;\n> +\n> + /*\n> + * Confine our attention to the relevant part of the string, if\n> the\n> + * query is a portion of a multi-statement source string.\n> + */\n> + querytext = pgss_clean_querytext(pstate->p_sourcetext,\n> + &query_location,\n> + &query_len);\n> +\n> + query->queryId = pgss_compute_utility_queryid(querytext,\n> query_len);\n> ```\n>\n> Currently queryId is not calculated in pgss_post_parse_analyze() and the\n> issue does not occur, doesn't it?\n> I confirmed the attached patch logged 
queryid for some utility statements.\n>\n> ```\n> LOG: 00000: duration: 0.017 ms plan:\n> Query Text: prepare p1 as select 1;\n> Result (cost=0.00..0.01 rows=1 width=4)\n> Output: 1\n> Query Identifier: -1801652217649936326\n> ```\n\nYes, this problem was fixed a long time ago. Just to be sure I checked that\nauto_explain and explain agree on the queryid:\n\n=# select count(*) from pg_class;\n[...]\nLOG: duration: 0.239 ms plan:\n\tQuery Text: select count(*) from pg_class;\n\tAggregate (cost=15.45..15.46 rows=1 width=8)\n\t Output: count(*)\n\t -> Index Only Scan using pg_class_tblspc_relfilenode_index on pg_catalog.pg_class (cost=0.15..14.40 rows=417 width=0)\n\t Output: reltablespace, relfilenode\n\tQuery Identifier: 2855866587085353326\n\n=# explain (verbose) select count(*) from pg_class;\n QUERY PLAN >\n------------------------------------------------------------------------------------------------------------->\n Aggregate (cost=15.45..15.46 rows=1 width=8)\n Output: count(*)\n -> Index Only Scan using pg_class_tblspc_relfilenode_index on pg_catalog.pg_class (cost=0.15..14.40 rows>\n Output: reltablespace, relfilenode\n Query Identifier: 2855866587085353326\n\nLOG: duration: 0.000 ms plan:\n\tQuery Text: explain (verbose) select count(*) from pg_class;\n\tAggregate (cost=15.45..15.46 rows=1 width=8)\n\t Output: count(*)\n\t -> Index Only Scan using pg_class_tblspc_relfilenode_index on pg_catalog.pg_class (cost=0.15..14.40 rows=417 width=0)\n\t Output: reltablespace, relfilenode\n\tQuery Identifier: 2855866587085353326\n\nSo the patch looks good to me. I didn't find any entry in the commitfest, did\nI miss it? 
If not, could you create one (feel free to add me and Justin as\nreviewer, and probably mark is as RfC).\n\nIt's a bit annoying that the info is missing since pg 14, but we probably can't\nbackpatch this as it might break log parser tools.\n\n\n", "msg_date": "Thu, 19 Jan 2023 18:05:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On 2023-01-19 19:05, Julien Rouhaud wrote:\n> Hi,\n> \n> On Tue, Jan 17, 2023 at 10:53:23PM +0900, torikoshia wrote:\n>> >\n>> > For interactive EXPLAIN the query identifier is printed just after the\n>> > plan,\n>> > before the triggers and the JIT summary so auto_explain should do the\n>> > same.\n>> Thanks for the comment!\n>> Agreed and updated the patch.\n> \n> Thanks!\n> \n>> On 2023-01-17 03:53, Justin Pryzby wrote:\n>> > On Mon, Jan 16, 2023 at 09:36:59PM +0900, torikoshia wrote:\n>> > > Attached patch makes auto_explain also print query identifiers.\n>> >\n>> > This was asked during the initial patch; does your patch address the\n>> > issues here ?\n>> >\n>> > https://www.postgresql.org/message-id/20200308142644.vlihk7djpwqjkp7w%40nol\n>> \n>> Thanks!\n>> I may misunderstand something, but it seems that the issue occurred \n>> since\n>> queryid was calculated in pgss_post_parse_analyze() at that time.\n>> \n>> ```\n>> --- queryid_exposure-v6.diff, which is the patch just before the \n>> discussion\n>> @@ -792,16 +801,34 @@ pgss_post_parse_analyze(ParseState *pstate, \n>> Query\n>> *query)\n>> ..snip..\n>> \n>> if (query->utilityStmt)\n>> {\n>> - query->queryId = UINT64CONST(0);\n>> + if (pgss_track_utility && \n>> PGSS_HANDLED_UTILITY(query->utilityStmt)\n>> + && pstate->p_sourcetext)\n>> + {\n>> + const char *querytext = pstate->p_sourcetext;\n>> + int query_location = query->stmt_location;\n>> + int query_len = query->stmt_len;\n>> +\n>> + /*\n>> + * Confine our attention to the relevant part of the 
\n>> string, if\n>> the\n>> + * query is a portion of a multi-statement source string.\n>> + */\n>> + querytext = pgss_clean_querytext(pstate->p_sourcetext,\n>> + &query_location,\n>> + &query_len);\n>> +\n>> + query->queryId = pgss_compute_utility_queryid(querytext,\n>> query_len);\n>> ```\n>> \n>> Currently queryId is not calculated in pgss_post_parse_analyze() and \n>> the\n>> issue does not occur, doesn't it?\n>> I confirmed the attached patch logged queryid for some utility \n>> statements.\n>> \n>> ```\n>> LOG: 00000: duration: 0.017 ms plan:\n>> Query Text: prepare p1 as select 1;\n>> Result (cost=0.00..0.01 rows=1 width=4)\n>> Output: 1\n>> Query Identifier: -1801652217649936326\n>> ```\n> \n> Yes, this problem was fixed a long time ago. Just to be sure I checked \n> that\n> auto_explain and explain agree on the queryid:\n\nThanks for your comment and check!\n> \n> =# select count(*) from pg_class;\n> [...]\n> LOG: duration: 0.239 ms plan:\n> \tQuery Text: select count(*) from pg_class;\n> \tAggregate (cost=15.45..15.46 rows=1 width=8)\n> \t Output: count(*)\n> \t -> Index Only Scan using pg_class_tblspc_relfilenode_index on\n> pg_catalog.pg_class (cost=0.15..14.40 rows=417 width=0)\n> \t Output: reltablespace, relfilenode\n> \tQuery Identifier: 2855866587085353326\n> \n> =# explain (verbose) select count(*) from pg_class;\n> QUERY PLAN\n> >\n> ------------------------------------------------------------------------------------------------------------->\n> Aggregate (cost=15.45..15.46 rows=1 width=8)\n> Output: count(*)\n> -> Index Only Scan using pg_class_tblspc_relfilenode_index on\n> pg_catalog.pg_class (cost=0.15..14.40 rows>\n> Output: reltablespace, relfilenode\n> Query Identifier: 2855866587085353326\n> \n> LOG: duration: 0.000 ms plan:\n> \tQuery Text: explain (verbose) select count(*) from pg_class;\n> \tAggregate (cost=15.45..15.46 rows=1 width=8)\n> \t Output: count(*)\n> \t -> Index Only Scan using pg_class_tblspc_relfilenode_index on\n> 
pg_catalog.pg_class (cost=0.15..14.40 rows=417 width=0)\n> \t Output: reltablespace, relfilenode\n> \tQuery Identifier: 2855866587085353326\n> \n> So the patch looks good to me. I didn't find any entry in the \n> commitfest, did\n> I miss it? If not, could you create one (feel free to add me and \n> Justin as\n> reviewer, and probably mark is as RfC).\n\nSorry to make you go through the trouble of looking for it.\nI've now created it.\nhttps://commitfest.postgresql.org/42/4136/\n\n> \n> It's a bit annoying that the info is missing since pg 14, but we \n> probably can't\n> backpatch this as it might break log parser tools.\n\n+1\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 20 Jan 2023 11:43:51 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On Fri, Jan 20, 2023 at 11:43:51AM +0900, torikoshia wrote:\n> Sorry to make you go through the trouble of looking for it.\n> I've now created it.\n> https://commitfest.postgresql.org/42/4136/\n\nFWIW, no objections from here. This maps with EXPLAIN where the query\nID is only printed under VERBOSE.\n--\nMichael", "msg_date": "Fri, 20 Jan 2023 12:32:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On Fri, Jan 20, 2023 at 12:32:58PM +0900, Michael Paquier wrote:\n> FWIW, no objections from here. This maps with EXPLAIN where the query\n> ID is only printed under VERBOSE.\n\nWhile looking at this change, I have been wondering about something..\nIsn't the knowledge of the query ID something that should be pushed\nwithin ExplainPrintPlan() so as we don't duplicate in two places the\nchecks that show it? 
In short, the patch ignores the case where\ncompute_query_id = regress in auto_explain.\n\nExplainPrintTriggers() is kind of different because there is\nauto_explain_log_triggers. Still, we could add a flag in ExplainState\ndeciding if the triggers should be printed, so as it would be possible\nto move ExplainPrintTriggers() and ExplainPrintJITSummary() within\nExplainPrintPlan(), as well? The same kind of logic could be applied\nfor the planning time and the buffer usage if these are tracked in\nExplainState rather than being explicit arguments of ExplainOnePlan().\nNot to mention that this reduces the differences between\nExplainOneUtility() and ExplainOnePlan().\n\nLeaving this comment aside, I think that this should have at least one\nregression test in 001_auto_explain.pl, where query_log() can be\ncalled while the verbose GUC of auto_explain is enabled.\n--\nMichael", "msg_date": "Mon, 23 Jan 2023 09:35:52 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On 2023-01-23 09:35, Michael Paquier wrote:\n> On Fri, Jan 20, 2023 at 12:32:58PM +0900, Michael Paquier wrote:\n>> FWIW, no objections from here. This maps with EXPLAIN where the query\n>> ID is only printed under VERBOSE.\n> \n> While looking at this change, I have been wondering about something..\n> Isn't the knowledge of the query ID something that should be pushed\n> within ExplainPrintPlan() so as we don't duplicate in two places the\n> checks that show it? In short, the patch ignores the case where\n> compute_query_id = regress in auto_explain.\n\nThanks!\nIt seems better and updated the patch.\n\n> \n> ExplainPrintTriggers() is kind of different because there is\n> auto_explain_log_triggers. 
Still, we could add a flag in ExplainState\n> deciding if the triggers should be printed, so as it would be possible\n> to move ExplainPrintTriggers() and ExplainPrintJITSummary() within\n> ExplainPrintPlan(), as well? The same kind of logic could be applied\n> for the planning time and the buffer usage if these are tracked in\n> ExplainState rather than being explicit arguments of ExplainOnePlan().\n> Not to mention that this reduces the differences between\n> ExplainOneUtility() and ExplainOnePlan().\n\nHmm, this refactoring would worth considering, but should be done in \nanother patch?\n\n> Leaving this comment aside, I think that this should have at least one\n> regression test in 001_auto_explain.pl, where query_log() can be\n> called while the verbose GUC of auto_explain is enabled.\n\nAgreed.\nAdded a test for queryid logging.\n\n> --\n> Michael\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION", "msg_date": "Tue, 24 Jan 2023 23:01:46 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On Tue, Jan 24, 2023 at 11:01:46PM +0900, torikoshia wrote:\n> On 2023-01-23 09:35, Michael Paquier wrote:\n>> ExplainPrintTriggers() is kind of different because there is\n>> auto_explain_log_triggers. Still, we could add a flag in ExplainState\n>> deciding if the triggers should be printed, so as it would be possible\n>> to move ExplainPrintTriggers() and ExplainPrintJITSummary() within\n>> ExplainPrintPlan(), as well? The same kind of logic could be applied\n>> for the planning time and the buffer usage if these are tracked in\n>> ExplainState rather than being explicit arguments of ExplainOnePlan().\n>> Not to mention that this reduces the differences between\n>> ExplainOneUtility() and ExplainOnePlan().\n> \n> Hmm, this refactoring would worth considering, but should be done in another\n> patch?\n\nIt could be. 
That's fine by me to not do that as a first step as the\nquery ID computation is done just after ExplainPrintPlan(). An\nargument could be made about ExplainPrintPlan(), though\ncompute_query_id = regress offers an option to control that, as well.\n\n>> Leaving this comment aside, I think that this should have at least one\n>> regression test in 001_auto_explain.pl, where query_log() can be\n>> called while the verbose GUC of auto_explain is enabled.\n> \n> Agreed.\n> Added a test for queryid logging.\n\nThanks. Will check and probably apply on HEAD.\n--\nMichael", "msg_date": "Wed, 25 Jan 2023 16:46:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On Wed, Jan 25, 2023 at 04:46:36PM +0900, Michael Paquier wrote:\n> Thanks. Will check and probably apply on HEAD.\n\nDone, after adding one test case with compute_query_id=regress and\napplying some indentation.\n--\nMichael", "msg_date": "Thu, 26 Jan 2023 12:40:06 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On 2023-01-26 12:40, Michael Paquier wrote:\n> On Wed, Jan 25, 2023 at 04:46:36PM +0900, Michael Paquier wrote:\n>> Thanks. Will check and probably apply on HEAD.\n> \n> Done, after adding one test case with compute_query_id=regress and\n> applying some indentation.\n> --\n> Michael\n\nThanks!\n\n>> On 2023-01-23 09:35, Michael Paquier wrote:\n>>> ExplainPrintTriggers() is kind of different because there is\n>>> auto_explain_log_triggers. Still, we could add a flag in \n>>> ExplainState\n>>> deciding if the triggers should be printed, so as it would be \n>>> possible\n>>> to move ExplainPrintTriggers() and ExplainPrintJITSummary() within\n>>> ExplainPrintPlan(), as well? 
The same kind of logic could be applied\n>>> for the planning time and the buffer usage if these are tracked in\n>>> ExplainState rather than being explicit arguments of \n>>> ExplainOnePlan().\n>>> Not to mention that this reduces the differences between\n>>> ExplainOneUtility() and ExplainOnePlan().\n\nI'll work on this next.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 26 Jan 2023 22:00:04 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On Thu, Jan 26, 2023 at 10:00:04PM +0900, torikoshia wrote:\n> I'll work on this next.\n\nCool, thanks!\n--\nMichael", "msg_date": "Fri, 27 Jan 2023 09:34:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "I am wondering if this patch should be backpatched?\r\n\r\nThe reason being is in auto_explain documentation [1],\r\nthere is a claim of equivalence of the auto_explain.log_verbose\r\noption and EXPLAIN(verbose)\r\n\r\n\"..... it's equivalent to the VERBOSE option of EXPLAIN.\"\r\n\r\nThis can be quite confusing for users of the extension.\r\nThe documentation should either be updated or a backpatch\r\nall the way down to 14, which the version the query identifier\r\nwas moved to core. 
I am in favor of the latter.\r\n\r\nAny thoughts?\r\n\r\n\r\n[1] https://www.postgresql.org/docs/14/auto-explain.html\r\n\r\nRegards,\r\n\r\n--\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n", "msg_date": "Mon, 6 Mar 2023 23:50:08 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "On 2023-03-07 08:50, Imseih (AWS), Sami wrote:\n> I am wondering if this patch should be backpatched?\n> \n> The reason being is in auto_explain documentation [1],\n> there is a claim of equivalence of the auto_explain.log_verbose\n> option and EXPLAIN(verbose)\n> \n> \"..... it's equivalent to the VERBOSE option of EXPLAIN.\"\n> \n> This can be quite confusing for users of the extension.\n> The documentation should either be updated or a backpatch\n> all the way down to 14, which the version the query identifier\n> was moved to core. I am in favor of the latter.\n> \n> Any thoughts?\n\nWe discussed a bit whether to backpatch this, but agreed that it would \nbe better not to do so for the following reasons:\n\n> It's a bit annoying that the info is missing since pg 14, but we \n> probably can't\n> backpatch this as it might break log parser tools.\n\nWhat do you think?\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 07 Mar 2023 11:18:29 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" }, { "msg_contents": "> > It's a bit annoying that the info is missing since pg 14, but we\r\n> > probably can't\r\n> > backpatch this as it might break log parser tools.\r\n\r\n\r\n> What do you think?\r\n\r\nThat's a good point about log parsing tools, i.e. 
pgbadger.\r\n\r\nBackpatching does not sounds to appealing to me after\r\ngiving this a second thought.\r\n\r\nHowever, I do feel it needs to be called out in docs,\r\nthat \"Query Identifier\" is not available in auto_explain\r\nuntil version 16.\r\n\r\nRegards,\r\n\r\n--\r\n\r\nSami Imseih\r\nAmazon Web Services (AWS)\r\n\r\n\r\n\r\n", "msg_date": "Tue, 7 Mar 2023 03:21:46 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Record queryid when auto_explain.log_verbose is on" } ]
[ { "msg_contents": "Hi,\n\nnot sure if this is known behavior.\n\nServer version is 14.6 (Debian 14.6-1.pgdg110+1).\n\nIn a PITR setup I have these settings:\n\nrecovery_target_xid = '852381'\nrecovery_target_inclusive = 'false'\n\nIn the log file I see this message:\n\nLOG: recovery stopping before commit of transaction 852381, time\n2000-01-01 00:00:00+00\n\nBut:\n\npostgres=# select * from pg_last_committed_xact();\n xid | timestamp | roident\n--------+-------------------------------+---------\n 852380 | 2023-01-16 18:00:35.054495+00 | 0\n\nSo, the timestamp displayed in the log message is certainly wrong.\n\nThanks,\nTorsten\n\nHi,not sure if this is known behavior.Server version is 14.6 (Debian 14.6-1.pgdg110+1).In a PITR setup I have these settings:recovery_target_xid = '852381'recovery_target_inclusive = 'false'In the log file I see this message:LOG:  recovery stopping before commit of transaction 852381, time 2000-01-01 00:00:00+00But:postgres=# select * from pg_last_committed_xact();  xid   |           timestamp           | roident --------+-------------------------------+--------- 852380 | 2023-01-16 18:00:35.054495+00 |       0So, the timestamp displayed in the log message is certainly wrong.Thanks,Torsten", "msg_date": "Mon, 16 Jan 2023 19:59:33 +0100", "msg_from": "=?UTF-8?Q?Torsten_F=C3=B6rtsch?= <tfoertsch123@gmail.com>", "msg_from_op": true, "msg_subject": "minor bug" }, { "msg_contents": "On Mon, 2023-01-16 at 19:59 +0100, Torsten Förtsch wrote:\n> not sure if this is known behavior.\n> \n> Server version is 14.6 (Debian 14.6-1.pgdg110+1).\n> \n> In a PITR setup I have these settings:\n> \n> recovery_target_xid = '852381'\n> recovery_target_inclusive = 'false'\n> \n> In the log file I see this message:\n> \n> LOG:  recovery stopping before commit of transaction 852381, time 2000-01-01 00:00:00+00\n> \n> But:\n> \n> postgres=# select * from pg_last_committed_xact();\n>   xid   |           timestamp           | roident \n> 
--------+-------------------------------+---------\n>  852380 | 2023-01-16 18:00:35.054495+00 |       0\n> \n> So, the timestamp displayed in the log message is certainly wrong.\n\nRedirected to -hackers.\n\nIf recovery stops at a WAL record that has no timestamp, you get this\nbogus recovery stop time. I think we should show the recovery stop time\nonly if time was the target, as in the attached patch.\n\nYours,\nLaurenz Albe", "msg_date": "Tue, 17 Jan 2023 10:42:03 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: minor bug" }, { "msg_contents": "On Tue, Jan 17, 2023 at 10:42:03AM +0100, Laurenz Albe wrote:\n> If recovery stops at a WAL record that has no timestamp, you get this\n> bogus recovery stop time. I think we should show the recovery stop time\n> only if time was the target, as in the attached patch.\n\nGood catch! I'll try to take a look.\n--\nMichael", "msg_date": "Tue, 17 Jan 2023 19:12:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: minor bug" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Mon, 2023-01-16 at 19:59 +0100, Torsten Förtsch wrote:\n>> So, the timestamp displayed in the log message is certainly wrong.\n\n> If recovery stops at a WAL record that has no timestamp, you get this\n> bogus recovery stop time. I think we should show the recovery stop time\n> only if time was the target, as in the attached patch.\n\nI don't think that is a tremendously useful definition: the user\nalready knows what recoveryStopTime is, or can find it out from\ntheir settings easily enough.\n\nI seem to recall that the original idea was to report the timestamp\nof the commit/abort record we are stopping at. 
Maybe my memory is\nfaulty, but I think that'd be significantly more useful than the\ncurrent behavior, *especially* when the replay stopping point is\ndefined by something other than time.\n\n(Also, the wording of the log message suggests that that's what\nthe reported time is supposed to be. I wonder if somebody messed\nthis up somewhere along the way.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 17 Jan 2023 10:32:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minor bug" }, { "msg_contents": "On Tue, 2023-01-17 at 10:32 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Mon, 2023-01-16 at 19:59 +0100, Torsten Förtsch wrote:\n> > > So, the timestamp displayed in the log message is certainly wrong.\n> \n> > If recovery stops at a WAL record that has no timestamp, you get this\n> > bogus recovery stop time.  I think we should show the recovery stop time\n> > only if time was the target, as in the attached patch.\n> \n> I don't think that is a tremendously useful definition: the user\n> already knows what recoveryStopTime is, or can find it out from\n> their settings easily enough.\n> \n> I seem to recall that the original idea was to report the timestamp\n> of the commit/abort record we are stopping at.  Maybe my memory is\n> faulty, but I think that'd be significantly more useful than the\n> current behavior, *especially* when the replay stopping point is\n> defined by something other than time.\n> \n> (Also, the wording of the log message suggests that that's what\n> the reported time is supposed to be.  
I wonder if somebody messed\n> this up somewhere along the way.)\n\nrecoveryStopTime is set to recordXtime (the time of the xlog record)\na few lines above that patch, so this is useful information if it is\npresent.\n\nI realized that my original patch might be a problem for translation;\nhere is an updated version that does not take any shortcuts.\n\nYours,\nLaurenz Albe", "msg_date": "Wed, 18 Jan 2023 09:27:02 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: minor bug" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Tue, 2023-01-17 at 10:32 -0500, Tom Lane wrote:\n>> I seem to recall that the original idea was to report the timestamp\n>> of the commit/abort record we are stopping at. Maybe my memory is\n>> faulty, but I think that'd be significantly more useful than the\n>> current behavior, *especially* when the replay stopping point is\n>> defined by something other than time.\n>> (Also, the wording of the log message suggests that that's what\n>> the reported time is supposed to be. I wonder if somebody messed\n>> this up somewhere along the way.)\n\n> recoveryStopTime is set to recordXtime (the time of the xlog record)\n> a few lines above that patch, so this is useful information if it is\n> present.\n\nAh, but that only happens if recoveryTarget == RECOVERY_TARGET_TIME.\nDigging in the git history, I see that this did use to work as\nI remember: we always extracted the record time before printing it.\nThat was accidentally broken in refactoring in c945af80c. 
I think\nthe correct fix is more like the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 18 Jan 2023 15:03:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minor bug" }, { "msg_contents": "On Wed, 2023-01-18 at 15:03 -0500, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Tue, 2023-01-17 at 10:32 -0500, Tom Lane wrote:\n> > > I seem to recall that the original idea was to report the timestamp\n> > > of the commit/abort record we are stopping at.  Maybe my memory is\n> > > faulty, but I think that'd be significantly more useful than the\n> > > current behavior, *especially* when the replay stopping point is\n> > > defined by something other than time.\n> > > (Also, the wording of the log message suggests that that's what\n> > > the reported time is supposed to be.  I wonder if somebody messed\n> > > this up somewhere along the way.)\n> \n> > recoveryStopTime is set to recordXtime (the time of the xlog record)\n> > a few lines above that patch, so this is useful information if it is\n> > present.\n> \n> Ah, but that only happens if recoveryTarget == RECOVERY_TARGET_TIME.\n> Digging in the git history, I see that this did use to work as\n> I remember: we always extracted the record time before printing it.\n> That was accidentally broken in refactoring in c945af80c.  I think\n> the correct fix is more like the attached.\n\nYes, you are right. Your patch looks fine to me.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Thu, 19 Jan 2023 13:57:14 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: minor bug" }, { "msg_contents": "If we never expect getRecordTimestamp to fail, then why put it in the\nif-condition?\n\ngetRecordTimestamp can fail if the record is not a restore point nor a\ncommit or abort record. A few lines before in the same function there is\nthis:\n\n /* Otherwise we only consider stopping before COMMIT or ABORT records. 
*/\nif (XLogRecGetRmid(record) != RM_XACT_ID)\n return false;\n\nI guess that makes sure getRecordTimestamp can never fail.\n\nThe way it is written in your patch invites it to be optimized out again.\nThe only thing that prevents it is the comment.\n\nWhy not\n\n(void)getRecordTimestamp(record, &recordXtime);\nif (recoveryTarget == RECOVERY_TARGET_TIME)\n...\n\nOn Wed, Jan 18, 2023 at 9:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> > On Tue, 2023-01-17 at 10:32 -0500, Tom Lane wrote:\n> >> I seem to recall that the original idea was to report the timestamp\n> >> of the commit/abort record we are stopping at.  Maybe my memory is\n> >> faulty, but I think that'd be significantly more useful than the\n> >> current behavior, *especially* when the replay stopping point is\n> >> defined by something other than time.\n> >> (Also, the wording of the log message suggests that that's what\n> >> the reported time is supposed to be.  I wonder if somebody messed\n> >> this up somewhere along the way.)\n>\n> > recoveryStopTime is set to recordXtime (the time of the xlog record)\n> > a few lines above that patch, so this is useful information if it is\n> > present.\n>\n> Ah, but that only happens if recoveryTarget == RECOVERY_TARGET_TIME.\n> Digging in the git history, I see that this did use to work as\n> I remember: we always extracted the record time before printing it.\n> That was accidentally broken in refactoring in c945af80c. I think\n> the correct fix is more like the attached.\n>\n> regards, tom lane\n>\n>\n", "msg_date": "Thu, 19 Jan 2023 18:18:04 +0100", "msg_from": "=?UTF-8?Q?Torsten_F=C3=B6rtsch?= <tfoertsch123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: minor bug" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> On Wed, 2023-01-18 at 15:03 -0500, Tom Lane wrote:\n>> Ah, but that only happens if recoveryTarget == RECOVERY_TARGET_TIME.\n>> Digging in the git history, I see that this did use to work as\n>> I remember: we always extracted the record time before printing it.\n>> That was accidentally broken in refactoring in c945af80c. I think\n>> the correct fix is more like the attached.\n\n> Yes, you are right. Your patch looks fine to me.\n\nPushed then. Thanks for the report!\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 12:25:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minor bug" }, { "msg_contents": "=?UTF-8?Q?Torsten_F=C3=B6rtsch?= <tfoertsch123@gmail.com> writes:\n> Why not\n\n> (void)getRecordTimestamp(record, &recordXtime);\n> if (recoveryTarget == RECOVERY_TARGET_TIME)\n> ...\n\nCould've done it like that, but I already pushed the other\nversion, and I don't think it's worth the trouble to change.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 19 Jan 2023 12:29:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: minor bug" } ]
[ { "msg_contents": "Due to cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb,\ne5b8a4c098ad6add39626a14475148872cd687e0, and prior commits touching\nrelated code, it should now be possible to consider handing out\nCREATEROLE as a reasonable alternative to handing out SUPERUSER. Prior\nto cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb, giving CREATEROLE meant\ngiving away control of pg_execute_server_program and every other\nbuilt-in role, so it wasn't possible to give CREATEROLE to a user who\nisn't completely trusted. Now, that should be OK. CREATEROLE users\nwill only gain control over roles they create (and any others that the\nsuperuser grants to them). Furthermore, if you set\ncreaterole_self_grant to 'inherit' or 'set, inherit', a CREATEROLE\nuser will automatically inherit the privileges of the users they\ncreate, hopefully making them feel like they are almost a superuser\nwithout letting them actually take over the world.\n\nNot very surprisingly, those commits failed to solve every single\nproblem that anyone has ever thought about in this area.\n\nHere is a probably-incomplete list of related problems that are so far unsolved:\n\n1. It's still possible for a CREATEROLE user to hand out role\nattributes that they don't possess. The new prohibitions in\ncf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb prevent a CREATEROLE user\nfrom handing out membership in a role on which they lack sufficient\npermissions, but they don't prevent a CREATEROLE user who lacks\nCREATEDB from creating a new user who does have CREATEDB. I think we\nshould subject the CREATEDB, REPLICATION, and BYPASSRLS attributes to\nthe same rule that we now use for role memberships: you've got to have\nthe property in order to give it to someone else. In the case of\nCREATEDB, this would tighten the current rules, which allow you to\ngive out CREATEDB without having it.
In the case of REPLICATION and\nBYPASSRLS, this would liberalize the current rules: right now, a\nCREATEROLE user cannot give REPLICATION or BYPASSRLS to another user\neven if they possess those attributes.\n\nThis proposal doesn't address the CREATEROLE or CONNECTION LIMIT\nproperties. It seems possible to me that someone might want to set up\na CREATEROLE user who can't make more such users, and this proposal\ndoesn't manufacture any way of doing that. It also doesn't let you\nconstrain the ability of a CREATEROLE user to set a CONNECTION LIMIT\nfor some other user. I think that's OK. It might be nice to have ways\nof imposing such restrictions at some point in the future, but it is\nnot very obvious what to do about such cases and, importantly, I don't\nthink there's any security impact from failing to address those cases.\nIf a CREATEROLE user without CREATEDB can create a new role that does\nhave CREATEDB, that's a privilege escalation. If they can hand out\nCREATEROLE, that isn't: they already have it.\n\n2. It's still impossible for a CREATEROLE user to execute CREATE\nSUBSCRIPTION, so they can't get logical replication working. There was\na previous thread about fixing this at\nhttps://www.postgresql.org/message-id/flat/9DFC88D3-1300-4DE8-ACBC-4CEF84399A53%40enterprisedb.com\nand the corresponding CF entry is listed as committed, but\nCreateSubscription() still requires superuser, so I think that maybe\nthat thread only got some of the preliminary permissions-check work\ncommitted and the core problem is yet to be solved.\n\n3. Only superusers can control event triggers. In the thread at\nhttps://www.postgresql.org/message-id/flat/914FF898-5AC4-4E02-8A05-3876087007FB%40enterprisedb.com\nit was proposed, based on an idea from Tom, to allow any user to\ncreate event triggers but, approximately, to only have them fire for\ncode running as a user whose privileges the creator already has.
I\ndon't recall the precise rule that was proposed and it might need\nrethinking in view of 3d14e171e9e2236139e8976f3309a588bcc8683b, and I\nthink there was also some opposition to that proposal, so I'm not sure\nwhat the way forward here is.\n\n4. You can reserve a small number of connections for the superuser\nwith superuser_reserved_connections, but there's no way to do a\nsimilar thing for any other user. As mentioned above, a CREATEROLE\nuser could set connection limits for every created role such that the\nsum of those limits is less than max_connections by some margin, but\nthat restricts each of those roles individually, not all of them in\nthe aggregate. Maybe we could address this by inventing a new GUC\nreserved_connections and a predefined role\npg_use_reserved_connections.\n\n5. If you set createrole_self_grant = 'set, inherit' and make alice a\nCREATEROLE user and she goes around and creates a bunch of other users\nand they all run around and create a bunch of objects and then alice\ntries to pg_dump the entire database, it will work ... provided that\nthere are no tables owned by any other user. If the superuser has\ncreated any tables, or there's another CREATEROLE user wandering\naround creating tables, or even a non-CREATEROLE user whose\npermissions alice does not have, pg_dump will try to lock them and\ndie. I don't see any perfect solution to this problem: we can neither\nlet alice dump objects on which she does not have permission, nor can\nwe silently skip them in the interest of giving alice a better user\nexperience, because if we do that then somebody will end up with a\npartial database backup that they think is a complete database backup\nand that will be a really bad day. However, I think we could add a\npg_dump option that says, hey, please only try to dump tables we have\npermission to dump, and skip the others.
Or, of course, alice could\nuse -T and -N as required, but a dedicated switch for\nskip-stuff-i-can't-access-quietly might be a better user experience. I\nguess you could also argue that this isn't really a problem in the\nfirst place because you could always choose to grant\npg_read_all_data to the almost-super-user, but maybe that's not\nalways desirable. Not sure.\n\nJust to be clear, there are lots of other things that a non-superuser\ncannot do, such as CREATE LANGUAGE. However, I'm excluding that kind\nof thing from this list because it's intrinsically unsafe to allow a\nnon-superuser to do that, since it's probably a gateway to arbitrary\ncode execution and then you can probably get superuser for real, and\ncontrol of the OS account, too. What I'm interested in is developing a\nlist of things that could, with the right infrastructure, be delegated\nto non-superusers safely but which, as things stand today, cannot be\ndelegated to non-superusers. Contributions to the list are most\nwelcome as are thoughts on the proposals above.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Jan 2023 14:29:56 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Mon, Jan 16, 2023 at 02:29:56PM -0500, Robert Haas wrote:\n> 4. You can reserve a small number of connections for the superuser\n> with superuser_reserved_connections, but there's no way to do a\n> similar thing for any other user. As mentioned above, a CREATEROLE\n> user could set connection limits for every created role such that the\n> sum of those limits is less than max_connections by some margin, but\n> that restricts each of those roles individually, not all of them in\n> the aggregate.
Maybe we could address this by inventing a new GUC\n> reserved_connections and a predefined role\n> pg_use_reserved_connections.\n\nI've written something like this before, and I'd be happy to put together a\npatch if there is interest.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 16 Jan 2023 14:37:28 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Mon, Jan 16, 2023 at 5:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Mon, Jan 16, 2023 at 02:29:56PM -0500, Robert Haas wrote:\n> > 4. You can reserve a small number of connections for the superuser\n> > with superuser_reserved_connections, but there's no way to do a\n> > similar thing for any other user. As mentioned above, a CREATEROLE\n> > user could set connection limits for every created role such that the\n> > sum of those limits is less than max_connections by some margin, but\n> > that restricts each of those roles individually, not all of them in\n> > the aggregate. Maybe we could address this by inventing a new GUC\n> > reserved_connections and a predefined role\n> > pg_use_reserved_connections.\n>\n> I've written something like this before, and I'd be happy to put together a\n> patch if there is interest.\n\nCool. I had been thinking of coding it up myself, but you doing it works, too.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 16 Jan 2023 21:06:10 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Mon, Jan 16, 2023 at 09:06:10PM -0500, Robert Haas wrote:\n> On Mon, Jan 16, 2023 at 5:37 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Mon, Jan 16, 2023 at 02:29:56PM -0500, Robert Haas wrote:\n>> > 4.
You can reserve a small number of connections for the superuser\n>> > with superuser_reserved_connections, but there's no way to do a\n>> > similar thing for any other user. As mentioned above, a CREATEROLE\n>> > user could set connection limits for every created role such that the\n>> > sum of those limits is less than max_connections by some margin, but\n>> > that restricts each of those roles individually, not all of them in\n>> > the aggregate. Maybe we could address this by inventing a new GUC\n>> > reserved_connections and a predefined role\n>> > pg_use_reserved_connections.\n>>\n>> I've written something like this before, and I'd be happy to put together a\n>> patch if there is interest.\n> \n> Cool. I had been thinking of coding it up myself, but you doing it works, too.\n\nAlright. The one design question I have is whether this should be a new\nset of reserved connections or replace superuser_reserved_connections\nentirely.\n\nIf we create a new batch of reserved connections, only roles with\nprivileges of pg_use_reserved_connections would be able to connect if the\nnumber of remaining slots is greater than superuser_reserved_connections\nbut less than or equal to superuser_reserved_connections +\nreserved_connections. Only superusers would be able to connect if the\nnumber of remaining slots is less than or equal to\nsuperuser_reserved_connections. This helps avoid blocking new superuser\nconnections even if you've reserved some connections for non-superusers.\n\nIf we replace superuser_reserved_connections, we're basically opening up\nthe existing functionality to non-superusers, which is simpler and probably\nmore in the spirit of this thread, but it doesn't provide a way to prevent\nblocking new superuser connections.\n\nMy preference is the former approach. This is closest to what I've written\nbefore, and if I read your words carefully, it seems to be what you are\nproposing.
WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 10:42:30 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Tue, Jan 17, 2023 at 1:42 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Alright. The one design question I have is whether this should be a new\n> set of reserved connections or replace superuser_reserved_connections\n> entirely.\n\nI think it should definitely be something new, not a replacement.\n\n> If we create a new batch of reserved connections, only roles with\n> privileges of pg_use_reserved_connections would be able to connect if the\n> number of remaining slots is greater than superuser_reserved_connections\n> but less than or equal to superuser_reserved_connections +\n> reserved_connections. Only superusers would be able to connect if the\n> number of remaining slots is less than or equal to\n> superuser_reserved_connections. This helps avoid blocking new superuser\n> connections even if you've reserved some connections for non-superusers.\n\nThis is precisely what I had in mind.\n\nI think the documentation will need some careful wordsmithing,\nincluding adjustments to superuser_reserved_connections.
We want to\nrecast superuser_reserved_connections as a final reserve to be touched\nafter even reserved_connections has been exhausted.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 14:59:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Tue, Jan 17, 2023 at 02:59:31PM -0500, Robert Haas wrote:\n> On Tue, Jan 17, 2023 at 1:42 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> If we create a new batch of reserved connections, only roles with\n>> privileges of pg_use_reserved_connections would be able to connect if the\n>> number of remaining slots is greater than superuser_reserved_connections\n>> but less than or equal to superuser_reserved_connections +\n>> reserved_connections. Only superusers would be able to connect if the\n>> number of remaining slots is less than or equal to\n>> superuser_reserved_connections. This helps avoid blocking new superuser\n>> connections even if you've reserved some connections for non-superusers.\n> \n> This is precisely what I had in mind.\n\nGreat. Here is a first attempt at the patch.\n\n> I think the documentation will need some careful wordsmithing,\n> including adjustments to superuser_reserved_connections.
We want to\n> recast superuser_reserved_connections as a final reserve to be touched\n> after even reserved_connections has been exhausted.\n\nI tried to do this, but there is probably still room for improvement,\nespecially for the parts that discuss the relationship between\nmax_connections, superuser_reserved_connections, and reserved_connections.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 17 Jan 2023 16:15:23 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Tue, Jan 17, 2023 at 7:15 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Great. Here is a first attempt at the patch.\n\nIn general, looks good. I think this will often call HaveNFreeProcs\ntwice, though, and that would be better to avoid, e.g.\n\nif (!am_superuser && !am_walsender && (SuperuserReservedBackends +\nReservedBackends) > 0\n && !HaveNFreeProcs(SuperuserReservedBackends + ReservedBackends))\n{\n if (!HaveNFreeProcs(SuperuserReservedBackends))\n remaining connection slots are reserved for non-replication\nsuperuser connections;\n if (!has_privs_of_role(GetUserId(), ROLE_PG_USE_RESERVED_CONNECTIONS))\n remaining connection slots are reserved for roles with\nprivileges of pg_use_reserved_connections;\n}\n\nIn the common case where we hit neither limit, this only counts free\nconnection slots once. We could do even better by making\nHaveNFreeProcs have an out parameter for the number of free procs\nactually found when it returns false, but that's probably not\nimportant.\n\nI don't think that we should default both the existing GUC and the new\none to 3, because that raises the default limit in the case where the\nnew feature is not used from 3 to 6. I think we should default one of\nthem to 0 and the other one to 3.
Not sure which one should get which\nvalue.\n\n> > I think the documentation will need some careful wordsmithing,\n> > including adjustments to superuser_reserved_connections. We want to\n> > recast superuser_reserved_connections as a final reserve to be touched\n> > after even reserved_connections has been exhausted.\n>\n> I tried to do this, but there is probably still room for improvement,\n> especially for the parts that discuss the relationship between\n> max_connections, superuser_reserved_connections, and reserved_connections.\n\nI think it's pretty good the way you have it. I agree that there might\nbe a way to make it even better, but I don't think I know what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 11:28:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Mon, Jan 16, 2023 at 2:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> 1. It's still possible for a CREATEROLE user to hand out role\n> attributes that they don't possess. The new prohibitions in\n> cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb prevent a CREATEROLE user\n> from handing out membership in a role on which they lack sufficient\n> permissions, but they don't prevent a CREATEROLE user who lacks\n> CREATEDB from creating a new user who does have CREATEDB. I think we\n> should subject the CREATEDB, REPLICATION, and BYPASSRLS attributes to\n> the same rule that we now use for role memberships: you've got to have\n> the property in order to give it to someone else. In the case of\n> CREATEDB, this would tighten the current rules, which allow you to\n> give out CREATEDB without having it.
In the case of REPLICATION and\n> BYPASSRLS, this would liberalize the current rules: right now, a\n> CREATEROLE user cannot give REPLICATION or BYPASSRLS to another user\n> even if they possess those attributes.\n>\n> This proposal doesn't address the CREATEROLE or CONNECTION LIMIT\n> properties. It seems possible to me that someone might want to set up\n> a CREATEROLE user who can't make more such users, and this proposal\n> doesn't manufacture any way of doing that. It also doesn't let you\n> constrain the ability of a CREATEROLE user to set a CONNECTION LIMIT\n> for some other user. I think that's OK. It might be nice to have ways\n> of imposing such restrictions at some point in the future, but it is\n> not very obvious what to do about such cases and, importantly, I don't\n> think there's any security impact from failing to address those cases.\n> If a CREATEROLE user without CREATEDB can create a new role that does\n> have CREATEDB, that's a privilege escalation. If they can hand out\n> CREATEROLE, that isn't: they already have it.\n\nHere is a patch implementing the above proposal. Since this is fairly\nclosely related to already-committed work, I would like to get this\ninto v16. That way, all the changes to how CREATEROLE works will go\ninto a single release, which seems less confusing for users. It is\nalso fairly clear to me that this is an improvement over the status\nquo. Sometimes things that seem clear to me turn out to be false, so\nif this change seems like a problem to you, please let me know.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 18 Jan 2023 12:15:33 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "CREATEROLE users vs. role properties" }, { "msg_contents": "On Wed, Jan 18, 2023 at 11:28:57AM -0500, Robert Haas wrote:\n> In general, looks good.
I think this will often call HaveNFreeProcs\n> twice, though, and that would be better to avoid, e.g.\n\nI should have thought of this. This is fixed in v2.\n\n> In the common case where we hit neither limit, this only counts free\n> connection slots once.
We could do even better by making\n> > HaveNFreeProcs have an out parameter for the number of free procs\n> > actually found when it returns false, but that's probably not\n> > important.\n>\n> Actually, I think it might be important. IIUC the separate calls to\n> HaveNFreeProcs might return different values for the same input, which\n> could result in incorrect error messages (e.g., you might get the\n> reserved_connections message despite setting reserved_connections to 0).\n> So, I made this change in v2, too.\n\nI thought of that briefly and it didn't seem that important, but the\nway you did it seems fine, so let's go with that.\n\nWhat's the deal with removing \"and no new replication connections will\nbe accepted\" from the documentation? Is the existing documentation\njust wrong? If so, should we fix that first? And maybe delete\n\"non-replication\" from the error message that says \"remaining\nconnection slots are reserved for non-replication superuser\nconnections\"? It seems like right now the comments say that\nreplication connections are a completely separate pool of connections,\nbut the documentation and the error message make it sound otherwise.\nIf that's true, then one of them is wrong, and I think it's the\ndocs/error message. Or am I just misreading it?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 14:51:38 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Wed, Jan 18, 2023 at 02:51:38PM -0500, Robert Haas wrote:\n> Should (nfree < SuperuserReservedBackends) be using <=, or am I confused?\n\nI believe < is correct. 
At this point, the new backend will have already\nclaimed a proc struct, so if the number of remaining free slots equals the\nnumber of reserved slots, it is okay.\n\n> What's the deal with removing \"and no new replication connections will\n> be accepted\" from the documentation? Is the existing documentation\n> just wrong? If so, should we fix that first? And maybe delete\n> \"non-replication\" from the error message that says \"remaining\n> connection slots are reserved for non-replication superuser\n> connections\"? It seems like right now the comments say that\n> replication connections are a completely separate pool of connections,\n> but the documentation and the error message make it sound otherwise.\n> If that's true, then one of them is wrong, and I think it's the\n> docs/error message. Or am I just misreading it?\n\nI think you are right. This seems to have been missed in ea92368. I moved\nthis part to a new patch that should probably be back-patched to v12.\n\nOn that note, I wonder if it's worth changing the \"sorry, too many clients\nalready\" message to make it clear that max_connections has been reached.\nIME some users are confused by this error, and I think it would be less\nconfusing if it pointed to the parameter that governs the number of\nconnection slots. I'll create a new thread for this.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 18 Jan 2023 13:14:14 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Wed, Jan 18, 2023 at 12:15:33PM -0500, Robert Haas wrote:\n> On Mon, Jan 16, 2023 at 2:29 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> 1. It's still possible for a CREATEROLE user to hand out role\n>> attributes that they don't possess. 
The new prohibitions in\n>> cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb prevent a CREATEROLE user\n>> from handing out membership in a role on which they lack sufficient\n>> permissions, but they don't prevent a CREATEROLE user who lacks\n>> CREATEDB from creating a new user who does have CREATEDB. I think we\n>> should subject the CREATEDB, REPLICATION, and BYPASSRLS attributes to\n>> the same rule that we now use for role memberships: you've got to have\n>> the property in order to give it to someone else. In the case of\n>> CREATEDB, this would tighten the current rules, which allow you to\n>> give out CREATEDB without having it. In the case of REPLICATION and\n>> BYPASSRLS, this would liberalize the current rules: right now, a\n>> CREATEROLE user cannot give REPLICATION or BYPASSRLS to another user\n>> even if they possess those attributes.\n>>\n>> This proposal doesn't address the CREATEROLE or CONNECTION LIMIT\n>> properties. It seems possible to me that someone might want to set up\n>> a CREATEROLE user who can't make more such users, and this proposal\n>> doesn't manufacture any way of doing that. It also doesn't let you\n>> constraint the ability of a CREATEROLE user to set a CONNECTION LIMIT\n>> for some other user. I think that's OK. It might be nice to have ways\n>> of imposing such restrictions at some point in the future, but it is\n>> not very obvious what to do about such cases and, importantly, I don't\n>> think there's any security impact from failing to address those cases.\n>> If a CREATEROLE user without CREATEDB can create a new role that does\n>> have CREATEDB, that's a privilege escalation. If they can hand out\n>> CREATEROLE, that isn't: they already have it.\n> \n> Here is a patch implementing the above proposal. Since this is fairly\n> closely related to already-committed work, I would like to get this\n> into v16. That way, all the changes to how CREATEROLE works will go\n> into a single release, which seems less confusing for users. 
It is\n> also fairly clear to me that this is an improvement over the status\n> quo. Sometimes things that seem clear to me turn out to be false, so\n> if this change seems like a problem to you, please let me know.\n\nThis seems like a clear improvement to me. However, as the attribute\nsystem becomes more sophisticated, I think we ought to improve the error\nmessages in user.c. IMHO messages like "permission denied" could be\ngreatly improved with some added context.\n\nFor example, if I want to change a role's password, I need both CREATEROLE\nand ADMIN OPTION on the role, but the error message only mentions\nCREATEROLE.\n\n\tpostgres=# create role createrole with createrole;\n\tCREATE ROLE\n\tpostgres=# create role otherrole;\n\tCREATE ROLE\n\tpostgres=# set role createrole;\n\tSET\n\tpostgres=> alter role otherrole password 'test';\n\tERROR: must have CREATEROLE privilege to change another user's password\n\nSimilarly, if I want to allow a role to grant REPLICATION to another role,\nI have to give it CREATEROLE, REPLICATION, and membership with ADMIN\nOPTION.
If the role is missing CREATEROLE or membership with ADMIN OPTION,\nit'll only ever see a \"permission denied\" error.\n\n\tpostgres=# create role createrole with createrole;\n\tCREATE ROLE\n\tpostgres=# create role otherrole;\n\tCREATE ROLE\n\tpostgres=# set role createrole;\n\tSET\n\tpostgres=> alter role otherrole with replication;\n\tERROR: permission denied\n\tpostgres=> reset role;\n\tRESET\n\tpostgres=# alter role createrole with replication;\n\tALTER ROLE\n\tpostgres=# set role createrole;\n\tSET\n\tpostgres=> alter role otherrole with replication;\n\tERROR: permission denied\n\tpostgres=> reset role;\n\tRESET\n\tpostgres=# grant otherrole to createrole;\n\tGRANT ROLE\n\tpostgres=# set role createrole;\n\tSET\n\tpostgres=> alter role otherrole with replication;\n\tERROR: permission denied\n\tpostgres=> reset role;\n\tRESET\n\tpostgres=# grant otherrole to createrole with admin option;\n\tGRANT ROLE\n\tpostgres=# set role createrole;\n\tSET\n\tpostgres=> alter role otherrole with replication;\n\tALTER ROLE\n\nIf it has both CREATEROLE and membership with ADMIN OPTION (but not\nREPLICATION), it'll see a more helpful message.\n\n\tpostgres=# create role createrole with createrole;\n\tCREATE ROLE\n\tpostgres=# create role otherrole;\n\tCREATE ROLE\n\tpostgres=# grant otherrole to createrole with admin option;\n\tGRANT ROLE\n\tpostgres=# set role createrole;\n\tSET\n\tpostgres=> alter role otherrole with replication;\n\tERROR: must have replication privilege to change replication attribute\n\nThis probably shouldn't block your patch, but I think it's worth doing in\nv16 since there are other changes in this area. I'm happy to help.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 15:17:49 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE users vs. 
role properties" }, { "msg_contents": "On 1/19/23 4:47 AM, Nathan Bossart wrote:\n> This seems like a clear improvement to me. However, as the attribute\n> system becomes more sophisticated, I think we ought to improve the error\n> messages in user.c. IMHO messages like \"permission denied\" could be\n> greatly improved with some added context.\nI observed this behavior where the role is having createrole but still \nit's unable to pass it to another user.\n\npostgres=# create role abc1 login createrole;\nCREATE ROLE\npostgres=# create user test1;\nCREATE ROLE\npostgres=# \\c - abc1\nYou are now connected to database \"postgres\" as user \"abc1\".\npostgres=> alter role test1 with createrole ;\nERROR:  permission denied\npostgres=>\n\nwhich was working previously without the patch.\n\nIs this an expected behavior?\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 15:05:01 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE users vs. role properties" }, { "msg_contents": "On 1/19/23 3:05 PM, tushar wrote:\n> which was working previously without patch. \nMy bad, I was testing against PG v15 but this issue is not\nreproducible on master (without patch).\n\nAs you mentioned- \"This implements the standard idea that you can't give \npermissions\nyou don't have (but you can give the ones you do have)\" but here the \nrole is having\ncreaterole privilege that it cannot pass on to another user? 
Is this \nexpected?\n\npostgres=# create role fff with createrole;\nCREATE ROLE\npostgres=# create role xxx;\nCREATE ROLE\npostgres=# set role fff;\nSET\npostgres=> alter role xxx with createrole;\nERROR:  permission denied\npostgres=>\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 16:45:22 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE users vs. role properties" }, { "msg_contents": "On 1/19/23 2:44 AM, Nathan Bossart wrote:\n> On Wed, Jan 18, 2023 at 02:51:38PM -0500, Robert Haas wrote:\n>> Should (nfree < SuperuserReservedBackends) be using <=, or am I confused?\n> I believe < is correct. At this point, the new backend will have already\n> claimed a proc struct, so if the number of remaining free slots equals the\n> number of reserved slots, it is okay.\n>\n>> What's the deal with removing \"and no new replication connections will\n>> be accepted\" from the documentation? Is the existing documentation\n>> just wrong? If so, should we fix that first? And maybe delete\n>> \"non-replication\" from the error message that says \"remaining\n>> connection slots are reserved for non-replication superuser\n>> connections\"? It seems like right now the comments say that\n>> replication connections are a completely separate pool of connections,\n>> but the documentation and the error message make it sound otherwise.\n>> If that's true, then one of them is wrong, and I think it's the\n>> docs/error message. Or am I just misreading it?\n> I think you are right. This seems to have been missed in ea92368. 
I moved\n> this part to a new patch that should probably be back-patched to v12.\n>\n> On that note, I wonder if it's worth changing the \"sorry, too many clients\n> already\" message to make it clear that max_connections has been reached.\n> IME some users are confused by this error, and I think it would be less\n> confusing if it pointed to the parameter that governs the number of\n> connection slots. I'll create a new thread for this.\n>\nThere is one typo: for the doc changes, it is mentioned \n\"pg_use_reserved_backends\" but I think it is supposed to be \n\"pg_use_reserved_connections\"\nunder Table 22.1. Predefined Roles.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 18:28:02 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 6:28 PM tushar <tushar.ahuja@enterprisedb.com>\nwrote:\n\n> On 1/19/23 2:44 AM, Nathan Bossart wrote:\n> > On Wed, Jan 18, 2023 at 02:51:38PM -0500, Robert Haas wrote:\n> >> Should (nfree < SuperuserReservedBackends) be using <=, or am I\n> confused?\n> > I believe < is correct. At this point, the new backend will have already\n> > claimed a proc struct, so if the number of remaining free slots equals\n> the\n> > number of reserved slots, it is okay.\n> >\n> >> What's the deal with removing \"and no new replication connections will\n> >> be accepted\" from the documentation? Is the existing documentation\n> >> just wrong? If so, should we fix that first? And maybe delete\n> >> \"non-replication\" from the error message that says \"remaining\n> >> connection slots are reserved for non-replication superuser\n> >> connections\"? 
It seems like right now the comments say that\n> >> replication connections are a completely separate pool of connections,\n> >> but the documentation and the error message make it sound otherwise.\n> >> If that's true, then one of them is wrong, and I think it's the\n> >> docs/error message. Or am I just misreading it?\n> > I think you are right. This seems to have been missed in ea92368. I\n> moved\n> > this part to a new patch that should probably be back-patched to v12.\n> >\n> > On that note, I wonder if it's worth changing the \"sorry, too many\n> clients\n> > already\" message to make it clear that max_connections has been reached.\n> > IME some users are confused by this error, and I think it would be less\n> > confusing if it pointed to the parameter that governs the number of\n> > connection slots. I'll create a new thread for this.\n> >\n> There is one typo , for the doc changes, it is mentioned\n> \"pg_use_reserved_backends\" but i think it supposed to be\n> \"pg_use_reserved_connections\"\n> under Table 22.1. Predefined Roles.\n>\n> and in the error message too\n\n[edb@centos7tushar bin]$ ./psql postgres -U r2\n\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed:\nFATAL: remaining connection slots are reserved for roles with privileges\nof pg_use_reserved_backends\n[edb@centos7tushar bin]$\n\nregards,", "msg_date": "Thu, 19 Jan 2023 18:50:33 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 6:50 PM tushar <tushar.ahuja@enterprisedb.com>\nwrote:\n\n> and in the error message too\n>\n> [edb@centos7tushar bin]$ ./psql postgres -U r2\n>\n> psql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed:\n> FATAL: remaining connection slots are reserved for roles with privileges\n> of pg_use_reserved_backends\n> [edb@centos7tushar bin]$\n>\n\n\nI think there is also a need to improve the error message if non\nsuper users are not able to connect due to slot unavailability.\n--Connect to psql terminal, create a user\ncreate user t1;\n\n--set these GUC parameters in postgresql.conf and restart the server\n\nmax_connections = 3 # (change requires restart)\n\nsuperuser_reserved_connections = 1 # (change requires restart)\n\nreserved_connections = 1\n\npsql terminal ( connect to superuser), ./psql postgres\npsql terminal (try to connect to user t1) , ./psql postgres -U t1\nError message is\n\npsql: error: connection to server on socket \"/tmp/.s.PGSQL.5432\" failed:\nFATAL: remaining connection slots are reserved for roles with privileges\nof pg_use_reserved_backends\n\n\n\nthat is not true because the superuser can still able to connect,\n\nprobably in this case message should be like this -\n\n\"remaining connection slots are reserved for roles with privileges of\npg_use_reserved_connections and for superusers\" or something better.\n\nregards,", "msg_date": "Thu, 19 Jan 2023 19:51:09 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 6:15 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> postgres=# create role fff with createrole;\n> CREATE ROLE\n> postgres=# create role xxx;\n> CREATE ROLE\n> postgres=# set role fff;\n> SET\n> postgres=> alter role xxx with createrole;\n> ERROR:  permission denied\n> postgres=>\n\nHere fff would need ADMIN OPTION on xxx to be able to make modifications to it.\n\nSee 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:04:11 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE users vs. role properties" }, { "msg_contents": "On Wed, Jan 18, 2023 at 6:17 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > Here is a patch implementing the above proposal. Since this is fairly\n> > closely related to already-committed work, I would like to get this\n> > into v16. That way, all the changes to how CREATEROLE works will go\n> > into a single release, which seems less confusing for users. It is\n> > also fairly clear to me that this is an improvement over the status\n> > quo. Sometimes things that seem clear to me turn out to be false, so\n> > if this change seems like a problem to you, please let me know.\n>\n> This seems like a clear improvement to me.\n\nCool.\n\n> However, as the attribute\n> system becomes more sophisticated, I think we ought to improve the error\n> messages in user.c. IMHO messages like \"permission denied\" could be\n> greatly improved with some added context.\n>\n> This probably shouldn't block your patch, but I think it's worth doing in\n> v16 since there are other changes in this area. I'm happy to help.\n\nThat would be great. I agree that it's good to try to improve the\nerror messages. 
It hasn't been entirely clear to me how to do that.\nFor instance, I don't think we want to say something like:\n\nERROR: must have CREATEROLE privilege and ADMIN OPTION on the target\nrole, or in lieu of both of those to be superuser, to set the\nCONNECTION LIMIT for another role\nERROR: must have CREATEROLE privilege and ADMIN OPTION on the target\nrole, plus also CREATEDB, or in lieu of all that to be superuser, to\nremove the CREATEDB property from another role\n\nSuch messages are long and we'd end up with a lot of variants. It's\npossible that the messages could be multi-tier. For instance, if we\ndetermine that you're trying to manage users and you don't have\npermission to manage ANY user, we could say:\n\nERROR: permission to manage roles denied\nDETAIL: You must have the CREATEROLE privilege or be a superuser to\nmanage roles.\n\nIf you could potentially manage some user, but not the one you're\ntrying to manage, we could say:\n\nERROR: permission to manage role \"%s\" denied\nDETAIL: You need ADMIN OPTION on the target role to manage it.\n\nIf you have permission to manage the target role but not in the\nrequested manner, we could then say something like:\n\nERROR: permission to manage CREATEDB for role \"%s\" denied\nDETAIL: You need CREATEDB to manage it.\n\nThis is just one idea, and maybe not the best one. I'm just trying to\nsay that I think this is basically an organizational problem. We need\na plan for how we're going to report errors that is not too\ncomplicated to implement with reasonable effort, and that will produce\nmessages that users will understand. I'd be delighted if you wanted to\nprovide either ideas or patches...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:20:33 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE users vs. 
role properties" }, { "msg_contents": "On Thu, Jan 19, 2023 at 9:21 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> that is not true because the superuser can still able to connect,\n\nIt is true, but because superusers have all privileges.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 10:31:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Wed, Jan 18, 2023 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Wed, Jan 18, 2023 at 02:51:38PM -0500, Robert Haas wrote:\n> > Should (nfree < SuperuserReservedBackends) be using <=, or am I confused?\n>\n> I believe < is correct. At this point, the new backend will have already\n> claimed a proc struct, so if the number of remaining free slots equals the\n> number of reserved slots, it is okay.\n\nOK. Might be worth a short comment.\n\n> > What's the deal with removing \"and no new replication connections will\n> > be accepted\" from the documentation? Is the existing documentation\n> > just wrong? If so, should we fix that first? And maybe delete\n> > \"non-replication\" from the error message that says \"remaining\n> > connection slots are reserved for non-replication superuser\n> > connections\"? It seems like right now the comments say that\n> > replication connections are a completely separate pool of connections,\n> > but the documentation and the error message make it sound otherwise.\n> > If that's true, then one of them is wrong, and I think it's the\n> > docs/error message. Or am I just misreading it?\n>\n> I think you are right. This seems to have been missed in ea92368. I moved\n> this part to a new patch that should probably be back-patched to v12.\n\nI'm inclined to commit it to master and not back-patch. 
It doesn't\nseem important enough to perturb translations.\n\nTushar seems to have a point about pg_use_reserved_connections vs.\npg_use_reserved_backends. I think we should standardize on the former,\nas backends is an internal term.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 11:40:53 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 11:40:53AM -0500, Robert Haas wrote:\n> On Wed, Jan 18, 2023 at 4:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> On Wed, Jan 18, 2023 at 02:51:38PM -0500, Robert Haas wrote:\n>> > Should (nfree < SuperuserReservedBackends) be using <=, or am I confused?\n>>\n>> I believe < is correct. At this point, the new backend will have already\n>> claimed a proc struct, so if the number of remaining free slots equals the\n>> number of reserved slots, it is okay.\n> \n> OK. Might be worth a short comment.\n\nI added one.\n\n>> > What's the deal with removing \"and no new replication connections will\n>> > be accepted\" from the documentation? Is the existing documentation\n>> > just wrong? If so, should we fix that first? And maybe delete\n>> > \"non-replication\" from the error message that says \"remaining\n>> > connection slots are reserved for non-replication superuser\n>> > connections\"? It seems like right now the comments say that\n>> > replication connections are a completely separate pool of connections,\n>> > but the documentation and the error message make it sound otherwise.\n>> > If that's true, then one of them is wrong, and I think it's the\n>> > docs/error message. Or am I just misreading it?\n>>\n>> I think you are right. This seems to have been missed in ea92368. I moved\n>> this part to a new patch that should probably be back-patched to v12.\n> \n> I'm inclined to commit it to master and not back-patch. 
It doesn't\n> seem important enough to perturb translations.\n\nThat seems reasonable to me.\n\n> Tushar seems to have a point about pg_use_reserved_connections vs.\n> pg_use_reserved_backends. I think we should standardize on the former,\n> as backends is an internal term.\n\nOops. This is what I meant to do. I probably flubbed it because I was\nwondering why the parameter uses \"connections\" and the variable uses\n\"backends,\" especially considering that the variable for max_connections is\ncalled MaxConnections. I went ahead and renamed everything to use\n\"connections.\"\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 19 Jan 2023 09:54:21 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 12:54 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> > OK. Might be worth a short comment.\n>\n> I added one.\n\nThanks. I'd move it to the inner indentation level so it's closer to\nthe test at issue.\n\nI would also suggest reordering the documentation and the\npostgresql.conf.sample file so that reserved_connections precedes\nsuperuser_reserved_connections, instead of following it.\n\nOther than that, this seems like it might be about ready to commit,\nbarring objections or bug reports.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 14:17:35 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 02:17:35PM -0500, Robert Haas wrote:\n> On Thu, Jan 19, 2023 at 12:54 PM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n>> > OK. Might be worth a short comment.\n>>\n>> I added one.\n> \n> Thanks. 
I'd move it to the inner indentation level so it's closer to\n> the test at issue.\n\nI meant for it to cover the call to HaveNFreeProcs() as well since the same\nidea applies. I left it the same for now, but if you still think it makes\nsense to move it, I'll do so.\n\n> I would also suggest reordering the documentation and the\n> postgresql.conf.sample file so that reserved_connections precedes\n> superuser_reserved_connections, instead of following it.\n\nMakes sense.\n\n> Other than that, this seems like it might be about ready to commit,\n> barring objections or bug reports.\n\nAwesome.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 19 Jan 2023 11:46:01 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 2:46 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > Thanks. I'd move it to the inner indentation level so it's closer to\n> > the test at issue.\n>\n> I meant for it to cover the call to HaveNFreeProcs() as well since the same\n> idea applies. I left it the same for now, but if you still think it makes\n> sense to move it, I'll do so.\n\nHmm, OK. If you want to leave it where it is, I won't argue further.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 19 Jan 2023 15:13:39 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On 1/19/23 6:28 PM, tushar wrote:\n>>\n> There is� one typo , for the doc changes, it is� mentioned \n> \"pg_use_reserved_backends\" but i think it supposed to be \n> \"pg_use_reserved_connections\"\n> under Table 22.1. 
Predefined Roles.\n\nThanks, this is fixed now with the latest patches.\n\n-- \nregards,tushar\nEnterpriseDB https://www.enterprisedb.com/\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 20 Jan 2023 19:04:58 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Fri, Jan 20, 2023 at 07:04:58PM +0530, tushar wrote:\n> On 1/19/23 6:28 PM, tushar wrote:\n>> There is one typo , for the doc changes, it is mentioned\n>> \"pg_use_reserved_backends\" but i think it supposed to be\n>> \"pg_use_reserved_connections\"\n>> under Table 22.1. Predefined Roles.\n> \n> Thanks, this is fixed now with the latest patches.\n\nThank you for reviewing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 10:10:38 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Fri, Jan 20, 2023 at 1:10 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > Thanks, this is fixed now with the latest patches.\n>\n> Thank you for reviewing.\n\nThanks to you both. 
*facepalm*\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 20 Jan 2023 16:37:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: almost-super-user problems that we haven't fixed yet" }, { "msg_contents": "On Thu, Jan 19, 2023 at 8:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jan 19, 2023 at 6:15 AM tushar <tushar.ahuja@enterprisedb.com>\n> wrote:\n> > postgres=# create role fff with createrole;\n> > CREATE ROLE\n> > postgres=# create role xxx;\n> > CREATE ROLE\n> > postgres=# set role fff;\n> > SET\n> > postgres=> alter role xxx with createrole;\n> > ERROR: permission denied\n> > postgres=>\n>\n> Here fff would need ADMIN OPTION on xxx to be able to make modifications\n> to it.\n>\n> See\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cb\n\n\nThanks, Robert, that was helpful.\n\nPlease refer to this scenario where I am able to give createrole privileges\nbut not replication privilege to role\n\npostgres=# create role t1 createrole;\nCREATE ROLE\npostgres=# create role t2 replication;\nCREATE ROLE\npostgres=# create role t3;\nCREATE ROLE\npostgres=# grant t3 to t1,t2 with admin option;\nGRANT ROLE\npostgres=# set session authorization t1;\nSET\n\n*postgres=> alter role t3 createrole ;ALTER ROLE*\npostgres=> set session authorization t2;\nSET\npostgres=> alter role t3 replication;\nERROR: permission denied\n\nThis same behavior was observed in v14 as well but why i am able to give\ncreaterole grant but not replication?\n\nregards,\n\nOn Thu, Jan 19, 2023 at 8:34 PM Robert Haas <robertmhaas@gmail.com> wrote:On Thu, Jan 19, 2023 at 6:15 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> postgres=# create role fff with createrole;\n> CREATE ROLE\n> postgres=# create role xxx;\n> CREATE ROLE\n> postgres=# set role fff;\n> SET\n> postgres=> alter role xxx with createrole;\n> ERROR:  permission denied\n> 
postgres=>\n\nHere fff would need ADMIN OPTION on xxx to be able to make modifications to it.\n\nSee https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=cf5eb37c5ee0cc54c80d95c1695d7fca1f7c68cbThanks, Robert, that was helpful.Please refer to this scenario where I am able to give createrole privileges but not replication  privilege to rolepostgres=# create role t1 createrole;CREATE ROLEpostgres=# create role t2 replication;CREATE ROLEpostgres=# create role t3;CREATE ROLEpostgres=# grant t3 to t1,t2 with admin option;GRANT ROLEpostgres=# set session authorization t1;SETpostgres=> alter role t3 createrole ;ALTER ROLEpostgres=> set session authorization t2;SETpostgres=> alter role t3 replication;ERROR:  permission deniedThis same behavior was observed in v14 as well but why i am able to give createrole grant but not replication?regards,", "msg_date": "Mon, 23 Jan 2023 20:55:01 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE users vs. role properties" }, { "msg_contents": "On Mon, Jan 23, 2023 at 10:25 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> Please refer to this scenario where I am able to give createrole privileges but not replication privilege to role\n>\n> postgres=# create role t1 createrole;\n> CREATE ROLE\n> postgres=# create role t2 replication;\n> CREATE ROLE\n> postgres=# create role t3;\n> CREATE ROLE\n> postgres=# grant t3 to t1,t2 with admin option;\n> GRANT ROLE\n> postgres=# set session authorization t1;\n> SET\n> postgres=> alter role t3 createrole ;\n> ALTER ROLE\n> postgres=> set session authorization t2;\n> SET\n> postgres=> alter role t3 replication;\n> ERROR: permission denied\n>\n> This same behavior was observed in v14 as well but why i am able to give createrole grant but not replication?\n\nIn previous releases, you needed to have CREATEROLE in order to be\nable to perform user management functions. 
In master, you still need\nCREATEROLE, and you also need ADMIN OPTION on the role. In this\nscenario, only t1 meets those requirements with respect to t3, so only\nt1 can manage t3. t2 can SET ROLE to t3 and grant membership in t3,\nbut it can't set role properties on t3 or change t3's password or\nthings like that, because the ability to make user management changes\nis controlled by CREATEROLE.\n\nThe patch is only intended to change behavior in the case where you\npossess both CREATEROLE and also ADMIN OPTION on the target role (but\nnot SUPERUSER). In that scenario, it intends to change whether you can\ngive or remove the CREATEDB, REPLICATION, and BYPASSRLS properties\nfrom a user.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 23 Jan 2023 11:57:52 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE users vs. role properties" }, { "msg_contents": "On Mon, Jan 23, 2023 at 10:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> In previous releases, you needed to have CREATEROLE in order to be\n> able to perform user management functions. In master, you still need\n> CREATEROLE, and you also need ADMIN OPTION on the role. In this\n> scenario, only t1 meets those requirements with respect to t3, so only\n> t1 can manage t3. t2 can SET ROLE to t3 and grant membership in t3,\n> but it can't set role properties on t3 or change t3's password or\n> things like that, because the ability to make user management changes\n> is controlled by CREATEROLE.\n>\nok.\n\n>\n> The patch is only intended to change behavior in the case where you\n> possess both CREATEROLE and also ADMIN OPTION on the target role (but\n> not SUPERUSER). 
In that scenario, it intends to change whether you can\n> give or remove the CREATEDB, REPLICATION, and BYPASSRLS properties\n> from a user.\n>\n\nright, Neha/I have tested with different scenarios using\ncreatedb/replication/bypassrls and other\nprivileges properties on the role. also checked\npg_dumpall/pg_basebackup and everything looks fine.\n\nregards,\n\nOn Mon, Jan 23, 2023 at 10:28 PM Robert Haas <robertmhaas@gmail.com> wrote:\nIn previous releases, you needed to have CREATEROLE in order to be\nable to perform user management functions. In master, you still need\nCREATEROLE, and you also need ADMIN OPTION on the role. In this\nscenario, only t1 meets those requirements with respect to t3, so only\nt1 can manage t3. t2 can SET ROLE to t3 and grant membership in t3,\nbut it can't set role properties on t3 or change t3's password or\nthings like that, because the ability to make user management changes\nis controlled by CREATEROLE.ok. \n\nThe patch is only intended to change behavior in the case where you\npossess both CREATEROLE and also ADMIN OPTION on the target role (but\nnot SUPERUSER). In that scenario, it intends to change whether you can\ngive or remove the CREATEDB, REPLICATION, and BYPASSRLS properties\nfrom a user.right, Neha/I have tested with different scenarios using createdb/replication/bypassrls and otherprivileges properties on the role. also checked pg_dumpall/pg_basebackup and everything looks fine.regards,", "msg_date": "Tue, 24 Jan 2023 19:37:25 +0530", "msg_from": "tushar <tushar.ahuja@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: CREATEROLE users vs. role properties" }, { "msg_contents": "On Tue, Jan 24, 2023 at 9:07 AM tushar <tushar.ahuja@enterprisedb.com> wrote:\n> right, Neha/I have tested with different scenarios using createdb/replication/bypassrls and other\n> privileges properties on the role. also checked pg_dumpall/pg_basebackup and everything looks fine.\n\nThanks. 
I have committed the patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 24 Jan 2023 11:00:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: CREATEROLE users vs. role properties" }, { "msg_contents": "moving this discussion to a new thread...\n\nOn Thu, Jan 19, 2023 at 10:20:33AM -0500, Robert Haas wrote:\n> On Wed, Jan 18, 2023 at 6:17 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> However, as the attribute\n>> system becomes more sophisticated, I think we ought to improve the error\n>> messages in user.c. IMHO messages like \"permission denied\" could be\n>> greatly improved with some added context.\n>>\n>> This probably shouldn't block your patch, but I think it's worth doing in\n>> v16 since there are other changes in this area. I'm happy to help.\n> \n> That would be great. I agree that it's good to try to improve the\n> error messages. It hasn't been entirely clear to me how to do that.\n> For instance, I don't think we want to say something like:\n> \n> ERROR: must have CREATEROLE privilege and ADMIN OPTION on the target\n> role, or in lieu of both of those to be superuser, to set the\n> CONNECTION LIMIT for another role\n> ERROR: must have CREATEROLE privilege and ADMIN OPTION on the target\n> role, plus also CREATEDB, or in lieu of all that to be superuser, to\n> remove the CREATEDB property from another role\n> \n> Such messages are long and we'd end up with a lot of variants. It's\n> possible that the messages could be multi-tier. 
For instance, if we\n> determine that you're trying to manage users and you don't have\n> permission to manage ANY user, we could say:\n> \n> ERROR: permission to manage roles denied\n> DETAIL: You must have the CREATEROLE privilege or be a superuser to\n> manage roles.\n> \n> If you could potentially manage some user, but not the one you're\n> trying to manage, we could say:\n> \n> ERROR: permission to manage role \"%s\" denied\n> DETAIL: You need ADMIN OPTION on the target role to manage it.\n> \n> If you have permission to manage the target role but not in the\n> requested manner, we could then say something like:\n> \n> ERROR: permission to manage CREATEDB for role \"%s\" denied\n> DETAIL: You need CREATEDB to manage it.\n> \n> This is just one idea, and maybe not the best one. I'm just trying to\n> say that I think this is basically an organizational problem. We need\n> a plan for how we're going to report errors that is not too\n> complicated to implement with reasonable effort, and that will produce\n> messages that users will understand. I'd be delighted if you wanted to\n> provide either ideas or patches...\n\nHere is an early draft of some modest improvements to the user.c error\nmessages. I basically just tried to standardize the style of and add\ncontext to the existing error messages. I used errhint() for this extra\ncontext, but errdetail() would work, too. This isn't perfect. 
You might\nstill have to go through a couple rounds of errors before your role has all\nthe privileges it needs for a command, but this seems to improve matters a\nlittle.\n\nI think there is still a lot of room for improvement, but I wanted to at\nleast get the discussion started before I went too far.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 25 Jan 2023 16:22:51 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "improving user.c error messages" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jan 19, 2023 at 10:20:33AM -0500, Robert Haas wrote:\n>> That would be great. I agree that it's good to try to improve the\n>> error messages. It hasn't been entirely clear to me how to do that.\n>> For instance, I don't think we want to say something like:\n>> \n>> ERROR: must have CREATEROLE privilege and ADMIN OPTION on the target\n>> role, or in lieu of both of those to be superuser, to set the\n>> CONNECTION LIMIT for another role\n>> ERROR: must have CREATEROLE privilege and ADMIN OPTION on the target\n>> role, plus also CREATEDB, or in lieu of all that to be superuser, to\n>> remove the CREATEDB property from another role\n\n> Here is an early draft of some modest improvements to the user.c error\n> messages. I basically just tried to standardize the style of and add\n> context to the existing error messages. I used errhint() for this extra\n> context, but errdetail() would work, too.\n\nYeah, I think the right fix is to keep the primary message pretty terse\nand add detail in secondary messages. IMO most of these are errdetail not\nerrhint, because they are factual details about the rules [1]. 
But other\nthan that quibble, Nathan's draft looked pretty good in a quick once-over.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/error-style-guide.html\n\n\n", "msg_date": "Wed, 25 Jan 2023 20:45:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "\nPlease use \n\t\terrdetail(\"You must have %s privilege to create roles with %s.\",\n\t\t\t\"SUPERUSER\", \"SUPERUSER\")));\n\nin this kind of message where multiple copies appear that only differ in\nthe keyword to use, to avoid creating four copies of essentially the\nsame string.\n\nThis applies in several places.\n\n\n> -\t\t\t\t\t errmsg(\"must have createdb privilege to change createdb attribute\")));\n> +\t\t\t\t\t errmsg(\"permission denied to alter role\"),\n> +\t\t\t\t\t errhint(\"You must have CREATEDB privilege to alter roles with CREATEDB.\")));\n\nI think this one is a bit ambiguous; does \"with\" mean that roles that\nhave that priv cannot be changed, or does it mean that you cannot meddle\nwith that bit in particular? 
I think it'd be better to say\n \"You must have %s privilege to change the %s attribute.\"\nor something like that.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\nMaybe there's lots of data loss but the records of data loss are also lost.\n(Lincoln Yeoh)\n\n\n", "msg_date": "Thu, 26 Jan 2023 10:07:39 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "Thanks for taking a look.\n\nOn Thu, Jan 26, 2023 at 10:07:39AM +0100, Alvaro Herrera wrote:\n> Please use \n> \t\terrdetail(\"You must have %s privilege to create roles with %s.\",\n> \t\t\t\"SUPERUSER\", \"SUPERUSER\")));\n> \n> in this kind of message where multiple copies appear that only differ in\n> the keyword to use, to avoid creating four copies of essentially the\n> same string.\n> \n> This applies in several places.\n\nI did this in v2.\n\n>> -\t\t\t\t\t errmsg(\"must have createdb privilege to change createdb attribute\")));\n>> +\t\t\t\t\t errmsg(\"permission denied to alter role\"),\n>> +\t\t\t\t\t errhint(\"You must have CREATEDB privilege to alter roles with CREATEDB.\")));\n> \n> I think this one is a bit ambiguous; does \"with\" mean that roles that\n> have that priv cannot be changed, or does it mean that you cannot meddle\n> with that bit in particular? I think it'd be better to say\n> \"You must have %s privilege to change the %s attribute.\"\n> or something like that.\n\nYeah, it's probably better to say \"to alter roles with %s\" to refer to\nroles that presently have the attribute and \"to change the %s attribute\"\nwhen referring to privileges for the attribute. 
I did this in v2, too.\n\nI've also switched from errhint() to errdetail() as suggested by Tom.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 26 Jan 2023 11:13:58 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Jan 26, 2023 at 2:14 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Yeah, it's probably better to say \"to alter roles with %s\" to refer to\n> roles that presently have the attribute and \"to change the %s attribute\"\n> when referring to privileges for the attribute. I did this in v2, too.\n>\n> I've also switched from errhint() to errdetail() as suggested by Tom.\n\nThis seems fine to me in general but I'm not entirely sure about this part:\n\n@@ -758,16 +776,13 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)\n {\n /* things an unprivileged user certainly can't do */\n if (dinherit || dcreaterole || dcreatedb || dcanlogin || dconnlimit ||\n- dvalidUntil || disreplication || dbypassRLS)\n+ dvalidUntil || disreplication || dbypassRLS ||\n+ (dpassword && roleid != currentUserId))\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n- errmsg(\"permission denied\")));\n-\n- /* an unprivileged user can change their own password */\n- if (dpassword && roleid != currentUserId)\n- ereport(ERROR,\n- (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n- errmsg(\"must have CREATEROLE privilege to change another user's password\")));\n+ errmsg(\"permission denied to alter role\"),\n+ errdetail(\"You must have %s privilege and %s on role \\\"%s\\\".\",\n+ \"CREATEROLE\", \"ADMIN OPTION\", rolename)));\n }\n else if (!superuser())\n {\n\nBasically my question is whether having one error message for all of\nthose cases is good enough, or whether we should be trying harder. I\ndon't mind if the conclusion is that it's OK as-is, and I'm not\nentirely sure what would be better. 
But when I was working on this\ncode, all of those cases OR'd together feeding into a single error\nmessage seemed a little sketchy to me, so I am wondering what others\nthink.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 26 Jan 2023 14:42:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Jan 26, 2023 at 02:42:05PM -0500, Robert Haas wrote:\n> @@ -758,16 +776,13 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)\n> {\n> /* things an unprivileged user certainly can't do */\n> if (dinherit || dcreaterole || dcreatedb || dcanlogin || dconnlimit ||\n> - dvalidUntil || disreplication || dbypassRLS)\n> + dvalidUntil || disreplication || dbypassRLS ||\n> + (dpassword && roleid != currentUserId))\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> - errmsg(\"permission denied\")));\n> -\n> - /* an unprivileged user can change their own password */\n> - if (dpassword && roleid != currentUserId)\n> - ereport(ERROR,\n> - (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> - errmsg(\"must have CREATEROLE privilege to change another user's password\")));\n> + errmsg(\"permission denied to alter role\"),\n> + errdetail(\"You must have %s privilege and %s on role \\\"%s\\\".\",\n> + \"CREATEROLE\", \"ADMIN OPTION\", rolename)));\n> }\n> else if (!superuser())\n> {\n> \n> Basically my question is whether having one error message for all of\n> those cases is good enough, or whether we should be trying harder. I\n> don't mind if the conclusion is that it's OK as-is, and I'm not\n> entirely sure what would be better. But when I was working on this\n> code, all of those cases OR'd together feeding into a single error\n> message seemed a little sketchy to me, so I am wondering what others\n> think.\n\nI wondered the same thing, but I hesitated because I didn't want to change\ntoo much in a patch for error messaging. 
I can give it a try.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Jan 2023 11:59:29 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 02:42:05PM -0500, Robert Haas wrote:\n>> Basically my question is whether having one error message for all of\n>> those cases is good enough, or whether we should be trying harder.\n\nI think the password case needs to be kept separate, because the\nconditions for it are different (specifically the exception that\nyou can alter your own password). Lumping the rest together\nseems OK to me.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 15:07:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Jan 26, 2023 at 03:07:43PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Thu, Jan 26, 2023 at 02:42:05PM -0500, Robert Haas wrote:\n>>> Basically my question is whether having one error message for all of\n>>> those cases is good enough, or whether we should be trying harder.\n> \n> I think the password case needs to be kept separate, because the\n> conditions for it are different (specifically the exception that\n> you can alter your own password). Lumping the rest together\n> seems OK to me.\n\nHm. In v2, the error message for both cases is the same:\n\n\tERROR: permission denied to alter role\n\tDETAIL: You must have CREATEROLE privilege and ADMIN OPTION on role \"regress_priv_user2\".\n\nWe could add \"to change its attributes\" and \"to change its password\" to\nseparate the two, but I'm not sure that adds much. ISTM the current error\nmessage for ALTER ROLE PASSWORD implies that you can change your own\npassword, and that's lost with my patch. 
Perhaps we should add an\nerrhint() with that information instead. WDYT?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Jan 2023 14:02:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Thu, Jan 26, 2023 at 03:07:43PM -0500, Tom Lane wrote:\n>> I think the password case needs to be kept separate, because the\n>> conditions for it are different (specifically the exception that\n>> you can alter your own password). Lumping the rest together\n>> seems OK to me.\n\n> Hm. In v2, the error message for both cases is the same:\n\n> \tERROR: permission denied to alter role\n> \tDETAIL: You must have CREATEROLE privilege and ADMIN OPTION on role \"regress_priv_user2\".\n\n> We could add \"to change its attributes\" and \"to change its password\" to\n> separate the two, but I'm not sure that adds much. ISTM the current error\n> message for ALTER ROLE PASSWORD implies that you can change your own\n> password, and that's lost with my patch. Perhaps we should add an\n> errhint() with that information instead. WDYT?\n\nWell, it's not a hint. I think the above is fine for non-password\ncases, but for passwords maybe\n\n\tERROR: permission denied to alter role password\n\tDETAIL: To change another role's password, you must have CREATEROLE privilege and ADMIN OPTION on role \"%s\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 26 Jan 2023 17:41:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Jan 26, 2023 at 05:41:32PM -0500, Tom Lane wrote:\n> Well, it's not a hint. 
I think the above is fine for non-password\n> cases, but for passwords maybe\n> \n> \tERROR: permission denied to alter role password\n> \tDETAIL: To change another role's password, you must have CREATEROLE privilege and ADMIN OPTION on role \"%s\".\n\nOkay. I used this phrasing in v3.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 26 Jan 2023 16:09:36 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On 26.01.23 01:22, Nathan Bossart wrote:\n> Here is an early draft of some modest improvements to the user.c error\n> messages. I basically just tried to standardize the style of and add\n> context to the existing error messages. I used errhint() for this extra\n> context, but errdetail() would work, too. This isn't perfect. You might\n> still have to go through a couple rounds of errors before your role has all\n> the privileges it needs for a command, but this seems to improve matters a\n> little.\n> \n> I think there is still a lot of room for improvement, but I wanted to at\n> least get the discussion started before I went too far.\n\nThis is good. 
If I may assign some more work ;-), we have a bunch of
error messages like

errmsg("must be superuser or a role with privileges of the
pg_write_server_files role to create backup stored on server")

errmsg("must be superuser or have privileges of the
pg_execute_server_program role to COPY to or from an external program")

errmsg("must be superuser or have privileges of pg_read_all_settings to
examine \"%s\"", ...)

which could also be split up into a pair of

errmsg("permission denied to xxx")
errdetail("You must be superuser or ...")



", "msg_date": "Fri, 27 Jan 2023 11:00:01 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Fri, Jan 27, 2023 at 5:00 AM Peter Eisentraut\n<peter.eisentraut@enterprisedb.com> wrote:\n> This is good. If I may assign some more work ;-), we have a bunch of\n> error messages like\n>\n> errmsg(\"must be superuser or a role with privileges of the\n> pg_write_server_files role to create backup stored on server\")\n>\n> errmsg(\"must be superuser or have privileges of the\n> pg_execute_server_program role to COPY to or from an external program\")\n>\n> errmsg(\"must be superuser or have privileges of pg_read_all_settings to\n> examine \\\"%s\\\"\", ...)\n>\n> which could also be split up into a pair of\n>\n> errmsg(\"permission denied to xxx\")\n> errdetail(\"You must be superuser or ...\")\n\nI almost hate to bring this up since I'm not sure how far we want to\ngo down this rat hole, but what should be our policy about mentioning\nsuperuser? I don't think we're entirely consistent right now, and I'm\nnot sure whether every error message needs to mention that if you were\nthe superuser you could do everything.
Is that something we should\nmention always, never, or in some set of circumstances?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 08:31:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Fri, Jan 27, 2023 at 08:31:32AM -0500, Robert Haas wrote:\n> I almost hate to bring this up since I'm not sure how far we want to\n> go down this rat hole, but what should be our policy about mentioning\n> superuser? I don't think we're entirely consistent right now, and I'm\n> not sure whether every error message needs to mention that if you were\n> the superuser you could do everything. Is that something we should\n> mention always, never, or in some set of circumstances?\n\nIMHO superuser should typically only be mentioned when it is the only way\nto do something. Since superusers have all privileges, I think logs like\n\"superuser or privileges of X\" are kind of redundant. If Robert has\nprivileges of X, we wouldn't say \"privileges of X or Robert.\" We'd just\npoint to X. Ultimately, I feel like mentioning superuser in error messages\nusually just makes the message longer without adding any useful\ninformation.\n\nI recognize that this is a bold opinion and that the policy to mention\nsuperuser might need to be more nuanced in practice...\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 07:52:36 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Fri, Jan 27, 2023 at 10:52 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> IMHO superuser should typically only be mentioned when it is the only way\n> to do something. Since superusers have all privileges, I think logs like\n> \"superuser or privileges of X\" are kind of redundant. 
If Robert has\n> privileges of X, we wouldn't say \"privileges of X or Robert.\" We'd just\n> point to X. Ultimately, I feel like mentioning superuser in error messages\n> usually just makes the message longer without adding any useful\n> information.\n\nThat's kind of my opinion too, but I'm not sure whether there are\ncases where it will lead to confusion.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 27 Jan 2023 10:53:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I almost hate to bring this up since I'm not sure how far we want to\n> go down this rat hole, but what should be our policy about mentioning\n> superuser? I don't think we're entirely consistent right now, and I'm\n> not sure whether every error message needs to mention that if you were\n> the superuser you could do everything. Is that something we should\n> mention always, never, or in some set of circumstances?\n\nGood point. My vote is for standardizing on *not* mentioning it.\nError messages should say \"you need privilege X\". That is not\nthe place to go into all the ways you could hold privilege X\n(one of which is being superuser).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 27 Jan 2023 11:17:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On 2023-Jan-27, Tom Lane wrote:\n\n> Good point. My vote is for standardizing on *not* mentioning it.\n> Error messages should say \"you need privilege X\". 
That is not\n> the place to go into all the ways you could hold privilege X\n> (one of which is being superuser).\n\n+1\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El sabio habla porque tiene algo que decir;\nel tonto, porque tiene que decir algo\" (Platon).\n\n\n", "msg_date": "Fri, 27 Jan 2023 19:13:58 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "While we're here,\n\nOn 2023-Jan-26, Nathan Bossart wrote:\n\n> @@ -838,7 +867,8 @@ AlterRole(ParseState *pstate, AlterRoleStmt *stmt)\n> \t\tif (!should_be_super && roleid == BOOTSTRAP_SUPERUSERID)\n> \t\t\tereport(ERROR,\n> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> -\t\t\t\t\t errmsg(\"permission denied: bootstrap user must be superuser\")));\n> +\t\t\t\t\t errmsg(\"permission denied to alter role\"),\n> +\t\t\t\t\t errdetail(\"The bootstrap user must be superuser.\")));\n\nI think this one isn't using the right errcode; this is not a case of\ninsufficient privileges. There's no priv you can acquire that lets you\ndo it. 
So I'd change it to unsupported operation.


I was confused a bit by this one:

> /* an unprivileged user can change their own password */
> if (dpassword && roleid != currentUserId)
> ereport(ERROR,
> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
> - errmsg(\"must have CREATEROLE privilege to change another user's password\")));
> + errmsg(\"permission denied to alter role\"),
> + errdetail(\"To change another role's password, you must have %s privilege and %s on the role.\",
> + \"CREATEROLE\", \"ADMIN OPTION\")));
> }

In no other message do we say what operation is being attempted in the
errdetail; all the others start with "You must have" and that's it.
However, looking closer I think this one being different is okay,
because the errmsg() you're using is vague, and I think the error report
would be confusing if you were to remove the "To change another role's
password" bit.

The patch looks good to me.

-- 
Álvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/


", "msg_date": "Fri, 27 Jan 2023 19:31:19 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Fri, Jan 27, 2023 at 07:31:19PM +0100, Alvaro Herrera wrote:\n> On 2023-Jan-26, Nathan Bossart wrote:\n>> \t\t\tereport(ERROR,\n>> \t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> -\t\t\t\t\t errmsg(\"permission denied: bootstrap user must be superuser\")));\n>> +\t\t\t\t\t errmsg(\"permission denied to alter role\"),\n>> +\t\t\t\t\t errdetail(\"The bootstrap user must be superuser.\")));\n> \n> I think this one isn't using the right errcode; this is not a case of\n> insufficient privileges. There's no priv you can acquire that lets you\n> do it. So I'd change it to unsupported operation.\n\nI fixed this in v4.
I've also attached a second patch in which I've\nadjusted the messages that Peter mentioned upthread.\n\nOne thing that feels a bit odd is how some of the DETAILs mention the\noperation being attempted while others do not. For example, we have\n\n\tERROR: permission denied to drop role\n\tDETAIL: You must have SUPERUSER privilege to drop roles with SUPERUSER.\n\nIn this case, the DETAIL explains the action that is prohibited. In other\ncases, we have something like\n\n\tERROR: permission denied to alter role\n\tDETAIL: You must have CREATEROLE privilege and ADMIN OPTION on role \"myrole\".\n\nwhich does not. I think this is okay because adding \"to alter the role\" to\nthe end of the DETAIL seems kind of awkward. But in other cases, such as\n\n\tERROR: permission denied to use replication slots\n\tDETAIL: You must have REPLICATION privilege.\n\nadding the operation to the end seems less awkward (i.e., \"You must have\nREPLICATION privilege to use replication slots.\"). I don't think there's\nany information lost by omitting the action in the DETAIL, so perhaps this\nis just a stylistic choice. I think I'm inclined to add the action to the\nDETAIL whenever it doesn't make the message lengthy and awkward, and leave\nit out otherwise. Thoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 27 Jan 2023 15:15:07 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Fri, Jan 27, 2023 at 03:15:07PM -0800, Nathan Bossart wrote:\n> One thing that feels a bit odd is how some of the DETAILs mention the\n> operation being attempted while others do not. For example, we have\n> \n> \tERROR: permission denied to drop role\n> \tDETAIL: You must have SUPERUSER privilege to drop roles with SUPERUSER.\n> \n> In this case, the DETAIL explains the action that is prohibited. 
In other\n> cases, we have something like\n> \n> \tERROR: permission denied to alter role\n> \tDETAIL: You must have CREATEROLE privilege and ADMIN OPTION on role \"myrole\".\n> \n> which does not. I think this is okay because adding \"to alter the role\" to\n> the end of the DETAIL seems kind of awkward. But in other cases, such as\n> \n> \tERROR: permission denied to use replication slots\n> \tDETAIL: You must have REPLICATION privilege.\n> \n> adding the operation to the end seems less awkward (i.e., \"You must have\n> REPLICATION privilege to use replication slots.\"). I don't think there's\n> any information lost by omitting the action in the DETAIL, so perhaps this\n> is just a stylistic choice. I think I'm inclined to add the action to the\n> DETAIL whenever it doesn't make the message lengthy and awkward, and leave\n> it out otherwise. Thoughts?\n\nHere is a new patch set with this change and some other light editing.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 7 Feb 2023 12:10:09 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On 07.02.23 21:10, Nathan Bossart wrote:\n>> \tERROR: permission denied to use replication slots\n>> \tDETAIL: You must have REPLICATION privilege.\n>>\n>> adding the operation to the end seems less awkward (i.e., \"You must have\n>> REPLICATION privilege to use replication slots.\"). I don't think there's\n>> any information lost by omitting the action in the DETAIL, so perhaps this\n>> is just a stylistic choice. I think I'm inclined to add the action to the\n>> DETAIL whenever it doesn't make the message lengthy and awkward, and leave\n>> it out otherwise. Thoughts?\n> Here is a new patch set with this change and some other light editing.\n\nI'm concerned about the loose use of \"privilege\" here. A privilege is \nsomething I can grant. 
So if someone doesn't have the "REPLICATION
privilege", as in the above example, I would expect to be able to do
"GRANT REPLICATION TO someuser". Since that is not what is happening,
we should use some other term. The documentation around CREATE USER
uses the terms "attribute" and "option" (and also "privilege") for these
things.

Similarly -- this is an existing issue but we might as well look at it
-- in something like

 must be superuser or a role with privileges of the
 pg_write_server_files role

the phrase "a role with the privileges of that other role" seems
ambiguous. Doesn't it really mean you must be a member of that role?

I also feel that in sentences like

 "You must have %s privilege to create roles."

a "the" is missing.



", "msg_date": "Mon, 20 Feb 2023 08:54:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Mon, Feb 20, 2023 at 08:54:48AM +0100, Peter Eisentraut wrote:\n> I'm concerned about the loose use of \"privilege\" here. A privilege is\n> something I can grant. So if someone doesn't have the \"REPLICATION\n> privilege\", as in the above example, I would expect to be able to do \"GRANT\n> REPLICATION TO someuser\". Since that is not what is happening, we should\n> use some other term. The documentation around CREATE USER uses the terms\n> \"attribute\" and \"option\" (and also \"privilege\") for these things.\n\nGood point. I will adjust these to use \"attribute\" instead.\n\n> Similarly -- this is an existing issue but we might as well look at it -- in\n> something like\n> \n> must be superuser or a role with privileges of the\n> pg_write_server_files role\n> \n> the phrase \"a role with the privileges of that other role\" seems ambiguous.\n> Doesn't it really mean you must be a member of that role?\n\nMembership alone is not sufficient.
You must also inherit the privileges
of the role via the INHERIT option. I thought about making this something
like

	must have the INHERIT option on role %s

but I'm not sure that's accurate either. That wording makes it sound like
you need to be granted membership to the role directly WITH INHERIT OPTION,
but what you really need is membership, direct or indirect, with an INHERIT
chain up to the role in question. However, it looks like "must have the
ADMIN option on role %s" is used to mean something similar, so perhaps I am
overthinking it.

-- 
Nathan Bossart
Amazon Web Services: https://aws.amazon.com


", "msg_date": "Mon, 20 Feb 2023 11:02:10 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Mon, Feb 20, 2023 at 11:02:10AM -0800, Nathan Bossart wrote:\n> On Mon, Feb 20, 2023 at 08:54:48AM +0100, Peter Eisentraut wrote:\n>> I'm concerned about the loose use of \"privilege\" here. A privilege is\n>> something I can grant. So if someone doesn't have the \"REPLICATION\n>> privilege\", as in the above example, I would expect to be able to do \"GRANT\n>> REPLICATION TO someuser\". Since that is not what is happening, we should\n>> use some other term. The documentation around CREATE USER uses the terms\n>> \"attribute\" and \"option\" (and also \"privilege\") for these things.\n> \n> Good point. I will adjust these to use \"attribute\" instead.\n\ndone in v6\n\n>> Similarly -- this is an existing issue but we might as well look at it -- in\n>> something like\n>> \n>> must be superuser or a role with privileges of the\n>> pg_write_server_files role\n>> \n>> the phrase \"a role with the privileges of that other role\" seems ambiguous.\n>> Doesn't it really mean you must be a member of that role?\n> \n> Membership alone is not sufficient. You must also inherit the privileges\n> of the role via the INHERIT option.
I thought about making this something\n> like\n> \n> \tmust have the INHERIT option on role %s\n> \n> but I'm not sure that's accurate either. That wording makes it sound like\n> you need to be granted membership to the role directly WITH INHERIT OPTION,\n> but what you really need is membership, direct or indirect, with an INHERIT\n> chain up to the role in question. However, it looks like \"must have the\n> ADMIN option on role %s\" is used to mean something similar, so perhaps I am\n> overthinking it.\n\nFor now, I've reworded these as \"must inherit privileges of\".\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 20 Feb 2023 14:58:52 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On 20.02.23 23:58, Nathan Bossart wrote:\n>>> Similarly -- this is an existing issue but we might as well look at it -- in\n>>> something like\n>>>\n>>> must be superuser or a role with privileges of the\n>>> pg_write_server_files role\n>>>\n>>> the phrase \"a role with the privileges of that other role\" seems ambiguous.\n>>> Doesn't it really mean you must be a member of that role?\n>>\n>> Membership alone is not sufficient. You must also inherit the privileges\n>> of the role via the INHERIT option. I thought about making this something\n>> like\n>>\n>> \tmust have the INHERIT option on role %s\n>>\n>> but I'm not sure that's accurate either. That wording makes it sound like\n>> you need to be granted membership to the role directly WITH INHERIT OPTION,\n>> but what you really need is membership, direct or indirect, with an INHERIT\n>> chain up to the role in question.
However, it looks like \"must have the\n>> ADMIN option on role %s\" is used to mean something similar, so perhaps I am\n>> overthinking it.\n> \n> For now, I've reworded these as \"must inherit privileges of\".\n\nI don't have a good mental model of all this role inheritance, \npersonally, but I fear that this change makes the messages more jargony \nand less clear. Maybe the original wording was good enough.\n\nA couple of other thoughts:\n\n\"admin option\" is sort of a natural language term, I think, so we don't \nneed to parametrize it as \"%s option\". Also, there are no other \n\"options\" in this context, I think.\n\nA general thought: It seems we currently don't have any error messages \nthat address the user like \"You must do this\". Do we want to go there? \nShould we try for a more impersonal wording like\n\n\"You must have the %s attribute to create roles.\"\n\n\"Current user must have the %s attribute to create roles.\"\n\n\"%s attribute is required to create roles.\"\n\nBy the way, I'm not sure what the separation between 0001 and 0002 is \nsupposed to be.\n\n\n\n", "msg_date": "Thu, 9 Mar 2023 10:55:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Mar 09, 2023 at 10:55:54AM +0100, Peter Eisentraut wrote:\n> On 20.02.23 23:58, Nathan Bossart wrote:\n>> For now, I've reworded these as \"must inherit privileges of\".\n> \n> I don't have a good mental model of all this role inheritance, personally,\n> but I fear that this change makes the messages more jargony and less clear.\n> Maybe the original wording was good enough.\n\nI'm fine with that.\n\n> \"admin option\" is sort of a natural language term, I think, so we don't need\n> to parametrize it as \"%s option\". Also, there are no other \"options\" in\n> this context, I think.\n\nv16 introduces the INHERIT and SET options. 
I don't have a strong opinion\nabout parameterizing it, though. My intent was to consistently capitalize\nall the attributes and options.\n\n> A general thought: It seems we currently don't have any error messages that\n> address the user like \"You must do this\". Do we want to go there? Should we\n> try for a more impersonal wording like\n> \n> \"You must have the %s attribute to create roles.\"\n> \n> \"Current user must have the %s attribute to create roles.\"\n> \n> \"%s attribute is required to create roles.\"\n\nI think I like the last option the most. In general, I agree with trying\nto avoid the second-person phrasing.\n\n> By the way, I'm not sure what the separation between 0001 and 0002 is\n> supposed to be.\n\nI'll combine them. I first started with user.c only, but we kept finding\nnew messages to improve.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 9 Mar 2023 09:58:46 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Mar 09, 2023 at 09:58:46AM -0800, Nathan Bossart wrote:\n> On Thu, Mar 09, 2023 at 10:55:54AM +0100, Peter Eisentraut wrote:\n>> On 20.02.23 23:58, Nathan Bossart wrote:\n>>> For now, I've reworded these as \"must inherit privileges of\".\n>> \n>> I don't have a good mental model of all this role inheritance, personally,\n>> but I fear that this change makes the messages more jargony and less clear.\n>> Maybe the original wording was good enough.\n> \n> I'm fine with that.\n\nI used the original wording in v7.\n\n>> \"admin option\" is sort of a natural language term, I think, so we don't need\n>> to parametrize it as \"%s option\". Also, there are no other \"options\" in\n>> this context, I think.\n> \n> v16 introduces the INHERIT and SET options. I don't have a strong opinion\n> about parameterizing it, though. 
My intent was to consistently capitalize\n> all the attributes and options.\n\nI didn't change this in v7, but I can do so if you still think it shouldn't\nbe parameterized.\n\n>> A general thought: It seems we currently don't have any error messages that\n>> address the user like \"You must do this\". Do we want to go there? Should we\n>> try for a more impersonal wording like\n>> \n>> \"You must have the %s attribute to create roles.\"\n>> \n>> \"Current user must have the %s attribute to create roles.\"\n>> \n>> \"%s attribute is required to create roles.\"\n> \n> I think I like the last option the most. In general, I agree with trying\n> to avoid the second-person phrasing.\n\nI ended up using the \"current user must have\" wording in a few places, and\nfor most others, I used \"only roles with X may do Y.\" That seemed to flow\nrelatively well, and IMO it made the required privileges abundantly clear.\nI initially was going to use the \"X attribute is required to Y\" wording,\nbut I was worried that didn't make it sufficiently clear that the _role_\nmust have the attribute. In any case, I'm not wedded to the approach I\nused in the patch and am willing to try out other wordings.\n\nBTW I did find one example of a \"you must\" message while I was updating the\npatch:\n\n write_stderr(\"%s does not know where to find the server configuration file.\\n\"\n \"You must specify the --config-file or -D invocation \"\n \"option or set the PGDATA environment variable.\\n\",\n progname);\n\nI don't think it's a common style, though.\n\n>> By the way, I'm not sure what the separation between 0001 and 0002 is\n>> supposed to be.\n> \n> I'll combine them.
I first started with user.c only, but we kept finding\n> new messages to improve.\n\nI combined the patches in v7.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 9 Mar 2023 16:03:13 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On 10.03.23 01:03, Nathan Bossart wrote:\n>>> By the way, I'm not sure what the separation between 0001 and 0002 is\n>>> supposed to be.\n>> I'll combine them. I first started with user.c only, but we kept finding\n>> new messages to improve.\n> I combined the patches in v7.\n\nI have committed two pieces that were not message changes separately.\n\n\nI think the following change in DropRole() is incorrect:\n\n if (!is_admin_of_role(GetUserId(), roleid))\n ereport(ERROR,\n (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n- errmsg(\"must have admin option on role \\\"%s\\\"\",\n- role)));\n+ errmsg(\"permission denied to drop role\"),\n+ errdetail(\"Only roles with the %s attribute and the \n%s option on role \\\"%s\\\" may drop this role.\",\n+ \"CREATEROLE\", \"ADMIN\", \nNameStr(roleform->rolname))));\n\nThe message does not reflect what check is actually performed.
(Perhaps \nthis was confused with a similar but not exactly the same check in \nRenameRole().)\n\nThat was the only \"factual\" error that I found.\n\n\nIn file_fdw_validator(), the option names \"filename\" and \"program\" could \nbe parameterized.\n\n\nIn DropOwnedObjects() and ReassignOwnedObjects(), I suggest the \nfollowing changes, for clarity:\n\n- errdetail(\"Only roles with privileges of role \\\"%s\\\" may drop its \nobjects.\",\n+ errdetail(\"Only roles with privileges of role \\\"%s\\\" may drop objects \nowned by it.\",\n\n- errdetail(\"Only roles with privileges of role \\\"%s\\\" may reassign its \nobjects.\",\n+ errdetail(\"Only roles with privileges of role \\\"%s\\\" may reassign \nobjects owned by it.\",\n\n\nThe rest looks okay to me.\n\n\n\n", "msg_date": "Thu, 16 Mar 2023 16:24:07 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Mar 16, 2023 at 04:24:07PM +0100, Peter Eisentraut wrote:\n> I have committed two pieces that were not message changes separately.\n\nThanks!\n\n> I think the following change in DropRole() is incorrect:\n> \n> if (!is_admin_of_role(GetUserId(), roleid))\n> ereport(ERROR,\n> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n> - errmsg(\"must have admin option on role \\\"%s\\\"\",\n> - role)));\n> + errmsg(\"permission denied to drop role\"),\n> + errdetail(\"Only roles with the %s attribute and the %s\n> option on role \\\"%s\\\" may drop this role.\",\n> + \"CREATEROLE\", \"ADMIN\",\n> NameStr(roleform->rolname))));\n> \n> The message does not reflect what check is actually performed. (Perhaps\n> this was confused with a similar but not exactly the same check in\n> RenameRole().)\n\nHm. Is your point that we should only mention the admin option here? 
I\nmentioned both createrole and admin option in this message (and the\ncreaterole check above this point) in an attempt to avoid giving partial\ninformation.\n\n> In file_fdw_validator(), the option names \"filename\" and \"program\" could be\n> parameterized.\n> \n> \n> In DropOwnedObjects() and ReassignOwnedObjects(), I suggest the following\n> changes, for clarity:\n> \n> - errdetail(\"Only roles with privileges of role \\\"%s\\\" may drop its\n> objects.\",\n> + errdetail(\"Only roles with privileges of role \\\"%s\\\" may drop objects\n> owned by it.\",\n> \n> - errdetail(\"Only roles with privileges of role \\\"%s\\\" may reassign its\n> objects.\",\n> + errdetail(\"Only roles with privileges of role \\\"%s\\\" may reassign objects\n> owned by it.\",\n\nWill do.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 Mar 2023 08:48:58 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On 16.03.23 16:48, Nathan Bossart wrote:\n>> I think the following change in DropRole() is incorrect:\n>>\n>> if (!is_admin_of_role(GetUserId(), roleid))\n>> ereport(ERROR,\n>> (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> - errmsg(\"must have admin option on role \\\"%s\\\"\",\n>> - role)));\n>> + errmsg(\"permission denied to drop role\"),\n>> + errdetail(\"Only roles with the %s attribute and the %s\n>> option on role \\\"%s\\\" may drop this role.\",\n>> + \"CREATEROLE\", \"ADMIN\",\n>> NameStr(roleform->rolname))));\n>>\n>> The message does not reflect what check is actually performed. (Perhaps\n>> this was confused with a similar but not exactly the same check in\n>> RenameRole().)\n> Hm. Is your point that we should only mention the admin option here? 
I\n> mentioned both createrole and admin option in this message (and the\n> createrole check above this point) in an attempt to avoid giving partial\n> information.\n\nAFAICT, the mention of CREATEROLE is incorrect, because the code doesn't \nactually check for the CREATEROLE attribute.\n\n\n\n", "msg_date": "Thu, 16 Mar 2023 16:59:53 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On Thu, Mar 16, 2023 at 04:59:53PM +0100, Peter Eisentraut wrote:\n> On 16.03.23 16:48, Nathan Bossart wrote:\n>> > I think the following change in DropRole() is incorrect:\n>> > \n>> > if (!is_admin_of_role(GetUserId(), roleid))\n>> > ereport(ERROR,\n>> > (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n>> > - errmsg(\"must have admin option on role \\\"%s\\\"\",\n>> > - role)));\n>> > + errmsg(\"permission denied to drop role\"),\n>> > + errdetail(\"Only roles with the %s attribute and the %s\n>> > option on role \\\"%s\\\" may drop this role.\",\n>> > + \"CREATEROLE\", \"ADMIN\",\n>> > NameStr(roleform->rolname))));\n>> > \n>> > The message does not reflect what check is actually performed. (Perhaps\n>> > this was confused with a similar but not exactly the same check in\n>> > RenameRole().)\n>> Hm. Is your point that we should only mention the admin option here? 
I\n>> mentioned both createrole and admin option in this message (and the\n>> createrole check above this point) in an attempt to avoid giving partial\n>> information.\n> \n> AFAICT, the mention of CREATEROLE is incorrect, because the code doesn't\n> actually check for the CREATEROLE attribute.\n\nThere is a createrole check at the top of DropRole():\n\n\t/*\n\t * DROP ROLE\n\t */\n\tvoid\n\tDropRole(DropRoleStmt *stmt)\n\t{\n\t\tRelation\tpg_authid_rel,\n\t\t\t\t\tpg_auth_members_rel;\n\t\tListCell *item;\n\t\tList\t *role_addresses = NIL;\n\t\n\t\tif (!have_createrole_privilege())\n\t\t\tereport(ERROR,\n\t\t\t\t\t(errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),\n\t\t\t\t\t errmsg(\"permission denied to drop role\")));\n\nGranted, no one will see the admin option error unless they at least have\ncreaterole, so we could leave it out, but my intent was to list the full\nset of privileges required to drop the role to avoid ambiguity.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 16 Mar 2023 09:27:49 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "Here is a rebased patch in which I've addressed the latest feedback except\nfor the DropRole() part that is under discussion.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 16 Mar 2023 16:47:01 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" }, { "msg_contents": "On 17.03.23 00:47, Nathan Bossart wrote:\n> Here is a rebased patch in which I've addressed the latest feedback except\n> for the DropRole() part that is under discussion.\n\ncommitted\n\n\n\n", "msg_date": "Fri, 17 Mar 2023 10:40:06 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" 
}, { "msg_contents": "On Fri, Mar 17, 2023 at 10:40:06AM +0100, Peter Eisentraut wrote:\n> committed\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 17 Mar 2023 11:10:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: improving user.c error messages" } ]
[ { "msg_contents": "Hi,\n\nI was trying to extract a commitable piece out of [1]. To be able to judge\nchanges in timing overhead more accurately, I thought it'd be sensible to\nupdate pg_test_timing to report nanoseconds instead of microseconds. Which\nlead to trying to update pg_test_timing's docs [2].\n\nThe \"Measuring Executor Timing Overhead\" section seems misleading:\n <para>\n The i7-860 system measured runs the count query in 9.8 ms while\n the <command>EXPLAIN ANALYZE</command> version takes 16.6 ms, each\n processing just over 100,000 rows. That 6.8 ms difference means the timing\n overhead per row is 68 ns, about twice what pg_test_timing estimated it\n would be. Even that relatively small amount of overhead is making the fully\n timed count statement take almost 70% longer. On more substantial queries,\n the timing overhead would be less problematic.\n </para>\n\nThe main reason for the ~2x discrepancy is that we do 2 timestamp calls for\neach invocation of an executor node, one in InstrStartNode(), one in\nInstrStopNode(). I think this should be clarified in the section?\n\n\nI also think we should update the section to compare\nEXPLAIN (ANALYZE, TIMING OFF) with\nEXPLAIN (ANALYZE, TIMING ON)\nrather than comparing the \"bare\" statement with EXPLAIN ANALYZE. 
There's\nplenty other overhead in EXPLAIN, even without TIMING ON.\n\nWith the instr_time-as-nanosec patches applied (I'll post a new version in a\nfew minutes), I get the following:\n\npg_test_timing:\nPer loop time including overhead: 13.97 ns\nHistogram of timing durations:\n < ns % of total count\n 16 97.48221 209400569\n 32 2.51201 5396022\n 64 0.00477 10246\n 128 0.00030 640\n 256 0.00005 117\n 512 0.00000 0\n 1024 0.00000 3\n 2048 0.00034 729\n 4096 0.00001 14\n 8192 0.00000 8\n 16384 0.00015 320\n 32768 0.00014 303\n 65536 0.00001 26\n 131072 0.00000 0\n 262144 0.00000 1\n\npsql -Xc 'DROP TABLE IF EXISTS t; CREATE TABLE t AS SELECT * FROM generate_series(1, 100000) g(i);' && pgbench -n -r -t 100 -f <(echo -e \"SELECT COUNT(*) FROM t;EXPLAIN (ANALYZE, TIMING OFF) SELECT COUNT(*) FROM t;EXPLAIN (ANALYZE, TIMING ON) SELECT COUNT(*) FROM t;\") |grep '^ '\nDROP TABLE\nSELECT 100000\n 3.431 0 SELECT COUNT(*) FROM t;\n 3.888 0 EXPLAIN (ANALYZE, TIMING OFF) SELECT COUNT(*) FROM t;\n 6.671 0 EXPLAIN (ANALYZE, TIMING ON) SELECT COUNT(*) FROM t;\n\nPgbench reports about 11% lost just from TIMING OFF ANALYZE, and a further 45%\nfrom TIMING ON. The per-row overhead, compared between TIMING ON/OFF:\n\n((6.187ms - 3.423 ms) * 1000000)/(100000 * 2) = 13.82ns\n\nwhich is within the run-to-run variance of the pg_test_timing result.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/20230116023639.rn36vf6ajqmfciua%40awork3.anarazel.de\n[2] https://www.postgresql.org/docs/current/pgtesttiming.html#id-1.9.5.11.7.3\n\n\n", "msg_date": "Mon, 16 Jan 2023 13:39:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "\"Measuring timing overhead\" in docs seems misleading" } ]
[ { "msg_contents": "Hi,\n\nWhile working on some logical replication patch,\nI've find a typo on HEAD.\nAttached the modification patch for this.\n\n\nBest Regards,\n\tTakamichi Osumi", "msg_date": "Tue, 17 Jan 2023 03:00:22 +0000", "msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "typo in the subscription command tests" }, { "msg_contents": "On Tue, Jan 17, 2023 at 8:30 AM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> While working on some logical replication patch,\n> I've find a typo on HEAD.\n> Attached the modification patch for this.\n>\n\nLGTM.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 17 Jan 2023 09:01:02 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: typo in the subscription command tests" } ]
[ { "msg_contents": "I was wondering why ExecCrossPartitionUpdateForeignKey() has an unused\nargument \"oldslot\" and wanted to suggest its removal. However, before I did,\nit occurred to me that callers may want to pass the whole slot when the\npartition is a foreign table, i.e. when the \"tupleid\" argument cannot be\nused. (In that case the problem would be that the function implementation is\nincomplete.)\n\nHowever, when checking how cross-partition UPDATE works internally for foreign\ntables, I saw surprising behavior. The attached script creates partitioned\ntable \"a\" with foreign table partitions \"a1\" and \"a2\". If you then run the\nfollowing commands\n\nINSERT INTO a VALUES (1), (10);\nUPDATE a SET i=11 WHERE i=1;\nTABLE a1;\n\nyou'll see that the tuples are correctly routed into the partitions, but the\nUPDATE is simply executed on the \"a1\" partition. Instead, I'd expect it to\ndelete the tuple from \"a1\" and insert it into \"a2\". That looks like a bug.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\nCREATE EXTENSION IF NOT EXISTS postgres_fdw WITH SCHEMA public;\n\nCREATE SERVER s1 FOREIGN DATA WRAPPER postgres_fdw OPTIONS (\n dbname 'postgres',\n host 'localhost',\n port '5432'\n);\n\nCREATE USER MAPPING FOR CURRENT_ROLE SERVER s1;\n\nCREATE TABLE public.a (\n i integer NOT NULL\n)\nPARTITION BY RANGE (i);\n\n\nCREATE TABLE public.a1 (\n i integer NOT NULL\n);\n\nCREATE FOREIGN TABLE public.a1_loc (\n i integer NOT NULL\n)\nSERVER s1\nOPTIONS (\n table_name 'a1'\n);\n\nCREATE TABLE public.a2 (\n i integer NOT NULL\n);\n\nCREATE FOREIGN TABLE public.a2_loc (\n i integer NOT NULL\n)\nSERVER s1\nOPTIONS (\n table_name 'a2'\n);\n\nALTER TABLE ONLY public.a ATTACH PARTITION public.a1_loc FOR VALUES FROM (0) TO (10);\nALTER TABLE ONLY public.a ATTACH PARTITION public.a2_loc FOR VALUES FROM (10) TO (20);\n\nALTER TABLE ONLY public.a1\n ADD CONSTRAINT a1_pkey PRIMARY KEY (i);\n\nALTER TABLE ONLY public.a2\n ADD CONSTRAINT a2_pkey
PRIMARY KEY (i);", "msg_date": "Tue, 17 Jan 2023 10:30:38 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Cross-partition UPDATE and foreign table partitions" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> I was wondering why ExecCrossPartitionUpdateForeignKey() has an unused\n> argument \"oldslot\" and wanted to suggest its removal. However, before I did,\n> it occurred to me that callers may want to pass the whole slot when the\n> partition is a foreign table, i.e. when the \"tupleid\" argument cannot be\n> used. (In that case the problem would be that the function implementation is\n> incomplete.)\n> \n> However, when checking how cross-partition UPDATE works internally for foreign\n> tables, I saw surprising behavior. The attached script creates partitioned\n> table \"a\" with foreign table partitions \"a1\" and \"a2\". If you then run the\n> following commands\n> \n> INSERT INTO a VALUES (1), (10);\n> UPDATE a SET i=11 WHERE i=1;\n> TABLE a1;\n> \n> you'll see that the tuples are correctly routed into the partitions, but the\n> UPDATE is simply executed on the \"a1\" partition. Instead, I'd expect it to\n> delete the tuple from \"a1\" and insert it into \"a2\". That looks like a bug.\n\nWell, as it usually happens, I found a related information as soon as I had\nsent a report. The documentation of CREATE FOREIGN TABLE says:\n\n\"However it is not currently possible to move a row from a foreign-table\npartition to another partition. An UPDATE that would require doing that will\nfail due to the partitioning constraint, assuming that that is properly\nenforced by the remote server.\"\n\nSo the remaining question is whether the \"oldslot\" argument of\nExecCrossPartitionUpdateForeignKey() will be used in the future or should be\nremoved. Note that the ExecUpdateAct() passes its \"slot\" variable for it,\nwhich seems to contain the *new* version of the tuple rather than the\nold. 
Some cleanup may be needed.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 17 Jan 2023 10:48:36 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Cross-partition UPDATE and foreign table partitions" } ]
[ { "msg_contents": "Hi,\n\nI noticed that commit 5212d447fa updated some comments in multixact.c because\nSLRU truncation for multixacts is performed during VACUUM, instead of\ncheckpoint. Should the following comments which mentioned checkpointer be\nchanged, too?\n\n1.\n* we compute it (using nextMXact if none are valid). Each backend is\n* required not to attempt to access any SLRU data for MultiXactIds older\n* than its own OldestVisibleMXactId[] setting; this is necessary because\n* the checkpointer could truncate away such data at any instant.\n\n2.\n * We set the OldestVisibleMXactId for a given transaction the first time\n * it's going to inspect any MultiXactId. Once we have set this, we are\n * guaranteed that the checkpointer won't truncate off SLRU data for\n * MultiXactIds at or after our OldestVisibleMXactId.\n\nRegards,\nShi yu\n\n\n", "msg_date": "Tue, 17 Jan 2023 09:33:18 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "Update comments in multixact.c" }, { "msg_contents": "On Tue, Jan 17, 2023 at 1:33 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n> I noticed that commit 5212d447fa updated some comments in multixact.c because\n> SLRU truncation for multixacts is performed during VACUUM, instead of\n> checkpoint. Should the following comments which mentioned checkpointer be\n> changed, too?\n\nYes, I think so.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 17 Jan 2023 14:03:54 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Update comments in multixact.c" }, { "msg_contents": "On Wed, Jan 18, 2023 6:04 AM Peter Geoghegan <pg@bowt.ie> wrote:\r\n> \r\n> On Tue, Jan 17, 2023 at 1:33 AM shiy.fnst@fujitsu.com\r\n> <shiy.fnst@fujitsu.com> wrote:\r\n> > I noticed that commit 5212d447fa updated some comments in multixact.c\r\n> because\r\n> > SLRU truncation for multixacts is performed during VACUUM, instead of\r\n> > checkpoint. 
Should the following comments which mentioned checkpointer be\r\n> > changed, too?\r\n> \r\n> Yes, I think so.\r\n\r\nThanks for your reply.\r\n\r\nAttach a patch which fixed them.\r\n\r\nRegards,\r\nShi yu", "msg_date": "Wed, 18 Jan 2023 10:02:28 +0000", "msg_from": "\"shiy.fnst@fujitsu.com\" <shiy.fnst@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Update comments in multixact.c" }, { "msg_contents": "On Wed, Jan 18, 2023 at 2:02 AM shiy.fnst@fujitsu.com\n<shiy.fnst@fujitsu.com> wrote:\n> Thanks for your reply.\n>\n> Attach a patch which fixed them.\n\nPushed something close to that just now. I decided that it was better\nto not specify when truncation happened in these two places at all,\nthough. The important detail is that it can happen if certain rules\nare not followed.\n\nThanks\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 24 Jan 2023 15:18:10 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: Update comments in multixact.c" } ]
[ { "msg_contents": "Hi hackers,\n\nplease find attached a patch proposal to define $SUBJECT.\n\nThe idea has been raised in [1], where we are adding more calls to wait_for_catchup() in 'replay' mode.\n\nThe current code already has 25 of those, so the proposed patch is defining a new wait_for_replay_catchup() function.\n\nWhile at it, adding also:\n\n- wait_for_write_catchup(): called 5 times\n- wait_for_sent_catchup() and wait_for_flush_catchup() for consistency purpose (while there is\ncurrently no occurrences of wait_for_catchup() in 'sent' or 'flush' mode.).\n\nLooking forward to your feedback,\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n[1]: https://www.postgresql.org/message-id/20230106034036.2m4qnn7ep7b5ipet%40awork3.anarazel.de", "msg_date": "Tue, 17 Jan 2023 11:48:19 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "On 2023-Jan-17, Drouvot, Bertrand wrote:\n\n> The idea has been raised in [1], where we are adding more calls to\n> wait_for_catchup() in 'replay' mode.\n\nThis seems mostly useless as presented. Maybe if you're able to reduce\nthe noise on the second argument it would be worth something -- namely,\nif the wrapper function receives a node instead of an LSN: perhaps\nwait_for_replay_catchup() would use the flush LSN from the given node,\nwait_for_write_catchup() would use the write LSN, and\nwait_for_sent_catchup() would use the insert LSN. (I didn't check in\nyour patch if there are callsites that do something else). This would\nin several cases let you also remove the line with the assignment of\nappropriate LSN to a separate variable. 
If you did it that way, maybe\nthe code would become a tiny bit smaller overall.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 17 Jan 2023 12:23:23 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "Hi,\n\nOn 1/17/23 12:23 PM, Alvaro Herrera wrote:\n> On 2023-Jan-17, Drouvot, Bertrand wrote:\n> \n>> The idea has been raised in [1], where we are adding more calls to\n>> wait_for_catchup() in 'replay' mode.\n> \n> This seems mostly useless as presented. Maybe if you're able to reduce\n> the noise on the second argument it would be worth something -- namely,\n> if the wrapper function receives a node instead of an LSN: perhaps\n> wait_for_replay_catchup() would use the flush LSN from the given node,\n> wait_for_write_catchup() would use the write LSN, and\n> wait_for_sent_catchup() would use the insert LSN. (I didn't check in\n> your patch if there are callsites that do something else). This would\n> in several cases let you also remove the line with the assignment of\n> appropriate LSN to a separate variable. If you did it that way, maybe\n> the code would become a tiny bit smaller overall.\n> \n\nThanks for looking at it!\n\nThe current calls are done that way:\n\nwait_for_replay_catchup called:\n- 8 times with write LSN as an argument\n- 1 time with insert LSN as an argument\n- 16 times with flush LSN as an argument\n\nwait_for_write_catchup called:\n- 5 times with write LSN as an argument\n\nSo it looks like that providing a node as a second argument\nwould not help for the wait_for_replay_catchup() case.\n\nWorth to use the node as an argument for wait_for_write_catchup()? 
(though it would be\nweird to have different types of arguments between wait_for_replay_catchup() and wait_for_write_catchup()).\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 18 Jan 2023 08:54:43 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "On 2023-Jan-18, Drouvot, Bertrand wrote:\n\n> The current calls are done that way:\n> \n> wait_for_replay_catchup called:\n> - 8 times with write LSN as an argument\n> - 1 time with insert LSN as an argument\n> - 16 times with flush LSN as an argument\n\n> So it looks like that providing a node as a second argument\n> would not help for the wait_for_replay_catchup() case.\n\n... unless we changed the calls that wait for reply that use write or\ninsert so that they use flush instead. Surely everything should still\nwork, right? 
Flushing would still occur, either right after the write\n(as the transaction commits) or ~200ms afterwards when WAL writer\ncatches up to that point.\n\nI suppose this may fail to be true if there is some test that is\nspecifically testing whether writing WAL without flushing works, which\nshould rare enough, but if it does exist, in that one place we can use\nthe underlying wait_for_catchup().\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/", "msg_date": "Wed, 18 Jan 2023 10:59:11 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "Hi,\n\nOn 1/18/23 10:59 AM, Alvaro Herrera wrote:\n> On 2023-Jan-18, Drouvot, Bertrand wrote:\n> \n>> The current calls are done that way:\n>>\n>> wait_for_replay_catchup called:\n>> - 8 times with write LSN as an argument\n>> - 1 time with insert LSN as an argument\n>> - 16 times with flush LSN as an argument\n> \n>> So it looks like that providing a node as a second argument\n>> would not help for the wait_for_replay_catchup() case.\n> \n> ... unless we changed the calls that wait for reply that use write or\n> insert so that they use flush instead. \n\nThat's a good idea, thanks! Please find attached V2 doing so.\n\n> Surely everything should still\n> work, right?
\n\nRight.\n\n> Flushing would still occur, either right after the write\n> (as the transaction commits) or ~200ms afterwards when WAL writer\n> catches up to that point.\n> \n> I suppose this may fail to be true if there is some test that is\n> specifically testing whether writing WAL without flushing works, which\n> should rare enough, but if it does exist, \n\nI don't see this kind of test.\n\nPlease note that V2 does not contain wait_for_flush_catchup() and\nwait_for_sent_catchup() anymore as: 1) they are not used yet\nand 2) it lets to their author (if any) decide the node->lsn() mode to be used.\n\nThis is also mentioned as a comment in V2.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 19 Jan 2023 09:10:59 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "Looking again, I have two thoughts for making things easier:\n\n1. I don't think wait_for_write_catchup is necessary, because\ncalling wait_for_catchup() and omitting the 'mode' and 'lsn' arguments\nwould already do the same thing. So what we should do is patch places\nthat currently give those two arguments, so that they don't.\n\n2. Because wait_for_replay_catchup is an instance method, passing the\nsecond node as argument is needlessly noisy, because that's already\nknown as $self. 
So we can just say\n\n $primary_node->wait_for_replay_catchup($standby_node);\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Tue, 24 Jan 2023 19:27:28 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "Hi,\n\nOn 1/24/23 7:27 PM, Alvaro Herrera wrote:\n> Looking again, I have two thoughts for making things easier:\n> \n> 1. I don't think wait_for_write_catchup is necessary, because\n> calling wait_for_catchup() and omitting the 'mode' and 'lsn' arguments\n> would already do the same thing. So what we should do is patch places\n> that currently give those two arguments, so that they don't.\n> \n> 2. Because wait_for_replay_catchup is an instance method, passing the\n> second node as argument is needlessly noisy, because that's already\n> known as $self.
So we can just say\n> \n> $primary_node->wait_for_replay_catchup($standby_node);\n> \n\nYeah, but same here, there is places where the node passed as the second argument is not the \"$self\":\n\nsrc/bin/pg_rewind/t/007_standby_source.pl:$node_b->wait_for_replay_catchup('node_c', $node_a);\nsrc/test/recovery/t/001_stream_rep.pl:$node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\nsrc/test/recovery/t/001_stream_rep.pl:$node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\nsrc/test/recovery/t/001_stream_rep.pl: $node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\n\nSo it looks like there is still a need for wait_for_replay_catchup() with 2 parameters.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 26 Jan 2023 10:33:53 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "On 2023-Jan-26, Drouvot, Bertrand wrote:\n\n> On 1/24/23 7:27 PM, Alvaro Herrera wrote:\n\n> > 1. I don't think wait_for_write_catchup is necessary, because\n> > calling wait_for_catchup() and omitting the 'mode' and 'lsn' arguments\n> > would already do the same thing. So what we should do is patch places\n> > that currently give those two arguments, so that they don't.\n> \n> Agree but there is one place where the node passed as the second argument is not the \"$self\":\n> \n> src/bin/pg_rewind/t/007_standby_source.pl:$node_b->wait_for_write_catchup('node_c', $node_a);\n> \n> So it looks like there is still a need for wait_for_write_catchup().\n\nHmm, I think that one can use the more general wait_for_catchup.\n\n\n> > 2. Because wait_for_replay_catchup is an instance method, passing the\n> > second node as argument is needlessly noisy, because that's already\n> > known as $self. 
So we can just say\n> > \n> > $primary_node->wait_for_replay_catchup($standby_node);\n> \n> Yeah, but same here, there is places where the node passed as the second argument is not the \"$self\":\n> \n> src/bin/pg_rewind/t/007_standby_source.pl:$node_b->wait_for_replay_catchup('node_c', $node_a);\n> src/test/recovery/t/001_stream_rep.pl:$node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\n> src/test/recovery/t/001_stream_rep.pl:$node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\n> src/test/recovery/t/001_stream_rep.pl: $node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\n> \n> So it looks like there is still a need for wait_for_replay_catchup() with 2 parameters.\n\nAh, cascading replication. In that case, let's make the second\nparameter optional. If it's not given, $self is used.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"En las profundidades de nuestro inconsciente hay una obsesiva necesidad\nde un universo lógico y coherente. Pero el universo real se halla siempre\nun paso más allá de la lógica\" (Irulan)\n\n\n", "msg_date": "Thu, 26 Jan 2023 10:42:41 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "Hi,\n\nOn 1/26/23 10:42 AM, Alvaro Herrera wrote:\n> On 2023-Jan-26, Drouvot, Bertrand wrote:\n> \n>> On 1/24/23 7:27 PM, Alvaro Herrera wrote:\n> \n>>> 1. I don't think wait_for_write_catchup is necessary, because\n>>> calling wait_for_catchup() and omitting the 'mode' and 'lsn' arguments\n>>> would already do the same thing. \n\nHaving a closer look, it does not seem to be the case. The default mode\nin wait_for_catchup() is 'replay' and the default mode for the lsn is 'write'.\n\nBut in wait_for_write_catchup() we are making use of 'write' for both.\n\n> \n>>> 2. 
Because wait_for_replay_catchup is an instance method, passing the\n>>> second node as argument is needlessly noisy, because that's already\n>>> known as $self. So we can just say\n>>>\n>>> $primary_node->wait_for_replay_catchup($standby_node);\n>>\n>> Yeah, but same here, there is places where the node passed as the second argument is not the \"$self\":\n>>\n>> src/bin/pg_rewind/t/007_standby_source.pl:$node_b->wait_for_replay_catchup('node_c', $node_a);\n>> src/test/recovery/t/001_stream_rep.pl:$node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\n>> src/test/recovery/t/001_stream_rep.pl:$node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\n>> src/test/recovery/t/001_stream_rep.pl: $node_standby_1->wait_for_replay_catchup($node_standby_2, $node_primary);\n>>\n>> So it looks like there is still a need for wait_for_replay_catchup() with 2 parameters.\n> \n> Ah, cascading replication. In that case, let's make the second\n> parameter optional. If it's not given, $self is used.\n> \n\nGood point, done in V3 attached.\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Thu, 26 Jan 2023 20:43:25 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "On 2023-Jan-26, Drouvot, Bertrand wrote:\n\n> Hi,\n> \n> On 1/26/23 10:42 AM, Alvaro Herrera wrote:\n> > On 2023-Jan-26, Drouvot, Bertrand wrote:\n> > \n> > > On 1/24/23 7:27 PM, Alvaro Herrera wrote:\n> > \n> > > > 1. I don't think wait_for_write_catchup is necessary, because\n> > > > calling wait_for_catchup() and omitting the 'mode' and 'lsn' arguments\n> > > > would already do the same thing.\n> \n> Having a closer look, it does not seem to be the case. 
The default mode\n> in wait_for_catchup() is 'replay' and the default mode for the lsn is 'write'.\n> \n> But in wait_for_write_catchup() we are making use of 'write' for both.\n\nBut that turns\n $node->wait_for_catchup('foobar', 'write')\ninto\n $node->wait_for_write_catchup('foobar');\nso I don't see much value in it. Also, the patch series from which this\npatch spawned in the first place doesn't wait for write AFAICS.\n\nAfter adding some more POD docs for it, I pushed the one for replay.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n¡Ay, ay, ay! Con lo mucho que yo lo quería (bis)\nse fue de mi vera ... se fue para siempre, pa toíta ... pa toíta la vida\n¡Ay Camarón! ¡Ay Camarón! (Paco de Lucía)\n\n\n", "msg_date": "Mon, 13 Feb 2023 11:58:02 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" }, { "msg_contents": "Hi,\n\nOn 2/13/23 11:58 AM, Alvaro Herrera wrote:\n> On 2023-Jan-26, Drouvot, Bertrand wrote:\n> \n>> Hi,\n>>\n>> On 1/26/23 10:42 AM, Alvaro Herrera wrote:\n>>> On 2023-Jan-26, Drouvot, Bertrand wrote:\n>>>\n>>>> On 1/24/23 7:27 PM, Alvaro Herrera wrote:\n>>>\n>>>>> 1. I don't think wait_for_write_catchup is necessary, because\n>>>>> calling wait_for_catchup() and omitting the 'mode' and 'lsn' arguments\n>>>>> would already do the same thing.\n>>\n>> Having a closer look, it does not seem to be the case. 
The default mode\n>> in wait_for_catchup() is 'replay' and the default mode for the lsn is 'write'.\n>>\n>> But in wait_for_write_catchup() we are making use of 'write' for both.\n> \n> But that turns\n> $node->wait_for_catchup('foobar', 'write')\n> into\n> $node->wait_for_write_catchup('foobar');\n> so I don't see much value in it.\n\nAgree.\n\n> Also, the patch series from which this\n> patch spawned in the first place doesn't wait for write AFAICS.\n> \n\nRight, it does wait for replay only.\n\n> After adding some more POD docs for it, I pushed the one for replay.\n> \n\nThanks!\n\nRegards,\n\n-- \nBertrand Drouvot\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 13 Feb 2023 16:07:25 +0100", "msg_from": "\"Drouvot, Bertrand\" <bertranddrouvot.pg@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Helper functions for wait_for_catchup() in Cluster.pm" } ]
[ { "msg_contents": "Hi hackers,\n\nThis is a follow-up to [1] and c8ad4d81.\n\n> Additionally Bharath pointed out that there are other pieces of code\n> that we may want to change in a similar fashion,\n> proclist.h/proclist_types.h as one example. I didn't do this yet\n> because I would like to know the community opinion first on whether we\n> should do this at all.\n\nSince the consensus seems to be to constify everything possible here\nis the patch for proclist.h. There is nothing to change in\nproclist_types.h.\n\n[1]: https://postgr.es/m/CAJ7c6TM2%3D08mNKD9aJg8vEY9hd%2BG4L7%2BNvh30UiNT3kShgRgNg%40mail.gmail.com\n\n-- \nBest regards,\nAleksander Alekseev", "msg_date": "Tue, 17 Jan 2023 15:18:06 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": true, "msg_subject": "[PATCH] Constify proclist.h" }, { "msg_contents": "On 17.01.23 13:18, Aleksander Alekseev wrote:\n> This is a follow-up to [1] and c8ad4d81.\n> \n>> Additionally Bharath pointed out that there are other pieces of code\n>> that we may want to change in a similar fashion,\n>> proclist.h/proclist_types.h as one example. I didn't do this yet\n>> because I would like to know the community opinion first on whether we\n>> should do this at all.\n> \n> Since the consensus seems to be to constify everything possible here\n> is the patch for proclist.h. There is nothing to change in\n> proclist_types.h.\n> \n> [1]: https://postgr.es/m/CAJ7c6TM2%3D08mNKD9aJg8vEY9hd%2BG4L7%2BNvh30UiNT3kShgRgNg%40mail.gmail.com\n\ncommitted\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 09:52:57 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Constify proclist.h" } ]
[ { "msg_contents": "Greetings -hackers,\n\nOur beloved Google Summer of Code is back for 2023, with a format \nsimilar to 2022: both medium and large sized projects can be proposed, \nwith more flexibility on end dates. The program will be open to students \nand open source beginners, as stated in this blog post: \nhttps://opensource.googleblog.com/2022/11/get-ready-for-google-summer-of-code-2023.html\n\nNow is the time to work on getting together a set of projects we'd like \nto have GSoC students work on over the summer. Similar to last year, we \nneed to have a good set of projects for students to choose from in \nadvance of the deadline for mentoring organizations.\n\nHowever, as noted in the blog post above, project length expectations \nmay vary. Please decide accordingly based on your requirements and \navailability! Also, there is going to be only one intermediate \nevaluation, similarly to last year.\n\nGSoC timeline: https://developers.google.com/open-source/gsoc/timeline\n\nThe deadline for Mentoring organizations to apply is: February 7. The \nlist of accepted organization will be published around February 22.\n\nUnsurprisingly, we'll need to have an Ideas page again, so I've gone \nahead and created one (copying last year's):\nhttps://wiki.postgresql.org/wiki/GSoC_2023\n\nGoogle discusses what makes a good \"Ideas\" list here:\nhttps://google.github.io/gsocguides/mentor/defining-a-project-ideas-list.html\n\nAll the entries are marked with '2022' to indicate they were pulled from \nlast year. If the project from last year is still relevant, please \nupdate it to be '2023' and make sure to update all of the information \n(in particular, make sure to list yourself as a mentor and remove the \nother mentors, as appropriate). Please also be sure to update the \nproject's scope to be appropriate for the new guidelines.\n\nNew entries are certainly welcome and encouraged, just be sure to note \nthem as '2023' when you add them. 
Projects from last year which were \nworked on but have significant follow-on work to be completed are \nabsolutely welcome as well - simply update the description appropriately \nand mark it as being for '2023'.\n\nWhen we get closer to actually submitting our application, I'll clean \nout the '2022' entries that didn't get any updates. Also - if there are \nany projects that are no longer appropriate (maybe they were completed, \nfor example and no longer need work), please feel free to remove them. \nThe page is still work in progress, so it's entirely possible I missed \nsome updates where a GSoC project was completed independently of GSoC \n(and if I removed any that shouldn't have been - feel free to add them \nback by copying from the 2022 page).\n\nAs a reminder, each idea on the page should be in the format that the \nother entries are in and should include:\n- Project title/one-line description\n- Brief, 2-5 sentence, description of the project\n- Description of programming skills needed and estimation of the \ndifficulty level\n- Project size\n- List of potential mentors\n- Expected Outcomes\n\nAs with last year, please consider PostgreSQL to be an \"Umbrella\" \nproject and that anything which would be considered \"PostgreSQL Family\" \nper the News/Announce policy [1] is likely to be acceptable as a \nPostgreSQL GSoC project.\n\nIn other words, if you're a contributor or developer on WAL-G, barman, \npgBackRest, the PostgreSQL website (pgweb), the PgEU/PgUS website code \n(pgeu-system), pgAdmin4, pgbouncer, pldebugger, the PG RPMs (pgrpms), \nthe JDBC driver, the ODBC driver, or any of the many other PG Family \nprojects, please feel free to add a project for consideration! 
If we get \nquite a few, we can organize the page further based on which project or \nmaybe what skills are needed or similar.\n\nLet's have another great year of GSoC with PostgreSQL!\n\nThanks!\n\nIlaria & Stephen\n\n[1]: https://www.postgresql.org/about/policies/news-and-events/\n\n\n\n", "msg_date": "Tue, 17 Jan 2023 14:59:48 +0100", "msg_from": "Ilaria Battiston <ilaria.battiston@gmail.com>", "msg_from_op": true, "msg_subject": "GSoC 2023" } ]
[ { "msg_contents": "I'm trying to better understand the following barging behaviour with SHARED\nlocks.\n\n*Setup:*\n\npostgres=# create table t(a INT);\nCREATE TABLE\npostgres=# INSERT INTO t VALUES(1);\nINSERT 0 1\n\nThen, performing the following operations in 3 different sessions, in\norder, we observe:\n\nSession 1 Session 2 Session 3\nBEGIN;\nBEGIN\npostgres=*# SELECT * FROM t WHERE a = 1 FOR SHARE;\n a\n---\n 1\n(1 row)\npostgres=# BEGIN;\nBEGIN\npostgres=*# SELECT * FROM t WHERE a = 1 FOR UPDATE;\n\n* --- waits\nBEGIN;\nBEGIN\npostgres=*# SELECT * FROM t WHERE a = 1 FOR SHARE;\n a\n---\n 1\n(1 row)\n\n* -- returns immediately\n\nGiven there is a transaction waiting to acquire a FOR UPDATE lock, I was\nsurprised to see the second FOR SHARE transaction return immediately\ninstead of waiting. I have two questions:\n\n1) Could this barging behaviour potentially starve out the transaction\nwaiting to acquire the FOR UPDATE lock, if there is a continuous queue of\ntransactions that acquire a FOR SHARE lock briefly?\n2) Assuming this is by design, I couldn't find (in code) where this\nexplicit policy choice is made. I was looking around LockAcquireExtended, but\nit seems like the decision is made above this layer. Could someone more\nfamiliar with this code point me at the right place?\n\nThanks", "msg_date": "Tue, 17 Jan 2023 12:18:28 -0500", "msg_from": "Arul Ajmani <arula@cockroachlabs.com>", "msg_from_op": true, "msg_subject": "SHARED locks barging behaviour" }, { "msg_contents": "On Tue, Jan 17, 2023 at 12:18:28PM -0500, Arul Ajmani wrote:\n> I'm trying to better understand the following barging behaviour with SHARED\n> locks.\n...\n> Given there is a transaction waiting to acquire a FOR UPDATE lock, I was\n> surprised to see the second FOR SHARE transaction return immediately instead of\n> waiting.
Could someone more\nfamiliar with this code point me at the right place?\n\nThanks\n\nI'm trying to better understand the following barging behaviour with SHARED locks.Setup: postgres=# create table t(a INT);CREATE TABLEpostgres=# INSERT INTO t VALUES(1);INSERT 0 1Then, performing the following operations in 3 different sessions, in order, we observe:Session 1Session 2Session 3BEGIN;BEGINpostgres=*# SELECT * FROM t WHERE a = 1 FOR SHARE; a--- 1(1 row)postgres=# BEGIN;BEGINpostgres=*# SELECT * FROM t WHERE a = 1 FOR UPDATE;* --- waitsBEGIN;BEGINpostgres=*# SELECT * FROM t WHERE a = 1 FOR SHARE; a--- 1(1 row)* -- returns immediately Given there is a transaction waiting to acquire a FOR UPDATE lock, I was surprised to see the second FOR SHARE transaction return immediately instead of waiting. I have two questions:1) Could this barging behaviour potentially starve out the transaction waiting to acquire the FOR UPDATE lock, if there is a continuous queue of transactions that acquire a FOR SHARE lock briefly?2) Assuming this is by design, I couldn't find (in code) where this explicit policy choice is made. I was looking around LockAcquireExtended, but it seems like the decision is made above this layer. Could someone more familiar with this code point me at the right place? Thanks", "msg_date": "Tue, 17 Jan 2023 12:18:28 -0500", "msg_from": "Arul Ajmani <arula@cockroachlabs.com>", "msg_from_op": true, "msg_subject": "SHARED locks barging behaviour" }, { "msg_contents": "On Tue, Jan 17, 2023 at 12:18:28PM -0500, Arul Ajmani wrote:\n> I'm trying to better understand the following barging behaviour with SHARED\n> locks.\n...\n> Given there is a transaction waiting to acquire a FOR UPDATE lock, I was\n> surprised to see the second FOR SHARE transaction return immediately instead of\n> waiting. 
I have two questions:\n> \n> 1) Could this barging behaviour potentially starve out the transaction waiting\n> to acquire the FOR UPDATE lock, if there is a continuous queue of transactions\n> that acquire a FOR SHARE lock briefly?\n\nYes, see below.\n\n> 2) Assuming this is by design, I couldn't find (in code) where this explicit\n> policy choice is made. I was looking around LockAcquireExtended, but it seems\n> like the decision is made above this layer. Could someone more familiar with\n> this code point me at the right place? \n\nI know this from January, but I do have an answer. First, looking at\nparser/gram.y, I see:\n\n | FOR SHARE { $$ = LCS_FORSHARE; }\n\nLooking for LCS_FORSHARE, I see in optimizer/plan/planner.c:\n\n case LCS_FORSHARE:\n return ROW_MARK_SHARE;\n\nLooking for ROW_MARK_SHARE, I see in executor/nodeLockRows.c:\n\n case ROW_MARK_SHARE:\n lockmode = LockTupleShare;\n\nLooking for LockTupleShare, I see in access/heap/heapam.c:\n\n else if (mode == LockTupleShare)\n {\n /*\n * If we're requesting Share, we can similarly avoid sleeping if\n * there's no update and no exclusive lock present.\n */\n if (HEAP_XMAX_IS_LOCKED_ONLY(infomask) &&\n !HEAP_XMAX_IS_EXCL_LOCKED(infomask))\n {\n LockBuffer(*buffer, BUFFER_LOCK_EXCLUSIVE);\n\n /*\n * Make sure it's still an appropriate lock, else start over.\n * See above about allowing xmax to change.\n */\n if (!HEAP_XMAX_IS_LOCKED_ONLY(tuple->t_data->t_infomask) ||\n HEAP_XMAX_IS_EXCL_LOCKED(tuple->t_data->t_infomask))\n goto l3;\n require_sleep = false;\n }\n }\n\nand this is basically saying that if the row is locked\n(HEAP_XMAX_IS_LOCKED_ONLY), but not exclusively locked\n(!HEAP_XMAX_IS_EXCL_LOCKED), then there is no need to sleep waiting for\nthe lock.\n\nI hope that helps.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Only you can decide what is important to you.\n\n\n", "msg_date": "Fri, 29 Sep 2023 17:45:50 -0400", "msg_from": "Bruce Momjian 
<bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: SHARED locks barging behaviour" }, { "msg_contents": "On Fri, 2023-09-29 at 17:45 -0400, Bruce Momjian wrote:\n> On Tue, Jan 17, 2023 at 12:18:28PM -0500, Arul Ajmani wrote:\n> > I'm trying to better understand the following barging behaviour with SHARED\n> > locks.\n> ...\n> > Given there is a transaction waiting to acquire a FOR UPDATE lock, I was\n> > surprised to see the second FOR SHARE transaction return immediately instead of\n> > waiting. I have two questions:\n> > \n> > 1) Could this barging behaviour potentially starve out the transaction waiting\n> > to acquire the FOR UPDATE lock, if there is a continuous queue of transactions\n> > that acquire a FOR SHARE lock briefly?\n> \n> Yes, see below.\n> \n> > 2) Assuming this is by design, I couldn't find (in code) where this explicit\n> > policy choice is made. I was looking around LockAcquireExtended, but it seems\n> > like the decision is made above this layer. Could someone more familiar with\n> > this code point me at the right place? \n> \n> I know this from January, but I do have an answer. [...]\n\nYou answer the question where this is implemented. But the more important question\nis whether this is intentional. This code was added by 0ac5ad5134f (introducing\nFOR KEY SHARE and FOR NO KEY UPDATE). 
My feeling is that it is not intentional that\na continuous stream of share row locks can starve out an exclusive row lock, since\nPostgreSQL behaves differently with other locks.\n\nOn the other hand, if nobody has complained about it in these ten years, perhaps\nit is just fine the way it is, if by design or not.\n\nYours,\nLaurenz Albe\n\n\n", "msg_date": "Sat, 30 Sep 2023 00:50:11 +0200", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: SHARED locks barging behaviour" }, { "msg_contents": "Hi,\n\nOn 2023-09-30 00:50:11 +0200, Laurenz Albe wrote:\n> On Fri, 2023-09-29 at 17:45 -0400, Bruce Momjian wrote:\n> > On Tue, Jan 17, 2023 at 12:18:28PM -0500, Arul Ajmani wrote:\n> > > I'm trying to better understand the following barging behaviour with SHARED\n> > > locks.\n> > ...\n> > > Given there is a transaction waiting to acquire a FOR UPDATE lock, I was\n> > > surprised to see the second FOR SHARE transaction return immediately instead of\n> > > waiting. I have two questions:\n> > > \n> > > 1) Could this barging behaviour potentially starve out the transaction waiting\n> > > to acquire the FOR UPDATE lock, if there is a continuous queue of transactions\n> > > that acquire a FOR SHARE lock briefly?\n> > \n> > Yes, see below.\n> > \n> > > 2) Assuming this is by design, I couldn't find (in code) where this explicit\n> > > policy choice is made. I was looking around LockAcquireExtended, but it seems\n> > > like the decision is made above this layer. Could someone more familiar with\n> > > this code point me at the right place?\n> > \n> > I know this from January, but I do have an answer. [...]\n> \n> You answer the question where this is implemented. But the more important question\n> is whether this is intentional. This code was added by 0ac5ad5134f (introducing\n> FOR KEY SHARE and FOR NO KEY UPDATE).
My feeling is that it is not intentional that\n> a continuous stream of share row locks can starve out an exclusive row lock, since\n> PostgreSQL behaves differently with other locks.\n> \n> On the other hand, if nobody has complained about it in these ten years, perhaps\n> it is just fine the way it is, if by design or not.\n\nI'd be very hesitant to change the behaviour at this point - the likelihood of\nexisting workloads slowing down substantially, or even breaking due to an\nadditional source of deadlocks, seems substantial.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 30 Sep 2023 12:34:08 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: SHARED locks barging behaviour" } ]
[ { "msg_contents": "Hi, this is extension of `teach planner to evaluate multiple windows in \nthe optimal order` work applied to distinct operation.\n\nBased on discussions before \n(https://www.postgresql.org/message-id/flat/CAApHDvr7rSCVXzGfVa1L9pLpkKj6-s8NynK8o%2B98X9sKjejnQQ%40mail.gmail.com#e01327a3053d9281c40f281ef7105b42) \n,\n\n > All I imagine you need to do for it\n > is to invent a function in pathkeys.c which is along the lines of what\n > pathkeys_count_contained_in() does, but returns a List of pathkeys\n > which are in keys1 but not in keys2 and NIL if keys2 has a pathkey\n > that does not exist as a pathkey in keys1. In\n > create_final_distinct_paths(), you can then perform an incremental\n > sort on any input_path which has a non-empty return list and in\n > create_incremental_sort_path(), you'll pass presorted_keys as the\n > number of pathkeys in the path, and the required pathkeys the\n > input_path->pathkeys + the pathkeys returned from the new function.\n\n\nThere is bit confusion in wording here:\n\n\"returns a List of pathkeys\nwhich are in keys1 but not in keys2 and NIL if keys2 has a pathkey\nthat does not exist as a pathkey in keys1.\"\n\nYou mean extract common keys without ordering right?\n\nExample: keys1 = (a,b,c), keys2 = (b,a)\n\nreturns (a,b)\n\nand\n\nkeys1 = (a,b,c), keys = (d)\n\nreturns = ()\n\nwhich translates to\n\nneeded_pathkeys = (a,b,c) = key2\n\ninput_pathkeys = (b,a) key1\n\nreturns (b,a) = common_keys\n\nnew needed_pathkeys = unique(common_keys + old needed_pathkeys)\n\n=> new needed_pathkeys = (b,a,c)\n\nThe new needed_pathkeys matches input_pathkeys.\n\nThis is what I implemented in the patch.\n\n\nThe patched version yields the following plans:\n\nset enable_hashagg=0;\nset enable_seqscan=0;\n\nexplain (costs off) select distinct relname,relkind,count(*) over \n(partition by\nrelkind) from pg_Class;\n                        QUERY PLAN\n---------------------------------------------------------\n  Unique\n    ->  
Incremental Sort\n          Sort Key: relkind, relname, (count(*) OVER (?))\n          Presorted Key: relkind\n          ->  WindowAgg\n                ->  Sort\n                      Sort Key: relkind\n                      ->  Seq Scan on pg_class\n(8 rows)\n\nexplain (costs off) select distinct a, b, count(*) over (partition by b, \na) from abcd;\n                        QUERY PLAN\n--------------------------------------------------------\n  Unique\n    ->  Incremental Sort\n          Sort Key: b, a, (count(*) OVER (?))\n          Presorted Key: b, a\n          ->  WindowAgg\n                ->  Incremental Sort\n                      Sort Key: b, a\n                      Presorted Key: b\n                      ->  Index Scan using b_idx on abcd\n(9 rows)\n\nexplain (costs off) select distinct a, b, count(*) over (partition by c, \nd) from abcd;\n                        QUERY PLAN\n--------------------------------------------------------\n  Unique\n    ->  Sort\n          Sort Key: a, b, (count(*) OVER (?))\n          ->  WindowAgg\n                ->  Incremental Sort\n                      Sort Key: c, d\n                      Presorted Key: c\n                      ->  Index Scan using c_idx on abcd\n(8 rows)\n\n\nIssue with index path still remains as pathkeys get purged by \ntruncate_useless_pathkeys\n\nand hence are not available in create_final_distinct_paths for the above \noptimizations.\n\n\nI have attached a patch for the reference.\n\n\nThanks,\n\nAnkit", "msg_date": "Wed, 18 Jan 2023 00:57:54 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Teach planner to further optimize sort in distinct" }, { "msg_contents": "On Wed, 18 Jan 2023 at 08:27, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> There is bit confusion in wording here:\n>\n> \"returns a List of pathkeys\n> which are in keys1 but not in keys2 and NIL if keys2 has a pathkey\n> that does not exist as a pathkey in keys1.\"\n>\n> 
You mean extract common keys without ordering right?\n\nI think you should write a function like:\n\nbool pathkeys_count_contained_in_unordered(List *keys1, List *keys2,\nList **reorderedkeys, int *n_common)\n\nwhich works very similarly to pathkeys_count_contained_in, but\npopulates *reorderedkeys so it contains all of the keys in keys1, but\nput the matching ones in the same order as they are in keys2. If you\nfind a keys2 that does not exist in keys1 then just add the additional\nunmatched keys1 keys to *reorderedkeys. Set *n_common to the number\nof common keys excluding any that come after a key2 key that does not\nexist as a key1 key.\n\nYou can just switch to using that function in\ncreate_final_distinct_paths(). You'll need to consider if the query is\na DISTINCT ON query and not try the unordered version of the function\nin that case.\n\nI also just noticed that in build_index_paths() we'll leave the index\npath's pathkeys empty if we deem the pathkeys as useless. I'm not\nsure what the repercussions of setting those to the return value of\nbuild_index_pathkeys() if useful_pathkeys is otherwise empty. It's\npossible that truncate_useless_pathkeys() needs to be modified to\ncheck if the pathkeys might be useful for DISTINCT, but now that I see\nwe don't populate the IndexPath's pathkeys when we deem them not\nuseful makes me wonder if this entire patch is a good idea. 
When I\nthought about it I assumed that we always set IndexPath's pathkeys to\nwhatever (if any) sort order that the index provides.\n\nDavid\n\n\n", "msg_date": "Fri, 20 Jan 2023 02:19:11 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Teach planner to further optimize sort in distinct" }, { "msg_contents": "> On 19/01/23 18:49, David Rowley wrote:\n\n> I think you should write a function like:\n\n> bool pathkeys_count_contained_in_unordered(List *keys1, List *keys2,\n> List **reorderedkeys, int *n_common)\n\n> which works very similarly to pathkeys> _count_contained_in, but\n> populates *reorderedkeys so it contains all of the keys in keys1, but\n> put the matching ones in the same order as they are in keys2. If you\n> find a keys2 that does not exist in keys1 then just add the additional\n> unmatched keys1 keys to *reorderedkeys. Set *n_common to the number\n> of common keys excluding any that come after a key2 key that does not\n> exist as a key1 key.\n\n> You can just switch to using that function in\n> create_final_distinct_paths(). You'll need to consider if the query is\n> a DISTINCT ON query and not try the unordered version of the function\n> in that case.\n\nTried this, it worked as expected. Tests are green as well.\n\n> I also just noticed that in build_index_paths() we'll leave the index\n> path's pathkeys empty if we deem the pathkeys as useless. 
I'm not\n> sure what the repercussions of setting those to the return value of\n> build_index_pathkeys() if useful_pathkeys is otherwise empty.\n\nThis is very rigid indeed.\n\n> It's possible that truncate_useless_pathkeys() needs to be modified to\n> check if the pathkeys might be useful for DISTINCT \n\nWe have pathkeys_useful_for_merging and pathkeys_useful_for_ordering.\n\nCan we not have pathkeys_useful_for_distinct?\n\nAlso, pathkeys_useful_for_ordering calls pathkeys_count_contained_in.\n\nWe can add code path on similar lines.\n\n> When I\n> thought about it I assumed that we always set IndexPath's pathkeys to\n> whatever (if any) sort order that the index provides.\n\nCan we not added original path keys in IndexPath? It could be useful\n\nat other places as well. Atleast, I can see it useful in sorting cases.\n\n> makes me wonder if this entire patch is a good idea. \n\nWe are still getting some benefit even without index paths for now.\n\n\nI have attached patch with pathkeys_count_contained_in_unordered\n\nand corresponding changes in create_final_distinct_paths for reference.\n\n\nThanks,\n\nAnkit", "msg_date": "Fri, 20 Jan 2023 00:56:13 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Teach planner to further optimize sort in distinct" }, { "msg_contents": "On Fri, 20 Jan 2023 at 08:26, Ankit Kumar Pandey <itsankitkp@gmail.com> wrote:\n> > On 19/01/23 18:49, David Rowley wrote:\n> > You can just switch to using that function in\n> > create_final_distinct_paths(). You'll need to consider if the query is\n> > a DISTINCT ON query and not try the unordered version of the function\n> > in that case.\n>\n> Tried this, it worked as expected. Tests are green as well.\n\nLooking at the patch, you've not added any additional tests.
If the\nexisting tests are all passing then that just tells me that either the\ncode is not functioning as intended or we have no tests that look at\nthe EXPLAIN output which can make use of this optimization. If the\nformer is true, then the patch needs to be fixed. If it's the latter\nthen you need to write new tests.\n\n> > It's possible that truncate_useless_pathkeys() needs to be modified to\n> > check if the pathkeys might be useful for DISTINCT\n>\n> We have pathkeys_useful_for_merging and pathkeys_useful_for_ordering.\n>\n> Can we not have pathkeys_useful_for_distinct?\n\nI don't know all the repercussions. If you look at add_path() then\nyou'll see we do a pathkey comparison when the costs are not fuzzily\ndifferent from an existing path so that we try to keep a path with the\nbest pathkeys. If we start keeping paths around with other weird\npathkeys that are not along the lines of the query_pathkeys requires,\nthen add_path might start throwing away paths that are actually good\nfor something. It seems probable that could cause some regressions.\n\n> I have attached patch with pathkeys_count_contained_in_unordered\n> and corresponding changes in create_final_distinct_paths for reference.\n\nDoes this patch actually work? I tried:\n\ncreate table ab (a int, b int);\ninsert into ab select a,b from generate_Series(1,1000)\na,generate_series(1,1000) b;\nanalyze ab;\ncreate index on ab(a);\nset enable_hashagg=0;\nexplain select distinct b,a from ab where a < 10;\n QUERY PLAN\n------------------------------------------------------------------------------------\n Unique (cost=729.70..789.67 rows=7714 width=8)\n -> Sort (cost=729.70..749.69 rows=7996 width=8)\n Sort Key: b, a\n -> Index Scan using ab_a_idx on ab (cost=0.42..211.36\nrows=7996 width=8)\n Index Cond: (a < 10)\n(5 rows)\n\nI'd have expected an incremental sort here. I don't see that you're\nadjusting IndexPath's pathkeys anywhere.
The nested loop in\npathkeys_count_contained_in_unordered() seems to be inside out. The\nreordered_pathkeys must have the common pathkeys in the order they\nappear in keys2. In your patch, they'll be ordered according to the\nkeys1 list, which is wrong. Testing would tell you this, so all the\nmore reason to make the patch work and write some queries to ensure it\ndoes actually work, then some tests to ensure that remains in a\nworking state.\n\nFeel free to take the proper time to write a working patch which\ncontains new tests to ensure it's functioning as intended. It's\ndisheartening to review patches that don't seem to work. If it wasn't\nmeant to work, then you didn't make that clear. I'll likely not be\nable to do any further reviews until the March commitfest, so it might\nbe better to only post if you're stuck. Please don't rush out the\nnext patch. Take your time and study the code and see if you can build\nup your own picture for what the repercussions might be of IndexPaths\nhaving additional pathkeys when they were previously empty. If you're\nuncertain of aspects of the patch you've written feel free to leave\nXXX type comments to indicate this. That way the reviewer will know\nyou might need more guidance on that and you'll not forget yourself\nwhen you come back and look again after a few weeks.\n\nDavid\n\n\n", "msg_date": "Fri, 20 Jan 2023 13:37:52 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Teach planner to further optimize sort in distinct" }, { "msg_contents": "\n> On 20/01/23 06:07, David Rowley wrote:\n\n> Looking at the patch, you've not added any additional tests. If the\n> existing tests are all passing then that just tells me that either the\n> code is not functioning as intended or we have no tests that look at\n> the EXPLAIN output which can make use of this optimization. If the\n> former is true, then the patch needs to be fixed.
If it's the latter\n> then you need to write new tests.\n\nI definitely need to add tests because this scenario is missing.\n\n\n\n> I don't know all the repercussions. If you look at add_path() then\n> you'll see we do a pathkey comparison when the costs are not fuzzily\n> different from an existing path so that we try to keep a path with the\n> best pathkeys. If we start keeping paths around with other weird\n> pathkeys that are not along the lines of the query_pathkeys requires,\n> then add_path might start throwing away paths that are actually good\n> for something. It seems probable that could cause some regressions.\n\nOkay, in that case I think it is better idea to store original pathkeys\n(apart from what get assigned by useful_pathkeys). I need to dig deeper for this.\n\n\n> Does this patch actually work? I tried:\n> I don't see that you're\n> adjusting IndexPath's pathkeys anywhere. \n\nI had removed the changes for indexPath (it was in v1) because I hadn't investigated\nrepercussions. But I failed to mention this.\n\n> The nested loop in\n> pathkeys_count_contained_in_unordered() seems to be inside out. The\n> reordered_pathkeys must have the common pathkeys in the order they\n> appear in keys2. In your patch, they'll be ordered according to the\n> keys1 list, which is wrong. Testing would tell you this, so all the\n> more reason to make the patch work and write some queries to ensure it\n> does actually work, then some tests to ensure that remains in a\n> working state.\n> Feel free to take the proper time to write a working patch which\n> contains new tests to ensure it's functioning as intended. It's\n> disheartening to review patches that don't seem to work. If it wasn't\n> meant to work, then you didn't make that clear.\n> Please don't rush out the next patch.
Take your time and study the code \n> and see if you can build up your own picture for what the repercussions \n> might be of IndexPaths having additional pathkeys when they were previously empty. \n> If you're uncertain of aspects of the patch you've written feel free to leave\n> XXX type comments to indicate this. That way the reviewer will know\n> you might need more guidance on that and you'll not forget yourself\n> when you come back and look again after a few weeks.\n\nI deeply regret this. I will be mindful of my patches and ensure that they are\ncomplete by themselves.\nThanks for your pointers as well, I can see errors in my approach which I will address.\n \n\n> I'll likely not be\n> able to do any further reviews until the March commitfest, so it might\n> be better to only post if you're stuck. \n\nYes sure, I will work on patches and limit posts to discussion only (if blocked).\n\nThanks,\nAnkit\n\n\n\n", "msg_date": "Fri, 20 Jan 2023 23:02:13 +0530", "msg_from": "Ankit Kumar Pandey <itsankitkp@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Teach planner to further optimize sort in distinct" } ]
[ { "msg_contents": "Hi,\n\nI happened to notice some examples of SGML linkends that were using\nsingle quotes instead of double quotes.\n\nIt didn't seem to be the conventional style because grepping (from\ndoc/src/sgml folder) showed only a tiny fraction using single quotes.\n\n(single-quotes)\n$ grep --include=*.sgml -rn . -e \"linkend='\" | wc -l\n12\n\n(double-quotes)\n$ grep --include=*.sgml -rn . -e 'linkend=\"' | wc -l\n5915\n\n~~\n\nPSA patch that makes them all use double quotes.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Wed, 18 Jan 2023 09:37:37 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "PGDOCS - sgml linkend using single-quotes" }, { "msg_contents": "On 18/01/2023 00:37, Peter Smith wrote:\n> I happened to notice some examples of SGML linkends that were using\n> single quotes instead of double quotes.\n> \n> It didn't seem to be the conventional style because grepping (from\n> doc/src/sgml folder) showed only a tiny fraction using single quotes.\n> \n> (single-quotes)\n> $ grep --include=*.sgml -rn . -e \"linkend='\" | wc -l\n> 12\n> \n> (double-quotes)\n> $ grep --include=*.sgml -rn . -e 'linkend=\"' | wc -l\n> 5915\n> \n> ~~\n> \n> PSA patch that makes them all use double quotes.\n\nThere were also a few \"id\" attributes using single-quotes. Fixed those \ntoo, and pushed. Thanks!\n\n- Heikki\n\n\n\n", "msg_date": "Mon, 27 Feb 2023 10:04:21 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: PGDOCS - sgml linkend using single-quotes" }, { "msg_contents": "On Mon, Feb 27, 2023 at 7:04 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n...\n>\n> There were also a few \"id\" attributes using single-quotes. Fixed those\n> too, and pushed.
Thanks!\n>\n\nThankyou for pushing.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Tue, 28 Feb 2023 08:21:38 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: PGDOCS - sgml linkend using single-quotes" } ]
[ { "msg_contents": "Hi, hackers\n\n\nThe attached patch includes below two changes for the description of\nLogical Replication \"Configuration Settings\".\n\n1. Add one brief description about wal_sender_timeout.\n   I made it similar to one other sentence for subscriber.\n2. Fix a wrong GUC name \"wal_receiver_retry_interval\".\n   I think this doesn't seem to exist and would mean \"wal_retrieve_retry_interval\".\n\nKindly have a look at it.\n\n\nBest Regards,\n\tTakamichi Osumi", "msg_date": "Wed, 18 Jan 2023 07:00:43 +0000", "msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "Modify the document of Logical Replication configuration settings" }, { "msg_contents": "On Wed, Jan 18, 2023 at 07:00:43AM +0000, Takamichi Osumi (Fujitsu) wrote:\n> The attached patch includes below two changes for the description of\n> Logical Replication \"Configuration Settings\".\n> \n> 1. Add one brief description about wal_sender_timeout.\n> I made it similar to one other sentence for subscriber.\n> 2. Fix a wrong GUC name \"wal_receiver_retry_interval\".\n> I think this doesn't seem to exist and would mean \"wal_retrieve_retry_interval\".\n> \n> Kindly have a look at it.\n\nLooks right to me, thanks!\n--\nMichael", "msg_date": "Wed, 18 Jan 2023 16:11:02 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Modify the document of Logical Replication configuration settings" }, { "msg_contents": "On Wednesday, January 18, 2023 4:11 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jan 18, 2023 at 07:00:43AM +0000, Takamichi Osumi (Fujitsu) wrote:\n> > The attached patch includes below two changes for the description of\n> > Logical Replication \"Configuration Settings\".\n> >\n> > 1. Add one brief description about wal_sender_timeout.\n> > I made it similar to one other sentence for subscriber.\n> > 2.
Fix a wrong GUC name \"wal_receiver_retry_interval\".\n> > I think this doesn't seem to exist and would mean\n> \"wal_retrieve_retry_interval\".\n> >\n> > Kindly have a look at it.\n> \n> Looks right to me, thanks!\nThank you for checking, too !\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Wed, 18 Jan 2023 08:15:24 +0000", "msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Modify the document of Logical Replication configuration settings" }, { "msg_contents": "On Wed, Jan 18, 2023 at 12:31 PM Takamichi Osumi (Fujitsu)\n<osumi.takamichi@fujitsu.com> wrote:\n>\n> Hi, hackers\n>\n> The attached patch includes below two changes for the description of\n> Logical Replication \"Configuration Settings\".\n>\n> 1. Add one brief description about wal_sender_timeout.\n> I made it similar to one other sentence for subscriber.\n\n+ <para>\n+ Logical replication walsender is also affected by\n+ <link linkend=\"guc-wal-sender-timeout\"><varname>wal_sender_timeout</varname></link>.\n+ </para>\n\nLooks fine. Adding something like [1] in wal_sender_timeout GUC's\ndescription might be a good idea just to give specific information\nthat the logical replication subscribers too get affected. Perhaps,\nit's not required since the postgres glossary wraps logical\nreplication subscriber under standby anyway -\nhttps://www.postgresql.org/docs/devel/glossary.html. To me personally,\nthe typical notion of standby is the one connected to primary via\nstreaming replication.\n\n> 2. Fix a wrong GUC name \"wal_receiver_retry_interval\".\n> I think this doesn't seem to exist and would mean \"wal_retrieve_retry_interval\".\n\nGood catch.
+1.\n\n[1]\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex 89d53f2a64..6f9509267c 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -4326,7 +4326,8 @@ restore_command = 'copy\n\"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"' # Windows\n <para>\n Terminate replication connections that are inactive for longer\n than this amount of time.
This is useful for\n> - the sending server to detect a standby crash or network outage.\n> + the sending server to detect a standby crash or logical replication\n> + subscriber crash or network outage.\n> If this value is specified without units, it is taken as milliseconds.\n> The default value is 60 seconds.\n> A value of zero disables the timeout mechanism.\n\nPerhaps we could do that, I am not sure whether this brings much in\nthis section, though.\n--\nMichael", "msg_date": "Thu, 19 Jan 2023 15:14:01 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Modify the document of Logical Replication configuration settings" }, { "msg_contents": "Hi,\n\nOn Thursday, January 19, 2023 3:14 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Jan 18, 2023 at 02:04:16PM +0530, Bharath Rupireddy wrote:\n> > [1]\n> > diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml index\n> > 89d53f2a64..6f9509267c 100644\n> > --- a/doc/src/sgml/config.sgml\n> > +++ b/doc/src/sgml/config.sgml\n> > @@ -4326,7 +4326,8 @@ restore_command = 'copy\n> > \"C:\\\\server\\\\archivedir\\\\%f\" \"%p\"' # Windows\n> > <para>\n> > Terminate replication connections that are inactive for longer\n> > than this amount of time. 
This is useful for\n> > - the sending server to detect a standby crash or network outage.\n> > + the sending server to detect a standby crash or logical replication\n> > + subscriber crash or network outage.\n> > If this value is specified without units, it is taken as milliseconds.\n> > The default value is 60 seconds.\n> > A value of zero disables the timeout mechanism.\n> \n> Perhaps we could do that, I am not sure whether this brings much in this\n> section, though.\nThis might increase comprehensiveness of the doc slightly.\n\nIf we want to do this, it might be better to\nadd this kind of additions to other parameters such as\nwal_receiver_timeout, wal_retrieve_retry_interval\nand wal_receiver_status_interval, too.\n\nBTH, thank you for having taken care of my patch, Michael-san!\n\n\nBest Regards,\n\tTakamichi Osumi\n\n\n\n", "msg_date": "Thu, 19 Jan 2023 16:06:14 +0000", "msg_from": "\"Takamichi Osumi (Fujitsu)\" <osumi.takamichi@fujitsu.com>", "msg_from_op": true, "msg_subject": "RE: Modify the document of Logical Replication configuration settings" } ]
[ { "msg_contents": "Hi,\n\nlogicalrep_read_tuple() duplicates code for LOGICALREP_COLUMN_TEXT and\nLOGICALREP_COLUMN_BINARY introduced by commit 9de77b5. While it\ndoesn't hurt anyone, deduplication makes code a bit leaner by 57 bytes\n[1]. I've attached a patch for $SUBJECT.\n\nThoughts?\n\n[1] size ./src/backend/replication/logical/proto.o\nPATCHED:\n text data bss dec hex filename\n 15558 0 0 15558 3cc6\n./src/backend/replication/logical/proto.o\n\nHEAD:\n text data bss dec hex filename\n 15615 0 0 15615 3cff\n./src/backend/replication/logical/proto.o\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 18 Jan 2023 12:56:06 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Deduplicate logicalrep_read_tuple()" }, { "msg_contents": "On Wed, Jan 18, 2023 at 6:26 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> logicalrep_read_tuple() duplicates code for LOGICALREP_COLUMN_TEXT and\n> LOGICALREP_COLUMN_BINARY introduced by commit 9de77b5. While it\n> doesn't hurt anyone, deduplication makes code a bit leaner by 57 bytes\n> [1].
I've attached a patch for $SUBJECT.\n>\n> Thoughts?\n>\n\nThe code looks the same but there is a subtle comment difference where\npreviously only LOGICALREP_COLUMN_BINARY case said:\n /* not strictly necessary but per StringInfo practice */\n\nSo if you de-duplicate the code then should that comment be modified to say\n/* not strictly necessary for LOGICALREP_COLUMN_BINARY but per\nStringInfo practice */\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Thu, 19 Jan 2023 14:06:26 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate logicalrep_read_tuple()" }, { "msg_contents": "On Thu, Jan 19, 2023 at 8:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Wed, Jan 18, 2023 at 6:26 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > logicalrep_read_tuple() duplicates code for LOGICALREP_COLUMN_TEXT and\n> > LOGICALREP_COLUMN_BINARY introduced by commit 9de77b5. While it\n> > doesn't hurt anyone, deduplication makes code a bit leaner by 57 bytes\n> > [1]. I've attached a patch for $SUBJECT.\n> >\n> > Thoughts?\n> >\n>\n> The code looks the same but there is a subtle comment difference where\n> previously only LOGICALREP_COLUMN_BINARY case said:\n> /* not strictly necessary but per StringInfo practice */\n>\n> So if you de-duplicate the code then should that comment be modified to say\n> /* not strictly necessary for LOGICALREP_COLUMN_BINARY but per\n> StringInfo practice */\n\nThanks.
Done so in the attached v2.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Fri, 3 Mar 2023 16:13:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Deduplicate logicalrep_read_tuple()" }, { "msg_contents": "On Fri, Mar 3, 2023 at 4:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Jan 19, 2023 at 8:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> >\n> > On Wed, Jan 18, 2023 at 6:26 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > logicalrep_read_tuple() duplicates code for LOGICALREP_COLUMN_TEXT and\n> > > LOGICALREP_COLUMN_BINARY introduced by commit 9de77b5. While it\n> > > doesn't hurt anyone, deduplication makes code a bit leaner by 57 bytes\n> > > [1]. I've attached a patch for $SUBJECT.\n> > >\n> > > Thoughts?\n> > >\n> >\n> > The code looks the same but there is a subtle comment difference where\n> > previously only LOGICALREP_COLUMN_BINARY case said:\n> > /* not strictly necessary but per StringInfo practice */\n> >\n> > So if you de-duplicate the code then should that comment be modified to say\n> > /* not strictly necessary for LOGICALREP_COLUMN_BINARY but per\n> > StringInfo practice */\n>\n> Thanks. Done so in the attached v2.\n>\n\nLGTM.
Unless Peter or someone has any comments on this, I'll push this\nearly next week.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Fri, 3 Mar 2023 16:34:30 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate logicalrep_read_tuple()" }, { "msg_contents": "On Fri, Mar 3, 2023 at 10:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Fri, Mar 3, 2023 at 4:13 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Thu, Jan 19, 2023 at 8:36 AM Peter Smith <smithpb2250@gmail.com> wrote:\n> > >\n> > > On Wed, Jan 18, 2023 at 6:26 PM Bharath Rupireddy\n> > > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> > > >\n> > > > Hi,\n> > > >\n> > > > logicalrep_read_tuple() duplicates code for LOGICALREP_COLUMN_TEXT and\n> > > > LOGICALREP_COLUMN_BINARY introduced by commit 9de77b5. While it\n> > > > doesn't hurt anyone, deduplication makes code a bit leaner by 57 bytes\n> > > > [1]. I've attached a patch for $SUBJECT.\n> > > >\n> > > > Thoughts?\n> > > >\n> > >\n> > > The code looks the same but there is a subtle comment difference where\n> > > previously only LOGICALREP_COLUMN_BINARY case said:\n> > > /* not strictly necessary but per StringInfo practice */\n> > >\n> > > So if you de-duplicate the code then should that comment be modified to say\n> > > /* not strictly necessary for LOGICALREP_COLUMN_BINARY but per\n> > > StringInfo practice */\n> >\n> > Thanks. Done so in the attached v2.\n> >\n>\n> LGTM. Unless Peter or someone has any comments on this, I'll push this\n> early next week.\n>\n\nNo more comments.
Patch v2 LGTM.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia\n\n\n", "msg_date": "Sun, 5 Mar 2023 18:52:19 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate logicalrep_read_tuple()" }, { "msg_contents": "On Sun, Mar 5, 2023 at 1:22 PM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Fri, Mar 3, 2023 at 10:04 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > >\n> > > Thanks. Done so in the attached v2.\n> > >\n> >\n> > LGTM. Unless Peter or someone has any comments on this, I'll push this\n> > early next week.\n> >\n>\n> No more comments. Patch v2 LGTM.\n>\n\nPushed.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Mon, 6 Mar 2023 16:14:24 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Deduplicate logicalrep_read_tuple()" } ]