[ { "msg_contents": "Hi.\n\nI was investigating why foreign join is not happening and found the \nfollowing issue. When we search for foreign join path we stop on the \nfirst try. For example, in case\nA JOIN B JOIN C where all of them are foreign tables on the same server \nfirstly we estimate A JOIN B, A JOIN C and B JOIN C costs. Then we \nselect one of them (let's say A JOIN B) and estimate (A JOIN B) JOIN C \ncost. On this stage we fill in joinrel->fdw_private in joinrel (A, B, \nC). Now we could look at A JOIN (B JOIN C), but we don't, as\nin contrib/postgres_fdw/postgres_fdw.c:5962 (in \npostgresGetForeignJoinPaths()) we have:\n\n\t/*\n\t * Skip if this join combination has been considered already.\n\t */\n\tif (joinrel->fdw_private)\n\t\treturn;\n\nThis can lead to situation when A JOIN B JOIN C cost is not correctly \nestimated (as we don't look at another join orders) and foreign join is \nnot pushed down when it would be efficient.\n\nAttached patch tries to fix this. Now we collect different join paths \nfor joinrels, if join is possible at all. 
However, as we don't know what \npath is really selected, we can't precisely estimate fpinfo costs.\n\nThe following example shows the case, when we fail to push down foreign \njoin due to mentioned issue.\n\ncreate extension postgres_fdw;\n\nDO $d$\n BEGIN\n EXECUTE $$CREATE SERVER loopback FOREIGN DATA WRAPPER \npostgres_fdw\n OPTIONS (dbname '$$||current_database()||$$',\n port '$$||current_setting('port')||$$'\n )$$;\nEND\n$d$;\n\ncreate user MAPPING FOR PUBLIC SERVER loopback ;\n\nCREATE SCHEMA test;\n\nCREATE TABLE test.district (\n d_id smallint NOT NULL PRIMARY KEY,\n d_next_o_id integer NOT NULL,\n d_name varchar(100) NOT NULL\n);\n\nCREATE TABLE test.stock (\n s_i_id integer NOT NULL PRIMARY KEY,\n s_quantity smallint NOT NULL,\n s_data varchar(100) NOT NULL\n);\n\nCREATE TABLE test.order_line (\n ol_o_id integer NOT NULL PRIMARY KEY,\n ol_i_id integer NOT NULL,\n ol_d_id smallint NOT NULL\n);\n\nimport foreign schema test from server loopback into public ;\n\nINSERT INTO district SELECT i, 20000 + 20*i , 'test' FROM \ngenerate_series(1,10) i;\nINSERT INTO stock SELECT i, round(random()*100) + 10, 'test data' FROM \ngenerate_series(1,10000) i;\nINSERT INTO order_line SELECT i, i%10000, i%10 FROM \ngenerate_series(1,30000) i;\n\nanalyze test.order_line, test.district ,test.stock;\nanalyze order_line, district ,stock;\n\n\nWithout patch:\n\nexplain analyze verbose SELECT * FROM order_line, stock, district WHERE \nol_d_id = 1 AND d_id = 1 AND (ol_o_id < d_next_o_id) AND ol_o_id >= \n(d_next_o_id - 20) AND s_i_id = ol_i_id AND s_quantity < 11;\n \n QUERY PLAN\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Hash Join (cost=382.31..966.86 rows=2 width=37) (actual \ntime=5.459..5.466 rows=0 loops=1)\n Output: order_line.ol_o_id, 
order_line.ol_i_id, order_line.ol_d_id, \nstock.s_i_id, stock.s_quantity, stock.s_data, district.d_id, \ndistrict.d_next_o_id, district.d_name\n Hash Cond: (order_line.ol_i_id = stock.s_i_id)\n -> Foreign Scan (cost=100.00..683.29 rows=333 width=21) (actual \ntime=2.773..2.775 rows=2 loops=1)\n Output: order_line.ol_o_id, order_line.ol_i_id, \norder_line.ol_d_id, district.d_id, district.d_next_o_id, district.d_name\n Relations: (public.order_line) INNER JOIN (public.district)\n Remote SQL: SELECT r1.ol_o_id, r1.ol_i_id, r1.ol_d_id, r3.d_id, \nr3.d_next_o_id, r3.d_name FROM (test.order_line r1 INNER JOIN \ntest.district r3 ON (((r1.ol_o_id < r3.d_next_o_id)) AND ((r1.ol_o_id >= \n(r3.d_next_o_id - 20))) AND ((r3.d_id = 1)) AND ((r1.ol_d_id = 1))))\n -> Hash (cost=281.42..281.42 rows=71 width=16) (actual \ntime=2.665..2.665 rows=48 loops=1)\n Output: stock.s_i_id, stock.s_quantity, stock.s_data\n Buckets: 1024 Batches: 1 Memory Usage: 11kB\n -> Foreign Scan on public.stock (cost=100.00..281.42 rows=71 \nwidth=16) (actual time=2.625..2.631 rows=48 loops=1)\n Output: stock.s_i_id, stock.s_quantity, stock.s_data\n Remote SQL: SELECT s_i_id, s_quantity, s_data FROM \ntest.stock WHERE ((s_quantity < 11))\n Planning Time: 1.812 ms\n Execution Time: 8.534 ms\n(15 rows)\n\nWith patch:\n\nexplain analyze verbose SELECT * FROM order_line, stock, district WHERE \nol_d_id = 1 AND d_id = 1 AND (ol_o_id < d_next_o_id) AND ol_o_id >= \n(d_next_o_id - 20) AND s_i_id = ol_i_id AND s_quantity < 11;\n \n \n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Foreign Scan (cost=100.00..974.88 rows=2 width=37) (actual \ntime=3.399..3.400 
rows=0 loops=1)\n Output: order_line.ol_o_id, order_line.ol_i_id, order_line.ol_d_id, \nstock.s_i_id, stock.s_quantity, stock.s_data, district.d_id, \ndistrict.d_next_o_id, district.d_name\n Relations: ((public.order_line) INNER JOIN (public.district)) INNER \nJOIN (public.stock)\n Remote SQL: SELECT r1.ol_o_id, r1.ol_i_id, r1.ol_d_id, r2.s_i_id, \nr2.s_quantity, r2.s_data, r3.d_id, r3.d_next_o_id, r3.d_name FROM \n((test.order_line r1 INNER JOIN test.district r3 ON (((r1.ol_o_id < \nr3.d_next_o_id)) AND ((r1.ol_o_id >= (r3.d_next_o_id - 20))) AND \n((r3.d_id = 1)) AND ((r1.ol_d_id = 1)))) INNER JOIN test.stock r2 ON \n(((r1.ol_i_id = r2.s_i_id)) AND ((r2.s_quantity < 11))))\n Planning Time: 0.928 ms\n Execution Time: 4.511 ms\n\n\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 25 Jan 2022 10:56:12 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Foreign join search stops on the first try" }, { "msg_contents": "This code was written long ago. So I may have some recollection\nerrors. But AFAIR, the reasons we wanted to avoid repeated\nestimation/planning for the same foreign join rel were\n1. If use_remote_estimate = true, we fetch EXPLAIN output from the\nforeign server for various pathkeys. Fetching EXPLAIN output is\nexpensive. Irrespective of the join order being considered locally, we\nexpect the foreign server to give us the same cost since the join is\nthe same. So we avoid running EXPLAIN again and again.\n2. If use_remote_estimate = false, the logic to estimate a foreign\njoin locally is independent of the join order so should yield same\ncost again and again. 
For some reason that doesn't seem to be the case\nhere.\n\nOn Tue, Jan 25, 2022 at 1:26 PM Alexander Pyhalov\n<a.pyhalov@postgrespro.ru> wrote:\n\n>\n> Without patch:\n>\n> explain analyze verbose SELECT * FROM order_line, stock, district WHERE\n> ol_d_id = 1 AND d_id = 1 AND (ol_o_id < d_next_o_id) AND ol_o_id >=\n> (d_next_o_id - 20) AND s_i_id = ol_i_id AND s_quantity < 11;\n... clipped\n> test.stock WHERE ((s_quantity < 11))\n> Planning Time: 1.812 ms\n> Execution Time: 8.534 ms\n> (15 rows)\n>\n> With patch:\n>\n> explain analyze verbose SELECT * FROM order_line, stock, district WHERE\n> ol_d_id = 1 AND d_id = 1 AND (ol_o_id < d_next_o_id) AND ol_o_id >=\n> (d_next_o_id - 20) AND s_i_id = ol_i_id AND s_quantity < 11;\n>\n... clipped\n> Planning Time: 0.928 ms\n> Execution Time: 4.511 ms\n\nIt is surprising that the planning time halves with the patch. I\nexpected it to increase slightly since we will compute estimates\nthrice instead of once.\n\nWhat is use_remote_estimate? Is it ON/OFF?\n\nIf we want to proceed along this line, we should take care not to fire\nmore EXPLAIN queries on the foreign server.\n\n-- \nBest Wishes,\nAshutosh Bapat\n\n\n", "msg_date": "Tue, 25 Jan 2022 19:38:09 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Foreign join search stops on the first try" }, { "msg_contents": "Ashutosh Bapat wrote 2022-01-25 17:08:\n> This code was written long ago. So I may have some recollection\n> errors. But AFAIR, the reasons we wanted to avoid repeated\n> estimation/planning for the same foreign join rel were\n> 1. If use_remote_estimate = true, we fetch EXPLAIN output from the\n> foreign server for various pathkeys. Fetching EXPLAIN output is\n> expensive. Irrespective of the join order being considered locally, we\n> expect the foreign server to give us the same cost since the join is\n> the same. So we avoid running EXPLAIN again and again.\n> 2. 
If use_remote_estimate = false, the logic to estimate a foreign\n> join locally is independent of the join order so should yield same\n> cost again and again. For some reason that doesn't seem to be the case\n> here.\n\nHi.\nuse_remote_estimate was set to false in our case, and yes, it fixed this \nissue.\nThe problem is that if use_remote_estimate = false, the logic to \nestimate a foreign join locally\nis not independent from the join order.\n\nIn above example, without patch we see plan with cost: \ncost=382.31..966.86 rows=2 width=37\n\nIf we avoid exiting on (joinrel->fdw_private), we can see in gdb the \nfollowing cases, when joining all 3 relations:\n\ncase 1:\nouterrel:relids (stock, order_line), startup_cost = 100, total_cost = \n2415.9200000000001, rel_startup_cost = 0, rel_total_cost = 2315.5, \nretrieved_rows = 21\ninnerrel: relid (district) startup_cost = 100, total_cost = \n101.14500000000001, rel_startup_cost = 0, rel_total_cost = 1.125, \nretrieved_rows = 1\njoinrel: startup_cost = 100, total_cost = 2416.875, retrieved_rows = 2\n\ncase 2:\nouterrel: relids (district, order_line), startup_cost = 100, total_cost \n= 281.41999999999996, rel_total_cost = 180, retrieved_rows = 71\ninnerrel: relid (stock), startup_cost = 100, total_cost = \n683.28500000000008, rel_startup_cost = 0, rel_total_cost = 576.625, \nretrieved_rows = 333\njoinrel: startup_cost = 100, total_cost = 974.88, retrieved_rows = 2\n\n\nSo, (stock join order_line) join district has different cost from \n(district join order_line) join stock.\n\n> \n> On Tue, Jan 25, 2022 at 1:26 PM Alexander Pyhalov\n> <a.pyhalov@postgrespro.ru> wrote:\n> \n> \n> It is surprising that the planning time halves with the patch. I\n> expected it to increase slightly since we will compute estimates\n> thrice instead of once.\n\nI wouldn't look at estimate times here precisely (and would looked at \ncosts). Real example where we found it had 100 times more data, but \neffect was the same. 
Here some differences in planing time could be \nrelated to restarting instances with or without patches.\n\n> \n> What is use_remote_estimate? Is it ON/OFF?\n> \n\nYes, it was off.\n\n> If we want to proceed along this line, we should take care not to fire\n> more EXPLAIN queries on the foreign server.\n\nYou are correct. Fixed patch to avoid extensive join search when \nuse_remote_estimate is true.\n\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Tue, 25 Jan 2022 19:37:32 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Re: Foreign join search stops on the first try" } ]
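The cost asymmetry reported in the thread above — that `(stock JOIN order_line) JOIN district` and `(district JOIN order_line) JOIN stock` carry different local estimates, so caching the first complete join order's cost behind the `joinrel->fdw_private` guard can hide a cheaper order — can be sketched with a toy model. The relation sizes, scan costs, and cost formula below are invented purely for illustration; they are not postgres_fdw's real estimation logic:

```python
# Toy illustration: the estimated cost of a three-way join depends on
# which pairwise join is formed first, so recording only the first
# complete join order considered (as the early return on
# joinrel->fdw_private effectively does) can hide a cheaper order.
# All numbers and formulas here are made up for illustration.

ROWS = {"A": 1000.0, "B": 10.0, "C": 100.0}    # base relation sizes
SCAN_COST = {"A": 100.0, "B": 1.0, "C": 10.0}  # cost to scan each one
PAIR_ROWS = 100.0                              # rows out of any pairwise join

def join_cost(lrows, lcost, rrows, rcost):
    """Pay both input costs plus a per-row-pair CPU charge."""
    return lcost + rcost + 0.5 * lrows * rrows

def total_cost(i, j, k):
    """Cost of the left-deep order (i JOIN j) JOIN k."""
    c_pair = join_cost(ROWS[i], SCAN_COST[i], ROWS[j], SCAN_COST[j])
    return join_cost(PAIR_ROWS, c_pair, ROWS[k], SCAN_COST[k])

# Enumerate the three left-deep orders; (A JOIN C) JOIN B happens first.
orders = [("A", "C", "B"), ("B", "C", "A"), ("A", "B", "C")]
costs = [total_cost(*o) for o in orders]

first_considered = costs[0]  # what a stop-on-first-try search would keep
best = min(costs)            # what an exhaustive search would find

for o, c in zip(orders, costs):
    print(f"({o[0]} JOIN {o[1]}) JOIN {o[2]}: cost {c:.0f}")
print(f"first considered: {first_considered:.0f}, best: {best:.0f}")
```

In this made-up model the first complete order costs 50611 while `(A JOIN B) JOIN C` costs only 10111; a planner that records the first estimate and refuses to revisit the joinrel would compare the foreign-join path against local alternatives using the higher number, which is the mechanism by which the pushdown can lose out.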
[ { "msg_contents": "This issue was discovered by Fabien in the SHOW_ALL_RESULTS thread. I'm \nposting it here separately, because I think it ought to be addressed in \nlibpq rather than with a workaround in psql, as proposed over there.\n\nWhen using PQsendQuery() + PQgetResult() and the server crashes during \nthe execution of the command, PQgetResult() then returns two result sets \nwith partially duplicated error messages, like this from the attached \ntest program:\n\ncommand = SELECT 'before';\nresult 1 status = PGRES_TUPLES_OK\nerror message = \"\"\n\ncommand = SELECT pg_terminate_backend(pg_backend_pid());\nresult 1 status = PGRES_FATAL_ERROR\nerror message = \"FATAL: terminating connection due to administrator command\n\"\nresult 2 status = PGRES_FATAL_ERROR\nerror message = \"FATAL: terminating connection due to administrator command\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\"\n\ncommand = SELECT 'after';\nPQsendQuery() error: no connection to the server\n\n\nThis is hidden in normal use because PQexec() throws away all but the \nlast result set, but the extra one is still generated internally.\n\nApparently, this has changed between PG13 and PG14. 
In PG13 and \nearlier, the output is\n\ncommand = SELECT 'before';\nresult 1 status = PGRES_TUPLES_OK\nerror message = \"\"\n\ncommand = SELECT pg_terminate_backend(pg_backend_pid());\nresult 1 status = PGRES_FATAL_ERROR\nerror message = \"FATAL: terminating connection due to administrator command\n\"\nresult 2 status = PGRES_FATAL_ERROR\nerror message = \"server closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\"\n\ncommand = SELECT 'after';\nPQsendQuery() error: no connection to the server\n\nIn PG13, PQexec() concatenates all the error messages from multiple \nresults, so a user of PQexec() sees the same output before and after. \nBut for users of the lower-level APIs, things have become a bit more \nconfusing.\n\nAlso, why are there multiple results being generated in the first place?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/alpine.DEB.2.22.394.2112230703530.2668598@pseudo", "msg_date": "Tue, 25 Jan 2022 09:32:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "libpq async duplicate error results" }, { "msg_contents": "\n> command = SELECT pg_terminate_backend(pg_backend_pid());\n> result 1 status = PGRES_FATAL_ERROR\n> error message = \"FATAL: terminating connection due to administrator command\n> \"\n> result 2 status = PGRES_FATAL_ERROR\n> error message = \"FATAL: terminating connection due to administrator command\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> \"\n\n> Also, why are there multiple results being generated in the first place?\n\nMy interpretation is that the first message is a close message issued by \nthe server before actually severing the connection, and the second message \nis generated by libpq when it notices that the connection has been closed, \nso there is some sense in having 
two results to report these two \nconsecutive errors, and the question might be about when the buffer should \nbe reset.\n\n-- \nFabien.\n\n\n", "msg_date": "Wed, 26 Jan 2022 14:52:03 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "On 26.01.22 14:52, Fabien COELHO wrote:\n> \n>> command = SELECT pg_terminate_backend(pg_backend_pid());\n>> result 1 status = PGRES_FATAL_ERROR\n>> error message = \"FATAL:  terminating connection due to administrator \n>> command\n>> \"\n>> result 2 status = PGRES_FATAL_ERROR\n>> error message = \"FATAL:  terminating connection due to administrator \n>> command\n>> server closed the connection unexpectedly\n>>     This probably means the server terminated abnormally\n>>     before or while processing the request.\n>> \"\n> \n>> Also, why are there multiple results being generated in the first place?\n> \n> My interpretation is that the first message is a close message issued by \n> the server before actually severing the connection, and the second \n> message is generated by libpq when it notices that the connection has \n> been closed, so there is some sense in having two results to report these \n> two consecutive errors, and the question might be about when the buffer \n> should be reset.\n\nI see. If we stipulate for a moment that getting two results is what we \nwant here, I see a conceptual problem. Users of PQexec() don't care \nwhether the error message they are ultimately getting is a concatenation \nof two messages from two internal results (old behavior) or just an \nappropriately accumulated message from the last internal result (new \nbehavior). But users of PQgetResult() don't have a principled way to \ndeal with this anymore. They can't just ignore one or the other result, \nbecause they could after all be two separate errors from two separate \ncommands. 
So the only proper behavior is to show all errors from all \nresults. But that results in this misbehavior because the last error \nmessage must include the messages from the previous messages in order to \naccommodate PQexec() users.\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 10:40:18 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Tom, you worked on reorganzing the error message handling in libpq in \nPostgreSQL 14 (commit ffa2e4670123124b92f037d335a1e844c3782d3f). Any \nthoughts on this?\n\n\nOn 25.01.22 09:32, Peter Eisentraut wrote:\n> \n> This issue was discovered by Fabien in the SHOW_ALL_RESULTS thread.  I'm \n> posting it here separately, because I think it ought to be addressed in \n> libpq rather than with a workaround in psql, as proposed over there.\n> \n> When using PQsendQuery() + PQgetResult() and the server crashes during \n> the execution of the command, PQgetResult() then returns two result sets \n> with partially duplicated error messages, like this from the attached \n> test program:\n> \n> command = SELECT 'before';\n> result 1 status = PGRES_TUPLES_OK\n> error message = \"\"\n> \n> command = SELECT pg_terminate_backend(pg_backend_pid());\n> result 1 status = PGRES_FATAL_ERROR\n> error message = \"FATAL:  terminating connection due to administrator \n> command\n> \"\n> result 2 status = PGRES_FATAL_ERROR\n> error message = \"FATAL:  terminating connection due to administrator \n> command\n> server closed the connection unexpectedly\n>     This probably means the server terminated abnormally\n>     before or while processing the request.\n> \"\n> \n> command = SELECT 'after';\n> PQsendQuery() error: no connection to the server\n> \n> \n> This is hidden in normal use because PQexec() throws away all but the \n> last result set, but the extra one is still generated internally.\n> \n> Apparently, this has changed between 
PG13 and PG14.  In PG13 and \n> earlier, the output is\n> \n> command = SELECT 'before';\n> result 1 status = PGRES_TUPLES_OK\n> error message = \"\"\n> \n> command = SELECT pg_terminate_backend(pg_backend_pid());\n> result 1 status = PGRES_FATAL_ERROR\n> error message = \"FATAL:  terminating connection due to administrator \n> command\n> \"\n> result 2 status = PGRES_FATAL_ERROR\n> error message = \"server closed the connection unexpectedly\n>     This probably means the server terminated abnormally\n>     before or while processing the request.\n> \"\n> \n> command = SELECT 'after';\n> PQsendQuery() error: no connection to the server\n> \n> In PG13, PQexec() concatenates all the error messages from multiple \n> results, so a user of PQexec() sees the same output before and after. \n> But for users of the lower-level APIs, things have become a bit more \n> confusing.\n> \n> Also, why are there multiple results being generated in the first place?\n> \n> \n> [0]: \n> https://www.postgresql.org/message-id/alpine.DEB.2.22.394.2112230703530.2668598@pseudo\n\n\n\n", "msg_date": "Thu, 3 Feb 2022 15:10:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> Tom, you worked on reorganzing the error message handling in libpq in \n> PostgreSQL 14 (commit ffa2e4670123124b92f037d335a1e844c3782d3f). Any \n> thoughts on this?\n\nAh, sorry, I'd not noticed this thread.\n\nI concur with Fabien's analysis: we report the FATAL message from\nthe server during the first PQgetResult, and then the second call\ndiscovers that the connection is gone and reports \"server closed\nthe connection unexpectedly\". 
Those are two independent events\n(in libpq's view anyway) so reporting them separately is correct,\nor at least necessary unless you want to engage in seriously\nmajor redesign and behavioral changes.\n\nIt is annoying that some of the text is duplicated in the second\nreport, but in the end that's cosmetic, and I'm not sure we can\nimprove it without breaking other things. In particular, we\ncan't just think about what comes back in the PGgetResult calls,\nwe also have to consider what PQerrorMessage(conn) will report\nafterwards. So I don't think that resetting conn->errorMessage\nduring a PQgetResult series would be a good fix.\n\nAn idea that I don't have time to pursue right now is to track\nhow much of conn->errorMessage has been read out by PQgetResult,\nand only report newly-added text in later PQgetResult calls of\na series, while PQerrorMessage would continue to report the\nwhole thing. Buffer resets would occur where they do now.\n\nIt'd probably be necessary to make a user-visible PQgetResult\nthat becomes a wrapper around PQgetResultInternal, because\ninternal callers such as PQexecFinish will need the old behavior,\nor at least some of them will.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 03 Feb 2022 11:04:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "\nHello Tom,\n\n> I concur with Fabien's analysis: we report the FATAL message from\n> the server during the first PQgetResult, and then the second call\n> discovers that the connection is gone and reports \"server closed\n> the connection unexpectedly\". 
Those are two independent events\n> (in libpq's view anyway) so reporting them separately is correct,\n> or at least necessary unless you want to engage in seriously\n> major redesign and behavioral changes.\n>\n> It is annoying that some of the text is duplicated in the second\n> report, but in the end that's cosmetic, and I'm not sure we can\n> improve it without breaking other things. In particular, we\n> can't just think about what comes back in the PGgetResult calls,\n> we also have to consider what PQerrorMessage(conn) will report\n> afterwards. So I don't think that resetting conn->errorMessage\n> during a PQgetResult series would be a good fix.\n>\n> An idea that I don't have time to pursue right now is to track\n> how much of conn->errorMessage has been read out by PQgetResult,\n> and only report newly-added text in later PQgetResult calls of\n> a series, while PQerrorMessage would continue to report the\n> whole thing. Buffer resets would occur where they do now.\n>\n> It'd probably be necessary to make a user-visible PQgetResult\n> that becomes a wrapper around PQgetResultInternal, because\n> internal callers such as PQexecFinish will need the old behavior,\n> or at least some of them will.\n\nI agree that the message reset is not convincing, but it was the only way \nto get the expected behavior without changing the existing functions.\n\nI see two paths:\n\n(1) keep the duplicate message in this corner case.\n\n(2) do the hocus pocus you suggest around PQerrorMessage and PQgetResult \nto work around the duplication, just for this corner case.\n\nI'd tend to choose (1) to keep it simple, even if (2) is feasible.\n\n-- \nFabien.\n\n\n", "msg_date": "Sun, 6 Feb 2022 07:58:06 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "[ cc'ing Alvaro for pipeline questions ]\n\nFabien COELHO <coelho@cri.ensmp.fr> writes:\n>> It is annoying that some of 
the text is duplicated in the second\n>> report, but in the end that's cosmetic, and I'm not sure we can\n>> improve it without breaking other things. In particular, we\n>> can't just think about what comes back in the PGgetResult calls,\n>> we also have to consider what PQerrorMessage(conn) will report\n>> afterwards. So I don't think that resetting conn->errorMessage\n>> during a PQgetResult series would be a good fix.\n>> \n>> An idea that I don't have time to pursue right now is to track\n>> how much of conn->errorMessage has been read out by PQgetResult,\n>> and only report newly-added text in later PQgetResult calls of\n>> a series, while PQerrorMessage would continue to report the\n>> whole thing. Buffer resets would occur where they do now.\n\n> I agree that the message reset is not convincing, but it was the only way \n> to get the expected behavior without changing the existing functions.\n\nI spent a bit more time poking at this issue. I no longer like the\nidea of reporting only the tail part of conn->errorMessage; I'm\nafraid that would convert \"edge cases sometimes return redundant text\"\ninto \"edge cases sometimes miss returning text at all\", which is not\nan improvement.\n\nIn any case, the particular example we're looking at here is connection\nloss, which is not something that should greatly concern us as far as\npipeline semantics are concerned, because you're not getting any more\npipelined results anyway if that happens. In the non-I/O-error case,\nI see that PQgetResult does a hacky \"resetPQExpBuffer(&conn->errorMessage)\"\nin between pipelined queries. It seems like if there are real usability\nissues in this area, they probably come from that not being well placed.\nIn particular, I wonder why that's done with the pqPipelineProcessQueue\ncall in the PGASYNC_IDLE stanza, yet not with the pqPipelineProcessQueue\ncall in the PGASYNC_READY stanza (where we're returning a PIPELINE_SYNC\nresult). 
It kinda looks to me like maybe pqPipelineProcessQueue\nought to have the responsibility for doing that, rather than having\ntwo out of the three call sites do it. Also it would seem more natural\nto associate it with the reinitialization that happens inside\npqPipelineProcessQueue.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Feb 2022 12:07:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "On 2022-Feb-07, Tom Lane wrote:\n\n> In any case, the particular example we're looking at here is connection\n> loss, which is not something that should greatly concern us as far as\n> pipeline semantics are concerned, because you're not getting any more\n> pipelined results anyway if that happens. In the non-I/O-error case,\n> I see that PQgetResult does a hacky \"resetPQExpBuffer(&conn->errorMessage)\"\n> in between pipelined queries. It seems like if there are real usability\n> issues in this area, they probably come from that not being well placed.\n> In particular, I wonder why that's done with the pqPipelineProcessQueue\n> call in the PGASYNC_IDLE stanza, yet not with the pqPipelineProcessQueue\n> call in the PGASYNC_READY stanza (where we're returning a PIPELINE_SYNC\n> result). It kinda looks to me like maybe pqPipelineProcessQueue\n> ought to have the responsibility for doing that, rather than having\n> two out of the three call sites do it. Also it would seem more natural\n> to associate it with the reinitialization that happens inside\n> pqPipelineProcessQueue.\n\nYeah, I agree that the placement of error message reset in pipelined\nlibpq is more ad-hoc to the testing I was doing than carefully\nprincipled. 
I didn't test changing it, but on a quick look your\nproposed change seems reasonable to me.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura\" (Perelandra, C.S. Lewis)\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:00:41 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "I wrote:\n>>> An idea that I don't have time to pursue right now is to track\n>>> how much of conn->errorMessage has been read out by PQgetResult,\n>>> and only report newly-added text in later PQgetResult calls of\n>>> a series, while PQerrorMessage would continue to report the\n>>> whole thing. Buffer resets would occur where they do now.\n\n> I spent a bit more time poking at this issue. I no longer like the\n> idea of reporting only the tail part of conn->errorMessage; I'm\n> afraid that would convert \"edge cases sometimes return redundant text\"\n> into \"edge cases sometimes miss returning text at all\", which is not\n> an improvement.\n\nI figured out what seems like an adequately bulletproof solution to\nthis problem. The thing that was scaring me before is that there are\ncode paths in which we'll discard a pending conn->result PGresult\n(which could be an error) and make a new one; it wasn't entirely\nclear that we could track which result represents what text.\nThe solution to that has two components:\n\n1. Never advance the how-much-text-has-been-reported counter until\nwe are just about to return an error PGresult to the application.\nThis reduces the risk of missing or duplicated text to what seems\nan acceptable level. I found that pqPrepareAsyncResult can be used\nto handle this logic -- it's only called at outer levels, and it\nalready has some state-updating behavior, so we'd already have bugs\nif it were called improperly.\n\n2. 
Don't make error PGresults at all until we reach pqPrepareAsyncResult.\nThis saves some cycles in the paths where we previously replaced one\nerror with another. Importantly, it also means that we can now have\na bulletproof solution to the problem of how to return an error when\nwe get OOM on any attempt to make a PGresult. The attached patch\nincludes a statically-declared PGresult that we can return in that\ncase.\n\nIt's not very practical to do #2 exactly, because in the case of\nan error received from the server, we need to build a PGresult that\nincludes the broken-down error fields. But all internally-generated\nerrors can be handled in this style.\n\nThis seems workable, and you'll notice it fixes the duplicated text\nin the test case Andres was worried about.\n\nNote that this patch is on top of the event-proc changes I posted\nin [1]. Without that change, it's not very clear how to cope\nwith PGEVT_RESULTCREATE failure after we've already updated the\nerror status.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3390587.1645035110%40sss.pgh.pa.us", "msg_date": "Wed, 16 Feb 2022 18:51:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Hi,\n\nOn 2022-02-16 18:51:37 -0500, Tom Lane wrote:\n> This seems workable, and you'll notice it fixes the duplicated text\n> in the test case Andres was worried about.\n\nCool.\n\nI find it mildly scary that we didn't have any other tests verifying the libpq\nside of connection termination. Seems like we we maybe should add a few more?\nSome simple cases we can do via isolationtester. 
But some others would\nprobably require a C test program to be robust...\n\n\n> +\t/* Also, do nothing if the argument is OOM_result */\n> +\tif (res == unconstify(PGresult *, &OOM_result))\n> +\t\treturn;\n\nWouldn't it make more sense to make res const, rather than unconstifying\n&OOM_result?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 16 Feb 2022 17:11:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-16 18:51:37 -0500, Tom Lane wrote:\n>> +\t/* Also, do nothing if the argument is OOM_result */\n>> +\tif (res == unconstify(PGresult *, &OOM_result))\n>> +\t\treturn;\n\n> Wouldn't it make more sense to make res const, rather than unconstifying\n> &OOM_result?\n\nUh ... then we'd have to cast away the const to do free().\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Feb 2022 20:28:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Hi,\n\nOn 2022-02-16 20:28:02 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2022-02-16 18:51:37 -0500, Tom Lane wrote:\n> >> +\t/* Also, do nothing if the argument is OOM_result */\n> >> +\tif (res == unconstify(PGresult *, &OOM_result))\n> >> +\t\treturn;\n>\n> > Wouldn't it make more sense to make res const, rather than unconstifying\n> > &OOM_result?\n>\n> Uh ... 
then we'd have to cast away the const to do free().\n\nI was just thinking of\n\nif ((const PGresult *) res == &OOM_result)\n\nIt's not important, I just find it stylistically nicer (making a pointer const\nfrom an non-const pointer is safe, the other way round not generally).\n\n- Andres\n\n\n", "msg_date": "Wed, 16 Feb 2022 17:53:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-16 20:28:02 -0500, Tom Lane wrote:\n>> Uh ... then we'd have to cast away the const to do free().\n\n> I was just thinking of\n> if ((const PGresult *) res == &OOM_result)\n\nOh, I see. Sure, can do it like that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 16 Feb 2022 21:12:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> On 2022-Feb-07, Tom Lane wrote:\n>> I see that PQgetResult does a hacky \"resetPQExpBuffer(&conn->errorMessage)\"\n>> in between pipelined queries. It seems like if there are real usability\n>> issues in this area, they probably come from that not being well placed.\n>> In particular, I wonder why that's done with the pqPipelineProcessQueue\n>> call in the PGASYNC_IDLE stanza, yet not with the pqPipelineProcessQueue\n>> call in the PGASYNC_READY stanza (where we're returning a PIPELINE_SYNC\n>> result). It kinda looks to me like maybe pqPipelineProcessQueue\n>> ought to have the responsibility for doing that, rather than having\n>> two out of the three call sites do it. 
Also it would seem more natural\n>> to associate it with the reinitialization that happens inside\n>> pqPipelineProcessQueue.\n\n> Yeah, I agree that the placement of error message reset in pipelined\n> libpq is more ad-hoc to the testing I was doing than carefully\n> principled. I didn't test changing it, but on a quick look your\n> proposed change seems reasonable to me.\n\nHere's a patch to do that. It passes check-world (particularly\nsrc/test/modules/libpq_pipeline), for whatever that's worth.\n\nHowever ... in the wake of 618c16707, I wonder if we should consider\nan alternative definition, which is to NOT clear errorMessage when\nstarting a new pipelined query. (That would be this patch minus\nthe addition to pqPipelineProcessQueue.) Thanks to 618c16707,\nthe returned error PGresult(s) should bear exactly the same contents\neither way. What would change is that PQerrorMessage(conn) would return\nthe concatenation of all errors that have occurred during the current\npipeline sequence. I think that's arguably a more useful definition\nthan what we have now --- though obviously it'd require a docs change.\nIt seems to fit with the spirit of the v14 changes to ensure that\nPQerrorMessage tells you about everything that happened during a\nfailed connection attempt, not only the last thing.\n\nI've tested that variant, and it also passes check-world, which may\nsay something about libpq_pipeline's coverage of error cases ...\nI didn't look closely.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 20 Feb 2022 15:26:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "I wrote:\n> However ... in the wake of 618c16707, I wonder if we should consider\n> an alternative definition, which is to NOT clear errorMessage when\n> starting a new pipelined query. (That would be this patch minus\n> the addition to pqPipelineProcessQueue.) 
Thanks to 618c16707,\n> the returned error PGresult(s) should bear exactly the same contents\n> either way. What would change is that PQerrorMessage(conn) would return\n> the concatenation of all errors that have occurred during the current\n> pipeline sequence. I think that's arguably a more useful definition\n> than what we have now --- though obviously it'd require a docs change.\n> It seems to fit with the spirit of the v14 changes to ensure that\n> PQerrorMessage tells you about everything that happened during a\n> failed connection attempt, not only the last thing.\n\nAfter studying the pipeline mode some more, I no longer think that's\na great idea. ISTM we want pipeline mode to act as nearly as possible\nthe same as issuing the same series of queries non-pipelined. But\nthis redefinition would make it different. So, barring objection,\nI'll push that cleanup patch as-posted.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Feb 2022 18:15:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Actually ... it now seems to me that there's live bugs in the\ninteraction between pipeline mode and error-buffer clearing.\nNamely, that places like PQsendQueryStart will unconditionally\nclear the buffer even though we may be in the middle of a pipeline\nsequence, and the buffer might hold an error that's yet to be\nreported to the application. So I think we need something\nmore like the attached.\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 24 Feb 2022 12:20:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "I took another look at this. 
The output from the test program at the \nbeginning of the thread is now (master branch):\n\ncommand = SELECT 'before';\nresult 1 status = PGRES_TUPLES_OK\nerror message = \"\"\n\ncommand = SELECT pg_terminate_backend(pg_backend_pid());\nresult 1 status = PGRES_FATAL_ERROR\nerror message = \"FATAL: terminating connection due to administrator command\n\"\nresult 2 status = PGRES_FATAL_ERROR\nerror message = \"server closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\"\n\ncommand = SELECT 'after';\nPQsendQuery() error: FATAL: terminating connection due to administrator \ncommand\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\nno connection to the server\n\n\nIt appears the \"after\" query is getting the error message from the \nprevious cycle somehow.\n\n\nThe output in PG14 and PG13 is:\n\ncommand = SELECT 'after';\nPQsendQuery() error: no connection to the server\n\n\nIs the change intended or do we need to think about more tweaking?\n\n\n", "msg_date": "Fri, 6 May 2022 15:49:28 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I took another look at this. 
The output from the test program at the \n> beginning of the thread is now (master branch):\n> ...\n> command = SELECT 'after';\n> PQsendQuery() error: FATAL: terminating connection due to administrator \n> command\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> no connection to the server\n\n> It appears the \"after\" query is getting the error message from the \n> previous cycle somehow.\n\nWhat is happening is that this bit in PQsendQueryStart is deciding\nnot to clear the message buffer because conn->cmd_queue_head isn't\nNULL:\n\n /*\n * If this is the beginning of a query cycle, reset the error state.\n * However, in pipeline mode with something already queued, the error\n * buffer belongs to that command and we shouldn't clear it.\n */\n if (newQuery && conn->cmd_queue_head == NULL)\n pqClearConnErrorState(conn);\n\nSo apparently the fault is with the pipelined-query logic. I'm\nnot sure what cleanup step is missing though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 May 2022 10:28:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" }, { "msg_contents": "I wrote:\n> What is happening is that this bit in PQsendQueryStart is deciding\n> not to clear the message buffer because conn->cmd_queue_head isn't\n> NULL:\n\n> /*\n> * If this is the beginning of a query cycle, reset the error state.\n> * However, in pipeline mode with something already queued, the error\n> * buffer belongs to that command and we shouldn't clear it.\n> */\n> if (newQuery && conn->cmd_queue_head == NULL)\n> pqClearConnErrorState(conn);\n\n> So apparently the fault is with the pipelined-query logic. I'm\n> not sure what cleanup step is missing though.\n\nI experimented with the attached patch, which is based on the idea\nthat clearing the command queue is being done in entirely the wrong\nplace. 
It's more appropriate to do it where we drop unsent output\ndata, ie in pqDropConnection not pqDropServerData. The fact that\nit was jammed into the latter without any commentary doesn't do much\nto convince me that that placement was thought about carefully.\n\nThis fixes the complained-of symptom and still passes check-world.\nHowever, I wonder whether cutting the queue down to completely empty\nis the right thing. Why is the queue set up like this in the first\nplace --- that is, why don't we remove an entry the moment the\ncommand is sent? I also wonder whether pipelineStatus ought to\nget reset here. We aren't resetting asyncStatus here, so maybe it's\nfine, but I'm not quite sure.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 06 May 2022 11:47:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: libpq async duplicate error results" } ]
[ { "msg_contents": "Hi, hackers\n\nACL lists in tables may potentially be large in size. I suggest adding TOAST support for system tables, namely pg_class, pg_attribute and pg_largeobject_metadata, for they include ACL columns.\n\nIn commit 96cdeae0 these tables were missed because of possible conflicts with VACUUM FULL and problem with seeing a non-empty new cluster by pg_upgrade. Given patch fixes both expected bugs. Also patch adds a new regress test for checking TOAST properly working with ACL attributes. Suggested feature has been used in Postgres Pro Enterprise for a long time, and it helps with large ACL's.\n \nThe VACUUM FULL bug is detailed here:\n \nhttp://postgr.es/m/CAPpHfdu3PJUzHpQrvJ5RC9bEX_bZ6LwX52kBpb0EiD_9e3Np5g%40mail.gmail.com\n \nLooking forward to your comments\n \n--\nSofia Kopikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company", "msg_date": "Tue, 25 Jan 2022 15:04:24 +0300", "msg_from": "=?UTF-8?B?U29maWEgS29waWtvdmE=?= <s.kopikova@postgrespro.ru>", "msg_from_op": true, "msg_subject": "\n =?UTF-8?B?W1BBVENIXSBBZGQgVE9BU1Qgc3VwcG9ydCBmb3Igc2V2ZXJhbCBzeXN0ZW0g?=\n =?UTF-8?B?dGFibGVz?=" }, { "msg_contents": "=?UTF-8?B?U29maWEgS29waWtvdmE=?= <s.kopikova@postgrespro.ru> writes:\n> ACL lists in tables may potentially be large in size. I suggest adding TOAST support for system tables, namely pg_class, pg_attribute and pg_largeobject_metadata, for they include ACL columns.\n\nTBH, the idea of adding a toast table to pg_class scares the daylights\nout of me. 
I do not for one minute believe that you've fixed every\nproblem that will cause, nor do I think \"allow wider ACLs\" is a\ncompelling enough reason to take the risk.\n\nI wonder if it'd be practical to move the ACLs for relations\nand attributes into some new table, indexed like pg_depend or\npg_description, so that we could dodge that whole problem.\npg_attrdef is a prototype for how this could work.\n\n(Obviously, once we had such a table the ACLs for other things\ncould be put in it, but I'm not sure that the ensuing breakage\nwould be justified for other sorts of objects.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Feb 2022 18:08:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add TOAST support for several system tables" }, { "msg_contents": "On 2022-02-28 18:08:48 -0500, Tom Lane wrote:\n> Sofia Kopikova <s.kopikova@postgrespro.ru> writes:\n> > ACL lists in tables may potentially be large in size. I suggest adding TOAST support for system tables, namely pg_class, pg_attribute and pg_largeobject_metadata, for they include ACL columns.\n> \n> TBH, the idea of adding a toast table to pg_class scares the daylights\n> out of me. 
I do not for one minute believe that you've fixed every\n> problem that will cause, nor do I think \"allow wider ACLs\" is a\n> compelling enough reason to take the risk.\n> \n> I wonder if it'd be practical to move the ACLs for relations\n> and attributes into some new table, indexed like pg_depend or\n> pg_description, so that we could dodge that whole problem.\n> pg_attrdef is a prototype for how this could work.\n> \n> (Obviously, once we had such a table the ACLs for other things\n> could be put in it, but I'm not sure that the ensuing breakage\n> would be justified for other sorts of objects.)\n\nAs there has been no progress since this email, I'm marking this CF entry as\nreturned with feedback for now.\n\n- Andres\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:09:47 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add TOAST support for several system tables" } ]
[ { "msg_contents": "Hello,\n\nI have come across some missing documentation that I think could benefit\nthe community.\n\nSeveral functions like `jsonb_exists`, `jsonb_exists_any`,\n`jsonb_exists_all` have existed for many PG versions but were not\ndocumented. They are equivalent to `?`, `?|`, and `?&` operators. But some\nJDBC drivers have issues with native queries containing these operators\n(see\nhttps://stackoverflow.com/questions/38370972/how-do-i-use-postgresql-jsonb-operators-containing-a-question-mark-via-jdb),\nso it is useful for users of PG to know the function equivalents of these\noperators.\n\nI have attached the patch as an attachment to this email. The documentation\nbuilds correctly without any lint errors after applying the patch locally.\nThis is my first time contributing, so let me know if there is anything\nelse I should do (add to commitfest etc).\n\nCheers!\nMikhail Dobrinin", "msg_date": "Tue, 25 Jan 2022 16:38:05 -0600", "msg_from": "Mikhail Dobrinin <mvdobrinin@gmail.com>", "msg_from_op": true, "msg_subject": "JSONB docs patch" }, { "msg_contents": "On Tue, Jan 25, 2022 at 3:38 PM Mikhail Dobrinin <mvdobrinin@gmail.com>\nwrote:\n\n> Hello,\n>\n> I have come across some missing documentation that I think could benefit\n> the community.\n>\n> Several functions like `jsonb_exists`, `jsonb_exists_any`,\n> `jsonb_exists_all` have existed for many PG versions but were not\n> documented. They are equivalent to `?`, `?|`, and `?&` operators. But some\n> JDBC drivers have issues with native queries containing these operators\n> (see\n> https://stackoverflow.com/questions/38370972/how-do-i-use-postgresql-jsonb-operators-containing-a-question-mark-via-jdb),\n> so it is useful for users of PG to know the function equivalents of these\n> operators.\n>\n> I have attached the patch as an attachment to this email. The\n> documentation builds correctly without any lint errors after applying\n> the patch locally. 
This is my first time contributing, so let me know if\n> there is anything else I should do (add to commitfest etc).\n>\n>\nI'm doubtful that encouraging use of these functions for JDBC-users is\nbetter than them learning to write queries using the proper operator. The\nreality is that the usage of indexes depends upon operators being used in\nquery text, not function names (unless you define a functional index, which\ndoesn't happen). Your SO post says as much and does mention that ?? is\nindeed the coding that is required.\n\nWhat I think we should do in light of this reality, though, is indeed\nprohibit \"??\" as (or within) an operator in PostgreSQL. Since core is not\npresently using that operator its prohibition should be reasonably simple -\nthough maybe extension authors got too creative?\n\n-1 to this patch on the above grounds. As for the patch itself:\nThe parentheticals you wrote might be appropriate for a commit message but\ndo not belong in the documentation. Mentioning JDBC is simply a no-no; and\nwe don't document \"why\" we decided to document something. We also don't go\naround pointing out what functions and operators perform the same behavior\n(mostly because we generally just don't do that, see above).\n\nI didn't actually review the material parts of the table. Nothing seems\nobviously incorrect there though.\n\nDavid J.", "msg_date": "Tue, 25 Jan 2022 16:08:12 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: JSONB docs patch" }, { "msg_contents": "\nOn 1/25/22 18:08, David G. 
Johnston wrote:\n> On Tue, Jan 25, 2022 at 3:38 PM Mikhail Dobrinin\n> <mvdobrinin@gmail.com> wrote:\n>\n> Hello,\n>\n> I have come across some missing documentation that I think could\n> benefit the community.\n>\n> Several functions like `jsonb_exists`, `jsonb_exists_any`,\n> `jsonb_exists_all` have existed for many PG versions but were not\n> documented. They are equivalent to `?`, `?|`, and `?&` operators.\n> But some JDBC drivers have issues with native queries containing\n> these operators (see\n> https://stackoverflow.com/questions/38370972/how-do-i-use-postgresql-jsonb-operators-containing-a-question-mark-via-jdb),\n> so it is useful for users of PG to know the function equivalents\n> of these operators.\n>\n> I have attached the patch as an attachment to this email. The\n> documentation builds correctly without any lint errors after\n> applying the patch locally. This is my first time contributing, so\n> let me know if there is anything else I should do (add to\n> commitfest etc).\n>\n>\n> I'm doubtful that encouraging use of these functions for JDBC-users is\n> better than them learning to write queries using the proper operator. \n> The reality is that the usage of indexes depends upon operators being\n> used in query text, not function names (unless you define a functional\n> index, which doesn't happen).  Your SO post says as much and does\n> mention that ?? is indeed the coding that is required.\n>\n> What I think we should do in light of this reality, though, is indeed\n> prohibit \"??\" as (or within) an operator in PostgreSQL.  Since core is\n> not presently using that operator its prohibition should be reasonably\n> simple - though maybe extension authors got too creative?\n>\n> -1 to this patch on the above grounds.  As for the patch itself:\n> The parentheticals you wrote might be appropriate for a commit message\n> but do not belong in the documentation.  
Mentioning JDBC is simply a\n> no-no; and we don't document \"why\" we decided to document something.\n> We also don't go around pointing out what functions and operators\n> perform the same behavior (mostly because we generally just don't do\n> that, see above).\n>\n> I didn't actually review the material parts of the table.  Nothing\n> seems obviously incorrect there though.\n>\n>\n\nYeah. The omission from the docs is not accidental - we generally don't\ndocument the functions underlying operators.\n\nThese jsonb operators are not the only ones with '?' in the name -\nthere's a whole heap of them for geometric types. Are we supposed to\ndocument all those too?\n\nI feel the pain that JDBC users have here. It's a pity there are no\nalternative placeholder mechanisms in JDBC. Perl's DBD::Pg, which also\nhas '?' placeholders, provides a couple of quite workable ways around\nthe issue.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 26 Jan 2022 09:49:10 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: JSONB docs patch" } ]
[ { "msg_contents": "Hello pgsql-hackers,\n\nWhen dropping a partitioned index, the locking order starts with the\npartitioned table, then its partitioned index, then the partition\nindexes dependent on that partitioned index, and finally the dependent\npartition indexes' parent tables. This order allows a deadlock\nscenario to happen if for example an ANALYZE happens on one of the\npartition tables which locks the partition table and then blocks when\nit attempts to lock its index (the DROP INDEX has the index lock but\ncannot get the partition table lock).\n\nDeadlock Reproduction\n==================================================\nInitial setup:\nCREATE TABLE foopart (a int) PARTITION BY RANGE (a);\nCREATE TABLE foopart_child PARTITION OF foopart FOR VALUES FROM (1) TO (5);\nCREATE INDEX foopart_idx ON foopart USING btree(a);\n\nAttach debugger to session 1 and put breakpoint on function deleteObjectsInList:\n1: DROP INDEX foopart_idx;\n\nWhile session 1 is blocked by debugger breakpoint, run the following:\n2: ANALYZE foopart_child;\n\nAfter a couple seconds, detach the debugger from session 1 and see deadlock.\n\nIn order to prevent this deadlock scenario, we should find and lock\nall the sub-partition and partition tables beneath the partitioned\nindex's partitioned table before attempting to lock the sub-partition\nand partition indexes.\n\nAttached is a patch (+isolation test) which fixes the issue. 
We\nobserved this on latest head of Postgres.\n\nRegards,\nJimmy Yih and Gaurab Dey", "msg_date": "Tue, 25 Jan 2022 22:45:42 +0000", "msg_from": "Jimmy Yih <jyih@vmware.com>", "msg_from_op": true, "msg_subject": "Concurrent deadlock scenario with DROP INDEX on partitioned index" }, { "msg_contents": "Jimmy Yih <jyih@vmware.com> writes:\n> When dropping a partitioned index, the locking order starts with the\n> partitioned table, then its partitioned index, then the partition\n> indexes dependent on that partitioned index, and finally the dependent\n> partition indexes' parent tables. This order allows a deadlock\n> scenario to happen if for example an ANALYZE happens on one of the\n> partition tables which locks the partition table and then blocks when\n> it attempts to lock its index (the DROP INDEX has the index lock but\n> cannot get the partition table lock).\n\nI agree this is a bug, and I think that you may have the right\ngeneral idea about how to fix it: acquire the necessary locks in\nRangeVarCallbackForDropRelation. However, this patch still needs\na fair amount of work.\n\n1. RangeVarCallbackForDropRelation can get called more than once\nduring a lookup (in case of concurrent rename and suchlike).\nIf so it needs to be prepared to drop the lock(s) it got last time.\nYou have not implemented any such logic. This doesn't seem hard\nto fix, just store the OID list into struct DropRelationCallbackState.\n\n2. I'm not sure you have fully thought through the interaction\nwith the subsequent \"is_partition\" stanza. If the target is\nan intermediate partitioned index, don't we need to acquire lock\non its parent index before starting to lock children? (It may\nbe that that stanza is already in the wrong place relative to\nthe table-locking stanza it currently follows, but not sure.)\n\n3. 
Calling find_all_inheritors with lockmode NoLock, and then\nlocking those relation OIDs later, is both unsafe and code-wasteful.\nJust tell find_all_inheritors to get the locks you want.\n\n4. This code will invoke find_all_inheritors on InvalidOid if\nIndexGetRelation fails. It needs to be within the if-test\njust above.\n\n5. Reading classform again at this point in the function is\nnot merely inefficient, but outright wrong, because we\nalready released the syscache entry. Use the local variable.\n\n6. You've not paid enough attention to updating existing comments,\nparticularly the header comment for RangeVarCallbackForDropRelation.\n\nActually though, maybe you *don't* want to do this in\nRangeVarCallbackForDropRelation. Because of point 2, it might be\nbetter to run find_all_inheritors after we've successfully\nidentified and locked the direct target relation, ie do it back\nin RemoveRelations. I've not thought hard about that, but it's\nattractive if only because it'd mean you don't have to fix point 1.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Mar 2022 18:54:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Concurrent deadlock scenario with DROP INDEX on partitioned index" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> 1. RangeVarCallbackForDropRelation can get called more than once\n> during a lookup (in case of concurrent rename and suchlike).\n> If so it needs to be prepared to drop the lock(s) it got last time.\n> You have not implemented any such logic. This doesn't seem hard\n> to fix, just store the OID list into struct DropRelationCallbackState.\n\nFixed in attached patch. We added the OID list to the\nDropRelationCallbackState as you suggested.\n\n> 2. I'm not sure you have fully thought through the interaction\n> with the subsequent \"is_partition\" stanza. 
If the target is\n> an intermediate partitioned index, don't we need to acquire lock\n> on its parent index before starting to lock children? (It may\n> be that that stanza is already in the wrong place relative to\n> the table-locking stanza it currently follows, but not sure.)\n\nIt's not required to acquire lock on a possible parent index because\nof the restriction where we can only run DROP INDEX on the top-most\npartitioned index.\n\n> 3. Calling find_all_inheritors with lockmode NoLock, and then\n> locking those relation OIDs later, is both unsafe and code-wasteful.\n> Just tell find_all_inheritors to get the locks you want.\n\nFixed in attached patch.\n\n> 4. This code will invoke find_all_inheritors on InvalidOid if\n> IndexGetRelation fails. It needs to be within the if-test\n> just above.\n\nFixed in attached patch.\n\n> 5. Reading classform again at this point in the function is\n> not merely inefficient, but outright wrong, because we\n> already released the syscache entry. Use the local variable.\n\nFixed in attached patch. Added another local variable\nis_partitioned_index to store the classform value. The main reason we\nneed the classform is because the existing relkind and\nexpected_relkind local variables would only show RELKIND_INDEX whereas\nwe needed exactly RELKIND_PARTITIONED_INDEX.\n\n> 6. You've not paid enough attention to updating existing comments,\n> particularly the header comment for RangeVarCallbackForDropRelation.\n\nFixed in attached patch. Updated the header comment and aggregated our\nintroduced comment to another relative comment slightly above the\nproposed locking section.\n\n> Actually though, maybe you *don't* want to do this in\n> RangeVarCallbackForDropRelation. Because of point 2, it might be\n> better to run find_all_inheritors after we've successfully\n> identified and locked the direct target relation, ie do it back\n> in RemoveRelations. 
I've not thought hard about that, but it's\n> attractive if only because it'd mean you don't have to fix point 1.\n\nWe think that RangeVarCallbackForDropRelation is probably the most\ncorrect function to place the fix. It would look a bit out-of-place\nbeing in RemoveRelations seeing how there's already relative DROP\nINDEX code in RangeVarCallbackForDropRelation. With point 2 explained\nand point 1 addressed, the fix seems to look okay in\nRangeVarCallbackForDropRelation.\n\nThanks for the feedback. Attached new patch with feedback addressed.\n\n--\nJimmy Yih (VMware)\nGaurab Dey (VMware)", "msg_date": "Wed, 16 Mar 2022 18:20:10 +0000", "msg_from": "Jimmy Yih <jyih@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Concurrent deadlock scenario with DROP INDEX on partitioned index" }, { "msg_contents": "On Wed, Mar 16, 2022 at 11:20 AM Jimmy Yih <jyih@vmware.com> wrote:\n\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > 1. RangeVarCallbackForDropRelation can get called more than once\n> > during a lookup (in case of concurrent rename and suchlike).\n> > If so it needs to be prepared to drop the lock(s) it got last time.\n> > You have not implemented any such logic. This doesn't seem hard\n> > to fix, just store the OID list into struct DropRelationCallbackState.\n>\n> Fixed in attached patch. We added the OID list to the\n> DropRelationCallbackState as you suggested.\n>\n> > 2. I'm not sure you have fully thought through the interaction\n> > with the subsequent \"is_partition\" stanza. If the target is\n> > an intermediate partitioned index, don't we need to acquire lock\n> > on its parent index before starting to lock children? (It may\n> > be that that stanza is already in the wrong place relative to\n> > the table-locking stanza it currently follows, but not sure.)\n>\n> It's not required to acquire lock on a possible parent index because\n> of the restriction where we can only run DROP INDEX on the top-most\n> partitioned index.\n>\n> > 3. 
Calling find_all_inheritors with lockmode NoLock, and then\n> > locking those relation OIDs later, is both unsafe and code-wasteful.\n> > Just tell find_all_inheritors to get the locks you want.\n>\n> Fixed in attached patch.\n>\n> > 4. This code will invoke find_all_inheritors on InvalidOid if\n> > IndexGetRelation fails. It needs to be within the if-test\n> > just above.\n>\n> Fixed in attached patch.\n>\n> > 5. Reading classform again at this point in the function is\n> > not merely inefficient, but outright wrong, because we\n> > already released the syscache entry. Use the local variable.\n>\n> Fixed in attached patch. Added another local variable\n> is_partitioned_index to store the classform value. The main reason we\n> need the classform is because the existing relkind and\n> expected_relkind local variables would only show RELKIND_INDEX whereas\n> we needed exactly RELKIND_PARTITIONED_INDEX.\n>\n> > 6. You've not paid enough attention to updating existing comments,\n> > particularly the header comment for RangeVarCallbackForDropRelation.\n>\n> Fixed in attached patch. Updated the header comment and aggregated our\n> introduced comment to another relative comment slightly above the\n> proposed locking section.\n>\n> > Actually though, maybe you *don't* want to do this in\n> > RangeVarCallbackForDropRelation. Because of point 2, it might be\n> > better to run find_all_inheritors after we've successfully\n> > identified and locked the direct target relation, ie do it back\n> > in RemoveRelations. I've not thought hard about that, but it's\n> > attractive if only because it'd mean you don't have to fix point 1.\n>\n> We think that RangeVarCallbackForDropRelation is probably the most\n> correct function to place the fix. It would look a bit out-of-place\n> being in RemoveRelations seeing how there's already relative DROP\n> INDEX code in RangeVarCallbackForDropRelation. 
With point 2 explained\n> and point 1 addressed, the fix seems to look okay in\n> RangeVarCallbackForDropRelation.\n>\n> Thanks for the feedback. Attached new patch with feedback addressed.\n>\n> --\n> Jimmy Yih (VMware)\n> Gaurab Dey (VMware)\n\n\nHi,\nFor RangeVarCallbackForDropRelation():\n\n+ LockRelationOid(indexRelationOid, heap_lockmode);\n\nSince the above is called for both if and else blocks, it can be lifted\noutside.\n\nCheers", "msg_date": "Wed, 16 Mar 2022 12:48:09 -0700", "msg_from": "Zhihong Yu <zyu@yugabyte.com>", "msg_from_op": false, "msg_subject": "Re: Concurrent deadlock scenario with DROP INDEX on partitioned index" }, { "msg_contents": "Zhihong Yu <zyu@yugabyte.com> wrote:\n> Hi,\n> For RangeVarCallbackForDropRelation():\n>\n> + LockRelationOid(indexRelationOid, heap_lockmode);\n>\n> Since the above is called for both if and else blocks, it can be lifted outside.\n\nThanks for the feedback. Attached new v3 patch with feedback addressed.\n\n--\nJimmy Yih (VMware)\nGaurab Dey (VMware)", "msg_date": "Wed, 16 Mar 2022 20:43:12 +0000", "msg_from": "Jimmy Yih <jyih@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Concurrent deadlock scenario with DROP INDEX on partitioned index" }, { "msg_contents": "Jimmy Yih <jyih@vmware.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Actually though, maybe you *don't* want to do this in\n>> RangeVarCallbackForDropRelation. Because of point 2, it might be\n>> better to run find_all_inheritors after we've successfully\n>> identified and locked the direct target relation, ie do it back\n>> in RemoveRelations. I've not thought hard about that, but it's\n>> attractive if only because it'd mean you don't have to fix point 1.\n\n> We think that RangeVarCallbackForDropRelation is probably the most\n> correct function to place the fix. It would look a bit out-of-place\n> being in RemoveRelations seeing how there's already relative DROP\n> INDEX code in RangeVarCallbackForDropRelation.\n\nI really think you made the wrong choice here. Doing the locking in\nRemoveRelations leads to an extremely simple patch, as I demonstrate\nbelow. 
Moreover, since RemoveRelations also has special-case code\nfor partitioned indexes, it's hard to argue that it mustn't cover\nthis case too.\n\nAlso, I think the proposed test case isn't very good, because when\nI run it without applying the code patch, it fails to demonstrate\nany deadlock. The query output is different, but not obviously\nwrong.\n\n> Fixed in attached patch. Added another local variable\n> is_partitioned_index to store the classform value. The main reason we\n> need the classform is because the existing relkind and\n> expected_relkind local variables would only show RELKIND_INDEX whereas\n> we needed exactly RELKIND_PARTITIONED_INDEX.\n\nYeah. As I looked at that I realized that it was totally confusing:\nat least one previous author thought that \"relkind\" stored the rel's\nactual relkind, which it doesn't as the code stands. In particular,\nin this bit:\n\n if ((relkind == RELKIND_INDEX || relkind == RELKIND_PARTITIONED_INDEX) &&\n relOid != oldRelOid)\n {\n state->heapOid = IndexGetRelation(relOid, true);\n\nthe test for relkind == RELKIND_PARTITIONED_INDEX is dead code\nbecause relkind will never be that. It accidentally works anyway\nbecause the other half of the || does the right thing, but somebody\nwas confused, and so will readers be.\n\nHence, I propose the attached. 0001 is pure refactoring: it hopefully\nclears up the confusion about which \"relkind\" is which, and it also\nsaves a couple of redundant syscache fetches in RemoveRelations.\nThen 0002 adds the actual bug fix as well as a test case that does\ndeadlock with unpatched code.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 20 Mar 2022 14:39:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Concurrent deadlock scenario with DROP INDEX on partitioned index" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hence, I propose the attached. 
0001 is pure refactoring: it hopefully\n> clears up the confusion about which \"relkind\" is which, and it also\n> saves a couple of redundant syscache fetches in RemoveRelations.\n> Then 0002 adds the actual bug fix as well as a test case that does\n> deadlock with unpatched code.\n\nThe proposed patches look great and make much more sense. I see you've\nalready squashed and committed in\n7b6ec86532c2ca585d671239bba867fe380448ed. Thanks!\n\n--\nJimmy Yih (VMware)\nGaurab Dey (VMware)\n\n", "msg_date": "Mon, 21 Mar 2022 22:57:11 +0000", "msg_from": "Jimmy Yih <jyih@vmware.com>", "msg_from_op": true, "msg_subject": "Re: Concurrent deadlock scenario with DROP INDEX on partitioned index" } ]
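The fix discussed in the thread above hinges on DROP INDEX taking locks on the whole partitioned-index tree in one deterministic, parent-first pass — the ordering that `find_all_inheritors` provides when it is told to acquire the locks itself. As a hedged illustration, here is a toy Python model of that ordering property (this is not PostgreSQL code, and the index names are invented for the example):

```python
from collections import deque

def find_all_inheritors(root, children):
    """Return root plus all descendants, parents before children.

    Hypothetical stand-in for PostgreSQL's find_all_inheritors();
    each relation would be locked at the point it is appended,
    so the root is always locked before any of its partitions.
    """
    order = []
    queue = deque([root])
    seen = {root}
    while queue:
        rel = queue.popleft()
        order.append(rel)  # "lock" taken here, parent-first
        for child in children.get(rel, ()):
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return order

# A two-level partitioned index tree: root -> two intermediate
# partitioned indexes -> one leaf partition index each.
tree = {
    "idx_root": ["idx_p1", "idx_p2"],
    "idx_p1": ["idx_p1_leaf"],
    "idx_p2": ["idx_p2_leaf"],
}

print(find_all_inheritors("idx_root", tree))
# -> ['idx_root', 'idx_p1', 'idx_p2', 'idx_p1_leaf', 'idx_p2_leaf']
```

Because every session that touches the tree acquires locks in this same parent-first order, two concurrent DROP INDEX-style operations cannot end up waiting on each other in a cycle, which is the deadlock the patch set out to remove.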
[ { "msg_contents": "Hi,\n\nI don't know what's going on here yet, but I noticed a $SUBJECT about\nhalf the time on my machine eelpout, starting 2-3 weeks ago. It fails\nlike:\n\nBailout called. Further testing stopped: system initdb failed\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=eelpout&br=REL_14_STABLE\n\nUnfortunately when I 'check' in\nsrc/test/modules/ssl_passphrase_callback on that host it doesn't\nfail, there doesn't seem to be anything special about that particular\ninitdb invocation, no commit is jumping out at me in that time window,\nand the BF logs don't show any initdb output...\n\n\n", "msg_date": "Wed, 26 Jan 2022 12:23:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "ssl_passphrase_callback failure in REL_14_STABLE" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> I don't know what's going on here yet, but I noticed a $SUBJECT about\n> half the time on my machine eelpout, starting 2-3 weeks ago. It fails\n> like:\n\n> Bailout called. Further testing stopped: system initdb failed\n\n> https://buildfarm.postgresql.org/cgi-bin/show_history.pl?nm=eelpout&br=REL_14_STABLE\n\n> Unfortunately when I 'check' in\n> src/test/modules/ssl_passphrase_callback on that host it doesn't\n> fail, there doesn't seem to be anything special about that particular\n> initdb invocation, no commit is jumping out at me in that time window,\n> and the BF logs don't show any initdb output...\n\nAny possibility that it's out-of-disk-space?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 25 Jan 2022 18:59:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ssl_passphrase_callback failure in REL_14_STABLE" }, { "msg_contents": "On Wed, Jan 26, 2022 at 12:59 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Any possibility that it's out-of-disk-space?\n\nErm, yeah that's embarrassing, it could well be that. 
I've just given\nit some more room.\n\n\n", "msg_date": "Wed, 26 Jan 2022 13:25:02 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: ssl_passphrase_callback failure in REL_14_STABLE" } ]
[ { "msg_contents": "Hi,\n\nIn a thread about sequences and sync replication [1], I've explained \nthat the issue we're observing is due to not waiting for WAL at commit \nif the transaction only did nextval(). In which case we don't flush WAL \nin RecordTransactionCommit, we don't wait for sync replica, etc. The WAL \nmay get lost in case of crash, etc.\n\nAs I explained in the other thread, there are various other cases where \na transaction generates WAL but does not have XID, which is sufficient \nfor not flushing/waiting at transaction commit. Some of those cases are \nprobably fine (e.g. there are comments explaining why this is fine for \nPRUNE record).\n\nBut other cases (discovered by running regression tests with extra \nlogging) looked a bit suspicious - particularly those that write \nmultiple WAL messages, because what if we lose just some of those?\n\nSo I looked at two cases related to BRIN, mostly because those were \nfairly simple, and I think at least the brin_summarize_range() is \nsomewhat broken.\n\n\n1) brin_desummarize_range()\n\nThis is pretty simple, because this function generates a single WAL \nrecord, without waiting for it to be flushed:\n\nDESUMMARIZE pagesPerRange 1, heapBlk 0, page offset 9, blkref #0: ...\n\nBut if the cluster/VM/... crashes right after you ran the function (and \nit completed just fine, possibly even in an explicit transaciton), that \nchange will get lost. Not really a serious data corruption/loss, and you \ncan simply run it again, but IMHO rather surprising.\n\nOf course, most people are unlikely to run brin_desummarize_range() very \noften, so maybe it's acceptable? But of course - if we expect this to be \nvery rare operation, why skip the WAL at all?\n\n\n2) brin_summarize_range()\n\nNow, the issue I think is more serious, more likely to happen, and \nharder to fix. 
When summarizing a range, we write two WAL records:\n\nINSERT heapBlk 2 pagesPerRange 2 offnum 2, blkref #0: rel 1663/63 ...\nSAMEPAGE_UPDATE offnum 2, blkref #0: rel 1663/63341/73957 blk 2\n\nSo, what happens if we lost the second WAL record, e.g. due to a crash? \nTo experiment with this, I wrote a trivial patch (attached) that allows \ncrashing on WAL message of certain type by simply setting a GUC.\n\nNow, consider this example:\n\n create table t (a int);\n insert into t select i from generate_series(1,5000) s(i);\n create index on t using brin (a);\n select brin_desummarize_range('t_a_idx', 1);\n\n set crash_on_wal_message = 'SAMEPAGE_UPDATE';\n\n select brin_summarize_range('t_a_idx', 5);\n\n PANIC: crashing before 'SAMEPAGE_UPDATE' WAL message\n server closed the connection unexpectedly\n ...\n\nAfter recovery, this is what we have:\n\n select * from brin_page_items(get_Raw_page('t_a_idx', 2), 't_a_idx');\n\n ... | allnulls | hasnulls | placeholder | value\n ... -+----------+----------+-------------+-------\n ... | t | f | t |\n (1 row)\n\nSo the BRIN tuple is still marked as placeholder, which is a problem \nbecause that means we'll always consider it as matching, making the \nbitmap index scan less efficient. And we'll *never* fix this, because \njust summarizing the range does nothing:\n\n select brin_summarize_range('t_a_idx', 5);\n brin_summarize_range\n ----------------------\n 0\n (1 row)\n\nSo it's still marked as placeholder, and to fix it you have to \nexplicitly desummarize the range first.\n\nThe reason for this seems obvious - only the process that created the \nplaceholder tuple is expected to mark it as \"placeholder=false\", but \nthis is described as two WAL records. And if we lose the update, the \ntuple will stay marked as a placeholder forever.\n\nOf course, this requires a crash while something is summarizing ranges. 
\nBut consider the summarization is often done by autovacuum, so it's not \njust about hitting this from manually-executed brin_summarize_range.\n\nI'm not quite sure what to do about this. Just doing XLogFlush() does \nnot really fix this - it makes it less likely, but the root cause is the \nchange is described by multiple WAL messages that are not linked \ntogether in any way. We may lost the last message without noticing that, \nand the flush does not fix that.\n\nI didn't look at the other cases mentioned in [1], but I would't be \nsurprised if some had a similar issue (e.g. the GIN pending list cleanup \nseems like another candidate).\n\n\nregards\n\n[1] \nhttps://www.postgresql.org/message-id/0f827a71-a01b-bcf9-fe77-3047a9d4a93c%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 26 Jan 2022 04:12:27 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "BRIN summarization vs. WAL logging" }, { "msg_contents": "On Tue, Jan 25, 2022 at 10:12 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> 1) brin_desummarize_range()\n>\n> But if the cluster/VM/... crashes right after you ran the function (and\n> it completed just fine, possibly even in an explicit transaciton), that\n> change will get lost. Not really a serious data corruption/loss, and you\n> can simply run it again, but IMHO rather surprising.\n\nI don't have a big problem with inserting an XLogFlush() here, but I\nalso don't think there's a hard and fast rule that maintenance\noperations have to have full transactional behavior. Because that's\nviolated in various different ways by various different DDL commands.\nCREATE INDEX CONCURRENTLY can leave detritus around if it fails. At\nleast one variant of ALTER TABLE does a non-MVCC-aware rewrite of the\ntable contents. COPY FREEZE violates MVCC. 
Concurrent partition attach\nand detach aren't fully serializable with concurrent transactions.\nYears ago TRUNCATE didn't have MVCC semantics. We often make small\ncompromises in these areas for implementation simplicity or other\nbenefits.\n\n> 2) brin_summarize_range()\n>\n> Now, the issue I think is more serious, more likely to happen, and\n> harder to fix. When summarizing a range, we write two WAL records:\n>\n> INSERT heapBlk 2 pagesPerRange 2 offnum 2, blkref #0: rel 1663/63 ...\n> SAMEPAGE_UPDATE offnum 2, blkref #0: rel 1663/63341/73957 blk 2\n>\n> So, what happens if we lost the second WAL record, e.g. due to a crash?\n\nOuch. As you say, XLogFlush() won't fix that. The WAL logging scheme\nneeds to be redesigned.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Jan 2022 12:40:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: BRIN summarization vs. WAL logging" }, { "msg_contents": "On 2022-Jan-26, Robert Haas wrote:\n\n> On Tue, Jan 25, 2022 at 10:12 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n\n> > 2) brin_summarize_range()\n> >\n> > Now, the issue I think is more serious, more likely to happen, and\n> > harder to fix. When summarizing a range, we write two WAL records:\n> >\n> > INSERT heapBlk 2 pagesPerRange 2 offnum 2, blkref #0: rel 1663/63 ...\n> > SAMEPAGE_UPDATE offnum 2, blkref #0: rel 1663/63341/73957 blk 2\n> >\n> > So, what happens if we lost the second WAL record, e.g. due to a crash?\n> \n> Ouch. As you say, XLogFlush() won't fix that. The WAL logging scheme\n> needs to be redesigned.\n\nI'm not sure what a good fix is. I was thinking that maybe if a\nplaceholder tuple is found during index scan, and the corresponding\nprocess is no longer running, then the index scanner would remove the\nplaceholder tuple, making the range unsummarized again. 
However, how\nwould we know that the process is gone?\n\nAnother idea is to use WAL's rm_cleanup: during replay, remember that a\nplaceholder tuple was seen, then remove the info if we see an update\nfrom the originating process that replaces the placeholder tuple with a\nreal one; at cleanup time, if the list of remembered placeholder tuples\nis nonempty, remove them.\n\n(I vaguely recall we used the WAL rm_cleanup mechanism for something\nlike this, but we no longer do AFAICS.)\n\n... Oh, but if there is a promotion involved, we may end up with a\nplaceholder insertion before the promotion and the update afterwards.\nThat would probably not be handled well.\n\n\nOne thing not completely clear to me is whether this only affects\nplaceholder tuples. Is it possible to have this problem with regular\nBRIN tuples? I think it isn't.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 26 Jan 2022 15:14:24 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: BRIN summarization vs. WAL logging" }, { "msg_contents": "\n\nOn 1/26/22 19:14, Alvaro Herrera wrote:\n> On 2022-Jan-26, Robert Haas wrote:\n> \n>> On Tue, Jan 25, 2022 at 10:12 PM Tomas Vondra\n>> <tomas.vondra@enterprisedb.com> wrote:\n> \n>>> 2) brin_summarize_range()\n>>>\n>>> Now, the issue I think is more serious, more likely to happen, and\n>>> harder to fix. When summarizing a range, we write two WAL records:\n>>>\n>>> INSERT heapBlk 2 pagesPerRange 2 offnum 2, blkref #0: rel 1663/63 ...\n>>> SAMEPAGE_UPDATE offnum 2, blkref #0: rel 1663/63341/73957 blk 2\n>>>\n>>> So, what happens if we lost the second WAL record, e.g. due to a crash?\n>>\n>> Ouch. As you say, XLogFlush() won't fix that. The WAL logging scheme\n>> needs to be redesigned.\n> \n> I'm not sure what a good fix is. 
I was thinking that maybe if a\n> placeholder tuple is found during index scan, and the corresponding\n> process is no longer running, then the index scanner would remove the\n> placeholder tuple, making the range unsummarized again. However, how\n> would we know that the process is gone?\n> \n> Another idea is to use WAL's rm_cleanup: during replay, remember that a\n> placeholder tuple was seen, then remove the info if we see an update\n> from the originating process that replaces the placeholder tuple with a\n> real one; at cleanup time, if the list of remembered placeholder tuples\n> is nonempty, remove them.\n> \n> (I vaguely recall we used the WAL rm_cleanup mechanism for something\n> like this, but we no longer do AFAICS.)\n> \n> ... Oh, but if there is a promotion involved, we may end up with a\n> placeholder insertion before the promotion and the update afterwards.\n> That would probably not be handled well.\n> \n\nRight, I think we'd miss those. And can't that happen for regular \nrecovery too. If the placeholder tuple is before the redo LSN, we'd miss \nit too, right? But something prevents that.\n\nI think the root cause is the two WAL records are not linked together, \nso we fail to ensure the atomicity of the operation. Trying to fix this \nby tracking placeholder tuples seems like solving the symptoms.\n\nThe easiest solution would be to link the records by XID, but of course \nthat goes against the whole placeholder tuple idea - no one could modify \nthe placeholder tuple in between. Maybe that's a price we have to pay.\n\n> \n> One thing not completely clear to me is whether this only affects\n> placeholder tuples. Is it possible to have this problem with regular\n> BRIN tuples? I think it isn't.\n> \n\nYeah. I've been playing with this for a while, trying to cause issues \nwith non-placeholder tuples. But I think that's fine - once the tuple \nbecomes non-placeholder, all subsequent updates are part of the \ntransaction that modifies the table. 
So if that fails, we don't update \nthe index tuple.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 27 Jan 2022 14:40:54 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: BRIN summarization vs. WAL logging" } ]
[ { "msg_contents": "Hi,\n\npg_log_backend_memory_contexts() should be designed not to send the messages about the memory contexts to the client regardless of client_min_messages. But I found that the message \"logging memory contexts of PID %d\" can be sent to the client because it's ereport()'d with LOG level instead of LOG_SERVER_ONLY. Is this a bug, and shouldn't we use LOG_SERVER_ONLY level to log that message? Patch attached.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Wed, 26 Jan 2022 13:17:43 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "pg_log_backend_memory_contexts() and log level" }, { "msg_contents": "On Wed, Jan 26, 2022 at 9:48 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> Hi,\n>\n> pg_log_backend_memory_contexts() should be designed not to send the messages about the memory contexts to the client regardless of client_min_messages. But I found that the message \"logging memory contexts of PID %d\" can be sent to the client because it's ereport()'d with LOG level instead of LOG_SERVER_ONLY. Is this a bug, and shouldn't we use LOG_SERVER_ONLY level to log that message? Patch attached.\n\n+1. The patch LGTM.\n\nThe same applies for the on-going patch [1] for pg_log_backtrace, let\nme quickly send the updated patch there as well. 
The pg_log_query_plan\non-going patch [2], does the right thing.\n\n[1] - https://www.postgresql.org/message-id/CALDaNm3tbc1OKvSKvD5SfmEj66M_sWDPCgDvzFtS9gxRov8jRQ%40mail.gmail.com\n[2] - https://www.postgresql.org/message-id/fb6360e4c0ffbdd0a6698b92bb2617e2%40oss.nttdata.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 26 Jan 2022 10:46:22 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_log_backend_memory_contexts() and log level" }, { "msg_contents": "On Wed, Jan 26, 2022 at 10:46 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 9:48 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > Hi,\n> >\n> > pg_log_backend_memory_contexts() should be designed not to send the messages about the memory contexts to the client regardless of client_min_messages. But I found that the message \"logging memory contexts of PID %d\" can be sent to the client because it's ereport()'d with LOG level instead of LOG_SERVER_ONLY. Is this a bug, and shouldn't we use LOG_SERVER_ONLY level to log that message? Patch attached.\n>\n> +1. 
The patch LGTM.\n>\n> The same applies for the on-going patch [1] for pg_log_backtrace, let\n> me quickly send the updated patch there as well.\n>\n> [1] - https://www.postgresql.org/message-id/CALDaNm3tbc1OKvSKvD5SfmEj66M_sWDPCgDvzFtS9gxRov8jRQ%40mail.gmail.com\n\nMy bad, the v17 patch of pg_log_backtrace at [1] does emit at LOG_SERVER_ONLY.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 26 Jan 2022 10:50:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_log_backend_memory_contexts() and log level" }, { "msg_contents": "On Wed, Jan 26, 2022 at 10:46 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Wed, Jan 26, 2022 at 9:48 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >\n> > Hi,\n> >\n> > pg_log_backend_memory_contexts() should be designed not to send the messages about the memory contexts to the client regardless of client_min_messages. But I found that the message \"logging memory contexts of PID %d\" can be sent to the client because it's ereport()'d with LOG level instead of LOG_SERVER_ONLY. Is this a bug, and shouldn't we use LOG_SERVER_ONLY level to log that message? Patch attached.\n>\n> +1. The patch LGTM.\n\nWhile we are here, I think it's enough to have the\nhas_function_privilege, not needed to set role regress_log_memory and\nthen execute the function as we already have code covering the\nfunction above (SELECT\npg_log_backend_memory_contexts(pg_backend_pid());). 
Thought?\n\nSELECT has_function_privilege('regress_log_memory',\n 'pg_log_backend_memory_contexts(integer)', 'EXECUTE'); -- yes\n\nSET ROLE regress_log_memory;\nSELECT pg_log_backend_memory_contexts(pg_backend_pid());\nRESET ROLE;\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 26 Jan 2022 10:57:00 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_log_backend_memory_contexts() and log level" }, { "msg_contents": "On 2022-01-26 13:17, Fujii Masao wrote:\n> Hi,\n> \n> pg_log_backend_memory_contexts() should be designed not to send the\n> messages about the memory contexts to the client regardless of\n> client_min_messages. But I found that the message \"logging memory\n> contexts of PID %d\" can be sent to the client because it's ereport()'d\n> with LOG level instead of LOG_SERVER_ONLY. Is this a bug, and\n> shouldn't we use LOG_SERVER_ONLY level to log that message? Patch\n> attached.\n> \n> Regards,\n\nThanks! 
I think it's a bug.\n\nThere are two clients: \"the client that executed \npg_log_backend_memory_contexts()\" and \"the client that issued the query \nthat was the target of the memory context logging\", but I was only \nconcerned about the former when I wrote that part of the code.\nThe latter is not expected.\n\nIt seems better to use LOG_SERVER_ONLY as the attached patch.\n\nThe patch was successfully applied and there was no regression test \nerror on my environment.\n\n-- \nRegards,\n\n--\nAtsushi Torikoshi\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 26 Jan 2022 14:28:43 +0900", "msg_from": "torikoshia <torikoshia@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: pg_log_backend_memory_contexts() and log level" }, { "msg_contents": "\n\nOn 2022/01/26 14:27, Bharath Rupireddy wrote:\n> On Wed, Jan 26, 2022 at 10:46 AM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>>\n>> On Wed, Jan 26, 2022 at 9:48 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>> Hi,\n>>>\n>>> pg_log_backend_memory_contexts() should be designed not to send the messages about the memory contexts to the client regardless of client_min_messages. But I found that the message \"logging memory contexts of PID %d\" can be sent to the client because it's ereport()'d with LOG level instead of LOG_SERVER_ONLY. Is this a bug, and shouldn't we use LOG_SERVER_ONLY level to log that message? Patch attached.\n>>\n>> +1. The patch LGTM.\n\nThanks for the review!\n\n\n> While we are here, I think it's enough to have the\n> has_function_privilege, not needed to set role regress_log_memory and\n> then execute the function as we already have code covering the\n> function above (SELECT\n> pg_log_backend_memory_contexts(pg_backend_pid());). 
Thought?\n> \n> SELECT has_function_privilege('regress_log_memory',\n> 'pg_log_backend_memory_contexts(integer)', 'EXECUTE'); -- yes\n> \n> SET ROLE regress_log_memory;\n> SELECT pg_log_backend_memory_contexts(pg_backend_pid());\n> RESET ROLE;\n\nOr it's enough to set role and execute the function? Either those or has_function_privilege() can be removed from the test, but I don't have strong reason to do that for now...\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 27 Jan 2022 12:45:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: pg_log_backend_memory_contexts() and log level" }, { "msg_contents": "\n\nOn 2022/01/26 14:28, torikoshia wrote:\n> On 2022-01-26 13:17, Fujii Masao wrote:\n>> Hi,\n>>\n>> pg_log_backend_memory_contexts() should be designed not to send the\n>> messages about the memory contexts to the client regardless of\n>> client_min_messages. But I found that the message \"logging memory\n>> contexts of PID %d\" can be sent to the client because it's ereport()'d\n>> with LOG level instead of LOG_SERVER_ONLY. Is this a bug, and\n>> shouldn't we use LOG_SERVER_ONLY level to log that message? Patch\n>> attached.\n>>\n>> Regards,\n> \n> Thanks! I think it's a bug.\n> \n> There are two clients: \"the client that executed pg_log_backend_memory_contexts()\" and \"the client that issued the query that was the target of the memory context logging\", but I was only concerned about the former when I wrote that part of the code.\n> The latter is not expected.\n> \n> It seems better to use LOG_SERVER_ONLY as the attached patch.\n> \n> The patch was successfully applied and there was no regression test error on my environment.\n\nThanks for the review! 
So barring any objection, I will commit the patch and backport it to v14 where pg_log_backend_memory_contexts() is added.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 27 Jan 2022 12:45:33 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: pg_log_backend_memory_contexts() and log level" }, { "msg_contents": "\n\nOn 2022/01/27 12:45, Fujii Masao wrote:\n> Thanks for the review! So barring any objection, I will commit the patch and backport it to v14 where pg_log_backend_memory_contexts() is added.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:27:39 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: pg_log_backend_memory_contexts() and log level" } ]
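The one-line fix in the thread above turns on the difference between the LOG and LOG_SERVER_ONLY elevels: a LOG message can still reach the connected client when client_min_messages is set low enough, whereas LOG_SERVER_ONLY never goes to the client. A rough sketch of that routing rule follows — this is a deliberately simplified Python model, not the real elog.c logic:

```python
def route(level, client_min_messages="notice"):
    """Toy model of where an ereport() message is delivered.

    Simplification: the real server compares LOG against the full
    client_min_messages severity ladder; here we only distinguish
    whether the setting is low enough to include LOG.
    """
    destinations = {"server_log"}
    if level == "LOG" and client_min_messages in ("log", "debug"):
        destinations.add("client")
    # LOG_SERVER_ONLY: written to the server log only, regardless
    # of any client-side setting -- the behavior the fix relies on.
    return destinations

assert route("LOG", client_min_messages="log") == {"server_log", "client"}
assert route("LOG_SERVER_ONLY", client_min_messages="log") == {"server_log"}
```

With LOG, the "logging memory contexts of PID %d" line could leak to whichever client happened to issue the target query; with LOG_SERVER_ONLY it cannot, which is the whole point of the committed change.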
[ { "msg_contents": "I was thinking about whether it made sense to try to commit anything\nand decided it would be a good idea to check how the buildfarm looks\nfirst. It doesn't look great.\n\n1. snapper failed 4 out of the last 5 runs in recoveryCheck. The\nlatest run as of this writing shows this:\n\n[19:09:50] t/026_overwrite_contrecord.pl ........ ok 43136 ms\n# poll_query_until timed out executing this query:\n# SELECT '0/1415E310' <= replay_lsn AND state = 'streaming' FROM\npg_catalog.pg_stat_replication WHERE application_name = 'standby_1';\n# expecting this output:\n# t\n# last actual query output:\n#\n\n2. skink failed the last run in this MiscCheck phase. I had no idea\nwhat this phase was, because the name isn't very descriptive. It seems\nthat it runs some stuff in contrib and some stuff in src/test/modules,\nwhich seems a bit confusing. Anyway, the failure here is:\n\ntest oldest_xmin ... FAILED 5533 ms\n\nTo find the failure, the regression test suite suggests looking at\nfile contrib/test_decoding/output_iso/regression.diffs or\nregression.out. But neither file is in the buildfarm results so far as\nI can see. It does have the postmaster log but I can't tell what's\ngone wrong from looking at that. In fact I'm not really sure it's the\ncorrect log file, because oldest_xmin.spec uses a slot called\n\"isolation_slot\" and test_decoding/log/postmaster.log refers only to\n\"regression_slot\", so it seems like this postmaster.log file might\ncover only the pg_regress tests and not the results from the isolation\ntester.\n\n3. fairywren failed the last run in module-commit_tsCheck. It's unhappy because:\n\n[16:30:02] t/002_standby.pl .... 
ok 13354 ms ( 0.06 usr 0.00 sys +\n 1.11 cusr 3.20 csys = 4.37 CPU)\n# poll_query_until timed out executing this query:\n# SELECT '0/303C7D0'::pg_lsn <= pg_last_wal_replay_lsn()\n# expecting this output:\n# t\n# last actual query output:\n# f\n\nI don't know what is causing any of these failures, and I don't know\nif there's already some discussion elsewhere that I've missed, but\nmaybe this email will be helpful to someone. I also noticed that 2 out\nof the 3 failures report 2dbb7b9b2279d064f66ce9008869fd0e2b794534 \"Fix\npg_hba_file_rules for authentication method cert\" as the only new\ncommit since the last run, and it's hardly believable that that commit\nwould have broken this. Nor do I see any other recent changes that\nlook like likely culprits. Apologies in advance if any of this is my\nfault.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 26 Jan 2022 16:17:19 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "snapper and skink and fairywren (oh my!)" }, { "msg_contents": "\nOn 1/26/22 16:17, Robert Haas wrote\n> 3. fairywren failed the last run in module-commit_tsCheck. It's unhappy because:\n>\n> [16:30:02] t/002_standby.pl .... ok 13354 ms ( 0.06 usr 0.00 sys +\n> 1.11 cusr 3.20 csys = 4.37 CPU)\n> # poll_query_until timed out executing this query:\n> # SELECT '0/303C7D0'::pg_lsn <= pg_last_wal_replay_lsn()\n> # expecting this output:\n> # t\n> # last actual query output:\n> # f\n>\n> I don't know what is causing any of these failures, and I don't know\n> if there's already some discussion elsewhere that I've missed, but\n> maybe this email will be helpful to someone. I also noticed that 2 out\n> of the 3 failures report 2dbb7b9b2279d064f66ce9008869fd0e2b794534 \"Fix\n> pg_hba_file_rules for authentication method cert\" as the only new\n> commit since the last run, and it's hardly believable that that commit\n> would have broken this. 
Nor do I see any other recent changes that\n> look like likely culprits. Apologies in advance if any of this is my\n> fault.\n>\n\nIntermittent failures give a false positive against the latest set of\ncommits. These failures started happening regularly about 49 days ago.\nSo we need to look at what exactly happened around then. See\n<https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=90&stage=module-commit_tsCheck&filter=Submit>\n\nIt's very unlikely any of this is your fault. In any case, intermittent\nfailures are very hard to nail down.\n\nI should add that both drongo and fairywren run on a single\nAWS/EC2/Windows2019 instance. I haven't updated the OS in a while. so\nI'm doing that to see if matters improve. FWIW. I just reproduced the\nerror on a similar instance where I'm testing some stuff for Andres'\nproject to make TAP tests more portable.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 26 Jan 2022 18:45:53 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: snapper and skink and fairywren (oh my!)" }, { "msg_contents": "At Wed, 26 Jan 2022 18:45:53 -0500, Andrew Dunstan <andrew@dunslane.net> wrote in \n> It's very unlikely any of this is your fault. 
In any case, intermittent\n> failures are very hard to nail down.\n\nThe primary starts at 2022-01-26 16:30:06.613 the accepted a connectin\nfrom the standby at 2022-01-26 16:30:09.911.\n\nP: 2022-01-26 16:30:06.613 UTC [74668:1] LOG: starting PostgreSQL 15devel on x86_64-w64-mingw32, compiled by gcc.exe (Rev2, Built by MSYS2 project) 10.3.0, 64-bit\nS: 2022-01-26 16:30:09.637 UTC [72728:1] LOG: starting PostgreSQL 15devel on x86_64-w64-mingw32, compiled by gcc.exe (Rev2, Built by MSYS2 project) 10.3.0, 64-bit\nP: 2022-01-26 16:30:09.911 UTC [74096:3] [unknown] LOG: replication connection authorized: user=pgrunner application_name=standby\nS: 2022-01-26 16:30:09.912 UTC [73932:1] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\n\nAfter this the primary restarts.\n\nP: 2022-01-26 16:30:11.832 UTC [74668:7] LOG: database system is shut down\nP: 2022-01-26 16:30:12.010 UTC [72140:1] LOG: starting PostgreSQL 15devel on x86_64-w64-mingw32, compiled by gcc.exe (Rev2, Built by MSYS2 project) 10.3.0, 64-bit\n\nBut the standby doesn't notice the disconnection and continue with the\npoll_query_until() on the stale connection. But the query should have\nexecuted after reconnection finally. 
The following log lines are not\nthinned out.\n\nS: 2022-01-26 16:30:09.912 UTC [73932:1] LOG: started streaming WAL from primary at 0/3000000 on timeline 1\nS: 2022-01-26 16:30:12.825 UTC [72760:1] [unknown] LOG: connection received: host=127.0.0.1 port=60769\nS: 2022-01-26 16:30:12.830 UTC [72760:2] [unknown] LOG: connection authenticated: identity=\"EC2AMAZ-P7KGG90\\\\pgrunner\" method=sspi (C:/tools/msys64/home/pgrunner/bf/root/HEAD/pgsql.build/src/test/modules/commit_ts/tmp_check/t_003_standby_2_standby_data/pgdata/pg_hba.conf:2)\nS: 2022-01-26 16:30:12.830 UTC [72760:3] [unknown] LOG: connection authorized: user=pgrunner database=postgres application_name=003_standby_2.pl\nS: 2022-01-26 16:30:12.838 UTC [72760:4] 003_standby_2.pl LOG: statement: SELECT '0/303C7D0'::pg_lsn <= pg_last_wal_replay_lsn()\n\nSince the standby dones't receive WAL from the restarted server so the\npoll_query_until() doesn't return.\n\nI'm not sure why walsender of the standby continued running not knowing the primary has been once dead for such a long time.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 Jan 2022 11:55:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: snapper and skink and fairywren (oh my!)" }, { "msg_contents": "Kyotaro Horiguchi <horikyota.ntt@gmail.com> writes:\n> I'm not sure why walsender of the standby continued running not knowing the primary has been once dead for such a long time.\n\nIsn't this precisely the problem that made us revert the \"graceful\ndisconnection\" patch [1]? 
Munro seems to have a theory about\nwhy that broke things, but in any case it's not Haas' fault ;-)\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com\n\n\n", "msg_date": "Wed, 26 Jan 2022 23:07:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: snapper and skink and fairywren (oh my!)" }, { "msg_contents": "On Wed, Jan 26, 2022 at 04:17:19PM -0500, Robert Haas wrote:\n> 1. snapper failed 4 out of the last 5 runs in recoveryCheck. The\n> latest run as of this writing shows this:\n> \n> [19:09:50] t/026_overwrite_contrecord.pl ........ ok 43136 ms\n> # poll_query_until timed out executing this query:\n> # SELECT '0/1415E310' <= replay_lsn AND state = 'streaming' FROM\n> pg_catalog.pg_stat_replication WHERE application_name = 'standby_1';\n> # expecting this output:\n> # t\n> # last actual query output:\n> #\n\nI expect commit ce6d793 will end the failures on snapper, kittiwake, and\ntadarida. I apologize for letting the damage last months.\n\n\n", "msg_date": "Wed, 26 Jan 2022 20:26:26 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: snapper and skink and fairywren (oh my!)" }, { "msg_contents": "\nOn 1/26/22 18:45, Andrew Dunstan wrote:\n> On 1/26/22 16:17, Robert Haas wrote\n>> 3. fairywren failed the last run in module-commit_tsCheck. It's unhappy because:\n>>\n> Intermittent failures give a false positive against the latest set of\n> commits. These failures started happening regularly about 49 days ago.\n> So we need to look at what exactly happened around then. 
See\n> <https://buildfarm.postgresql.org/cgi-bin/show_failures.pl?max_days=90&stage=module-commit_tsCheck&filter=Submit>\n\n\nSpecifically, the issue is discussed here\n<https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BOeoETZQ%3DQw5Ub5h3tmwQhBmDA%3DnuNO3KG%3DzWfUypFAw%40mail.gmail.com>\nTom has already reverted the offending changes on the back branches.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 09:59:07 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: snapper and skink and fairywren (oh my!)" } ]
[ { "msg_contents": "Hello.\n\npg_waldump complains at the end in any case. I noticed that the LSN\nit shows in the finish message is incorrect. (I faintly thought that\nI posted about this but I didn't find it..)\n\n> pg_waldump: fatal: error in WAL record at 0/15073F8: invalid record length at 0/1507470: wanted 24, got 0\n\nxlogreader found the error at the record begins at 1507470, but\npg_waldump tells that error happens at 15073F8, which is actually the\nbeginning of the last sound record.\n\n\nIf I give an empty file to the tool it complains as the follows.\n\n> pg_waldump: fatal: could not read file \"hoge\": No such file or directory\n\nNo, the file exists. The cause is it reads uninitialized errno to\ndetect errors from the system call. read(2) is defined to set errno\nalways when it returns -1 and doesn't otherwise. Thus it seems to me\nthat it is better to check that the return value is less than zero\nthan to clear errno before the call to read().\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Thu, 27 Jan 2022 10:07:38 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Two noncritical bugs of pg_waldump" }, { "msg_contents": "On Thu, Jan 27, 2022 at 10:07:38AM +0900, Kyotaro Horiguchi wrote:\n> pg_waldump complains at the end in any case. I noticed that the LSN\n> it shows in the finish message is incorrect. 
(I faintly thought that\n> I posted about this but I didn't find it..)\n\nIs this thread [0] what you are remembering?\n\n[0] https://postgr.es/m/2B4510B2-3D70-4990-BFE3-0FE64041C08A%40amazon.com\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com/\n\n\n", "msg_date": "Wed, 26 Jan 2022 17:25:14 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two noncritical bugs of pg_waldump" }, { "msg_contents": "At Wed, 26 Jan 2022 17:25:14 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Thu, Jan 27, 2022 at 10:07:38AM +0900, Kyotaro Horiguchi wrote:\n> > pg_waldump complains at the end in any case. I noticed that the LSN\n> > it shows in the finish message is incorrect. (I faintly thought that\n> > I posted about this but I didn't find it..)\n> \n> Is this thread [0] what you are remembering?\n> \n> [0] https://postgr.es/m/2B4510B2-3D70-4990-BFE3-0FE64041C08A%40amazon.com\n\nMaybe exactly. I rememberd the discussion.\n\nSo the issue there is neither EndRecPtr and ReadRecPtr always points\nto the current read LSN. The first proposal from Nathen was to use\ncurrRecPtr but it was a private member. But after discussion, it\nseems to me it is (at least now) exactly what we need here so if we\nofccially exposed the member the problem would be blown away. 
Do you\nwant to conitnue that?\n\nOtherwise, I think we need to add a comment there about the known\nissue.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 Jan 2022 13:23:06 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Two noncritical bugs of pg_waldump" }, { "msg_contents": "(this is off-topic)\n\nAt Thu, 27 Jan 2022 13:23:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 26 Jan 2022 17:25:14 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> > On Thu, Jan 27, 2022 at 10:07:38AM +0900, Kyotaro Horiguchi wrote:\n> > > pg_waldump complains at the end in any case. I noticed that the LSN\n> > > it shows in the finish message is incorrect. (I faintly thought that\n> > > I posted about this but I didn't find it..)\n> > \n> > Is this thread [0] what you are remembering?\n> > \n> > [0] https://postgr.es/m/2B4510B2-3D70-4990-BFE3-0FE64041C08A%40amazon.com\n> \n> Maybe exactly. I rememberd the discussion.\n> \n> So the issue there is neither EndRecPtr and ReadRecPtr always points\n> to the current read LSN. The first proposal from Nathen was to use\n\nMmm. Sorry for my fat finger, Nathan.\n\n> currRecPtr but it was a private member. But after discussion, it\n> seems to me it is (at least now) exactly what we need here so if we\n> ofccially exposed the member the problem would be blown away. 
Do you\n> want to conitnue that?\n> \n> Otherwise, I think we need to add a comment there about the known\n> issue.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 27 Jan 2022 13:27:36 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Two noncritical bugs of pg_waldump" }, { "msg_contents": "On Thu, Jan 27, 2022 at 01:23:06PM +0900, Kyotaro Horiguchi wrote:\n> So the issue there is neither EndRecPtr and ReadRecPtr always points\n> to the current read LSN. The first proposal from Nathen was to use\n> currRecPtr but it was a private member. But after discussion, it\n> seems to me it is (at least now) exactly what we need here so if we\n> ofccially exposed the member the problem would be blown away. Do you\n> want to conitnue that?\n\nPresumably there is a good reason for keeping it private, but if not, maybe\nthat is a reasonable approach to consider. I don't think we wanted to\nproceed with the approaches discussed on the old thread, anyway.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com/\n\n\n", "msg_date": "Thu, 27 Jan 2022 12:18:16 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two noncritical bugs of pg_waldump" }, { "msg_contents": "On Thu, Jan 27, 2022 at 01:27:36PM +0900, Kyotaro Horiguchi wrote:\n> At Thu, 27 Jan 2022 13:23:06 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n>> So the issue there is neither EndRecPtr and ReadRecPtr always points\n>> to the current read LSN. The first proposal from Nathen was to use\n> \n> Mmm. Sorry for my fat finger, Nathan.\n\nIt's okay. 
:)\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com/\n\n\n", "msg_date": "Thu, 27 Jan 2022 12:27:02 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Two noncritical bugs of pg_waldump" }, { "msg_contents": "Hmm..\n\nAt Thu, 27 Jan 2022 10:07:38 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> pg_waldump complains at the end in any case. I noticed that the LSN\n> it shows in the finish message is incorrect. (I faintly thought that\n> I posted about this but I didn't find it..)\n> \n> > pg_waldump: fatal: error in WAL record at 0/15073F8: invalid record length at 0/1507470: wanted 24, got 0\n> \n> xlogreader found the error at the record begins at 1507470, but\n> pg_waldump tells that error happens at 15073F8, which is actually the\n> beginning of the last sound record.\n\nIt is arguable, but the following is indisputable.\n\n> If I give an empty file to the tool it complains as the follows.\n> \n> > pg_waldump: fatal: could not read file \"hoge\": No such file or directory\n> \n> No, the file exists. The cause is it reads uninitialized errno to\n> detect errors from the system call. read(2) is defined to set errno\n> always when it returns -1 and doesn't otherwise. 
Thus it seems to me\n> that it is better to check that the return value is less than zero\n> than to clear errno before the call to read().\n\nSo I post a patch contains only the indisputable part.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 14 Feb 2022 18:18:47 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Two noncritical bugs of pg_waldump" }, { "msg_contents": "Hi,\n\nOn 2022-02-14 18:18:47 +0900, Kyotaro Horiguchi wrote:\n> > If I give an empty file to the tool it complains as the follows.\n> > \n> > > pg_waldump: fatal: could not read file \"hoge\": No such file or directory\n> > \n> > No, the file exists. The cause is it reads uninitialized errno to\n> > detect errors from the system call. read(2) is defined to set errno\n> > always when it returns -1 and doesn't otherwise. Thus it seems to me\n> > that it is better to check that the return value is less than zero\n> > than to clear errno before the call to read().\n> \n> So I post a patch contains only the indisputable part.\n\nThanks for the report and fix. Pushed. This was surprisingly painful, all but\none branch had conflicts...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 25 Feb 2022 10:48:47 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Two noncritical bugs of pg_waldump" }, { "msg_contents": "At Fri, 25 Feb 2022 10:48:47 -0800, Andres Freund <andres@anarazel.de> wrote in \n> Hi,\n> \n> On 2022-02-14 18:18:47 +0900, Kyotaro Horiguchi wrote:\n> > > If I give an empty file to the tool it complains as the follows.\n> > > \n> > > > pg_waldump: fatal: could not read file \"hoge\": No such file or directory\n> > > \n> > > No, the file exists. The cause is it reads uninitialized errno to\n> > > detect errors from the system call. read(2) is defined to set errno\n> > > always when it returns -1 and doesn't otherwise. 
Thus it seems to me\n> > > that it is better to check that the return value is less than zero\n> > > than to clear errno before the call to read().\n> > \n> > So I post a patch contains only the indisputable part.\n> \n> Thanks for the report and fix. Pushed. This was surprisingly painful, all but\n> one branch had conflicts...\n\nAh, I didn't expect that this is committed so quickly. I should have\ncreated patches for all versions. Anyway thanks for committing this!\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 02 Mar 2022 17:37:04 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Two noncritical bugs of pg_waldump" } ]
[ { "msg_contents": "Hi,\n\nI tested the following case in PostgreSQL master:58e2e6\n\nthe partition table created:\n\n create table tbl_dts (dts timestamp with time zone not null) partition\nby range(dts);\n create table tbl_dts_1 partition of tbl_dts for values from\n('2021-07-02') to ('2021-08-01');\n create table tbl_dts_2 partition of tbl_dts for values from\n('2021-08-02') to ('2021-09-01');\n create table tbl_dts_3 partition of tbl_dts for values from\n('2021-09-02') to ('2021-10-01');\n create table tbl_dts_4 partition of tbl_dts for values from\n('2021-10-02') to ('2021-11-01');\n\nand the query:\n\n explain select * from tbl_dts where dts between '2022-01-20'::date and\n'2022-01-26'::date;\n\n QUERY PLAN\n---------------------------------------------\n Append (cost=0.00..175.82 rows=44 width=8)\n Subplans Removed: 4\n(2 rows)\n\nthe plan shows all the partitions are pruned, but in gdb tracing, it shows\nthat\nthe pruning happens in ExecInitAppend, and during planning stage pg does not\nprune any partitions. 
this is because in function\nmatch_clause_to_partition_key\ndo not handle the case for STABLE operator:\n\nif (op_volatile(opno) != PROVOLATILE_IMMUTABLE)\n{\ncontext->has_mutable_op = true;\n\n/*\n* When pruning in the planner, we cannot prune with mutable\n* operators.\n*/\nif (context->target == PARTTARGET_PLANNER)\nreturn PARTCLAUSE_UNSUPPORTED;\n}\n\nthe procs for timestamptz compare with date are STABLE:\n\n proname | provolatile\n----------------------+-------------\n timestamptz_lt_date | s\n timestamptz_le_date | s\n timestamptz_eq_date | s\n timestamptz_gt_date | s\n timestamptz_ge_date | s\n timestamptz_ne_date | s\n timestamptz_cmp_date | s\n(7 rows)\n\nbut in ExecInitAppend call perform_pruning_base_step which do not consider\nthe STABLE\nproperty of the cmpfn.\n\nso I have serveral questions:\n1) why in planning the function volatility is considered but not in\nexecInitAppend;\n2) why timestamptz_xxx_date is STABLE not IMMUTABLE;\n\nthanks.\n\n", "msg_date": "Thu, 27 Jan 2022 09:28:32 +0800", "msg_from": "TAO TANG <tang.tao.cn@gmail.com>", "msg_from_op": true, "msg_subject": "Question on partition pruning involving stable operator:\n timestamptz_ge_date" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 27, 2022 at 10:28 AM TAO TANG <tang.tao.cn@gmail.com> wrote:\n> the plan shows all the partitions are pruned, but in gdb tracing, it shows that\n> the pruning happens in ExecInitAppend, and during planning stage pg does not\n> prune any partitions. 
this is because in function match_clause_to_partition_key\n> do not handle the case for STABLE operator:\n>\n> if (op_volatile(opno) != PROVOLATILE_IMMUTABLE)\n> {\n> context->has_mutable_op = true;\n>\n> /*\n> * When pruning in the planner, we cannot prune with mutable\n> * operators.\n> */\n> if (context->target == PARTTARGET_PLANNER)\n> return PARTCLAUSE_UNSUPPORTED;\n> }\n>\n> the procs for timestamptz compare with date are STABLE:\n>\n> proname | provolatile\n> ----------------------+-------------\n> timestamptz_lt_date | s\n> timestamptz_le_date | s\n> timestamptz_eq_date | s\n> timestamptz_gt_date | s\n> timestamptz_ge_date | s\n> timestamptz_ne_date | s\n> timestamptz_cmp_date | s\n> (7 rows)\n>\n> but in ExecInitAppend call perform_pruning_base_step which do not consider the STABLE\n> property of the cmpfn.\n>\n> so I have serveral questions:\n> 1) why in planning the function volatility is considered but not in execInitAppend;\n\nThe value of a STABLE expression can change based on runtime\nparameters, so while it is guaranteed to remain the same during a\nparticular execution of a plan in which it is contained, it can change\nacross multiple executions of that plan (if it is cached, for\nexample). So the planner cannot assume a particular value of such\nexpressions when choosing partitions to add to the plan, because each\nexecution of the plan (each may run in a separate transaction) can\nproduce different values. 
ExecInitAppend(), on the other hand, can\nassume a particular value when choosing partitions to initialize,\nbecause the value is fixed for a particular execution during which it\nruns.\n\n> 2) why timestamptz_xxx_date is STABLE not IMMUTABLE;\n\nBecause calculations involving timestamptz values can produce\ndifferent results dependkng on volatile settings like timezone,\ndatestyle, etc.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 27 Jan 2022 11:11:33 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Question on partition pruning involving stable operator:\n timestamptz_ge_date" } ]
[ { "msg_contents": "The INSERT...ON CONFLICT is used for doing upserts in one of our app.\nOur app works with both MS SQL and Postgresql, based on customer needs.\n\nUnlike the MS SQL MERGE command's OUTPUT clause that gives the $action\n<https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver15#output_clause>\n[INSERT / UPDATE /DELETE] that was done during the upsert, the RETURNING\nclause of the pgsql does not return the action done.\nWe need this so that the application can use that for auditing and UI\npurposes.\nIs there any workaround to get this info ?\nOr is there a way this enhancement can be requested in future PG versions ?\n\nthanks,\nAnand.\n\n", "msg_date": "Thu, 27 Jan 2022 10:24:14 +0530", "msg_from": "Anand Sowmithiran <anandsowmi2@gmail.com>", "msg_from_op": true, "msg_subject": "Output clause for Upsert aka INSERT...ON CONFLICT" }, { "msg_contents": "On Wednesday, January 26, 2022, Anand Sowmithiran <anandsowmi2@gmail.com>\nwrote:\n\n> The INSERT...ON CONFLICT is used for doing upserts in one of our app.\n> Our app works with both MS SQL and Postgresql, based on customer needs.\n>\n> Unlike the MS SQL MERGE command's OUTPUT clause that gives the $action\n> <https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver15#output_clause>\n> [INSERT / UPDATE /DELETE] that was done during the upsert, the RETURNING\n> clause of the pgsql does not return the action done.\n> We need this so that the application can use that for auditing and UI\n> purposes.\n>\n\n\n> Is there any workaround to get this info ?\n>\n\nThere is not. But I’d presume the correct trigger is fired for whichever\nDML is ultimately applied so maybe you have a way through that.\n\n\n> Or is there a way this enhancement can be requested in future PG versions ?\n>\n>\nYou just did. There is nothing formal. But presently there isn’t anyone\nchampioning improvements to this feature (just my unresearched impression,\nsearching our public mailing lists and commitfest would let you form a\nresearched impression).\n\nDavid J.\n\n", "msg_date": "Wed, 26 Jan 2022 22:07:54 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Output clause for Upsert aka INSERT...ON CONFLICT" }, { "msg_contents": "On Thu, Jan 27, 2022 at 5:54 PM Anand Sowmithiran <anandsowmi2@gmail.com> wrote:\n> Is there any workaround to get this info ?\n\nThere's an undocumented trick:\n\nhttps://stackoverflow.com/questions/34762732/how-to-find-out-if-an-upsert-was-an-update-with-postgresql-9-5-upsert\n\n\n", "msg_date": "Thu, 27 Jan 2022 18:55:25 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Output clause for Upsert aka INSERT...ON CONFLICT" }, { "msg_contents": "On Thu, Jan 27, 2022 at 10:24:14AM +0530, Anand Sowmithiran wrote:\n> The INSERT...ON CONFLICT is used for doing upserts in one of our app.\n> Our app works with both MS SQL and Postgresql, based on customer needs.\n> \n> Unlike the MS SQL MERGE command's OUTPUT clause that gives the $action\n> <https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver15#output_clause>\n> 
[INSERT / UPDATE /DELETE] that was done during the upsert, the RETURNING\n> clause of the pgsql does not return the action done.\n> We need this so that the application can use that for auditing and UI\n> purposes.\n> Is there any workaround to get this info ?\n\nThomas already answered about the xmax hack, but I dug these up in the\nmeantime.\n\nhttps://www.postgresql.org/message-id/flat/CAA-aLv4d%3DzHnx%2BzFKqoszT8xRFpdeRNph1Z2uhEYA33bzmgtaA%40mail.gmail.com#899e15b8b357c6b29c51d94a0767a601\nhttps://www.postgresql.org/message-id/flat/1565486215.7551.0%40finefun.com.au\nhttps://www.postgresql.org/message-id/flat/20190724232439.lpxzjw2jg3ukgcqn%40alap3.anarazel.de\nhttps://www.postgresql.org/message-id/flat/DE57F14C-DB96-4F17-9254-AD0AABB3F81F%40mackerron.co.uk\nhttps://www.postgresql.org/message-id/CAM3SWZRmkVqmRCs34YtZPOCn%2BHmHqtcdEmo6%3D%3Dnqz1kNA43DVw%40mail.gmail.com\n\nhttps://stackoverflow.com/questions/39058213/postgresql-upsert-differentiate-inserted-and-updated-rows-using-system-columns-x/39204667\nhttps://stackoverflow.com/questions/40878027/detect-if-the-row-was-updated-or-inserted/40880200#40880200\n\n\n", "msg_date": "Wed, 26 Jan 2022 23:59:35 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Output clause for Upsert aka INSERT...ON CONFLICT" }, { "msg_contents": "Thanks , that was helpful. 
I was not framing the right key words during my\nsearches in SO!\nUsing the xmax internal column, able to detect if the upsert did an Insert\nor Update.\nHowever, the MS SQL server MERGE command also does 'delete' using the 'when\nnot matched' clause, is there an equivalent ?\n\nthanks,\nAnand.\n\nOn Thu, Jan 27, 2022 at 11:29 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> On Thu, Jan 27, 2022 at 10:24:14AM +0530, Anand Sowmithiran wrote:\n> > The INSERT...ON CONFLICT is used for doing upserts in one of our app.\n> > Our app works with both MS SQL and Postgresql, based on customer needs.\n> >\n> > Unlike the MS SQL MERGE command's OUTPUT clause that gives the $action\n> > <\n> https://docs.microsoft.com/en-us/sql/t-sql/statements/merge-transact-sql?view=sql-server-ver15#output_clause\n> >\n> > [INSERT / UPDATE /DELETE] that was done during the upsert, the RETURNING\n> > clause of the pgsql does not return the action done.\n> > We need this so that the application can use that for auditing and UI\n> > purposes.\n> > Is there any workaround to get this info ?\n>\n> Thomas already answered about the xmax hack, but I dug these up in the\n> meantime.\n>\n>\n> https://www.postgresql.org/message-id/flat/CAA-aLv4d%3DzHnx%2BzFKqoszT8xRFpdeRNph1Z2uhEYA33bzmgtaA%40mail.gmail.com#899e15b8b357c6b29c51d94a0767a601\n>\n> https://www.postgresql.org/message-id/flat/1565486215.7551.0%40finefun.com.au\n>\n> https://www.postgresql.org/message-id/flat/20190724232439.lpxzjw2jg3ukgcqn%40alap3.anarazel.de\n>\n> https://www.postgresql.org/message-id/flat/DE57F14C-DB96-4F17-9254-AD0AABB3F81F%40mackerron.co.uk\n>\n> https://www.postgresql.org/message-id/CAM3SWZRmkVqmRCs34YtZPOCn%2BHmHqtcdEmo6%3D%3Dnqz1kNA43DVw%40mail.gmail.com\n>\n>\n> https://stackoverflow.com/questions/39058213/postgresql-upsert-differentiate-inserted-and-updated-rows-using-system-columns-x/39204667\n>\n> 
https://stackoverflow.com/questions/40878027/detect-if-the-row-was-updated-or-inserted/40880200#40880200\n>\n", "msg_date": "Thu, 27 Jan 2022 13:51:10 +0530", "msg_from": "Anand Sowmithiran <anandsowmi2@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Output clause for Upsert aka INSERT...ON CONFLICT" },
 { "msg_contents": "On Thursday, January 27, 2022, Anand Sowmithiran <anandsowmi2@gmail.com>\nwrote:\n>\n> However, the MS SQL server MERGE command also does 'delete' using the\n> 'when not matched' clause, is there an equivalent ?\n>\n\nPostgreSQL does not have a merge command feature. Just the subset of\nbehavior that is INSERT…on conflict\n\nDavid J.\n", "msg_date": "Thu, 27 Jan 2022 06:30:08 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Output clause for Upsert aka INSERT...ON CONFLICT" } ]
[ { "msg_contents": "Hi,\n\nWhile working on WaitEventSet-ifying various codepaths, I found it\nstrange that walreceiver wakes up 10 times per second while idle.\nHere's a draft patch to compute the correct sleep time.", "msg_date": "Thu, 27 Jan 2022 23:50:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Suppressing useless wakeups in walreceiver" }, { "msg_contents": "Hello.\n\nAt Thu, 27 Jan 2022 23:50:04 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> While working on WaitEventSet-ifying various codepaths, I found it\n> strange that walreceiver wakes up 10 times per second while idle.\n> Here's a draft patch to compute the correct sleep time.\n\nAgree to the objective. However I feel the patch makes the code\nsomewhat less readable maybe because WalRcvComputeNextWakeup hides the\ntimeout deatils. Of course other might thing differently.\n\n- ping_sent = false;\n- XLogWalRcvProcessMsg(buf[0], &buf[1], len - 1,\n- startpointTLI);\n+ WalRcvComputeNextWakeup(&state,\n+ WALRCV_WAKEUP_TIMEOUT,\n+ last_recv_timestamp);\n+ WalRcvComputeNextWakeup(&state,\n+ WALRCV_WAKEUP_PING,\n+ last_recv_timestamp);\n\nThe calculated target times are not used within the closest loop and\nthe loop is quite busy. WALRCV_WAKEUP_PING is the replacement of\nping_sent, but I think the computation of both two target times would\nbe better done after the loop only when the \"if (len > 0)\" block was\npassed.\n\n- XLogWalRcvSendReply(false, false);\n+ XLogWalRcvSendReply(&state, GetCurrentTimestamp(),\n+ false, false);\n\nThe GetCurrentTimestamp() is same with last_recv_timestamp when the\nrecent recv() had any bytes received. So we can avoid the call to\nGetCurrentTimestamp in that case. If we do the change above, the same\nflag notifies the necessity of separete GetCurrentTimestamp().\n\nI understand the reason for startpointTLI being stored in WalRcvInfo\nbut don't understand about primary_has_standby_xmin. 
It is just moved\nfrom a static variable of XLogWalRcvSendHSFeedback to the struct\nmember that is modifed and read only by the same function.\n\nThe enum item WALRCV_WAKEUP_TIMEOUT looks somewhat uninformative. How\nabout naming it WALRCV_WAKEUP_TERMIATE (or WALRCV_WAKEUP_TO_DIE)?\n\nWALRCV_WAKEUP_TIMEOUT and WALRCV_WAKEUP_PING doesn't fire when\nwal_receiver_timeout is zero. In that case we should not set the\ntimeouts at all to avoid spurious wakeups?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 28 Jan 2022 16:26:22 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Fri, Jan 28, 2022 at 8:26 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> At Thu, 27 Jan 2022 23:50:04 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in\n> > While working on WaitEventSet-ifying various codepaths, I found it\n> > strange that walreceiver wakes up 10 times per second while idle.\n> > Here's a draft patch to compute the correct sleep time.\n>\n> Agree to the objective. However I feel the patch makes the code\n> somewhat less readable maybe because WalRcvComputeNextWakeup hides the\n> timeout deatils. Of course other might thing differently.\n\nThanks for looking!\n\nThe reason why I put the timeout computation into a function is\nbecause there are about 3 places you need to do it: at the beginning,\nwhen rescheduling for next time, and when the configuration file\nchanges and you might want to recompute them. The logic to decode the\nGUCs and compute the times would be duplicated. 
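The shape of the idea can be shown with a small standalone sketch (illustration only, not the patch code: the `sketch_*` names and hard-coded GUC values are made up here). Each wakeup reason gets a deadline in microseconds, the unit TimestampTz uses, derived from the GUCs: wal_receiver_timeout is in milliseconds and wal_receiver_status_interval is in seconds, hence the 1000 and 1000000 factors; a disabled timer is parked at INT64_MAX so it can never become the soonest wakeup:

```c
#include <stdint.h>

/*
 * Sketch only -- not the patch itself.  Deadlines are in microseconds
 * (TimestampTz-style int64) and are recomputed at startup, on SIGHUP,
 * and whenever the corresponding task is performed.
 */
enum sketch_wakeup_reason
{
	SKETCH_WAKEUP_TERMINATE,
	SKETCH_WAKEUP_PING,
	SKETCH_WAKEUP_REPLY,
	SKETCH_NUM_WAKEUPS
};

static int64_t sketch_wal_receiver_timeout = 60 * 1000;	/* GUC is in ms */
static int64_t sketch_wal_receiver_status_interval = 10;	/* GUC is in s */

static int64_t
sketch_next_wakeup(enum sketch_wakeup_reason reason, int64_t now_us)
{
	switch (reason)
	{
		case SKETCH_WAKEUP_TERMINATE:
			if (sketch_wal_receiver_timeout > 0)
				return now_us + sketch_wal_receiver_timeout * 1000;	/* ms -> us */
			break;
		case SKETCH_WAKEUP_PING:
			if (sketch_wal_receiver_timeout > 0)
				return now_us + sketch_wal_receiver_timeout / 2 * 1000;
			break;
		case SKETCH_WAKEUP_REPLY:
			if (sketch_wal_receiver_status_interval > 0)
				return now_us + sketch_wal_receiver_status_interval * 1000000;	/* s -> us */
			break;
		default:
			break;
	}
	return INT64_MAX;			/* timer disabled: never the soonest wakeup */
}
```

The main loop then only has to take the minimum over all reasons to know how long it may nap.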
I have added a\ncomment about that, and tried to make the code clearer.\n\nDo you have another idea?\n\n> - ping_sent = false;\n> - XLogWalRcvProcessMsg(buf[0], &buf[1], len - 1,\n> - startpointTLI);\n> + WalRcvComputeNextWakeup(&state,\n> + WALRCV_WAKEUP_TIMEOUT,\n> + last_recv_timestamp);\n> + WalRcvComputeNextWakeup(&state,\n> + WALRCV_WAKEUP_PING,\n> + last_recv_timestamp);\n>\n> The calculated target times are not used within the closest loop and\n> the loop is quite busy. WALRCV_WAKEUP_PING is the replacement of\n> ping_sent, but I think the computation of both two target times would\n> be better done after the loop only when the \"if (len > 0)\" block was\n> passed.\n\nThose two wakeup times should only be adjusted when data is received.\nThe calls should be exactly where last_recv_timestamp is set in\nmaster, no?\n\n> - XLogWalRcvSendReply(false, false);\n> + XLogWalRcvSendReply(&state, GetCurrentTimestamp(),\n> + false, false);\n>\n> The GetCurrentTimestamp() is same with last_recv_timestamp when the\n> recent recv() had any bytes received. So we can avoid the call to\n> GetCurrentTimestamp in that case. If we do the change above, the same\n> flag notifies the necessity of separete GetCurrentTimestamp().\n\nYeah, I agree that we should remove more GetCurrentTimestamp() calls.\nHere's another idea: let's remove last_recv_timestamp, and just use\n'now', being careful to update it after sleeping and in the receive\nloop (in case it gets busy for a long time), so that it's always fresh\nenough.\n\n> I understand the reason for startpointTLI being stored in WalRcvInfo\n> but don't understand about primary_has_standby_xmin. It is just moved\n> from a static variable of XLogWalRcvSendHSFeedback to the struct\n> member that is modifed and read only by the same function.\n\nYeah, this was unnecessary refactoring. 
I removed that change (we\ncould look into moving more state into the new state object instead of\nusing static locals and globals, but that can be for another thread, I\ngot carried away...)\n\n> The enum item WALRCV_WAKEUP_TIMEOUT looks somewhat uninformative. How\n> about naming it WALRCV_WAKEUP_TERMIATE (or WALRCV_WAKEUP_TO_DIE)?\n\nOk, let's try TERMINATE.\n\n> WALRCV_WAKEUP_TIMEOUT and WALRCV_WAKEUP_PING doesn't fire when\n> wal_receiver_timeout is zero. In that case we should not set the\n> timeouts at all to avoid spurious wakeups?\n\nOh, yeah, I forgot to handle wal_receiver_timeout = 0. Fixed.", "msg_date": "Fri, 28 Jan 2022 22:41:32 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "At Fri, 28 Jan 2022 22:41:32 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in \n> On Fri, Jan 28, 2022 at 8:26 PM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > At Thu, 27 Jan 2022 23:50:04 +1300, Thomas Munro <thomas.munro@gmail.com> wrote in\n> The reason why I put the timeout computation into a function is\n> because there are about 3 places you need to do it: at the beginning,\n> when rescheduling for next time, and when the configuration file\n> changes and you might want to recompute them. The logic to decode the\n> GUCs and compute the times would be duplicated. 
I have added a\n> comment about that, and tried to make the code clearer.\n> \n> Do you have another idea?\n\nYeah, I understand that and am having no better ideas so I'm fine with\nthat.\n\n> > - ping_sent = false;\n> > - XLogWalRcvProcessMsg(buf[0], &buf[1], len - 1,\n> > - startpointTLI);\n> > + WalRcvComputeNextWakeup(&state,\n> > + WALRCV_WAKEUP_TIMEOUT,\n> > + last_recv_timestamp);\n> > + WalRcvComputeNextWakeup(&state,\n> > + WALRCV_WAKEUP_PING,\n> > + last_recv_timestamp);\n> >\n> > The calculated target times are not used within the closest loop and\n> > the loop is quite busy. WALRCV_WAKEUP_PING is the replacement of\n> > ping_sent, but I think the computation of both two target times would\n> > be better done after the loop only when the \"if (len > 0)\" block was\n> > passed.\n> \n> Those two wakeup times should only be adjusted when data is received.\n\nYes.\n\n> The calls should be exactly where last_recv_timestamp is set in\n> master, no?\n\nThe calcuations at other than the last one iteration are waste of\ncycles and I thought the loop as a kind of tight-loop. So I don't\nmind we consider the additional cycles are acceptable. (Just I'm not\nsure.)\n\n> > - XLogWalRcvSendReply(false, false);\n> > + XLogWalRcvSendReply(&state, GetCurrentTimestamp(),\n> > + false, false);\n> >\n> > The GetCurrentTimestamp() is same with last_recv_timestamp when the\n> > recent recv() had any bytes received. So we can avoid the call to\n> > GetCurrentTimestamp in that case. If we do the change above, the same\n> > flag notifies the necessity of separete GetCurrentTimestamp().\n> \n> Yeah, I agree that we should remove more GetCurrentTimestamp() calls.\n> Here's another idea: let's remove last_recv_timestamp, and just use\n> 'now', being careful to update it after sleeping and in the receive\n> loop (in case it gets busy for a long time), so that it's always fresh\n> enough.\n\nI like it. 
It could be slightly off but it doesn't matter at all.\n\n> > I understand the reason for startpointTLI being stored in WalRcvInfo\n> > but don't understand about primary_has_standby_xmin. It is just moved\n> > from a static variable of XLogWalRcvSendHSFeedback to the struct\n> > member that is modifed and read only by the same function.\n> \n> Yeah, this was unnecessary refactoring. I removed that change (we\n> could look into moving more state into the new state object instead of\n> using static locals and globals, but that can be for another thread, I\n> got carried away...)\n> \n> > The enum item WALRCV_WAKEUP_TIMEOUT looks somewhat uninformative. How\n> > about naming it WALRCV_WAKEUP_TERMIATE (or WALRCV_WAKEUP_TO_DIE)?\n> \n> Ok, let's try TERMINATE.\n\nThanks:)\n\n> > WALRCV_WAKEUP_TIMEOUT and WALRCV_WAKEUP_PING doesn't fire when\n> > wal_receiver_timeout is zero. In that case we should not set the\n> > timeouts at all to avoid spurious wakeups?\n> \n> Oh, yeah, I forgot to handle wal_receiver_timeout = 0. Fixed.\n\nYeah, looks fine.\n\n+static void\n+WalRcvComputeNextWakeup(WalRcvInfo *state,\n+\t\t\t\t\t\tWalRcvWakeupReason reason,\n+\t\t\t\t\t\tTimestampTz now)\n+{\n+\tswitch (reason)\n+\t{\n...\n+\tdefault:\n+\t\tbreak;\n+\t}\n+}\n\nMmm.. there's NUM_WALRCV_WAKEUPS.. But I think we don't need to allow\nthe case. Isn't it better to replace the break with Assert()?\n\n\nIt might be suitable for another patch, but we can do that\ntogether. 
If we passed \"now\" to the function XLogWalRcvProcessMsg, we\nwould be able to remove further several calls to\nGetCurrentTimestamp(), in the function itself and\nProcessWalSndrMessage (the name is improperly abbreviated..).\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 31 Jan 2022 16:28:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "I was surprised to see that this patch was never committed!\n\nOn Mon, Jan 31, 2022 at 04:28:11PM +0900, Kyotaro Horiguchi wrote:\n> +static void\n> +WalRcvComputeNextWakeup(WalRcvInfo *state,\n> +\t\t\t\t\t\tWalRcvWakeupReason reason,\n> +\t\t\t\t\t\tTimestampTz now)\n> +{\n> +\tswitch (reason)\n> +\t{\n> ...\n> +\tdefault:\n> +\t\tbreak;\n> +\t}\n> +}\n> \n> Mmm.. there's NUM_WALRCV_WAKEUPS.. But I think we don't need to allow\n> the case. Isn't it better to replace the break with Assert()?\n\n+1\n\n> It might be suitable for another patch, but we can do that\n> together. If we passed \"now\" to the function XLogWalRcvProcessMsg, we\n> would be able to remove further several calls to\n> GetCurrentTimestamp(), in the function itself and\n> ProcessWalSndrMessage (the name is improperly abbreviated..).\n\nWhile reading the patch, I did find myself wondering if there were some\nadditional opportunities for reducing the calls to GetCurrentTimestamp().\nThat was really the only thing that stood out to me.\n\nThomas, are you planning to continue with this patch? 
If not, I don't mind\npicking it up.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 29 Sep 2022 20:51:41 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Fri, Sep 30, 2022 at 4:51 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Thomas, are you planning to continue with this patch? If not, I don't mind\n> picking it up.\n\nHi Nathan,\n\nPlease go ahead! One thing I had in my notes to check with this patch\nis whether there might be a bug due to computing all times in\nmicroseconds, but sleeping for a number of milliseconds without\nrounding up (what I mean is that if it's trying to sleep for 900\nmicroseconds, it might finish up sleeping for 0 milliseconds in a\ntight loop a few times).\n\n\n", "msg_date": "Fri, 30 Sep 2022 17:04:43 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Fri, Sep 30, 2022 at 05:04:43PM +1300, Thomas Munro wrote:\n> Please go ahead! One thing I had in my notes to check with this patch\n> is whether there might be a bug due to computing all times in\n> microseconds, but sleeping for a number of milliseconds without\n> rounding up (what I mean is that if it's trying to sleep for 900\n> microseconds, it might finish up sleeping for 0 milliseconds in a\n> tight loop a few times).\n\nI think that's right. If 'next_wakeup - now' is less than 1000, 'nap' will\nbe 0. My first instinct is to simply round to the nearest millisecond, but\nthis might still result in wakeups before the calculated wakeup time, and I\nthink we ultimately want to nap until >= the wakeup time whenever possible\nto avoid tight loops. So, I agree that it would be better to round up to\nthe next millisecond whenever nextWakeup > now. 
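Concretely, here is a tiny standalone illustration of the two conversions (the helper names are made up for this sketch, it is not the patch code): truncating division can yield a zero-millisecond nap while microseconds still remain, so the loop would wake immediately and spin, whereas rounding up cannot.

```c
#include <stdint.h>

/* Illustration only: converting a remaining microsecond interval
 * into a millisecond nap. */
static int64_t
nap_ms_truncated(int64_t remaining_us)
{
	return remaining_us / 1000;	/* 900 us -> 0 ms: tight loop */
}

static int64_t
nap_ms_rounded_up(int64_t remaining_us)
{
	return (remaining_us + 999) / 1000;	/* ceiling of us / 1000 */
}
```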
Given the current behavior\nis to wake up every 100 milliseconds, an extra millisecond here and there\ndoesn't seem like it should cause any problems. And I suppose we should\nassume we might nap longer already, anyway.\n\nI do see that the wakeup time for WALRCV_WAKEUP_PING might be set to 'now'\nwhenever wal_receiver_timeout is 1 millisecond, but I think that's existing\nbehavior, and it feels like an extreme case that we probably needn't worry\nabout too much.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 30 Sep 2022 13:41:47 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "Here is an updated patch set with the following changes:\n\n* The creation of the struct for non-shared WAL receiver state is moved to\na prerequisite 0001 patch. This should help ease review of 0002 a bit.\n\n* I updated the nap time calculation to round up to the next millisecond,\nas discussed upthread.\n\n* I attempted to minimize the calls to GetCurrentTimestamp(). The WAL\nreceiver code already calls this function pretty liberally, so I don't know\nif this is important, but perhaps it could make a difference for systems\nthat don't have something like the vDSO to avoid real system calls.\n\n* I removed the 'tli' argument from functions that now have an argument for\nthe non-shared state struct. The 'tli' is stored within the struct, so the\nextra argument is unnecessary.\n\nOne thing that still bugs me a little bit about 0002 is that the calls to\nGetCurrentTimestamp() feel a bit scattered and more difficult to reason\nabout. AFAICT 0002 keeps 'now' relatively up-to-date, but it seems\npossible that a future change could easily disrupt that. 
I don't have any\nother ideas at the moment, though.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Tue, 4 Oct 2022 10:57:55 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On 2022-Oct-04, Nathan Bossart wrote:\n\n> Here is an updated patch set with the following changes:\n> \n> * The creation of the struct for non-shared WAL receiver state is moved to\n> a prerequisite 0001 patch. This should help ease review of 0002 a bit.\n\nI think that would be even better if you moved the API adjustments of\nsome functions for the new struct to 0001 as well\n(XLogWalRcvSendHSFeedback etc).\n\n> * I updated the nap time calculation to round up to the next millisecond,\n> as discussed upthread.\n\nI didn't look at this part very carefully, but IIRC walreceiver's\nresponsivity has a direct impact on latency for sync replicas. Maybe\nit'd be sensible to the definition that the sleep time is rounded down\nrather than up? (At least, for the cases where we have something to do\nand not merely continue sleeping.) \n\n> * I attempted to minimize the calls to GetCurrentTimestamp(). The WAL\n> receiver code already calls this function pretty liberally, so I don't know\n> if this is important, but perhaps it could make a difference for systems\n> that don't have something like the vDSO to avoid real system calls.\n\nYeah, we are indeed careful about this in most places. Maybe it's not\nparticularly important in 2022 and also not particularly important for\nwalreceiver (again, except in the code paths that affect sync replicas),\nbut there's no reason not to continue to be careful until we discover\nmore widely that it doesn't matter.\n\n> * I removed the 'tli' argument from functions that now have an argument for\n> the non-shared state struct. 
The 'tli' is stored within the struct, so the\n> extra argument is unnecessary.\n\n+1 (This could also be in 0001, naturally.)\n\n> One thing that still bugs me a little bit about 0002 is that the calls to\n> GetCurrentTimestamp() feel a bit scattered and more difficult to reason\n> about. AFAICT 0002 keeps 'now' relatively up-to-date, but it seems\n> possible that a future change could easily disrupt that. I don't have any\n> other ideas at the moment, though.\n\nHmm.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 5 Oct 2022 09:57:00 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "Thanks for taking a look.\n\nOn Wed, Oct 05, 2022 at 09:57:00AM +0200, Alvaro Herrera wrote:\n> On 2022-Oct-04, Nathan Bossart wrote:\n>> * The creation of the struct for non-shared WAL receiver state is moved to\n>> a prerequisite 0001 patch. This should help ease review of 0002 a bit.\n> \n> I think that would be even better if you moved the API adjustments of\n> some functions for the new struct to 0001 as well\n> (XLogWalRcvSendHSFeedback etc).\n\nI moved as much as I could to 0001 in v4.\n\n>> * I updated the nap time calculation to round up to the next millisecond,\n>> as discussed upthread.\n> \n> I didn't look at this part very carefully, but IIRC walreceiver's\n> responsivity has a direct impact on latency for sync replicas. Maybe\n> it'd be sensible to the definition that the sleep time is rounded down\n> rather than up? (At least, for the cases where we have something to do\n> and not merely continue sleeping.) \n\nThe concern is that if we wake up early, we'll end up spinning until the\nwake-up time is reached. 
Given the current behavior is to sleep for 100ms\nat a time, and the tasks in question are governed by wal_receiver_timeout\nand wal_receiver_status_interval (which are typically set to several\nseconds) it seems reasonably safe to sleep up to an extra ~1ms here and\nthere without sacrificing responsiveness. In fact, I imagine this patch\nresults in a net improvement in responsiveness for these tasks.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Wed, 5 Oct 2022 11:08:09 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Wed, Oct 5, 2022 at 11:38 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> I moved as much as I could to 0001 in v4.\n\nSome comments on v4-0002:\n\n1. You might want to use PG_INT64_MAX instead of INT64_MAX for portability?\n\n2. With the below change, the time walreceiver spends in\nXLogWalRcvSendReply() is also included for XLogWalRcvSendHSFeedback\nright? I think it's a problem given that XLogWalRcvSendReply() can\ntake a while. Earlier, this wasn't the case, each function calculating\n'now' separately. Any reason for changing this behaviour? I know that\nGetCurrentTimestamp(); isn't cheaper all the time, but here it might\nresult in a change in the behaviour.\n- XLogWalRcvSendReply(false, false);\n- XLogWalRcvSendHSFeedback(false);\n+ TimestampTz now = GetCurrentTimestamp();\n+\n+ XLogWalRcvSendReply(state, now, false, false);\n+ XLogWalRcvSendHSFeedback(state, now, false);\n\n3. I understand that TimestampTz type is treated as microseconds.\nWould you mind having a comment in below places to say why we're\nmultiplying with 1000 or 1000000 here? 
Also, do we need 1000L or\n1000000L or type cast to\nTimestampTz?\n+ state->wakeup[reason] = now + (wal_receiver_timeout / 2) * 1000;\n+ state->wakeup[reason] = now + wal_receiver_timeout * 1000;\n+ state->wakeup[reason] = now +\nwal_receiver_status_interval * 1000000;\n\n4. How about simplifying WalRcvComputeNextWakeup() something like below?\nstatic void\nWalRcvComputeNextWakeup(WalRcvInfo *state, WalRcvWakeupReason reason,\n TimestampTz now)\n{\n TimestampTz ts = INT64_MAX;\n\n switch (reason)\n {\n case WALRCV_WAKEUP_TERMINATE:\n if (wal_receiver_timeout > 0)\n ts = now + (TimestampTz) (wal_receiver_timeout * 1000L);\n break;\n case WALRCV_WAKEUP_PING:\n if (wal_receiver_timeout > 0)\n ts = now + (TimestampTz) ((wal_receiver_timeout / 2) * 1000L);\n break;\n case WALRCV_WAKEUP_HSFEEDBACK:\n if (hot_standby_feedback && wal_receiver_status_interval > 0)\n ts = now + (TimestampTz) (wal_receiver_status_interval * 1000000L);\n break;\n case WALRCV_WAKEUP_REPLY:\n if (wal_receiver_status_interval > 0)\n ts = now + (TimestampTz) (wal_receiver_status_interval * 1000000);\n break;\n default:\n /* An error is better here. */\n }\n\n state->wakeup[reason] = ts;\n}\n\n5. Can we move below code snippets to respective static functions for\nbetter readability and code reuse?\nThis:\n+ /* Find the soonest wakeup time, to limit our nap. 
*/\n+ nextWakeup = INT64_MAX;\n+ for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n+ nextWakeup = Min(state.wakeup[i], nextWakeup);\n+ nap = Max(0, (nextWakeup - now + 999) / 1000);\n\nAnd this:\n\n+ now = GetCurrentTimestamp();\n+ for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n+ WalRcvComputeNextWakeup(&state, i, now);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 15:22:15 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Oct 10, 2022 at 03:22:15PM +0530, Bharath Rupireddy wrote:\n> Some comments on v4-0002:\n\nThanks for taking a look.\n\n> 1. You might want to use PG_INT64_MAX instead of INT64_MAX for portability?\n\nYes, I used PG_INT64_MAX in v5.\n\n> 2. With the below change, the time walreceiver spends in\n> XLogWalRcvSendReply() is also included for XLogWalRcvSendHSFeedback\n> right? I think it's a problem given that XLogWalRcvSendReply() can\n> take a while. Earlier, this wasn't the case, each function calculating\n> 'now' separately. Any reason for changing this behaviour? I know that\n> GetCurrentTimestamp(); isn't cheaper all the time, but here it might\n> result in a change in the behaviour.\n\nYes, if XLogWalRcvSendReply() takes a long time, we might defer sending the\nhot_standby_feedback message until a later time. The only reason I changed\nthis was to avoid extra calls to GetCurrentTimestamp(), which might be\nexpensive on some platforms. Outside of the code snippet you pointed out,\nI think WalReceiverMain() has a similar problem. 
That being said, I'm\ngenerally skeptical that this sort of thing is detrimental given the\ncurrent behavior (i.e., wake up every 100ms), the usual values of these\nGUCs (i.e., tens of seconds), and the fact that any tasks that are\ninappropriately skipped will typically be retried in the next iteration of\nthe loop.\n\n> 3. I understand that TimestampTz type is treated as microseconds.\n> Would you mind having a comment in below places to say why we're\n> multiplying with 1000 or 1000000 here? Also, do we need 1000L or\n> 1000000L or type cast to\n> TimestampTz?\n\nI used INT64CONST() in v5, as that seemed the most portable, but I stopped\nshort of adding comments to explain all the conversions. IMO the\nconversions are relatively obvious, and the units of the GUCs can be easily\nseen in guc_tables.c.\n\n> 4. How about simplifying WalRcvComputeNextWakeup() something like below?\n\nOther than saving a couple lines of code, IMO this doesn't meaningfully\nsimplify the function or improve readability.\n\n> 5. Can we move below code snippets to respective static functions for\n> better readability and code reuse?\n> This:\n> + /* Find the soonest wakeup time, to limit our nap. */\n> + nextWakeup = INT64_MAX;\n> + for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n> + nextWakeup = Min(state.wakeup[i], nextWakeup);\n> + nap = Max(0, (nextWakeup - now + 999) / 1000);\n> \n> And this:\n> \n> + now = GetCurrentTimestamp();\n> + for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n> + WalRcvComputeNextWakeup(&state, i, now);\n\nThis did cross my mind, but I opted to avoid creating new functions because\n1) they aren't likely to be reused very much, and 2) I actually think it\nmight hurt readability by forcing developers to traipse around the file to\nfigure out what these functions are actually doing. 
It's not like it would\nsave many lines of code, either.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 10 Oct 2022 10:51:14 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Oct 10, 2022 at 10:51:14AM -0700, Nathan Bossart wrote:\n>> + /* Find the soonest wakeup time, to limit our nap. */\n>> + nextWakeup = INT64_MAX;\n>> + for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n>> + nextWakeup = Min(state.wakeup[i], nextWakeup);\n>> + nap = Max(0, (nextWakeup - now + 999) / 1000);\n\nHm. We should probably be more cautious here since nextWakeup is an int64\nand nap is an int. My guess is that we should just set nap to -1 if\nnextWakeup > INT_MAX.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 10 Oct 2022 11:10:14 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Oct 10, 2022 at 11:10:14AM -0700, Nathan Bossart wrote:\n> On Mon, Oct 10, 2022 at 10:51:14AM -0700, Nathan Bossart wrote:\n>>> + /* Find the soonest wakeup time, to limit our nap. */\n>>> + nextWakeup = INT64_MAX;\n>>> + for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n>>> + nextWakeup = Min(state.wakeup[i], nextWakeup);\n>>> + nap = Max(0, (nextWakeup - now + 999) / 1000);\n> \n> Hm. We should probably be more cautious here since nextWakeup is an int64\n> and nap is an int. My guess is that we should just set nap to -1 if\n> nextWakeup > INT_MAX.\n\nHere's an attempt at fixing that. I ended up just changing \"nap\" to an\nint64 and then ensuring it's no larger than INT_MAX in the call to\nWaitLatchOrSocket(). 
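Something along these lines (illustrative sketch only, with a made-up helper name, not the actual patch): the nap is computed as an int64 so the "never" deadline of INT64_MAX cannot overflow the arithmetic, and it is clamped to INT_MAX milliseconds (about 24.8 days) only at the point where it is handed to the int timeout argument.

```c
#include <stdint.h>
#include <limits.h>

/* Sketch: clamp an int64 nap (in ms) to the int range expected by a
 * WaitLatchOrSocket()-style timeout argument. */
static int
clamp_nap_for_wait(int64_t nap_ms)
{
	return (nap_ms > (int64_t) INT_MAX) ? INT_MAX : (int) nap_ms;
}
```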
IIUC we can't use -1 here because WL_TIMEOUT is set.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 10 Oct 2022 17:50:59 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Oct 10, 2022 at 11:21 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n\nThanks for the updated patches.\n\n> > 2. With the below change, the time walreceiver spends in\n> > XLogWalRcvSendReply() is also included for XLogWalRcvSendHSFeedback\n> > right? I think it's a problem given that XLogWalRcvSendReply() can\n> > take a while. Earlier, this wasn't the case, each function calculating\n> > 'now' separately. Any reason for changing this behaviour? I know that\n> > GetCurrentTimestamp(); isn't cheaper all the time, but here it might\n> > result in a change in the behaviour.\n>\n> Yes, if XLogWalRcvSendReply() takes a long time, we might defer sending the\n> hot_standby_feedback message until a later time. The only reason I changed\n> this was to avoid extra calls to GetCurrentTimestamp(), which might be\n> expensive on some platforms.\n\nAgree. However...\n\n> Outside of the code snippet you pointed out,\n> I think WalReceiverMain() has a similar problem. That being said, I'm\n> generally skeptical that this sort of thing is detrimental given the\n> current behavior (i.e., wake up every 100ms), the usual values of these\n> GUCs (i.e., tens of seconds), and the fact that any tasks that are\n> inappropriately skipped will typically be retried in the next iteration of\n> the loop.\n\nAFICS, the aim of the patch isn't optimizing around\nGetCurrentTimestamp() calls and the patch does seem to change quite a\nbit of them which may cause a change of the behaviour. 
I think that\nthe GetCurrentTimestamp() optimizations can go to 0003 patch in this\nthread itself or we can discuss it as a separate thread to seek more\nthoughts.\n\nThe 'now' in many instances in the patch may not actually represent\nthe true current time and it includes time taken by other operations\nas well. I think this will have problems.\n\n+ XLogWalRcvSendReply(state, now, false, false);\n+ XLogWalRcvSendHSFeedback(state, now, false);\n\n+ now = GetCurrentTimestamp();\n+ for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n+ WalRcvComputeNextWakeup(&state, i, now);\n+ XLogWalRcvSendHSFeedback(&state, now, true);\n\n+ now = GetCurrentTimestamp();\n+ for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n+ WalRcvComputeNextWakeup(&state, i, now);\n+ XLogWalRcvSendHSFeedback(&state, now, true);\n\n> > 4. How about simplifying WalRcvComputeNextWakeup() something like below?\n>\n> Other than saving a couple lines of code, IMO this doesn't meaningfully\n> simplify the function or improve readability.\n\nIMO, the duplicate lines of code aren't necessary in\nWalRcvComputeNextWakeup() and that function's code isn't that complex\nby removing them.\n\n> > 5. Can we move below code snippets to respective static functions for\n> > better readability and code reuse?\n> > This:\n> > + /* Find the soonest wakeup time, to limit our nap. */\n> > + nextWakeup = INT64_MAX;\n> > + for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n> > + nextWakeup = Min(state.wakeup[i], nextWakeup);\n> > + nap = Max(0, (nextWakeup - now + 999) / 1000);\n> >\n> > And this:\n> >\n> > + now = GetCurrentTimestamp();\n> > + for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n> > + WalRcvComputeNextWakeup(&state, i, now);\n>\n> This did cross my mind, but I opted to avoid creating new functions because\n> 1) they aren't likely to be reused very much, and 2) I actually think it\n> might hurt readability by forcing developers to traipse around the file to\n> figure out what these functions are actually doing. 
It's not like it would\n> save many lines of code, either.\n\nThe second snippet is being used twice already. IMO having static\nfunctions (perhaps inline) is much better here.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 07:01:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Oct 11, 2022 at 07:01:26AM +0530, Bharath Rupireddy wrote:\n> On Mon, Oct 10, 2022 at 11:21 PM Nathan Bossart\n>> Outside of the code snippet you pointed out,\n>> I think WalReceiverMain() has a similar problem. That being said, I'm\n>> generally skeptical that this sort of thing is detrimental given the\n>> current behavior (i.e., wake up every 100ms), the usual values of these\n>> GUCs (i.e., tens of seconds), and the fact that any tasks that are\n>> inappropriately skipped will typically be retried in the next iteration of\n>> the loop.\n> \n> AFICS, the aim of the patch isn't optimizing around\n> GetCurrentTimestamp() calls and the patch does seem to change quite a\n> bit of them which may cause a change of the behaviour. I think that\n> the GetCurrentTimestamp() optimizations can go to 0003 patch in this\n> thread itself or we can discuss it as a separate thread to seek more\n> thoughts.\n> \n> The 'now' in many instances in the patch may not actually represent\n> the true current time and it includes time taken by other operations\n> as well. 
I think this will have problems.\n\nWhat problems do you think this will cause?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 10 Oct 2022 19:50:31 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Oct 11, 2022 at 8:20 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> > AFICS, the aim of the patch isn't optimizing around\n> > GetCurrentTimestamp() calls and the patch does seem to change quite a\n> > bit of them which may cause a change of the behaviour. I think that\n> > the GetCurrentTimestamp() optimizations can go to 0003 patch in this\n> > thread itself or we can discuss it as a separate thread to seek more\n> > thoughts.\n> >\n> > The 'now' in many instances in the patch may not actually represent\n> > the true current time and it includes time taken by other operations\n> > as well. I think this will have problems.\n>\n> What problems do you think this will cause?\n\nnow = t1;\nXLogWalRcvSendReply() /* say it ran for a minute or so for whatever reasons */\nXLogWalRcvSendHSFeedback() /* with patch walreceiver sends hot standby\nfeedback more often without properly honouring\nwal_receiver_status_interval because the 'now' isn't actually the\ncurrent time as far as that function is concerned, it is\nt1 + XLogWalRcvSendReply()'s time. */\n\nWell, is this really a problem? I'm not sure about that. 
Let's hear from others.\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 09:34:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Oct 11, 2022 at 09:34:25AM +0530, Bharath Rupireddy wrote:\n> now = t1;\n> XLogWalRcvSendReply() /* say it ran for a minute or so for whatever reasons */\n> XLogWalRcvSendHSFeedback() /* with patch walrecevier sends hot standby\n> feedback more often without properly honouring\n> wal_receiver_status_interval because the 'now' isn't actually the\n> current time as far as that function is concerned, it is\n> t1 + XLogWalRcvSendReply()'s time. */\n> \n> Well, is this really a problem? I'm not sure about that. Let's hear from others.\n\nFor this example, the feedback message would just be sent in the next loop\niteration instead.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 11 Oct 2022 10:52:17 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Oct 11, 2022 at 11:22 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n>\n> On Tue, Oct 11, 2022 at 09:34:25AM +0530, Bharath Rupireddy wrote:\n> > now = t1;\n> > XLogWalRcvSendReply() /* say it ran for a minute or so for whatever reasons */\n> > XLogWalRcvSendHSFeedback() /* with patch walrecevier sends hot standby\n> > feedback more often without properly honouring\n> > wal_receiver_status_interval because the 'now' isn't actually the\n> > current time as far as that function is concerned, it is\n> > t1 + XLogWalRcvSendReply()'s time. */\n> >\n> > Well, is this really a problem? I'm not sure about that. 
Let's hear from others.\n>\n> For this example, the feedback message would just be sent in the next loop\n> iteration instead.\n\nI think the hot standby feedback message gets sent too frequently\nwithout honouring the wal_receiver_status_interval because the 'now'\nis actually not the current time with your patch but 'now +\nXLogWalRcvSendReply()'s time'. However, it's possible that I may be\nwrong here.\n\n /*\n * Send feedback at most once per wal_receiver_status_interval.\n */\n if (!TimestampDifferenceExceeds(sendTime, now,\n wal_receiver_status_interval * 1000))\n return;\n\nAs said upthread [1], I think the best way to move forward is to\nseparate the GetCurrentTimestamp() calls optimizations into 0003.\n\nDo you have any further thoughts on review comments posted upthread [1]?\n\n[1] https://www.postgresql.org/message-id/CALj2ACV5q2dbVRKwu2PL2s_YY0owZFTRxLdX%3Dt%2BdZ1iag15khA%40mail.gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 15:35:14 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "I think in 0001 we should put more stuff in the state struct --\nspecifically these globals:\n\nstatic int\trecvFile = -1;\nstatic TimeLineID recvFileTLI = 0;\nstatic XLogSegNo recvSegNo = 0;\n\nThe main reason is that it seems odd to have startpointTLI in the struct\nused in some places together with a file-global recvFileTLI which isn't.\nThe way one is passed as argument and the other as part of a struct\nseemed too distracting. This should reduce the number of moving parts,\nISTM.\n\n\nOne thing that confused me for a moment is that we have some state in\nwalrcv and some more state in 'state'. 
The difference is pretty obvious\nonce you look at the other, but it suggests to me that a better variable\nname for the latter is 'localstate' to more obviously distinguish them.\n\n\nI was tempted to suggest that LogstreamResult would also be good to have\nin the new struct, but that might be going a bit too far for a first\ncut.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/", "msg_date": "Thu, 13 Oct 2022 12:37:39 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Thu, Oct 13, 2022 at 03:35:14PM +0530, Bharath Rupireddy wrote:\n> I think the hot standby feedback message gets sent too frequently\n> without honouring the wal_receiver_status_interval because the 'now'\n> is actually not the current time with your patch but 'now +\n> XLogWalRcvSendReply()'s time'. However, it's possible that I may be\n> wrong here.\n\nIs your concern that the next wakeup time will be miscalculated because of\na stale value in 'now'? I think that's existing behavior, as we save the\ncurrent time before sending the message in XLogWalRcvSendReply and\nXLogWalRcvSendHSFeedback. The only way to completely avoid this would be\nto call GetCurrentTimestamp() after the message has been sent and to use\nthat value to calculate when the next message should be sent. However, I\nam skeptical there's really any practical impact. Is it really a problem\nif a message that should be sent every 30 seconds is sent at an interval of\n29.9 seconds sometimes? I think it could go the other way, too. If 'now'\nis stale, we might decide not to send a feedback message until the next\nloop iteration, so the interval might be 30.1 seconds. 
I don't think it's\nrealistic to expect complete accuracy here.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 10:45:06 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Thu, Oct 13, 2022 at 12:37:39PM +0200, Alvaro Herrera wrote:\n> I think in 0001 we should put more stuff in the state struct --\n> specifically these globals:\n> \n> static int\trecvFile = -1;\n> static TimeLineID recvFileTLI = 0;\n> static XLogSegNo recvSegNo = 0;\n> \n> The main reason is that it seems odd to have startpointTLI in the struct\n> used in some places together with a file-global recvFileTLI which isn't.\n> The way one is passed as argument and the other as part of a struct\n> seemed too distracting. This should reduce the number of moving parts,\n> ISTM.\n\nMakes sense. Do you think the struct should be file-global so that it\ndoesn't need to be provided as an argument to most of the static functions\nin this file?\n\n> One thing that confused me for a moment is that we have some state in\n> walrcv and some more state in 'state'. 
The difference is pretty obvious\n> once you look at the other, but it suggest to me that a better variable\n> name for the latter is 'localstate' to more obviously distinguish them.\n\nSure, I'll change it to 'localstate'.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 13 Oct 2022 12:09:54 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Thu, Oct 13, 2022 at 12:09:54PM -0700, Nathan Bossart wrote:\n> On Thu, Oct 13, 2022 at 12:37:39PM +0200, Alvaro Herrera wrote:\n>> The main reason is that it seems odd to have startpointTLI in the struct\n>> used in some places together with a file-global recvFileTLI which isn't.\n>> The way one is passed as argument and the other as part of a struct\n>> seemed too distracting. This should reduce the number of moving parts,\n>> ISTM.\n> \n> Makes sense. Do you think the struct should be file-global so that it\n> doesn't need to be provided as an argument to most of the static functions\n> in this file?\n\nHere's a different take. Instead of creating structs and altering function\nsignatures, we can just make the wake-up times file-global, and we can skip\nthe changes related to reducing the number of calls to\nGetCurrentTimestamp() for now. 
This results in a far less invasive patch.\n(I still think reducing the number of calls to GetCurrentTimestamp() is\nworthwhile, but I'm beginning to agree with Bharath that it should be\nhandled separately.)\n\nThoughts?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 15 Oct 2022 20:59:00 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "Thanks for taking this up.\n\nAt Sat, 15 Oct 2022 20:59:00 -0700, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Thu, Oct 13, 2022 at 12:09:54PM -0700, Nathan Bossart wrote:\n> > On Thu, Oct 13, 2022 at 12:37:39PM +0200, Alvaro Herrera wrote:\n> >> The main reason is that it seems odd to have startpointTLI in the struct\n> >> used in some places together with a file-global recvFileTLI which isn't.\n> >> The way one is passed as argument and the other as part of a struct\n> >> seemed too distracting. This should reduce the number of moving parts,\n> >> ISTM.\n> > \n> > Makes sense. Do you think the struct should be file-global so that it\n> > doesn't need to be provided as an argument to most of the static functions\n> > in this file?\n> \n> Here's a different take. Instead of creating structs and altering function\n> signatures, we can just make the wake-up times file-global, and we can skip\n\nI'm fine with that. Rather it seems to me really simple.\n\n> the changes related to reducing the number of calls to\n> GetCurrentTimestamp() for now. This results in a far less invasive patch.\n\nSure.\n\n> (I still think reducing the number of calls to GetCurrentTimestamp() is\n> worthwhile, but I'm beginning to agree with Bharath that it should be\n> handled separately.)\n\n+1. 
We would gain less for the required effort.\n\n> Thoughts?\n\nNow that I see the fix for the implicit conversion:\n\nL527: +\t\t\t\tnap = Max(0, (nextWakeup - now + 999) / 1000);\n..\nL545: +\t\t\t\t\t\t\t\t\t (int) Min(INT_MAX, nap),\n\n\nI think limiting the naptime at use is confusing. Don't we place these\nadjacently? Then we could change the nap to an integer. Or we can\njust assert out for the nap time longer than INT_MAX (but this would\nrequire another int64 variable. I believe we won't need such a long\nnap, (or that is no longer a nap?)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center", "msg_date": "Mon, 17 Oct 2022 15:21:18 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Sun, Oct 16, 2022 at 9:29 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> Here's a different take. Instead of creating structs and altering function\n> signatures, we can just make the wake-up times file-global, and we can skip\n> the changes related to reducing the number of calls to\n> GetCurrentTimestamp() for now. This results in a far less invasive patch.\n> (I still think reducing the number of calls to GetCurrentTimestamp() is\n> worthwhile, but I'm beginning to agree with Bharath that it should be\n> handled separately.)\n>\n> Thoughts?\n\nThanks. I'm wondering if we need to extend a similar approach for\nlogical replication workers in logical/worker.c LogicalRepApplyLoop()?\n\nA comment on the patch:\nIsn't it better we track the soonest wakeup time or min of wakeup[]\narray (update the min whenever the array element is updated in\nWalRcvComputeNextWakeup()) instead of recomputing it every time by looping\nfor NUM_WALRCV_WAKEUPS times? I think this will save us some CPU\ncycles, because the for loop, in which the below code is placed, runs\ntill the end of the walreceiver's life cycle. 
We may wrap wakeup[] and min\ninside a structure for better code organization or just add min as\nanother static variable alongside wakeup[].\n\n+ /* Find the soonest wakeup time, to limit our nap. */\n+ nextWakeup = PG_INT64_MAX;\n+ for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n+ nextWakeup = Min(wakeup[i], nextWakeup);\n+ nap = Max(0, (nextWakeup - now + 999) / 1000);\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 26 Oct 2022 15:05:20 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Oct 17, 2022 at 03:21:18PM +0900, Kyotaro Horiguchi wrote:\n> Now that I see the fix for the implicit conversion:\n> \n> L527: +\t\t\t\tnap = Max(0, (nextWakeup - now + 999) / 1000);\n> ..\n> L545: +\t\t\t\t\t\t\t\t\t (int) Min(INT_MAX, nap),\n> \n> \n> I think limiting the naptime at use is confusing. Don't we place these\n> adjacently? Then we could change the nap to an integer. Or we can\n> just assert out for the nap time longer than INT_MAX (but this would\n> require another int64 variable. I belive we won't need such a long\n> nap, (or that is no longer a nap?)\n\nYeah, I guess this deserves a comment. 
I could also combine it easily:\n\n\tnap = (int) Min(INT_MAX, Max(0, (nextWakeup - now + 999) / 1000));\n\nWe could probably just remove the WL_TIMEOUT flag and set timeout to -1\nwhenever \"nap\" is calculated to be > INT_MAX, but I didn't think it was\nworth complicating the code in order to save an extra wakeup every ~25\ndays.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 5 Nov 2022 15:00:55 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Wed, Oct 26, 2022 at 03:05:20PM +0530, Bharath Rupireddy wrote:\n> A comment on the patch:\n> Isn't it better we track the soonest wakeup time or min of wakeup[]\n> array (update the min whenever the array element is updated in\n> WalRcvComputeNextWakeup()) instead of recomputing everytime by looping\n> for NUM_WALRCV_WAKEUPS times? I think this will save us some CPU\n> cycles, because the for loop, in which the below code is placed, runs\n> till the walreceiver end of life cycle. We may wrap wakeup[] and min\n> inside a structure for better code organization or just add min as\n> another static variable alongside wakeup[].\n> \n> + /* Find the soonest wakeup time, to limit our nap. */\n> + nextWakeup = PG_INT64_MAX;\n> + for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n> + nextWakeup = Min(wakeup[i], nextWakeup);\n> + nap = Max(0, (nextWakeup - now + 999) / 1000);\n\nWhile that might save a few CPU cycles when computing the nap time, I don't\nthink it's worth the added complexity and CPU cycles in\nWalRcvComputeNextWakeup(). 
I suspect it'd be difficult to demonstrate any\nmeaningful difference between the two approaches.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sat, 5 Nov 2022 15:11:40 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Sat, Nov 05, 2022 at 03:00:55PM -0700, Nathan Bossart wrote:\n> On Mon, Oct 17, 2022 at 03:21:18PM +0900, Kyotaro Horiguchi wrote:\n>> Now that I see the fix for the implicit conversion:\n>> \n>> L527: +\t\t\t\tnap = Max(0, (nextWakeup - now + 999) / 1000);\n>> ..\n>> L545: +\t\t\t\t\t\t\t\t\t (int) Min(INT_MAX, nap),\n>> \n>> \n>> I think limiting the naptime at use is confusing. Don't we place these\n>> adjacently? Then we could change the nap to an integer. Or we can\n>> just assert out for the nap time longer than INT_MAX (but this would\n>> require another int64 variable. I belive we won't need such a long\n>> nap, (or that is no longer a nap?)\n> \n> Yeah, I guess this deserves a comment. I could also combine it easily:\n> \n> \tnap = (int) Min(INT_MAX, Max(0, (nextWakeup - now + 999) / 1000));\n\nHere is a new version of the patch that addresses this feedback.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 5 Nov 2022 16:01:15 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Sun, Nov 6, 2022 at 12:01 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Here is a new version of the patch that addresses this feedback.\n\nThis looks pretty good to me. Thanks for picking it up! 
I can live\nwith the change to use a global variable; it seems we couldn't quite\ndecide what the scope was for moving stuff into a struct (ie various\npre-existing stuff representing the walreceiver's state), but I'm OK\nwith considering that as a pure refactoring project for a separate\nthread. That array should be \"static\", though.\n\n\n", "msg_date": "Tue, 8 Nov 2022 15:20:22 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 8, 2022 at 3:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sun, Nov 6, 2022 at 12:01 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> > Here is a new version of the patch that addresses this feedback.\n>\n> This looks pretty good to me. Thanks for picking it up! I can live\n> with the change to use a global variable; it seems we couldn't quite\n> decide what the scope was for moving stuff into a struct (ie various\n> pre-existing stuff representing the walreceiver's state), but I'm OK\n> with considering that as a pure refactoring project for a separate\n> thread. That array should be \"static\", though.\n\nAnd with that change and a pgindent, pushed.\n\n\n", "msg_date": "Tue, 8 Nov 2022 20:46:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 8, 2022 at 1:17 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> And with that change and a pgindent, pushed.\n\nThanks. Do we need a similar wakeup approach for logical replication\nworkers in worker.c? 
Or is it okay that the nap time is 1sec there?\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Nov 2022 13:50:25 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 8, 2022 at 9:20 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n> Thanks. Do we need a similar wakeup approach for logical replication\n> workers in worker.c? Or is it okay that the nap time is 1sec there?\n\nYeah, I think so. At least for its idle/nap case (waiting for flush\nis also a technically interesting case, but harder, and applies to\nnon-idle systems so the polling is a little less offensive).\n\n\n", "msg_date": "Tue, 8 Nov 2022 21:45:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 08, 2022 at 08:46:26PM +1300, Thomas Munro wrote:\n> On Tue, Nov 8, 2022 at 3:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> This looks pretty good to me. Thanks for picking it up! I can live\n>> with the change to use a global variable; it seems we couldn't quite\n>> decide what the scope was for moving stuff into a struct (ie various\n>> pre-existing stuff representing the walreceiver's state), but I'm OK\n>> with considering that as a pure refactoring project for a separate\n>> thread. 
That array should be \"static\", though.\n> \n> And with that change and a pgindent, pushed.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Nov 2022 13:06:39 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 08, 2022 at 09:45:40PM +1300, Thomas Munro wrote:\n> On Tue, Nov 8, 2022 at 9:20 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n>> Thanks. Do we need a similar wakeup approach for logical replication\n>> workers in worker.c? Or is it okay that the nap time is 1sec there?\n> \n> Yeah, I think so. At least for its idle/nap case (waiting for flush\n> is also a technically interesting case, but harder, and applies to\n> non-idle systems so the polling is a little less offensive).\n\nBharath, are you planning to pick this up? If not, I can probably spend\nsome time on it.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 8 Nov 2022 13:08:26 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Wed, Nov 9, 2022 at 2:38 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>\n> On Tue, Nov 08, 2022 at 09:45:40PM +1300, Thomas Munro wrote:\n> > On Tue, Nov 8, 2022 at 9:20 PM Bharath Rupireddy\n> > <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >> Thanks. Do we need a similar wakeup approach for logical replication\n> >> workers in worker.c? Or is it okay that the nap time is 1sec there?\n> >\n> > Yeah, I think so. At least for its idle/nap case (waiting for flush\n> > is also a technically interesting case, but harder, and applies to\n> > non-idle systems so the polling is a little less offensive).\n>\n> Bharath, are you planning to pick this up? 
If not, I can probably spend\n> some time on it.\n\nPlease go ahead!\n\n--\nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 9 Nov 2022 10:22:37 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> And with that change and a pgindent, pushed.\n\nThere is something very seriously wrong with this patch.\n\nOn my machine, running \"make -j10 check-world\" (with compilation\nalready done) has been taking right about 2 minutes for some time.\nSince this patch, it's taking around 2:45 --- I did a bisect run\nto confirm that this patch is where it changed.\n\nThe buildfarm is showing a hit, too. Comparing the step runtimes at\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2022-11-08%2005%3A29%3A28\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2022-11-08%2007%3A49%3A31\n\nit'd seem that most tests involving walreceivers got much slower:\npg_basebackup-check from 00:29 to 00:39,\npg_rewind-check went from 00:56 to 01:26,\nand recovery-check went from 03:56 to 04:45.\nCuriously, subscription-check only went from 03:26 to 03:29.\n\nI've not dug into it further than that, but my bet is that some\nrequired wakeup condition was not accounted for.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Nov 2022 17:08:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Nov 14, 2022 at 11:08 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> There is something very seriously wrong with this patch.\n>\n> On my machine, running \"make -j10 check-world\" (with compilation\n> already done) has been taking right about 2 minutes for some time.\n> Since this 
patch, it's taking around 2:45 --- I did a bisect run\n> to confirm that this patch is where it changed.\n>\n> The buildfarm is showing a hit, too. Comparing the step runtimes at\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2022-11-08%2005%3A29%3A28\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=longfin&dt=2022-11-08%2007%3A49%3A31\n>\n> it'd seem that most tests involving walreceivers got much slower:\n> pg_basebackup-check from 00:29 to 00:39,\n> pg_rewind-check went from 00:56 to 01:26,\n> and recovery-check went from 03:56 to 04:45.\n> Curiously, subscription-check only went from 03:26 to 03:29.\n>\n> I've not dug into it further than that, but my bet is that some\n> required wakeup condition was not accounted for.\n\nLooking...\n\n\n", "msg_date": "Mon, 14 Nov 2022 11:18:13 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Sun, Nov 13, 2022 at 05:08:04PM -0500, Tom Lane wrote:\n> There is something very seriously wrong with this patch.\n> \n> On my machine, running \"make -j10 check-world\" (with compilation\n> already done) has been taking right about 2 minutes for some time.\n> Since this patch, it's taking around 2:45 --- I did a bisect run\n> to confirm that this patch is where it changed.\n\nI've been looking into this. I wrote a similar patch for logical/worker.c\nbefore noticing that check-world was taking much longer. The problem in\nthat case seems to be that process_syncing_tables() isn't called as often.\nIt wouldn't surprise me if there's also something in walreceiver.c that\ndepends upon the frequent wakeups. 
I suspect this will require a revert.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 13 Nov 2022 14:26:44 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Nov 14, 2022 at 11:26 AM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> On Sun, Nov 13, 2022 at 05:08:04PM -0500, Tom Lane wrote:\n> > There is something very seriously wrong with this patch.\n> >\n> > On my machine, running \"make -j10 check-world\" (with compilation\n> > already done) has been taking right about 2 minutes for some time.\n> > Since this patch, it's taking around 2:45 --- I did a bisect run\n> > to confirm that this patch is where it changed.\n>\n> I've been looking into this. I wrote a similar patch for logical/worker.c\n> before noticing that check-world was taking much longer. The problem in\n> that case seems to be that process_syncing_tables() isn't called as often.\n> It wouldn't surprise me if there's also something in walreceiver.c that\n> depends upon the frequent wakeups. I suspect this will require a revert.\n\nIn the case of \"meson test pg_basebackup/020_pg_receivewal\" it looks\nlike wait_for_catchup hangs around for 10 seconds waiting for HS\nfeedback. I'm wondering if we need to go back to high frequency\nwakeups until it's caught up, or (probably better) arrange for a\nproper event for progress. 
Digging...\n\n\n", "msg_date": "Mon, 14 Nov 2022 12:11:19 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Nov 14, 2022 at 12:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Mon, Nov 14, 2022 at 11:26 AM Nathan Bossart\n> <nathandbossart@gmail.com> wrote:\n> > On Sun, Nov 13, 2022 at 05:08:04PM -0500, Tom Lane wrote:\n> > > There is something very seriously wrong with this patch.\n> > >\n> > > On my machine, running \"make -j10 check-world\" (with compilation\n> > > already done) has been taking right about 2 minutes for some time.\n> > > Since this patch, it's taking around 2:45 --- I did a bisect run\n> > > to confirm that this patch is where it changed.\n> >\n> > I've been looking into this. I wrote a similar patch for logical/worker.c\n> > before noticing that check-world was taking much longer. The problem in\n> > that case seems to be that process_syncing_tables() isn't called as often.\n> > It wouldn't surprise me if there's also something in walreceiver.c that\n> > depends upon the frequent wakeups. I suspect this will require a revert.\n>\n> In the case of \"meson test pg_basebackup/020_pg_receivewal\" it looks\n> like wait_for_catchup hangs around for 10 seconds waiting for HS\n> feedback. I'm wondering if we need to go back to high frequency\n> wakeups until it's caught up, or (probably better) arrange for a\n> proper event for progress. Digging...\n\nMaybe there is a better way to code this (I mean, who likes global\nvariables?) 
and I need to test some more, but I suspect the attached\nis approximately what we missed.", "msg_date": "Mon, 14 Nov 2022 12:35:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Nov 14, 2022 at 12:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Maybe there is a better way to code this (I mean, who likes global\n> variables?) and I need to test some more, but I suspect the attached\n> is approximately what we missed.\n\nErm, ENOCOFFEE, sorry that's not quite right, it needs the apply LSN,\nbut it demonstrates what the problem is...\n\n\n", "msg_date": "Mon, 14 Nov 2022 12:51:14 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Nov 14, 2022 at 12:51:14PM +1300, Thomas Munro wrote:\n> On Mon, Nov 14, 2022 at 12:35 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Maybe there is a better way to code this (I mean, who likes global\n>> variables?) and I need to test some more, but I suspect the attached\n>> is approximately what we missed.\n> \n> Erm, ENOCOFFEE, sorry that's not quite right, it needs the apply LSN,\n> but it demonstrates what the problem is...\n\nYeah, I'm able to sort-of reproduce the problem by sending fewer replies,\nbut it's not clear to me why that's the problem. 
AFAICT all of the code\npaths that write/flush are careful to send a reply shortly afterward, which\nshould keep writePtr/flushPtr updated.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Sun, 13 Nov 2022 16:05:31 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Nov 14, 2022 at 1:05 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Yeah, I'm able to sort-of reproduce the problem by sending fewer replies,\n> but it's not clear to me why that's the problem. AFAICT all of the code\n> paths that write/flush are careful to send a reply shortly afterward, which\n> should keep writePtr/flushPtr updated.\n\nAhh, I think I might have it. In the old coding, sendTime starts out\nas 0, which I think caused it to send its first reply message after\nthe first 100ms sleep, and only after that fall into a cadence of\nwal_receiver_status_interval (10s) while idle. 
Our new coding won't\nsend its first reply until start time + wal_receiver_status_interval.\nIf I have that right, I think we can get back to the previous behaviour\nby explicitly setting the first message time, like:\n\n@@ -433,6 +433,9 @@ WalReceiverMain(void)\n for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n WalRcvComputeNextWakeup(i, now);\n\n+ /* XXX start with a reply after 100ms */\n+ wakeup[WALRCV_WAKEUP_REPLY] = now + 100000;\n+\n /* Loop until end-of-streaming or error */\n\nObviously that's bogus and racy (it races with wait_for_catchup, and\nit's slow, actually both sides are not great and really should be\nevent-driven, in later work)...\n\n\n", "msg_date": "Mon, 14 Nov 2022 14:47:14 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Mon, Nov 14, 2022 at 02:47:14PM +1300, Thomas Munro wrote:\n> Ahh, I think I might have it. In the old coding, sendTime starts out\n> as 0, which I think caused it to send its first reply message after\n> the first 100ms sleep, and only after that fall into a cadence of\n> wal_receiver_status_interval (10s) while idle. Our new coding won't\n> send its first reply until start time + wal_receiver_status_interval.\n> If I have that right, I think we can get back to the previous behaviour\n> by explicitly setting the first message time, like:\n\nGood find!\n\n> @@ -433,6 +433,9 @@ WalReceiverMain(void)\n> for (int i = 0; i < NUM_WALRCV_WAKEUPS; ++i)\n> WalRcvComputeNextWakeup(i, now);\n> \n> + /* XXX start with a reply after 100ms */\n> + wakeup[WALRCV_WAKEUP_REPLY] = now + 100000;\n> +\n> /* Loop until end-of-streaming or error */\n> \n> Obviously that's bogus and racy (it races with wait_for_catchup, and\n> it's slow, actually both sides are not great and really should be\n> event-driven, in later work)...\n\nIs there any reason we should wait for 100ms before sending the initial\nreply? 
ISTM the previous behavior essentially caused the first reply (and\nfeedback message) to be sent at the first opportunity after streaming\nbegins. My instinct is to do something like the attached. I wonder if we\nought to do something similar in the ConfigReloadPending path in case\nhot_standby_feedback is being turned on.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 14 Nov 2022 11:01:27 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 15, 2022 at 8:01 AM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Is there any reason we should wait for 100ms before sending the initial\n> reply? ISTM the previous behavior essentially caused the first reply (and\n> feedback message) to be sent at the first opportunity after streaming\n> begins. My instinct is to do something like the attached. I wonder if we\n> ought to do something similar in the ConfigReloadPending path in case\n> hot_standby_feedback is being turned on.\n\nThat works for 020_pg_receivewal. I wonder if there are also tests\nthat stream a bit of WAL first and then do wait_for_catchup that were\npreviously benefiting from the 100ms-after-startup message by\nscheduling luck (as in, that was usually enough for replay)? I might\ngo and teach Cluster.pm to log how much time is wasted in\nwait_for_catchup to get some observability, and then try to figure out\nhow to optimise it properly. We could perhaps put the 100ms duct tape\nback temporarily though, if necessary.\n\n\n", "msg_date": "Tue, 15 Nov 2022 09:42:26 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 15, 2022 at 09:42:26AM +1300, Thomas Munro wrote:\n> That works for 020_pg_receivewal. 
I wonder if there are also tests\n> that stream a bit of WAL first and then do wait_for_catchup that were\n> previously benefiting from the 100ms-after-startup message by\n> scheduling luck (as in, that was usually enough for replay)? I might\n> go and teach Cluster.pm to log how much time is wasted in\n> wait_for_catchup to get some observability, and then try to figure out\n> how to optimise it properly. We could perhaps put the 100ms duct tape\n> back temporarily though, if necessary.\n\nOh, I see. Since we don't check the apply position when determining\nwhether to send a reply, tests may need to wait a full\nwal_receiver_status_interval. FWIW with my patch, the runtime of the\nsrc/test/recovery tests seems to be back to what it was on my machine, but\nI certainly wouldn't rule out scheduling luck.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 14 Nov 2022 15:14:39 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 15, 2022 at 12:14 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> On Tue, Nov 15, 2022 at 09:42:26AM +1300, Thomas Munro wrote:\n> > That works for 020_pg_receivewal. I wonder if there are also tests\n> > that stream a bit of WAL first and then do wait_for_catchup that were\n> > previously benefiting from the 100ms-after-startup message by\n> > scheduling luck (as in, that was usually enough for replay)? I might\n> > go and teach Cluster.pm to log how much time is wasted in\n> > wait_for_catchup to get some observability, and then try to figure out\n> > how to optimise it properly. We could perhaps put the 100ms duct tape\n> > back temporarily though, if necessary.\n>\n> Oh, I see. Since we don't check the apply position when determining\n> whether to send a reply, tests may need to wait a full\n> wal_receiver_status_interval. 
FWIW with my patch, the runtime of the\n> src/test/recovery tests seems to be back to what it was on my machine, but\n> I certainly wouldn't rule out scheduling luck.\n\nYeah, what you have looks good in my experiments. Here are some\nstatistics on waits on my system. Times in seconds, first row is\nbefore the new time logic went in, your patch is the last row.\n\n name | count | sum | avg | stddev | min | max\n---------------+-------+---------+-------+--------+-------+--------\n before | 718 | 37.375 | 0.052 | 0.135 | 0.006 | 1.110\n master | 718 | 173.100 | 0.241 | 1.374 | 0.006 | 10.004\n initial-100ms | 718 | 37.169 | 0.052 | 0.126 | 0.006 | 0.676\n initial-0ms | 718 | 35.276 | 0.049 | 0.123 | 0.006 | 0.661\n\nThe difference on master is explained by these 14 outliers:\n\n name | time\n------------------------------------------+--------\n testrun/recovery/019_replslot_limit | 10.004\n testrun/recovery/033_replay_tsp_drops | 9.917\n testrun/recovery/033_replay_tsp_drops | 9.957\n testrun/pg_basebackup/020_pg_receivewal | 9.899\n testrun/pg_rewind/003_extrafiles | 9.961\n testrun/pg_rewind/003_extrafiles | 9.916\n testrun/pg_rewind/008_min_recovery_point | 9.929\n recovery/019_replslot_limit | 10.004\n recovery/033_replay_tsp_drops | 9.917\n recovery/033_replay_tsp_drops | 9.957\n pg_basebackup/020_pg_receivewal | 9.899\n pg_rewind/003_extrafiles | 9.961\n pg_rewind/003_extrafiles | 9.916\n pg_rewind/008_min_recovery_point | 9.929\n(14 rows)\n\nNow that I can see what is going on I'm going to try to see how much I\ncan squeeze out of this...\n\n\n", "msg_date": "Tue, 15 Nov 2022 12:26:00 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "Another option might be to just force initial reply/feedback messages when\nstreaming starts. 
The attached patch improves src/test/recovery test\nruntime just like the previous one I posted.\n\nOn Mon, Nov 14, 2022 at 11:01:27AM -0800, Nathan Bossart wrote:\n> I wonder if we\n> ought to do something similar in the ConfigReloadPending path in case\n> hot_standby_feedback is being turned on.\n\nLooking closer, I see that a feedback message is forced in this case\nalready.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 14 Nov 2022 20:49:16 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Tue, Nov 15, 2022 at 5:49 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Another option might be to just force initial reply/feedback messages when\n> streaming starts. The attached patch improves src/test/recovery test\n> runtime just like the previous one I posted.\n\nYeah, looks good in my tests. I think we just need to make it\nconditional so we don't force it if someone has\nwal_receiver_status_interval disabled.", "msg_date": "Wed, 16 Nov 2022 16:57:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Wed, Nov 16, 2022 at 04:57:08PM +1300, Thomas Munro wrote:\n> On Tue, Nov 15, 2022 at 5:49 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Another option might be to just force initial reply/feedback messages when\n>> streaming starts. The attached patch improves src/test/recovery test\n>> runtime just like the previous one I posted.\n> \n> Yeah, looks good in my tests. I think we just need to make it\n> conditional so we don't force it if someone has\n> wal_receiver_status_interval disabled.\n\nYeah, that makes sense. 
IIUC setting \"force\" to false would have the same\neffect for the initial round of streaming, but since writePtr/flushPtr will\nbe set in later rounds, no reply would be guaranteed. That might be good\nenough for the tests, but it seems a bit inconsistent. So, your patch is\nprobably the way to go.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 15 Nov 2022 20:24:16 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" }, { "msg_contents": "On Wed, Nov 16, 2022 at 5:24 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> On Wed, Nov 16, 2022 at 04:57:08PM +1300, Thomas Munro wrote:\n> > On Tue, Nov 15, 2022 at 5:49 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> >> Another option might be to just force initial reply/feedback messages when\n> >> streaming starts. The attached patch improves src/test/recovery test\n> >> runtime just like the previous one I posted.\n> >\n> > Yeah, looks good in my tests. I think we just need to make it\n> > conditional so we don't force it if someone has\n> > wal_receiver_status_interval disabled.\n>\n> Yeah, that makes sense. IIUC setting \"force\" to false would have the same\n> effect for the initial round of streaming, but since writePtr/flushPtr will\n> be set in later rounds, no reply would be guaranteed. That might be good\n> enough for the tests, but it seems a bit inconsistent. So, your patch is\n> probably the way to go.\n\nOn second thoughts, there's not much point in that, since we always\nforce replies under various other circumstances anyway. There isn't\nreally a 'never send replies' mode. Committed the way you had it\nbefore. Thanks!\n\nI can't unsee that we're spending ~35 seconds waiting for catchup in\n718 separate wait_for_catchup calls. 
The problem is entirely on the\nwaiting side (we already do WalRcvForceReply() in xlogrecovery.c when\ngoing idle), and there's probably a nice condition variable-based\nimprovement here but I decided to hop over that rabbit hole today.\n\n\n", "msg_date": "Thu, 17 Nov 2022 11:31:57 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Suppressing useless wakeups in walreceiver" } ]
[ { "msg_contents": "While reviewing code related to supporting multiple result sets in psql, \nit is always confusing that in psql many variables of type PGresult* are \nnamed \"results\" (plural), as if there could be multiple. While it is ok \nin casual talk to consider a return from a query to be a bunch of stuff, \nthis plural naming is inconsistent with how other code and the libpq API \nuses these terms. And if we're going to get to multiple result sets \nsupport, I think we need to be more precise throughout the code. The \nattached patch renames these variables and functions to singular where \nappropriate.", "msg_date": "Thu, 27 Jan 2022 12:34:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "psql: Rename results to result when only a single one is meant" } ]
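As background for the naming argument above: libpq's asynchronous API hands back result sets strictly one at a time — each PQgetResult() call returns a single PGresult, and a NULL return terminates the series. A minimal, hedged sketch of that consumption loop (it assumes an already-established PGconn and trims error handling, so it is illustrative rather than runnable on its own):

```c
#include <stdio.h>
#include <libpq-fe.h>

/* Consume every result set produced by a query string that may contain
 * multiple statements.  Each PQgetResult() yields exactly one PGresult,
 * which is why singular naming matches the libpq API. */
static void
consume_results(PGconn *conn, const char *query)
{
    PGresult   *result;         /* singular: one result set at a time */

    if (!PQsendQuery(conn, query))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return;
    }

    while ((result = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(result) == PGRES_TUPLES_OK)
            printf("result set with %d rows\n", PQntuples(result));
        PQclear(result);
    }
}
```

Under this convention a variable of type PGresult * always holds one result set, which is the consistency the patch aims for in psql's own code.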
[ { "msg_contents": "If you create an empty table, it is not fsync'd. As soon as you insert a \nrow to it, register_dirty_segment() gets called, and after that, the \nnext checkpoint will fsync it. But before that, the creation itself is \nnever fsync'd. That's obviously not great.\n\nThe lack of an fsync is a bit hard to prove because it requires a \nhardware failure, or a simulation of it, and can be affected by \nfilesystem options too. But I was able to demonstrate a problem with \nthese steps:\n\n1. Create a VM with two virtual disks. Use ext4, with 'data=writeback' \noption (I'm not sure if that's required). Install PostgreSQL on one of \nthe virtual disks.\n\n2. Start the server, and create a tablespace on the other disk:\n\nCREATE TABLESPACE foospc LOCATION '/data/heikki';\n\n3. Do this:\n\nCREATE TABLE foo (i int) TABLESPACE foospc;\nCHECKPOINT;\n\n4. Immediately after that, kill the VM. I used:\n\nkillall -9 qemu-system-x86_64\n\n5. Restart the VM, restart PostgreSQL. Now when you try to use the \ntable, you get an error:\n\npostgres=# select * from crashtest ;\nERROR: could not open file \"pg_tblspc/81921/PG_15_202201271/5/98304\": \nNo such file or directory\n\nI was not able to reproduce this without the tablespace on a different \nvirtual disk, I presume because ext4 orders the writes so that the \ncheckpoint implicitly always flushes the creation of the file to disk. I \ntried data=writeback but it didn't make a difference. But with a \nseparate disk, it happens every time.\n\nI think the simplest fix is to call register_dirty_segment() from \nmdcreate(). As in the attached. Thoughts?\n\n- Heikki", "msg_date": "Thu, 27 Jan 2022 19:55:45 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "Hi,\n\nOn 2022-01-27 19:55:45 +0200, Heikki Linnakangas wrote:\n> If you create an empty table, it is not fsync'd. 
As soon as you insert a row\n> to it, register_dirty_segment() gets called, and after that, the next\n> checkpoint will fsync it. But before that, the creation itself is never\n> fsync'd. That's obviously not great.\n\nGood catch.\n\n\n> The lack of an fsync is a bit hard to prove because it requires a hardware\n> failure, or a simulation of it, and can be affected by filesystem options\n> too.\n\nI've been thinking that we need a validation layer for fsyncs, it's too hard\nto get right without testing, and crash testing is not likely enough to catch\nproblems quickly / resource intensive.\n\nMy thought was that we could keep a shared hash table of all files created /\ndirtied at the fd layer, with the filename as key and the value the current\nLSN. We'd delete files from it when they're fsynced. At checkpoints we go\nthrough the list and see if there's any files from before the redo that aren't\nyet fsynced. All of this in assert builds only, of course.\n\n\n\n> \n> 1. Create a VM with two virtual disks. Use ext4, with 'data=writeback'\n> option (I'm not sure if that's required). Install PostgreSQL on one of the\n> virtual disks.\n> \n> 2. Start the server, and create a tablespace on the other disk:\n> \n> CREATE TABLESPACE foospc LOCATION '/data/heikki';\n> \n> 3. Do this:\n> \n> CREATE TABLE foo (i int) TABLESPACE foospc;\n> CHECKPOINT;\n> \n> 4. Immediately after that, kill the VM. I used:\n> \n> killall -9 qemu-system-x86_64\n> \n> 5. Restart the VM, restart PostgreSQL. Now when you try to use the table,\n> you get an error:\n> \n> postgres=# select * from crashtest ;\n> ERROR: could not open file \"pg_tblspc/81921/PG_15_202201271/5/98304\": No\n> such file or directory\n\nI think it might be good to have a bunch of in-core scripting to do this kind\nof thing. 
I think we can get lower iteration time by using linux\ndevice-mapper targets to refuse writes.\n\n\n> I was not able to reproduce this without the tablespace on a different\n> virtual disk, I presume because ext4 orders the writes so that the\n> checkpoint implicitly always flushes the creation of the file to disk.\n\nIt's likely that the control file sync at the end of a checkpoint has the side\neffect of also forcing the file creation to be durable if on the same\ntablespace (it'd not make the file contents durable, but they don't exist\nhere, so ...).\n\n\n> I tried data=writeback but it didn't make a difference. But with a separate\n> disk, it happens every time.\n\nMy understanding is that data=writeback just stops a journal flush of\nfilesystem metadata from also flushing all previously dirtied data. Which\noften is good, because that's a lot of unnecessary writes. Even with\ndata=writeback prior file system journal contents are still synced to disk.\n\n\n> I think the simplest fix is to call register_dirty_segment() from\n> mdcreate(). As in the attached. Thoughts?\n\n> diff --git a/src/backend/storage/smgr/md.c b/src/backend/storage/smgr/md.c\n> index d26c915f90e..2dfd80ca66b 100644\n> --- a/src/backend/storage/smgr/md.c\n> +++ b/src/backend/storage/smgr/md.c\n> @@ -225,6 +225,9 @@ mdcreate(SMgrRelation reln, ForkNumber forkNum, bool isRedo)\n> \tmdfd = &reln->md_seg_fds[forkNum][0];\n> \tmdfd->mdfd_vfd = fd;\n> \tmdfd->mdfd_segno = 0;\n> +\n> +\tif (!SmgrIsTemp(reln))\n> +\t\tregister_dirty_segment(reln, forkNum, mdfd);\n\nI think we might want different handling for unlogged tables and for relations\nthat are synced to disk in other ways (e.g. index builds). 
But I've not\nre-read the relevant code.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 10:28:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On Fri, Jan 28, 2022 at 7:28 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-27 19:55:45 +0200, Heikki Linnakangas wrote:\n> > I was not able to reproduce this without the tablespace on a different\n> > virtual disk, I presume because ext4 orders the writes so that the\n> > checkpoint implicitly always flushes the creation of the file to disk.\n>\n> It's likely that the control file sync at the end of a checkpoint has the side\n> effect of also forcing the file creation to be durable if on the same\n\n> tablespace (it'd not make the file contents durable, but they don't exist\n> here, so ...).\n\nIt might be possible to avoid that on xfs or pretty much any other\nfile system. I wasn't following this closely, but even with ext4's\nrecent fast commit changes, its fsync implementation still\ndeliberately synchronises data for other file descriptors as a side\neffect as summarised in [1], unlike xfs and other systems. So they've\ncaught up with xfs's concurrent writes (and gone further than xfs by\ndoing it also for buffered I/O giving up even page-level atomicity, as\ndiscussed in a couple of other threads), but not yet decided to pull\nthe trigger on just-fsync-what-I-asked-for.\n\n[1] https://lwn.net/Articles/842385/\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:01:01 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On Fri, Jan 28, 2022 at 6:55 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I think the simplest fix is to call register_dirty_segment() from\n> mdcreate(). As in the attached. 
Thoughts?\n\n+1\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:12:54 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "Hi,\n\nOn 2022-01-28 08:01:01 +1300, Thomas Munro wrote:\n> On Fri, Jan 28, 2022 at 7:28 AM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-01-27 19:55:45 +0200, Heikki Linnakangas wrote:\n> > > I was not able to reproduce this without the tablespace on a different\n> > > virtual disk, I presume because ext4 orders the writes so that the\n> > > checkpoint implicitly always flushes the creation of the file to disk.\n> >\n> > It's likely that the control file sync at the end of a checkpoint has the side\n> > effect of also forcing the file creation to be durable if on the same\n> \n> > tablespace (it'd not make the file contents durable, but they don't exist\n> > here, so ...).\n> \n> It might be possible to avoid that on xfs or pretty much any other\n> file system. I wasn't following this closely, but even with ext4's\n> recent fast commit changes, its fsync implementation still\n> deliberately synchronises data for other file descriptors as a side\n> effect as summarised in [1], unlike xfs and other systems.\n\nNot data, just metadata, right? Well, and volatile write caches (by virtue of\ndoing an otherwise unnecessary REQ_PREFLUSH). With data=writeback file\ncontents for file a are not flushed to disk (rather than from disk write\ncache) when file b is fsynced. 
Before/After the fast commit feature.\n\n\n> So they've caught up with xfs's concurrent writes (and gone further than xfs\n> by doing it also for buffered I/O giving up even page-level atomicity, as\n> discussed in a couple of other threads), but not yet decided to pull the\n> trigger on just-fsync-what-I-asked-for.\n\nI think the page level locking and the changes above / fsyncing are\npretty much independent, no?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 11:17:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On Fri, Jan 28, 2022 at 8:17 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-28 08:01:01 +1300, Thomas Munro wrote:\n> > It might be possible to avoid that on xfs or pretty much any other\n> > file system. I wasn't following this closely, but even with ext4's\n> > recent fast commit changes, its fsync implementation still\n> > deliberately synchronises data for other file descriptors as a side\n> > effect as summarised in [1], unlike xfs and other systems.\n>\n> Not data, just metadata, right? Well, and volatile write caches (by virtue of\n> doing an otherwise unnecessary REQ_PREFLUSH). With data=writeback file\n> contents for file a are not flushed to disk (rather than from disk write\n> cache) when file b is fsynced. Before/After the fast commit feature.\n\nPass. [/me suppresses the urge to go and test]. I'm just going by\nwords in that article and T'so's linked email about file entanglement\nand the global barrier nature of fsync. 
I'm not studying the code.\n\n> > So they've caught up with xfs's concurrent writes (and gone further than xfs\n> > by doing it also for buffered I/O giving up even page-level atomicity, as\n> > discussed in a couple of other threads), but not yet decided to pull the\n> > trigger on just-fsync-what-I-asked-for.\n>\n> I don't think the page level locking and the changes above / fsyncing are\n> pretty much independent?\n\nRight, the concurrency thing is completely unrelated and an old change\n(maybe so old that I'm actually thinking of ext3?). I was just\ncommenting on two historical reasons why people used to switch to xfs\n(especially other databases using DIO). I see that Noah's results\nshowed \"atomicity\" (for some definition) for read/write in 3.10, but\nnot for pread/pwrite, with the difference gone in 4.9, so I guess\nperhaps somewhere between those releases is where our unlocked control\nfile access became problematic, but perhaps it wasn't fundamental,\njust a quirk of the implementation of read/write with implicit\nposition, and if we'd used pread/pwrite for the control file we'd have\nseen it sooner. *shrug*\n\n\n", "msg_date": "Fri, 28 Jan 2022 09:41:04 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On Fri, Jan 28, 2022 at 8:12 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jan 28, 2022 at 6:55 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> > I think the simplest fix is to call register_dirty_segment() from\n> > mdcreate(). As in the attached. Thoughts?\n>\n> +1\n\n[Testing]\n\nErm, so now I see my new table in checkpoint's activities:\n\nopenat(AT_FDCWD,\"base/5/16399\",O_RDWR,00) = 20 (0x14)\nfsync(20) = 0 (0x0)\n\n... but we still never synchronize \"base/5\". 
According to our\nproject's reading of the POSIX tea leaves we should be doing that to\nnail down the directory entry.\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:11:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On 28/01/2022 00:11, Thomas Munro wrote:\n> On Fri, Jan 28, 2022 at 8:12 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Fri, Jan 28, 2022 at 6:55 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>> I think the simplest fix is to call register_dirty_segment() from\n>>> mdcreate(). As in the attached. Thoughts?\n>>\n>> +1\n> \n> [Testing]\n> \n> Erm, so now I see my new table in checkpoint's activities:\n> \n> openat(AT_FDCWD,\"base/5/16399\",O_RDWR,00) = 20 (0x14)\n> fsync(20) = 0 (0x0)\n> \n> ... but we still never synchronize \"base/5\". According to our\n> project's reading of the POSIX tea leaves we should be doing that to\n> nail down the directory entry.\n\nReally? 'base/5' is fsync'd by initdb, when it's created. I didn't think \nwe try to fsync() the directory, when a new file is created in it. We do \nthat with durable_rename() and durable_unlink(), but not with file creation.\n\nHmm, if a relation is dropped, we use plain unlink() to delete it (at \nthe next checkpoint). Should we use durable_unlink() there, or otherwise \narrange to fsync() the parent directory?\n\n- Heikki\n\n\n", "msg_date": "Fri, 28 Jan 2022 00:39:22 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "Hi,\n\nOn 2022-01-28 00:39:22 +0200, Heikki Linnakangas wrote:\n> On 28/01/2022 00:11, Thomas Munro wrote:\n> > ... but we still never synchronize \"base/5\". According to our\n> > project's reading of the POSIX tea leaves we should be doing that to\n> > nail down the directory entry.\n> \n> Really? 
'base/5' is fsync'd by initdb, when it's created. I didn't think we\n> try to fsync() the directory, when a new file is created in it.\n\nI've not heard of concrete reports of it being needed (whereas the directory\nfsync being needed after a rename() is pretty easy to reproduce). There's\nsome technical reasons why it'd make sense for it to only be really needed for\nthings after the initial file creation, but I'm not sure it's a good idea to\nrely on it.\n\nFrom the filesystem POV a file doesn't necessarily know which directory it is\nin, and it can be several at once due to hardlinks. Scheduling all the\ndirectory entries to be durable only really is feasible if all metadata\noperations are globally ordered - which we don't really want for scalability\nreasons.\n\nBut when initially creating a file, it's always in the context of a\ndirectory. I assume that most journalled filesystems are going to have the\ncreation of the directory entry and of the file itself be related journalling\noperations. But I wouldn't bet that's true for all of them, and theoretically we claim to be\nusable on non-journalled filesystems as well...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 27 Jan 2022 15:36:34 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On Fri, Jan 28, 2022 at 12:36 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-01-28 00:39:22 +0200, Heikki Linnakangas wrote:\n> > On 28/01/2022 00:11, Thomas Munro wrote:\n> > > ... but we still never synchronize \"base/5\". According to our\n> > > project's reading of the POSIX tea leaves we should be doing that to\n> > > nail down the directory entry.\n> >\n> > Really? 'base/5' is fsync'd by initdb, when it's created. 
I didn't think we\n> > try to fsync() the directory, when a new file is created in it.\n>\n> I've not heard of concrete reports of it being needed (whereas the directory\n> fsync being needed after a rename() is pretty easy to be reproduce). There's\n> some technical reasons why it'd make sense for it to only be really needed for\n> things after the initial file creation, but I'm not sure it's a good idea to\n> rely on it.\n\nI don't personally know of any system where that would break either.\nI base my paranoia on man pages and the Austin Group's famous open\nticket 0000672. Those don't mention special treatment for O_CREAT,\nthough I get the technical reason why it's different. I think we\nshould probably do something about that even if it's hypothetical, but\nI'm happy to call it a separate topic and follow up later (like commit\naca74843 did for SLRUs).\n\n\n", "msg_date": "Fri, 28 Jan 2022 14:48:58 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On Fri, Jan 28, 2022 at 11:39 AM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Hmm, if a relation is dropped, we use plain unlink() to delete it (at\n> the next checkpoint). Should we use durable_unlink() there, or otherwise\n> arrange to fsync() the parent directory?\n\nHmmmmm. I think the latter might be a good idea, but not because of\nthe file we unlink after checkpoint. Rationale: On commit, we\ntruncate all segments, and unlink segments > 0. After checkpoint, we\nunlink segment 0 (the tombstone preventing relfilenode recycling).\nSo, it might close some storage leak windows if we did:\n\n1. register_dirty_segment() on truncated segment 0, so that at\ncheckpoint it is fsynced. That means that if we lose power between\nthe checkpoint and the unlink(), at least its size is durably zero and\nnot 1GB. 
I don't know how to completely avoid leaking empty tombstone\nfiles if you lose power in that window. durable_unlink() may narrow\nthe window but it's still on the wrong side of a checkpoint and won't\nbe replayed on crash recovery. I hope we can get rid of tombstones\ncompletely as Dilip is attempting in [1].\n\n2. fsync() the containing directory as part of checkpointing, so the\nunlinks of non-tombstone segments are made durable at checkpoint.\nOtherwise, after checkpoint, you might lose power, come back up and\nfind the segments are still present in the directory, and worse, not\ntruncated.\n\nAFAICS we have no defences against zombie segments > 0 if the\ntombstone is gone, allowing recycling. Zombie segments could appear\nto be concatenated to the next user of the relfilenode. That problem\ngoes away with Dilip's project.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BfTEcYQvc18aEbrJjPri99A09JZcEXzXjT5h57S%2BAgkw%40mail.gmail.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 16:02:12 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "This patch required a trivial rebase after conflicting with bfcf1b3480 so I've\nattached a v2 to get the CFBot to run this.\n\n--\nDaniel Gustafsson", "msg_date": "Mon, 3 Jul 2023 14:59:41 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" }, { "msg_contents": "On 03/07/2023 15:59, Daniel Gustafsson wrote:\n> This patch required a trivial rebase after conflicting with bfcf1b3480 so I've\n> attached a v2 to get the CFBot to run this.\n\nThank you! Pushed to all supported branches. 
(Without forgetting the new \nREL_16_STABLE :-D ).\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Tue, 4 Jul 2023 18:33:05 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Creation of an empty table is not fsync'd at checkpoint" } ]
[ { "msg_contents": "Unlogged relations are not WAL-logged, but creating the init-fork is. \nThere are a few things around that seem sloppy:\n\n1. In index_build(), we do this:\n\n> \n> \t/*\n> \t * If this is an unlogged index, we may need to write out an init fork for\n> \t * it -- but we must first check whether one already exists. If, for\n> \t * example, an unlogged relation is truncated in the transaction that\n> \t * created it, or truncated twice in a subsequent transaction, the\n> \t * relfilenode won't change, and nothing needs to be done here.\n> \t */\n> \tif (indexRelation->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED &&\n> \t\t!smgrexists(RelationGetSmgr(indexRelation), INIT_FORKNUM))\n> \t{\n> \t\tsmgrcreate(RelationGetSmgr(indexRelation), INIT_FORKNUM, false);\n> \t\tindexRelation->rd_indam->ambuildempty(indexRelation);\n> \t}\n\nShouldn't we call log_smgrcreate() here? Creating the init fork is \notherwise not WAL-logged at all. In practice, all the ambuildempty() \nimplementations create and WAL-log a metapage and replay of that will \nimplicitly create the underlying relation. But if you created an index \naccess method whose ambuildempty() function does nothing, i.e. the init \nfork is just an empty file, there would be no trace in the WAL about \ncreating the init fork, which is bad. In fact, \nsrc/test/modules/dummy_index_am is an example of that, but because it \ndoes nothing at all with the file, you cannot see any ill effect from \nthe missing init fork.\n\n2. Some implementations of ambuildempty() use the buffer cache (hash, \ngist, gin, brin), while others bypass it and call smgrimmedsync() \ninstead (btree, spgist, bloom). I don't see any particular reason for \nthose decisions, it seems to be based purely on which example the author \nhappened to copy-paste.\n\nUsing the buffer cache seems better to me, because it avoids the \nsmgrimmedsync() call. That makes creating unlogged indexes faster. 
\nThat's not significant for any real world application, but still.\n\n3. Those ambuildempty implementations that bypass the buffer cache use \nsmgrwrite() to write the pages. That doesn't make any difference in \npractice, but in principle it's wrong: You are supposed to use \nsmgrextend() when extending a relation. Using smgrwrite() skips updating \nthe relation size cache, which is harmless in this case because it \nwasn't initialized previously either, and I'm not sure if we ever call \nsmgrnblocks() on the init-fork. But if you build with \nCHECK_WRITE_VS_EXTEND, you get an assertion failure.\n\n4. Also, the smgrwrite() calls are performed before WAL-logging the \npages, so the page that's written to disk has 0/0 as the LSN, not the \nLSN of the WAL record. That's harmless too, but seems a bit sloppy.\n\n(I remember we had a discussion once whether we should always extend the \nrelation first, and WAL-log only after that, but I can't find the thread \nright now. The point is to avoid the situation that the operation fails \nbecause you run out of disk space, and then you crash and WAL replay \nalso fails because you are still out of disk space. But most places \ncurrently call log_newpage() first, and smgrextend() after that.)\n\n5. In heapam_relation_set_new_filenode(), we do this:\n\n> \n> \t/*\n> \t * If required, set up an init fork for an unlogged table so that it can\n> \t * be correctly reinitialized on restart. An immediate sync is required\n> \t * even if the page has been logged, because the write did not go through\n> \t * shared_buffers and therefore a concurrent checkpoint may have moved the\n> \t * redo pointer past our xlog record. Recovery may as well remove it\n> \t * while replaying, for example, XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE\n> \t * record. 
Therefore, logging is necessary even if wal_level=minimal.\n> \t */\n> \tif (persistence == RELPERSISTENCE_UNLOGGED)\n> \t{\n> \t\tAssert(rel->rd_rel->relkind == RELKIND_RELATION ||\n> \t\t\t rel->rd_rel->relkind == RELKIND_MATVIEW ||\n> \t\t\t rel->rd_rel->relkind == RELKIND_TOASTVALUE);\n> \t\tsmgrcreate(srel, INIT_FORKNUM, false);\n> \t\tlog_smgrcreate(newrnode, INIT_FORKNUM);\n> \t\tsmgrimmedsync(srel, INIT_FORKNUM);\n> \t}\n\nThe comment doesn't make much sense, we haven't written nor WAL-logged \nany page here, with nor without the buffer cache. It made more sense \nbefore commit fa0f466d53.\n\nThis is what actually led me to discover the bug I just reported at [1], \nwith regular tables. If we fix that bug I like I proposed there, then \nsmgrcreate() will register and fsync request, and we won't need \nsmgrimmedsync here anymore.\n\nAttached is a patch to tighten those up. The third one should arguably \nbe part of [1], not this thread, but here it goes too.\n\n[1] \nhttps://www.postgresql.org/message-id/d47d8122-415e-425c-d0a2-e0160829702d%40iki.fi\n\n- Heikki", "msg_date": "Thu, 27 Jan 2022 21:32:04 +0200", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Unlogged relations and WAL-logging" }, { "msg_contents": "On Thu, Jan 27, 2022 at 2:32 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Unlogged relations are not WAL-logged, but creating the init-fork is.\n> There are a few things around that seem sloppy:\n>\n> 1. In index_build(), we do this:\n>\n> > */\n> > if (indexRelation->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED &&\n> > !smgrexists(RelationGetSmgr(indexRelation), INIT_FORKNUM))\n> > {\n> > smgrcreate(RelationGetSmgr(indexRelation), INIT_FORKNUM, false);\n> > indexRelation->rd_indam->ambuildempty(indexRelation);\n> > }\n>\n> Shouldn't we call log_smgrcreate() here? Creating the init fork is\n> otherwise not WAL-logged at all.\n\nYes, that's a bug.\n\n> 2. 
Some implementations of ambuildempty() use the buffer cache (hash,\n> gist, gin, brin), while others bypass it and call smgrimmedsync()\n> instead (btree, spgist, bloom). I don't see any particular reason for\n> those decisions, it seems to be based purely on which example the author\n> happened to copy-paste.\n\nI thought that this inconsistency was odd when I was developing the\nunlogged feature, but I tried to keep each routine's ambuildempty()\nconsistent with whatever ambuild() was doing. I don't mind if you want\nto change it, though.\n\n> 3. Those ambuildempty implementations that bypass the buffer cache use\n> smgrwrite() to write the pages. That doesn't make any difference in\n> practice, but in principle it's wrong: You are supposed to use\n> smgrextend() when extending a relation.\n\nThat's a mistake on my part.\n\n> 4. Also, the smgrwrite() calls are performed before WAL-logging the\n> pages, so the page that's written to disk has 0/0 as the LSN, not the\n> LSN of the WAL record. That's harmless too, but seems a bit sloppy.\n\nThat is also a mistake on my part.\n\n> 5. In heapam_relation_set_new_filenode(), we do this:\n>\n> >\n> > /*\n> > * If required, set up an init fork for an unlogged table so that it can\n> > * be correctly reinitialized on restart. An immediate sync is required\n> > * even if the page has been logged, because the write did not go through\n> > * shared_buffers and therefore a concurrent checkpoint may have moved the\n> > * redo pointer past our xlog record. Recovery may as well remove it\n> > * while replaying, for example, XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE\n> > * record. 
Therefore, logging is necessary even if wal_level=minimal.\n> > */\n> > if (persistence == RELPERSISTENCE_UNLOGGED)\n> > {\n> > Assert(rel->rd_rel->relkind == RELKIND_RELATION ||\n> > rel->rd_rel->relkind == RELKIND_MATVIEW ||\n> > rel->rd_rel->relkind == RELKIND_TOASTVALUE);\n> > smgrcreate(srel, INIT_FORKNUM, false);\n> > log_smgrcreate(newrnode, INIT_FORKNUM);\n> > smgrimmedsync(srel, INIT_FORKNUM);\n> > }\n>\n> The comment doesn't make much sense, we haven't written nor WAL-logged\n> any page here, with nor without the buffer cache. It made more sense\n> before commit fa0f466d53.\n\nWell, it seems to me (and perhaps I am just confused) that complaining\nthat there's no page written here might be a technicality. The point\nis that there's no synchronization between the work we're doing here\n-- which is creating a fork, not writing a page -- and any concurrent\ncheckpoint. So we both need to log it, and also sync it immediately.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:57:42 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unlogged relations and WAL-logging" }, { "msg_contents": "On 28/01/2022 15:57, Robert Haas wrote:\n> On Thu, Jan 27, 2022 at 2:32 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>> Unlogged relations are not WAL-logged, but creating the init-fork is.\n>> There are a few things around that seem sloppy:\n>>\n>> 1. In index_build(), we do this:\n>>\n>>> */\n>>> if (indexRelation->rd_rel->relpersistence == RELPERSISTENCE_UNLOGGED &&\n>>> !smgrexists(RelationGetSmgr(indexRelation), INIT_FORKNUM))\n>>> {\n>>> smgrcreate(RelationGetSmgr(indexRelation), INIT_FORKNUM, false);\n>>> indexRelation->rd_indam->ambuildempty(indexRelation);\n>>> }\n>>\n>> Shouldn't we call log_smgrcreate() here? Creating the init fork is\n>> otherwise not WAL-logged at all.\n> \n> Yes, that's a bug.\n\nPushed and backpatched this patch (commit 3142a8845b).\n\n>> 2. 
Some implementations of ambuildempty() use the buffer cache (hash,\n>> gist, gin, brin), while others bypass it and call smgrimmedsync()\n>> instead (btree, spgist, bloom). I don't see any particular reason for\n>> those decisions, it seems to be based purely on which example the author\n>> happened to copy-paste.\n> \n> I thought that this inconsistency was odd when I was developing the\n> unlogged feature, but I tried to keep each routine's ambuildempty()\n> consistent with whatever ambuild() was doing. I don't mind if you want\n> to change it, though.\n> \n>> 3. Those ambuildempty implementations that bypass the buffer cache use\n>> smgrwrite() to write the pages. That doesn't make any difference in\n>> practice, but in principle it's wrong: You are supposed to use\n>> smgrextend() when extending a relation.\n> \n> That's a mistake on my part.\n> \n>> 4. Also, the smgrwrite() calls are performed before WAL-logging the\n>> pages, so the page that's written to disk has 0/0 as the LSN, not the\n>> LSN of the WAL record. That's harmless too, but seems a bit sloppy.\n> \n> That is also a mistake on my part.\n\nI'm still sitting on these fixes. I think the patch I posted still makes \nsense, but I got carried away with a more invasive approach that \nintroduces a whole new set of functions for bulk-creating a relation, \nwhich would handle WAL-logging, smgrimmedsync() and all that (see \nbelow). We have some repetitive, error-prone code in all the index build \nfunctions for that. But that's not backpatchable, so I'll rebase the \noriginal approach next week.\n\n>> 5. In heapam_relation_set_new_filenode(), we do this:\n>>\n>>>\n>>> /*\n>>> * If required, set up an init fork for an unlogged table so that it can\n>>> * be correctly reinitialized on restart. 
An immediate sync is required\n>>> * even if the page has been logged, because the write did not go through\n>>> * shared_buffers and therefore a concurrent checkpoint may have moved the\n>>> * redo pointer past our xlog record. Recovery may as well remove it\n>>> * while replaying, for example, XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE\n>>> * record. Therefore, logging is necessary even if wal_level=minimal.\n>>> */\n>>> if (persistence == RELPERSISTENCE_UNLOGGED)\n>>> {\n>>> Assert(rel->rd_rel->relkind == RELKIND_RELATION ||\n>>> rel->rd_rel->relkind == RELKIND_MATVIEW ||\n>>> rel->rd_rel->relkind == RELKIND_TOASTVALUE);\n>>> smgrcreate(srel, INIT_FORKNUM, false);\n>>> log_smgrcreate(newrnode, INIT_FORKNUM);\n>>> smgrimmedsync(srel, INIT_FORKNUM);\n>>> }\n>>\n>> The comment doesn't make much sense, we haven't written nor WAL-logged\n>> any page here, with nor without the buffer cache. It made more sense\n>> before commit fa0f466d53.\n> \n> Well, it seems to me (and perhaps I am just confused) that complaining\n> that there's no page written here might be a technicality. The point\n> is that there's no synchronization between the work we're doing here\n> -- which is creating a fork, not writing a page -- and any concurrent\n> checkpoint. So we both need to log it, and also sync it immediately.\n\nI see. I pushed the fix from the other thread that makes smgrcreate() \ncall register_dirty_segment (commit 4b4798e13). I believe that makes \nthis smgrimmedsync() unnecessary. If a concurrent checkpoint happens \nwith a redo pointer greater than this WAL record, it must've received \nthe fsync request created by smgrcreate(). That depends on the fact that \nwe write the WAL record *after* smgrcreate(). Subtle..\n\nHmm, we have a similar smgrimmedsync() call after index build, because \nwe have written pages directly with smgrextend(skipFsync=true). If no \ncheckpoints have occurred during the index build, we could call \nregister_dirty_segment() instead of smgrimmedsync(). 
That would avoid \nthe fsync() latency when creating an index on an empty or small index.\n\nThis is all very subtle to get right though. That's why I'd like to \ninvent a new bulk-creation facility that would handle this stuff, and \nmake the callers less error-prone.\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 7 Jul 2023 18:21:44 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Unlogged relations and WAL-logging" }, { "msg_contents": "On 07/07/2023 18:21, Heikki Linnakangas wrote:\n> On 28/01/2022 15:57, Robert Haas wrote:\n>>> 4. Also, the smgrwrite() calls are performed before WAL-logging the\n>>> pages, so the page that's written to disk has 0/0 as the LSN, not the\n>>> LSN of the WAL record. That's harmless too, but seems a bit sloppy.\n>>\n>> That is also a mistake on my part.\n> \n> I'm still sitting on these fixes. I think the patch I posted still makes\n> sense, but I got carried away with a more invasive approach that\n> introduces a whole new set of functions for bulk-creating a relation,\n> which would handle WAL-logging, smgrimmedsync() and all that (see\n> below). We have some repetitive, error-prone code in all the index build\n> functions for that. But that's not backpatchable, so I'll rebase the\n> original approach next week.\n\nCommitted this fix to master and v16. Didn't seem worth backpatching \nfurther than that, given that there is no live user-visible issue here.\n\n>>> 5. In heapam_relation_set_new_filenode(), we do this:\n>>>\n>>>>\n>>>> /*\n>>>> * If required, set up an init fork for an unlogged table so that it can\n>>>> * be correctly reinitialized on restart. An immediate sync is required\n>>>> * even if the page has been logged, because the write did not go through\n>>>> * shared_buffers and therefore a concurrent checkpoint may have moved the\n>>>> * redo pointer past our xlog record. 
Recovery may as well remove it\n>>>> * while replaying, for example, XLOG_DBASE_CREATE or XLOG_TBLSPC_CREATE\n>>>> * record. Therefore, logging is necessary even if wal_level=minimal.\n>>>> */\n>>>> if (persistence == RELPERSISTENCE_UNLOGGED)\n>>>> {\n>>>> Assert(rel->rd_rel->relkind == RELKIND_RELATION ||\n>>>> rel->rd_rel->relkind == RELKIND_MATVIEW ||\n>>>> rel->rd_rel->relkind == RELKIND_TOASTVALUE);\n>>>> smgrcreate(srel, INIT_FORKNUM, false);\n>>>> log_smgrcreate(newrnode, INIT_FORKNUM);\n>>>> smgrimmedsync(srel, INIT_FORKNUM);\n>>>> }\n>>>\n>>> The comment doesn't make much sense, we haven't written nor WAL-logged\n>>> any page here, with nor without the buffer cache. It made more sense\n>>> before commit fa0f466d53.\n>>\n>> Well, it seems to me (and perhaps I am just confused) that complaining\n>> that there's no page written here might be a technicality. The point\n>> is that there's no synchronization between the work we're doing here\n>> -- which is creating a fork, not writing a page -- and any concurrent\n>> checkpoint. So we both need to log it, and also sync it immediately.\n> \n> I see. I pushed the fix from the other thread that makes smgrcreate()\n> call register_dirty_segment (commit 4b4798e13). I believe that makes\n> this smgrimmedsync() unnecessary. If a concurrent checkpoint happens\n> with a redo pointer greater than this WAL record, it must've received\n> the fsync request created by smgrcreate(). That depends on the fact that\n> we write the WAL record *after* smgrcreate(). Subtle..\n> \n> Hmm, we have a similar smgrimmedsync() call after index build, because\n> we have written pages directly with smgrextend(skipFsync=true). If no\n> checkpoints have occurred during the index build, we could call\n> register_dirty_segment() instead of smgrimmedsync(). That would avoid\n> the fsync() latency when creating an index on an empty or small index.\n> \n> This is all very subtle to get right though. 
That's why I'd like to\n> invent a new bulk-creation facility that would handle this stuff, and\n> make the callers less error-prone.\n\nHaving a more generic and less error-prone bulk-creation mechanism is \nstill on my long TODO list..\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Wed, 23 Aug 2023 17:40:32 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Unlogged relations and WAL-logging" }, { "msg_contents": "Is the patch \n0003-Remove-unnecessary-smgrimmedsync-when-creating-unlog.patch still \nrelevant, or can this commitfest entry be closed?\n\nOn 23.08.23 16:40, Heikki Linnakangas wrote:\n>>>> 5. In heapam_relation_set_new_filenode(), we do this:\n>>>>\n>>>>>\n>>>>>         /*\n>>>>>          * If required, set up an init fork for an unlogged table \n>>>>> so that it can\n>>>>>          * be correctly reinitialized on restart.  An immediate \n>>>>> sync is required\n>>>>>          * even if the page has been logged, because the write did \n>>>>> not go through\n>>>>>          * shared_buffers and therefore a concurrent checkpoint may \n>>>>> have moved the\n>>>>>          * redo pointer past our xlog record.  Recovery may as well \n>>>>> remove it\n>>>>>          * while replaying, for example, XLOG_DBASE_CREATE or \n>>>>> XLOG_TBLSPC_CREATE\n>>>>>          * record. 
Therefore, logging is necessary even if \n>>>>> wal_level=minimal.\n>>>>>          */\n>>>>>         if (persistence == RELPERSISTENCE_UNLOGGED)\n>>>>>         {\n>>>>>                 Assert(rel->rd_rel->relkind == RELKIND_RELATION ||\n>>>>>                            rel->rd_rel->relkind == RELKIND_MATVIEW ||\n>>>>>                            rel->rd_rel->relkind == \n>>>>> RELKIND_TOASTVALUE);\n>>>>>                 smgrcreate(srel, INIT_FORKNUM, false);\n>>>>>                 log_smgrcreate(newrnode, INIT_FORKNUM);\n>>>>>                 smgrimmedsync(srel, INIT_FORKNUM);\n>>>>>         }\n>>>>\n>>>> The comment doesn't make much sense, we haven't written nor WAL-logged\n>>>> any page here, with nor without the buffer cache. It made more sense\n>>>> before commit fa0f466d53.\n>>>\n>>> Well, it seems to me (and perhaps I am just confused) that complaining\n>>> that there's no page written here might be a technicality. The point\n>>> is that there's no synchronization between the work we're doing here\n>>> -- which is creating a fork, not writing a page -- and any concurrent\n>>> checkpoint. So we both need to log it, and also sync it immediately.\n>>\n>> I see. I pushed the fix from the other thread that makes smgrcreate()\n>> call register_dirty_segment (commit 4b4798e13). I believe that makes\n>> this smgrimmedsync() unnecessary. If a concurrent checkpoint happens\n>> with a redo pointer greater than this WAL record, it must've received\n>> the fsync request created by smgrcreate(). That depends on the fact that\n>> we write the WAL record *after* smgrcreate(). Subtle..\n>>\n>> Hmm, we have a similar smgrimmedsync() call after index build, because\n>> we have written pages directly with smgrextend(skipFsync=true). If no\n>> checkpoints have occurred during the index build, we could call\n>> register_dirty_segment() instead of smgrimmedsync(). 
That would avoid\n>> the fsync() latency when creating an index on an empty or small index.\n>>\n>> This is all very subtle to get right though. That's why I'd like to\n>> invent a new bulk-creation facility that would handle this stuff, and\n>> make the callers less error-prone.\n> \n> Having a more generic and less error-prone bulk-creation mechanism is \n> still on my long TODO list..\n> \n\n\n\n", "msg_date": "Fri, 1 Sep 2023 14:49:52 +0200", "msg_from": "Peter Eisentraut <peter@eisentraut.org>", "msg_from_op": false, "msg_subject": "Re: Unlogged relations and WAL-logging" }, { "msg_contents": "On 01/09/2023 15:49, Peter Eisentraut wrote:\n> Is the patch\n> 0003-Remove-unnecessary-smgrimmedsync-when-creating-unlog.patch still\n> relevant, or can this commitfest entry be closed?\n\nYes. Pushed it now, thanks!\n\n-- \nHeikki Linnakangas\nNeon (https://neon.tech)\n\n\n\n", "msg_date": "Fri, 15 Sep 2023 17:54:30 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": true, "msg_subject": "Re: Unlogged relations and WAL-logging" } ]
[ { "msg_contents": "Commit b07642dbcd (\"Trigger autovacuum based on number of INSERTs\")\ntaught autovacuum to run in response to INSERTs, which is now\ntypically the dominant factor that drives vacuuming for an append-only\ntable -- a very useful feature, certainly.\n\nThis is driven by the following logic from autovacuum.c:\n\n vacinsthresh = (float4) vac_ins_base_thresh +\nvac_ins_scale_factor * reltuples;\n ...\n\n /* Determine if this table needs vacuum or analyze. */\n *dovacuum = force_vacuum || (vactuples > vacthresh) ||\n (vac_ins_base_thresh >= 0 && instuples > vacinsthresh);\n\nI can see why the nearby, similar vacthresh and anlthresh variables\n(not shown here) are scaled based on pg_class.reltuples -- that makes\nsense. But why should we follow that example here, with vacinsthresh?\n\nBoth VACUUM and ANALYZE update pg_class.reltuples. But this code seems\nto assume that it's only something that VACUUM can ever do. Why\nwouldn't we expect a plain ANALYZE to have actually been the last\nthing to update pg_class.reltuples for an append-only table? Wouldn't\nthat lead to less frequent (perhaps infinitely less frequent)\nvacuuming for an append-only table, relative to the documented\nbehavior of autovacuum_vacuum_insert_scale_factor?\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Jan 2022 12:20:18 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Why is INSERT-driven autovacuuming based on pg_class.reltuples?" }, { "msg_contents": "On Thu, Jan 27, 2022 at 12:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Both VACUUM and ANALYZE update pg_class.reltuples. But this code seems\n> to assume that it's only something that VACUUM can ever do. Why\n> wouldn't we expect a plain ANALYZE to have actually been the last\n> thing to update pg_class.reltuples for an append-only table? 
Wouldn't\n> that lead to less frequent (perhaps infinitely less frequent)\n> vacuuming for an append-only table, relative to the documented\n> behavior of autovacuum_vacuum_insert_scale_factor?\n\nPgStat_StatTabEntry.inserts_since_vacuum will continue to grow and\ngrow as more tuples are inserted, until VACUUM actually runs, no\nmatter what. That largely explains why this bug was missed before now:\nit's inevitable that inserts_since_vacuum will become large at some\npoint -- even large relative to a bogus scaled\npg_class.reltuples-at-ANALYZE threshold (unless ANALYZE hasn't been\nrun since the last VACUUM, in which case pg_class.reltuples will be at\nthe expected value anyway). And so we'll eventually get to the point\nwhere so many unvacuumed inserted tuples have accumulated that an\ninsert-driven autovacuum still takes place.\n\nIn practice these delayed insert-driven autovacuum operations will\nusually happen without *ludicrous* delay (relative to the documented\nbehavior). Even still, the autovacuum schedule for append-only tables\nwill often be quite wrong. (Anti-wraparound VACUUMs probably made the\nbug harder to notice as well, of course.)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 27 Jan 2022 13:59:38 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why is INSERT-driven autovacuuming based on pg_class.reltuples?" }, { "msg_contents": "On Thu, 2022-01-27 at 12:20 -0800, Peter Geoghegan wrote:\n> I can see why the nearby, similar vacthresh and anlthresh variables\n> (not shown here) are scaled based on pg_class.reltuples -- that makes\n> sense. But why should we follow that example here, with vacinsthresh?\n\nWhat would you suggest instead? 
pg_stat_all_tables.n_live_tup?\n\nYours,\nLaurenz Albe\n\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:22:19 +0100", "msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: Why is INSERT-driven autovacuuming based on pg_class.reltuples?" }, { "msg_contents": "On Thu, Jan 27, 2022 at 11:22 PM Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> What would you suggest instead? pg_stat_all_tables.n_live_tup?\n\nI'm not sure, except that I assume that it'll have to come from the\nstatistics collector, not from pg_class. I think that this bug stemmed\nfrom the fact that vac_update_relstats() is used by both VACUUM and\nANALYZE to store information in pg_class, while both *also* use\npgstat_report_vacuum()/pgstat_report_analyze() to store closely\nrelated information at the same point. There is rather a lot of\nimplicit information here. I recall a few other bugs that also seemed\nrelated to this messiness with statistics and pg_class.\n\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 28 Jan 2022 19:35:51 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why is INSERT-driven autovacuuming based on pg_class.reltuples?" }, { "msg_contents": "On Thu, Jan 27, 2022 at 01:59:38PM -0800, Peter Geoghegan wrote:\n> On Thu, Jan 27, 2022 at 12:20 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > Both VACUUM and ANALYZE update pg_class.reltuples. But this code seems\n> > to assume that it's only something that VACUUM can ever do. Why\n> > wouldn't we expect a plain ANALYZE to have actually been the last\n> > thing to update pg_class.reltuples for an append-only table? 
Wouldn't\n> > that lead to less frequent (perhaps infinitely less frequent)\n> > vacuuming for an append-only table, relative to the documented\n> > behavior of autovacuum_vacuum_insert_scale_factor?\n> \n> PgStat_StatTabEntry.inserts_since_vacuum will continue to grow and\n> grow as more tuples are inserted, until VACUUM actually runs, no\n> matter what. That largely explains why this bug was missed before now:\n> it's inevitable that inserts_since_vacuum will become large at some\n> point -- even large relative to a bogus scaled\n> pg_class.reltuples-at-ANALYZE threshold (unless ANALYZE hasn't been\n> run since the last VACUUM, in which case pg_class.reltuples will be at\n> the expected value anyway). And so we'll eventually get to the point\n> where so many unvacuumed inserted tuples have accumulated that an\n> insert-driven autovacuum still takes place.\n\nMaybe I missed your point, but I think this may not rise to the level of\nbeing a \"bug\".\n\nIf we just vacuumed an insert-only table, and then insert some more, and\nautoanalyze runs and updates reltuples, what's wrong with vac_ins_scale_factor\n* reltuples + vac_ins_base_thresh ?\n\nYou're saying reltuples should be the number of tuples at the last vacuum\ninstead of the most recent value from either vacuum or analyze ?\n\nIt's true that the vacuum threshold may be hit later than if reltuples hadn't\nbeen updated by ANALYZE. If that's what you're referring to, that's the\nbehavior of scale factor in general. If a table is growing in size at a\nconstant rate, analyze will run at decreasing frequency. With the default 20%\nscale factor, it'll first run at 1.2x the table's initial size, then at 1.44\n(not 1.4), then at 1.728 (not 1.6), then at 2.0736 (not 1.8). That's not\nnecessarily desirable, but it's not necessarily wrong, either. If your table\ndoubles in size, you might have to adjust these things. 
Maybe there should be\nanother knob allowing perfect, \"geometric\" (or other) frequency, but the\nbehavior is not new in this patch.\n\nWe talked about that here.\nhttps://www.postgresql.org/message-id/flat/20200305172749.GK684%40telsasoft.com#edac59123843f9f8e1abbc2b570c76f1\n\nWith the default values, analyze happens after 10% growth, and vacuum happens\nafter 20% (which ends up being 22% of the initial table size). The goal of\nthis patch was to have inserts trigger autovacuum *at all*. This behavior may\nbe subtle or non-ideal, but not a problem? The opposite thing could also\nhappen - *vacuum* could update reltuples, causing the autoanalyze threshold to\nbe hit a bit later. \n\n-- \nJustin\n\n\n", "msg_date": "Sun, 30 Jan 2022 11:00:50 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Why is INSERT-driven autovacuuming based on pg_class.reltuples?" }, { "msg_contents": "On Fri, 28 Jan 2022 at 09:20, Peter Geoghegan <pg@bowt.ie> wrote:\n> Both VACUUM and ANALYZE update pg_class.reltuples. But this code seems\n> to assume that it's only something that VACUUM can ever do.\n\nLike Justin I'm also not quite following what the problem is here.\npg_class.reltuples is only used to estimate how many tuples the scale\nfactor is likely to be. It does not matter if that was set by ANALYZE\nor VACUUM, it's simply an estimate.\n\nI quoted the text above as I get the idea that you've gotten the wrong\nend of the stick about how this works. reltuples is just used to\nestimate what the number of tuples for the insert threshold is based\non the scale factor. It does not matter if that was estimated by\nVACUUM or ANALYZE.\n\nIf ANALYZE runs and sets pg_class.reltuples to 1 million, then we\ninsert 500k tuples, assuming a 0 vacuum_ins_threshold and a\nvacuum_ins_scale_factor of 0.2, then we'll want to perform a vacuum as\n\"vac_ins_base_thresh + vac_ins_scale_factor * reltuples\" will come out\nat 200k. 
auto-vacuum will then trigger and update reltuples hopefully\nto some value around 1.5 million, then next time it'll take 300k\ntuples to trigger an insert vacuum.\n\nI'm not quite following where the problem is with that. (Of course\nwith the exception of the fact that ANALYZE and VACUUM have different\nmethods how they decide what to set pg_class.reltuples to. That's not\na new problem)\n\nDavid\n\n\n", "msg_date": "Mon, 31 Jan 2022 17:28:43 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is INSERT-driven autovacuuming based on pg_class.reltuples?" }, { "msg_contents": "On Mon, 31 Jan 2022 at 17:28, David Rowley <dgrowleyml@gmail.com> wrote:\n> If ANALYZE runs and sets pg_class.reltuples to 1 million, then we\n> insert 500k tuples, assuming a 0 vacuum_ins_threshold and a\n> vacuum_ins_scale_factor of 0.2, then we'll want to perform a vacuum as\n> \"vac_ins_base_thresh + vac_ins_scale_factor * reltuples\" will come out\n> at 200k. auto-vacuum will then trigger and update reltuples hopefully\n> to some value around 1.5 million, then next time it'll take 300k\n> tuples to trigger an insert vacuum.\n\nIf we wanted a more current estimate for the number of tuples in a\nrelation then we could use reltuples / relpages *\nRelationGetNumberOfBlocks(r). However, I still don't see why an\nINSERT driven auto-vacuums are a particularly special case. ANALYZE\nupdating the reltuples estimate had an effect on when auto-vacuum\nwould trigger for tables that generally grow in the number of live\ntuples but previously only (i.e before insert vacuums existed)\nreceived auto-vacuum attention due to UPDATEs/DELETEs.\n\nI suppose the question is, what is autovacuum_vacuum_scale_factor\nmeant to represent? Our documents claim:\n\n> Specifies a fraction of the table size to add to autovacuum_vacuum_threshold when deciding whether to trigger a VACUUM. The default is 0.2 (20% of table size). 
This parameter can only be set in the postgresql.conf file or on the server command line; but the setting can be overridden for individual tables by changing table storage parameters.\n\nNothing there seems to indicate the scale is based on the historical\ntable size when the table was last vacuumed/analyzed, so you could\nclaim that the 3 usages of relpages when deciding if the table should\nbe vacuumed and/or analyzed are all wrong and should take into account\nRelationGetNumberOfBlocks too.\n\nI'm not planning on doing anything to change any of this unless I see\nsome compelling argument that what's there is wrong.\n\nDavid\n\n\n", "msg_date": "Wed, 2 Feb 2022 09:02:38 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Why is INSERT-driven autovacuuming based on pg_class.reltuples?" }, { "msg_contents": "On Tue, Feb 1, 2022 at 3:02 PM David Rowley <dgrowleyml@gmail.com> wrote:\n> If we wanted a more current estimate for the number of tuples in a\n> relation then we could use reltuples / relpages *\n> RelationGetNumberOfBlocks(r). However, I still don't see why an\n> INSERT driven auto-vacuums are a particularly special case. ANALYZE\n> updating the reltuples estimate had an effect on when auto-vacuum\n> would trigger for tables that generally grow in the number of live\n> tuples but previously only (i.e before insert vacuums existed)\n> received auto-vacuum attention due to UPDATEs/DELETEs.\n\nSorry for the delay in my response; I've been travelling.\n\nI admit that I jumped the gun here -- I now believe that it's\noperating as originally designed. 
And that the design is reasonable.\nIt couldn't hurt to describe the design in a little more detail in the\nuser docs, though.\n\n> Nothing there seems to indicate the scale is based on the historical\n> table size when the table was last vacuumed/analyzed, so you could\n> claim that the 3 usages of relpages when deciding if the table should\n> be vacuumed and/or analyzed are all wrong and should take into account\n> RelationGetNumberOfBlocks too.\n\nThe root of my confusion might interest you. vacuumlazy.c's call to\nvac_update_relstats() provides relpages and reltuples values that are\nvery accurate relative to the VACUUM operation's view of things\n(relative to its OldestXmin, which uses the size of the table at the\nstart of the VACUUM, not the end). When the vac_update_relstats() call\nis made, the same accurate-relative-to-OldestXmin values might actually\n*already* be out of date relative to the physical table as it is at\nthe end of the same VACUUM.\n\nThe fact that these accurate values could be out of date like this is\npractically inevitable for a certain kind of table. But that might not\nmatter at all, if they were later *interpreted* in a way that took\nthis into account later on -- context matters.\n\nFor some reason I made the leap to thinking that everybody else\nbelieved the same thing, too, and that the intention with the design\nof the insert-driven autovacuum stuff was to capture that. But that's\njust not the case, at all. I apologize for the confusion.\n\nBTW, this situation with the relstats only being accurate \"relative to\nthe last VACUUM's OldestXmin\" is *especially* important with\nnew_dead_tuples values, which are basically recently dead tuples\nrelative to OldestXmin. In the common case where there are very few\nrecently dead tuples, we have an incredibly distorted \"sample\" of dead\ntuples (it tells us much more about VACUUM implementation details than\nthe truth of what's going on in the table).
So that also recommends\n\"relativistically interpreting\" the values later on. This specific\nissue has even less to do with autovacuum_vacuum_scale_factor than the\nmain point, of course. I agree with you that the insert-driven stuff\nisn't a special case.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Wed, 2 Feb 2022 15:38:51 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": true, "msg_subject": "Re: Why is INSERT-driven autovacuuming based on pg_class.reltuples?" } ]
[ { "msg_contents": "Is this intentional behavior?\n\n-- I can do this\nSELECT substring('3.2.0' from '[0-9]*\\.([0-9]*)\\.'); \n\n-- But can't do this gives error syntax error at or near \"from\"\nSELECT pg_catalog.substring('3.2.0' from '[0-9]*\\.([0-9]*)\\.');\n\n-- but can do\nSELECT pg_catalog.substring('3.2.0', '[0-9]*\\.([0-9]*)\\.');\n\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 21:22:14 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": true, "msg_subject": "substring odd behavior" }, { "msg_contents": "On Thu, Jan 27, 2022 at 7:22 PM Regina Obe <lr@pcorp.us> wrote:\n\n> Is this intentional behavior?\n>\n> -- I can do this\n> SELECT substring('3.2.0' from '[0-9]*\\.([0-9]*)\\.');\n>\n> -- But can't do this gives error syntax error at or near \"from\"\n> SELECT pg_catalog.substring('3.2.0' from '[0-9]*\\.([0-9]*)\\.');\n>\n>\nselect pg_catalog.trim(leading 'hi' from 'hi david'); -- syntax error at or\nnear \"leading\" (returns ' david' if pg_catalog is omitted). I also tested\nposition(text in text) and get a syntax failure at the second text argument.\n\nGeneralizing from three examples, it seems the SQL Standard defined\nfunctions that use keywords cannot be reliably called with a schema prefix.\n\nThis seems likely to be intended (or at least not worth avoiding) behavior\ngiven the special constraints these functions place on the parser. But I\nexpect someone more authoritative than I on the subject to chime in. 
For\nme, adding another sentence to Chapter 9.4 (I don't know if other\ncategories of functions have this dynamic) after we say \"SQL defines some\nstring functions that use key words...PostgreSQL also provides versions of\nthese functions that use the regular function invocation syntax.\" The\nschema qualification would seem to be part of \"regular function invocation\nsyntax\" only.\n\nI'd consider adding \"Note that the ability to specify the pg_catalog schema\nin front of the function name only applies to the regular function\ninvocation syntax.\" Though it isn't like our users are tripping over this\neither. Maybe noting it in the Syntax chapter would be more appropriate.\nI'd rather not leave the status quo behavior without documenting it\nsomewhere (but all the better if it can be fixed).\n\nIt probably isn't worth adding a section to the Syntax chapter for\n\"Irregular Function Calls (SQL Standard Mandated)\" but I'd entertain the\nthought.\n\nDavid J.\n\n", "msg_date": "Thu, 27 Jan 2022 19:54:49 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: substring odd behavior" }, { "msg_contents": "\"David G. Johnston\" <david.g.johnston@gmail.com> writes:\n> On Thu, Jan 27, 2022 at 7:22 PM Regina Obe <lr@pcorp.us> wrote:\n>> Is this intentional behavior?\n>> -- I can do this\n>> SELECT substring('3.2.0' from '[0-9]*\\\\.([0-9]*)\\\\.');\n>> -- But can't do this gives error syntax error at or near \"from\"\n>> SELECT pg_catalog.substring('3.2.0' from '[0-9]*\\\\.([0-9]*)\\\\.');\n\n> Generalizing from three examples, it seems the SQL Standard defined\n> functions that use keywords cannot be reliably called with a schema prefix.\n\nThe syntax with keywords instead of commas is defined in the SQL standard,\nand it is defined there without any schema qualification.
They seem to\nthink that \"substring\" is also a keyword of sorts, and that's how we\nimplement it.\n\nIn short: you can call substring() with the SQL syntax, which is a\nspecial-purpose production that does not involve any schema name,\nor you can call it as an ordinary function with ordinary function\nnotation. You can't mix pieces of those notations.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 27 Jan 2022 22:06:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: substring odd behavior" }, { "msg_contents": "\n\n> On Jan 27, 2022, at 7:06 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> In short: you can call substring() with the SQL syntax, which is a\n> special-purpose production that does not involve any schema name,\n> or you can call it as an ordinary function with ordinary function\n> notation. You can't mix pieces of those notations.\n\nBeware that your choice of grammar interacts with search_path considerations. If you use what looks like a regular function call, the search_path will be consulted, but if you use the \"from\" based syntax, the one from pg_catalog will be used:\n\nSET search_path = substr_test, pg_catalog;\nSELECT substring('first', 'second');\n substring\n----------------------------\n substr_test: first, second\n(1 row)\n\nSELECT substring('first' FROM 'second');\n substring\n-----------\n\n(1 row)\n\nSET search_path = pg_catalog, substr_test;\nSELECT substring('first', 'second');\n substring\n-----------\n\n(1 row)\n\nSELECT substring('first' FROM 'second');\n substring\n-----------\n\n(1 row)\n\nSELECT substr_test.substring('first', 'second');\n substring\n----------------------------\n substr_test: first, second\n(1 row)\n\nSELECT substr_test.substring('first' FROM 'second');\nERROR: syntax error at or near \"FROM\"\nLINE 1: SELECT substr_test.substring('first' FROM 'second');\n\n\n—\nMark Dilger\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL 
Company\n\n\n\n\n\n", "msg_date": "Thu, 27 Jan 2022 19:11:25 -0800", "msg_from": "Mark Dilger <mark.dilger@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: substring odd behavior" }, { "msg_contents": "Hi,\n\nOn Thu, Jan 27, 2022 at 07:54:49PM -0700, David G. Johnston wrote:\n> On Thu, Jan 27, 2022 at 7:22 PM Regina Obe <lr@pcorp.us> wrote:\n> \n> > Is this intentional behavior?\n> >\n> > -- I can do this\n> > SELECT substring('3.2.0' from '[0-9]*\\.([0-9]*)\\.');\n> >\n> > -- But can't do this gives error syntax error at or near \"from\"\n> > SELECT pg_catalog.substring('3.2.0' from '[0-9]*\\.([0-9]*)\\.');\n> >\n> >\n> select pg_catalog.trim(leading 'hi' from 'hi david'); -- syntax error at or\n> near \"leading\" (returns ' david' if pg_catalog is omitted). I also tested\n> position(text in text) and get a syntax failure at the second text argument.\n> \n> Generalizing from three examples, it seems the SQL Standard defined\n> functions that use keywords cannot be reliably called with a schema prefix.\n\nYes, I don't have a copy of the standard but I think that they define such\nconstructs as part of the language and not plain function calls, so you can't\nschema qualify it.\n\nThat's how it's internally implemented, and the SUBSTRING( FOR / FROM / ESCAPE\n) is a syntactic sugar over pg_catalog.substring().\n\n\n", "msg_date": "Fri, 28 Jan 2022 11:12:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: substring odd behavior" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing another patch [1], I found an unnecessary test case of\npg_log_backend_memory_contexts, that is, executing the function after\ngranting the permission to the role and checking the permissions with\nthe has_function_privilege. IMO, it's unnecessary as we have the test\ncases covering the function execution just above that. With the\nremoval of these test cases, the regression tests timings can be\nimproved (a tiniest tiniest tiniest tiniest amount of time though).\nAttaching a small patch.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/04596d19-1bb6-4865-391d-4e63c9b3317f%40oss.nttdata.com\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 28 Jan 2022 14:15:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_log_backend_memory_contexts - remove unnecessary test case" } ]
[ { "msg_contents": "Hi,\n\nWhile reviewing patch at [1], it has been found that the memory\ncontext switch to oldcontext from cstate->copycontext in between\nBeginCopyTo is not correct because the intention of the copycontext is\nto use it through the copy command processing. It looks like a thinko\nfrom the commit c532d1 [2]. Attaching a small patch to remove this.\n\nThoughts?\n\n[1] https://www.postgresql.org/message-id/539760.1642610324%40sss.pgh.pa.us\n[2]\ncommit c532d15dddff14b01fe9ef1d465013cb8ef186df\nAuthor: Heikki Linnakangas <heikki.linnakangas@iki.fi>\nDate: Mon Nov 23 10:50:50 2020 +0200\n\n Split copy.c into four files.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 28 Jan 2022 15:41:11 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "BeginCopyTo - remove switching to old memory context in between COPY\n TO command processing" }, { "msg_contents": "\nOn Fri, 28 Jan 2022 at 18:11, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote:\n> Hi,\n>\n> While reviewing patch at [1], it has been found that the memory\n> context switch to oldcontext from cstate->copycontext in between\n> BeginCopyTo is not correct because the intention of the copycontext is\n> to use it through the copy command processing. It looks like a thinko\n> from the commit c532d1 [2]. Attaching a small patch to remove this.\n>\n> Thoughts?\n>\n\nThanks for the patch! Tested and passed regression tests. 
LGTM.\n\n--\nRegards,\nJapin Li.\nChengDu WenWu Information Technology Co.,Ltd.\n\n\n", "msg_date": "Fri, 28 Jan 2022 19:01:20 +0800", "msg_from": "Japin Li <japinli@hotmail.com>", "msg_from_op": false, "msg_subject": "Re: BeginCopyTo - remove switching to old memory context in between\n COPY TO command processing" }, { "msg_contents": "On Fri, Jan 28, 2022 at 03:41:11PM +0530, Bharath Rupireddy wrote:\n> While reviewing patch at [1], it has been found that the memory\n> context switch to oldcontext from cstate->copycontext in between\n> BeginCopyTo is not correct because the intention of the copycontext is\n> to use it through the copy command processing. It looks like a thinko\n> from the commit c532d1 [2]. Attaching a small patch to remove this.\n> \n> Thoughts?\n\nI think that you are right. Before c532d15d, BeginCopy() was\ninternally doing one switch with copycontext and the current memory\ncontext. This commit has just moved the internals of BeginCopy()\ninto BeginCopyTo(), so this looks like a copy-paste error to me. I'll\ngo fix it, thanks for the report!\n-\nMichael", "msg_date": "Fri, 28 Jan 2022 20:23:57 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: BeginCopyTo - remove switching to old memory context in between\n COPY TO command processing" } ]
[ { "msg_contents": "Hi Hackers,\n\nI just noticed that the new server-side base backup feature requires\nsuperuser privileges (which is only documented in the pg_basebackup\nmanual, not in the streaming replication protocol specification).\n\nIsn't this the kind of thing the pg_write_server_files role was created\nfor, so that it can be delegated to a non-superuser?\n\n- ilmari\n\n\n", "msg_date": "Fri, 28 Jan 2022 10:58:09 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Server-side base backup: why superuser, not pg_write_server_files?" }, { "msg_contents": "On Fri, Jan 28, 2022 at 5:58 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> I just noticed that the new server-side base backup feature requires\n> superuser privileges (which is only documented in the pg_basebackup\n> manual, not in the streaming replication protocol specification).\n>\n> Isn't this the kind of thing the pg_write_server_files role was created\n> for, so that it can be delegated to a non-superuser?\n\nThat's a good idea. I didn't think of that. Would you like to propose a patch?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 08:44:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n> On Fri, Jan 28, 2022 at 5:58 AM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>> I just noticed that the new server-side base backup feature requires\n>> superuser privileges (which is only documented in the pg_basebackup\n>> manual, not in the streaming replication protocol specification).\n>>\n>> Isn't this the kind of thing the pg_write_server_files role was created\n>> for, so that it can be delegated to a non-superuser?\n>\n> That's a good idea. I didn't think of that. 
Would you like to propose a patch?\n\nSure, I'll try and whip something up over the weekend.\n\n- ilmari\n\n\n", "msg_date": "Fri, 28 Jan 2022 15:28:20 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "Dagfinn Ilmari Mannsåker <ilmari@ilmari.org> writes:\n\n> Robert Haas <robertmhaas@gmail.com> writes:\n>\n>> On Fri, Jan 28, 2022 at 5:58 AM Dagfinn Ilmari Mannsåker\n>> <ilmari@ilmari.org> wrote:\n>>> I just noticed that the new server-side base backup feature requires\n>>> superuser privileges (which is only documented in the pg_basebackup\n>>> manual, not in the streaming replication protocol specification).\n>>>\n>>> Isn't this the kind of thing the pg_write_server_files role was created\n>>> for, so that it can be delegated to a non-superuser?\n>>\n>> That's a good idea. I didn't think of that. Would you like to propose a patch?\n>\n> Sure, I'll try and whip something up over the weekend.\n\nOr now. Patch attached.\n\n- ilmari", "msg_date": "Fri, 28 Jan 2022 17:15:59 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "On Fri, Jan 28, 2022 at 12:16 PM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Or now. Patch attached.\n\nLGTM. Committed.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 28 Jan 2022 12:33:24 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "\nOn Fri, 28 Jan 2022, at 17:33, Robert Haas wrote:\n> LGTM. 
Committed.\n\nThanks!\n\n- ilmari\n\n\n", "msg_date": "Fri, 28 Jan 2022 17:35:33 +0000", "msg_from": "=?UTF-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "On Fri, Jan 28, 2022 at 12:35 PM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> On Fri, 28 Jan 2022, at 17:33, Robert Haas wrote:\n> > LGTM. Committed.\n>\n> Thanks!\n\nIt appears that neither of us actually tested that this works. For me,\nit works when I test as a superuser, but if I test as a non-superuser\nwith or without pg_write_server_files, it crashes, because we end up\ntrying to do syscache lookups without a transaction environment. I\n*think* that the attached is a sufficient fix; at least, it passes\nsimple testing.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Feb 2022 10:14:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n\n> On Fri, Jan 28, 2022 at 12:35 PM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n>> On Fri, 28 Jan 2022, at 17:33, Robert Haas wrote:\n>> > LGTM. Committed.\n>>\n>> Thanks!\n>\n> It appears that neither of us actually tested that this works.\n\nOops!\n\n> For me, it works when I test as a superuser, but if I test as a\n> non-superuser with or without pg_write_server_files, it crashes,\n> because we end up trying to do syscache lookups without a transaction\n> environment. 
I *think* that the attached is a sufficient fix; at\n> least, it passes simple testing.\n\nHere's a follow-on patch that adds a test for non-superuser server-side\nbasebackup, which crashes without your patch and passes with it.\n\n- ilmari", "msg_date": "Wed, 02 Feb 2022 15:42:16 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "On Wed, Feb 2, 2022 at 10:42 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Here's a follow-on patch that adds a test for non-superuser server-side\n> basebackup, which crashes without your patch and passes with it.\n\nThis seems like a good idea, but I'm not going to slip a change from\nan exact test count to done_testing() into a commit on some other\ntopic...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Feb 2022 10:57:28 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> This seems like a good idea, but I'm not going to slip a change from\n> an exact test count to done_testing() into a commit on some other\n> topic...\n\nActually, it seemed that the consensus in the nearby thread [1]\nwas to start doing exactly that, rather than try to convert them\nall in one big push.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/9D4FFB61-392B-4A2C-B7E4-911797B4AC14%40yesql.se\n\n\n", "msg_date": "Wed, 02 Feb 2022 12:55:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" 
}, { "msg_contents": "On Wed, Feb 2, 2022 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > This seems like a good idea, but I'm not going to slip a change from\n> > an exact test count to done_testing() into a commit on some other\n> > topic...\n>\n> Actually, it seemed that the consensus in the nearby thread [1]\n> was to start doing exactly that, rather than try to convert them\n> all in one big push.\n\nUrk. Well, OK then.\n\nSuch an approach seems to me to have essentially nothing to recommend\nit, but I just work here.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Feb 2022 13:44:18 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 2, 2022 at 12:55 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Actually, it seemed that the consensus in the nearby thread [1]\n>> was to start doing exactly that, rather than try to convert them\n>> all in one big push.\n\n> Urk. Well, OK then.\n\n> Such an approach seems to me to have essentially nothing to recommend\n> it, but I just work here.\n\nWell, if someone wants to step up and provide a patch that changes 'em\nall at once, that'd be great. But we've discussed this before and\nnothing's happened.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Feb 2022 13:46:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "On Wed, Feb 2, 2022 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Well, if someone wants to step up and provide a patch that changes 'em\n> all at once, that'd be great. But we've discussed this before and\n> nothing's happened.\n\nI mean, I don't understand why it's even better. 
And I would go so far\nas to say that if nobody can be bothered to do the work to convert\neverything at once, it probably isn't better.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Feb 2022 13:50:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "On Wed, Feb 2, 2022 at 1:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Wed, Feb 2, 2022 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Well, if someone wants to step up and provide a patch that changes 'em\n> > all at once, that'd be great. But we've discussed this before and\n> > nothing's happened.\n>\n> I mean, I don't understand why it's even better. And I would go so far\n> as to say that if nobody can be bothered to do the work to convert\n> everything at once, it probably isn't better.\n\nAnd one thing that concretely stinks about is the progress reporting\nyou get while the tests are running:\n\nt/010_pg_basebackup.pl ... 142/?\n\nThat's definitely less informative than 142/330 or whatever.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Feb 2022 13:58:15 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "> On 2 Feb 2022, at 19:58, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Wed, Feb 2, 2022 at 1:50 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>> On Wed, Feb 2, 2022 at 1:46 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> Well, if someone wants to step up and provide a patch that changes 'em\n>>> all at once, that'd be great. But we've discussed this before and\n>>> nothing's happened.\n>> \n>> I mean, I don't understand why it's even better. 
And I would go so far\n>> as to say that if nobody can be bothered to do the work to convert\n>> everything at once, it probably isn't better.\n\nI personally think it's better, so I went and did the work. The attached is a\nfirst pass over the tree to see what such a patch would look like. This should\nget a thread of it's own and not be hidden here but as it was discussed I piled\non for now.\n\n> And one thing that concretely stinks about is the progress reporting\n> you get while the tests are running:\n> \n> t/010_pg_basebackup.pl ... 142/?\n> \n> That's definitely less informative than 142/330 or whatever.\n\nThere is that. That's less informative, but only when looking at the tests\nwhile they are running. There is no difference once the tests has finished so\nCI runs etc are no less informative. This however is something to consider.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 2 Feb 2022 20:36:11 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser, not\n pg_write_server_files?" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n>> On 2 Feb 2022, at 19:58, Robert Haas <robertmhaas@gmail.com> wrote:\n>> And one thing that concretely stinks about is the progress reporting\n>> you get while the tests are running:\n>> \n>> t/010_pg_basebackup.pl ... 142/?\n>> \n>> That's definitely less informative than 142/330 or whatever.\n\n> There is that. That's less informative, but only when looking at the tests\n> while they are running. There is no difference once the tests has finished so\n> CI runs etc are no less informative. This however is something to consider.\n\nTBH I don't see that that's worth much. 
None of our tests run so long\nthat you'll be sitting there trying to estimate when it'll be done.\nI'd rather have the benefit of not having to maintain the test counts.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Feb 2022 14:47:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Here's a follow-on patch that adds a test for non-superuser server-side\n> basebackup, which crashes without your patch and passes with it.\n\nThe Windows animals don't like this:\n\n# Running: pg_basebackup --no-sync -cfast -U backupuser --target server:C:\\\\prog\\\\bf\\\\root\\\\HEAD\\\\pgsql.build\\\\src\\\\bin\\\\pg_basebackup\\\\tmp_check\\\\tmp_test_VGMM/backuponserver -X none\npg_basebackup: error: connection to server at \"127.0.0.1\", port 59539 failed: FATAL: SSPI authentication failed for user \"backupuser\"\nnot ok 108 - backup target server\n\n# Failed test 'backup target server'\n# at t/010_pg_basebackup.pl line 527.\n\nNot sure whether we have a standard method to get around that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Feb 2022 17:43:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "I wrote:\n> The Windows animals don't like this:\n> pg_basebackup: error: connection to server at \"127.0.0.1\", port 59539 failed: FATAL: SSPI authentication failed for user \"backupuser\"\n\n> Not sure whether we have a standard method to get around that.\n\nAh, right, we do. 
Looks like adding something like\n\nauth_extra => [ '--create-role', 'backupuser' ]\n\nto the $node->init call would do it, or you could mess with\ninvoking pg_regress --config-auth directly.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Feb 2022 17:52:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I wrote:\n>> The Windows animals don't like this:\n>> pg_basebackup: error: connection to server at \"127.0.0.1\", port 59539\n>> failed: FATAL: SSPI authentication failed for user \"backupuser\"\n>\n>> Not sure whether we have a standard method to get around that.\n>\n> Ah, right, we do. Looks like adding something like\n>\n> auth_extra => [ '--create-role', 'backupuser' ]\n>\n> to the $node->init call would do it, or you could mess with\n> invoking pg_regress --config-auth directly.\n\nThis was enough incentive for me to set up Cirrus-CI for my fork on\nGitHub, and the auth_extra approach in the attached patch fixed the\ntest:\n\nhttps://cirrus-ci.com/task/6578617030279168?logs=test_bin#L21\n\n- ilmari", "msg_date": "Thu, 03 Feb 2022 16:20:11 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": true, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" }, { "msg_contents": "\nOn 2/2/22 17:52, Tom Lane wrote:\n> I wrote:\n>> The Windows animals don't like this:\n>> pg_basebackup: error: connection to server at \"127.0.0.1\", port 59539 failed: FATAL: SSPI authentication failed for user \"backupuser\"\n>> Not sure whether we have a standard method to get around that.\n> Ah, right, we do. 
Looks like adding something like\n>\n> auth_extra => [ '--create-role', 'backupuser' ]\n>\n> to the $node->init call would do it, or you could mess with\n> invoking pg_regress --config-auth directly.\n>\n> \t\t\t\n\n\n\nI've fixed this using the auth_extra method, which avoids a reload.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Thu, 3 Feb 2022 12:26:09 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser, not\n pg_write_server_files?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 12:26 PM Andrew Dunstan <andrew@dunslane.net> wrote:\n> I've fixed this using the auth_extra method, which avoids a reload.\n\nThank you much.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 13:10:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Server-side base backup: why superuser,\n not pg_write_server_files?" } ]
[ { "msg_contents": "Hi,\n\nIt seems like there are some instances where xloginsert.h is included\nright after xlog.h but xlog.h has already included xloginsert.h.\nUnless I'm missing something badly, we can safely remove including\nxloginsert.h after xlog.h. Attempting to post a patch to remove the\nextra xloginsert.h includes.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Fri, 28 Jan 2022 19:40:02 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove extra includes of \"access/xloginsert.h\" when \"access/xlog.h\"\n is included" }, { "msg_contents": "On 2022-Jan-28, Bharath Rupireddy wrote:\n\n> Hi,\n> \n> It seems like there are some instances where xloginsert.h is included\n> right after xlog.h but xlog.h has already included xloginsert.h.\n> Unless I'm missing something badly, we can safely remove including\n> xloginsert.h after xlog.h. Attempting to post a patch to remove the\n> extra xloginsert.h includes.\n\nWhy isn't it better to remove the line that includes xloginsert.h in\nxlog.h instead? 
When xloginsert.h was introduced (commit 2076db2aea76),\nXLogRecData was put there so xloginsert.h was necessary for xlog.h; but\nnow we have a forward declaration (per commit 2c03216d8311) so it\ndoesn't seem needed anymore.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 28 Jan 2022 12:55:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove extra includes of \"access/xloginsert.h\" when\n \"access/xlog.h\" is included" }, { "msg_contents": "On Fri, Jan 28, 2022 at 9:25 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jan-28, Bharath Rupireddy wrote:\n>\n> > Hi,\n> >\n> > It seems like there are some instances where xloginsert.h is included\n> > right after xlog.h but xlog.h has already included xloginsert.h.\n> > Unless I'm missing something badly, we can safely remove including\n> > xloginsert.h after xlog.h. Attempting to post a patch to remove the\n> > extra xloginsert.h includes.\n>\n> Why isn't it better to remove the line that includes xloginsert.h in\n> xlog.h instead? When xloginsert.h was introduced (commit 2076db2aea76),\n> XLogRecData was put there so xloginsert.h was necessary for xlog.h; but\n> now we have a forward declaration (per commit 2c03216d8311) so it\n> doesn't seem needed anymore.\n\nRemoving the xloginsert.h in xlog.h would need us to add xloginsert.h\nin more areas. And also, it might break any non-core extensions that\nincludes just xlog.h and gets xloginsert.h. 
Instead I prefer removing\nxloginsert.h if there's xlog.h included.\n\nAttaching v2 patch removing xloginsert.h in a few more places.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sat, 29 Jan 2022 19:12:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove extra includes of \"access/xloginsert.h\" when\n \"access/xlog.h\" is included" }, { "msg_contents": "On 2022-Jan-29, Bharath Rupireddy wrote:\n\n> Removing the xloginsert.h in xlog.h would need us to add xloginsert.h\n> in more areas.\n\nSure.\n\n> And also, it might break any non-core extensions that\n> includes just xlog.h and gets xloginsert.h.\n\nThat's a pretty easy fix anyway -- it's not even version-specific, since\nthe fix would work with the older versions. It's not something that\nwould break on a minor version, either.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"The problem with the facetime model is not just that it's demoralizing, but\nthat the people pretending to work interrupt the ones actually working.\"\n (Paul Graham)\n\n\n", "msg_date": "Sat, 29 Jan 2022 16:37:20 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove extra includes of \"access/xloginsert.h\" when\n \"access/xlog.h\" is included" }, { "msg_contents": "On Sun, Jan 30, 2022 at 1:07 AM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2022-Jan-29, Bharath Rupireddy wrote:\n>\n> > Removing the xloginsert.h in xlog.h would need us to add xloginsert.h\n> > in more areas.\n>\n> Sure.\n>\n> > And also, it might break any non-core extensions that\n> > includes just xlog.h and gets xloginsert.h.\n>\n> That's a pretty easy fix anyway -- it's not even version-specific, since\n> the fix would work with the older versions. 
It's not something that\n> would break on a minor version, either.\n\nHere's the v3 patch removing xloginsert.h from xlog.h and adding\nxloginsert.h in the required files.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 30 Jan 2022 18:52:48 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Remove extra includes of \"access/xloginsert.h\" when\n \"access/xlog.h\" is included" }, { "msg_contents": "Hi,\n\nOn Sun, Jan 30, 2022 at 06:52:48PM +0530, Bharath Rupireddy wrote:\n> \n> Here's the v3 patch removing xloginsert.h from xlog.h and adding\n> xloginsert.h in the required files.\n\n+1, this approach is better. In general it's better to increase the number of\ninclude lines rather than having a few that includes everything, for\ncompilation performance.\n\n\n", "msg_date": "Sun, 30 Jan 2022 22:49:12 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Remove extra includes of \"access/xloginsert.h\" when\n \"access/xlog.h\" is included" }, { "msg_contents": "On 2022-Jan-30, Bharath Rupireddy wrote:\n\n> Here's the v3 patch removing xloginsert.h from xlog.h and adding\n> xloginsert.h in the required files.\n\nPushed, but I added xloginsert.h to 9 additional files that needed it to\navoid compiler warnings. Thanks!\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sun, 30 Jan 2022 12:27:11 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Remove extra includes of \"access/xloginsert.h\" when\n \"access/xlog.h\" is included" } ]
[ { "msg_contents": "Hi,\n\nWhile working on another pg_control patch, I observed that the\npg_controldata output fields such as \"Latest checkpoint's\nTimeLineID:\", \"Latest checkpoint's NextOID:'' and so on, are being\nused in pg_resetwal.c, pg_controldata.c and pg_upgrade/controldata.c.\nDirect usage of these fields in many places doesn't look good and can\nbe error prone if we try to change the text in one place and forget in\nanother place. I'm thinking of having those fields as macros in\npg_control.h, something like [1] and use it all the places. This will\nbe a good idea for better code manageability IMO.\n\nThoughts?\n\n[1]\n#define PG_CONTROL_FIELD_VERSION_NUMBER \"pg_control version number:\"\n#define PG_CONTROL_FIELD_CATALOG_NUMBER \"Catalog version number:\"\n#define PG_CONTROL_FIELD_CHECKPOINT_TLI \"Latest checkpoint's TimeLineID:\"\n#define PG_CONTROL_FIELD_CHECKPOINT_PREV_TLI \"Latest checkpoint's\nPrevTimeLineID:\"\n#define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID \"Latest checkpoint's oldestXID:\"\n#define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID_DB \"Latest checkpoint's\noldestXID's DB:\"\nand so on.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sat, 29 Jan 2022 20:00:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Replace pg_controldata output fields with macros for better code\n manageability" }, { "msg_contents": "On Sat, Jan 29, 2022 at 08:00:53PM +0530, Bharath Rupireddy wrote:\n> Hi,\n> \n> While working on another pg_control patch, I observed that the\n> pg_controldata output fields such as \"Latest checkpoint's\n> TimeLineID:\", \"Latest checkpoint's NextOID:'' and so on, are being\n> used in pg_resetwal.c, pg_controldata.c and pg_upgrade/controldata.c.\n> Direct usage of these fields in many places doesn't look good and can\n> be error prone if we try to change the text in one place and forget in\n> another place. 
I'm thinking of having those fields as macros in\n> pg_control.h, something like [1] and use it all the places. This will\n> be a good idea for better code manageability IMO.\n> \n> Thoughts?\n> \n> [1]\n> #define PG_CONTROL_FIELD_VERSION_NUMBER \"pg_control version number:\"\n> #define PG_CONTROL_FIELD_CATALOG_NUMBER \"Catalog version number:\"\n> #define PG_CONTROL_FIELD_CHECKPOINT_TLI \"Latest checkpoint's TimeLineID:\"\n> #define PG_CONTROL_FIELD_CHECKPOINT_PREV_TLI \"Latest checkpoint's\n> PrevTimeLineID:\"\n> #define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID \"Latest checkpoint's oldestXID:\"\n> #define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID_DB \"Latest checkpoint's\n> oldestXID's DB:\"\n> and so on.\n\nThat seems like a very good idea.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 18:02:39 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Replace pg_controldata output fields with macros for better code\n manageability" }, { "msg_contents": "At Wed, 2 Feb 2022 18:02:39 -0500, Bruce Momjian <bruce@momjian.us> wrote in \n> On Sat, Jan 29, 2022 at 08:00:53PM +0530, Bharath Rupireddy wrote:\n> > Hi,\n> > \n> > While working on another pg_control patch, I observed that the\n> > pg_controldata output fields such as \"Latest checkpoint's\n> > TimeLineID:\", \"Latest checkpoint's NextOID:'' and so on, are being\n> > used in pg_resetwal.c, pg_controldata.c and pg_upgrade/controldata.c.\n> > Direct usage of these fields in many places doesn't look good and can\n> > be error prone if we try to change the text in one place and forget in\n> > another place. I'm thinking of having those fields as macros in\n> > pg_control.h, something like [1] and use it all the places. 
This will\n> > be a good idea for better code manageability IMO.\n> > \n> > Thoughts?\n> > \n> > [1]\n> > #define PG_CONTROL_FIELD_VERSION_NUMBER \"pg_control version number:\"\n> > #define PG_CONTROL_FIELD_CATALOG_NUMBER \"Catalog version number:\"\n> > #define PG_CONTROL_FIELD_CHECKPOINT_TLI \"Latest checkpoint's TimeLineID:\"\n> > #define PG_CONTROL_FIELD_CHECKPOINT_PREV_TLI \"Latest checkpoint's\n> > PrevTimeLineID:\"\n> > #define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID \"Latest checkpoint's oldestXID:\"\n> > #define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID_DB \"Latest checkpoint's\n> > oldestXID's DB:\"\n> > and so on.\n> \n> That seems like a very good idea.\n\n+1 for consolidate the texts an any shape.\n\nBut it doesn't sound like the way we should take to simply replace\nonly the concrete text labels to symbols. Significant downsides of\ndoing that are untranslatability and difficulty to add the proper\nindentation before the value field. So we need to define the texts\nincluding indentation spaces and format placeholder.\n\nBut if we define the strings separately, I would rather define them in\nan array.\n\n\n\ntypedef enum pg_control_fields\n{\n PGCNTRL_FIELD_ERROR = -1,\n PGCNTRL_FIELD_VERSION = 0,\n PGCNTRL_FIELD_CATALOG_VERSION,\n ...\n PGCTRL_NUM_FIELDS\n} pg_control_fields;\n\nconst char *pg_control_item_formats[] = {\n\tgettext_noop(\"pg_control version number: %u\\n\"),\n\tgettext_noop(\"Catalog version number: %u\\n\"),\n\t...\n};\n\nconst char *\nget_pg_control_item_format(pg_control_fields item_id)\n{\n\treturn _(pg_control_item_formats[item_id]);\n};\n\nconst char *\nget_pg_control_item()\n{\n\treturn _(pg_control_item_formats[item_id]);\n};\n\npg_control_fields\nget_pg_control_item(const char *str, const char **valp)\n{\n\tint i;\n\tfor (i = 0 ; i < PGCNTL_NUM_FIELDS ; i++)\n\t{\n\t\tconst char *fmt = pg_control_item_formats[i];\n\t\tconst char *p = strchr(fmt, ':');\n\n\t\tAssert(p);\n\t\tif (strncmp(str, fmt, p - fmt) == 0)\n\t\t{\n\t\t\tp = str 
+ (p - fmt);\n\t\t\tfor(p++ ; *p == ' ' ; p++);\n\t\t\t*valp = p;\n\t\t\treturn i;\n\t\t}\n\t}\n\n\treturn -1;\n}\n\nPrinter side does:\n\n printf(get_pg_control_item_format(PGCNTRL_FIELD_VERSION),\n\t ControlFile->pg_control_version);\n \t\nAnd the reader side would do:\n\n switch(get_pg_control_item(str, &v))\n {\n case PGCNTRL_FIELD_VERSION:\n cluster->controldata.ctrl_ver = str2uint(v);\n\t\t break;\n ...\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n \n\n\n", "msg_date": "Thu, 03 Feb 2022 15:04:14 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replace pg_controldata output fields with macros for better\n code manageability" }, { "msg_contents": "On Thu, Feb 3, 2022 at 11:34 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> > > #define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID \"Latest checkpoint's oldestXID:\"\n> > > #define PG_CONTROL_FIELD_CHECKPOINT_OLDESTXID_DB \"Latest checkpoint's\n> > > oldestXID's DB:\"\n> > > and so on.\n> >\n> > That seems like a very good idea.\n>\n> +1 for consolidate the texts an any shape.\n>\n> But it doesn't sound like the way we should take to simply replace\n> only the concrete text labels to symbols. Significant downsides of\n> doing that are untranslatability and difficulty to add the proper\n> indentation before the value field. 
So we need to define the texts\n> including indentation spaces and format placeholder.\n\nIt looks like we also translate the printf statements that tools emit.\nI'm not sure how having a macro in the printf creates a problem with\nthe translation.\n\nYes, the indentation part needs to be taken care in any case, which is\nalso true now, different fields are adjusted to different indentation\n(number of spaces, not tabs) for the sake of readability in the\noutput.\n\n> But if we define the strings separately, I would rather define them in\n> an array.\n>\n> typedef enum pg_control_fields\n> {\n>\n> const char *pg_control_item_formats[] = {\n> gettext_noop(\"pg_control version number: %u\\n\"),\n>\n> const char *\n> get_pg_control_item_format(pg_control_fields item_id)\n> {\n> return _(pg_control_item_formats[item_id]);\n> };\n>\n> const char *\n> get_pg_control_item()\n> {\n> return _(pg_control_item_formats[item_id]);\n> };\n>\n> pg_control_fields\n> get_pg_control_item(const char *str, const char **valp)\n> {\n> int i;\n> for (i = 0 ; i < PGCNTL_NUM_FIELDS ; i++)\n> {\n> const char *fmt = pg_control_item_formats[i];\n> const char *p = strchr(fmt, ':');\n>\n> Assert(p);\n> if (strncmp(str, fmt, p - fmt) == 0)\n> {\n> p = str + (p - fmt);\n> for(p++ ; *p == ' ' ; p++);\n> *valp = p;\n> return i;\n> }\n> }\n>\n> return -1;\n> }\n>\n> Printer side does:\n>\n> printf(get_pg_control_item_format(PGCNTRL_FIELD_VERSION),\n> ControlFile->pg_control_version);\n>\n> And the reader side would do:\n>\n> switch(get_pg_control_item(str, &v))\n> {\n> case PGCNTRL_FIELD_VERSION:\n> cluster->controldata.ctrl_ver = str2uint(v);\n> break;\n\nThanks for your time on the above code. IMHO, I don't know if we ever\ngo the complex way with the above sort of style (error-prone, huge\nmaintenance burden and extra function calls). 
Instead, I would be\nhappy to keep the code as is if the macro-way(as proposed originally\nin this thread) really has a translation problem.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 3 Feb 2022 18:34:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Replace pg_controldata output fields with macros for better code\n manageability" }, { "msg_contents": "On 29.01.22 15:30, Bharath Rupireddy wrote:\n> While working on another pg_control patch, I observed that the\n> pg_controldata output fields such as \"Latest checkpoint's\n> TimeLineID:\", \"Latest checkpoint's NextOID:'' and so on, are being\n> used in pg_resetwal.c, pg_controldata.c and pg_upgrade/controldata.c.\n\nThe duplication between pg_resetwal and pg_controldata could probably be \nhandled by refactoring the code so that only one place is responsible \nfor printing it. The usages in pg_upgrade are probably best guarded by \ntests that ensure that any changes anywhere else don't break things. \nUsing macros like you suggest only protects against one kind of change: \nchanging the wording of a line. But it doesn't protect for example \nagainst a line being removed from the output.\n\nWhile I'm sympathetic to the issue you describe, the proposed solution \nintroduces a significant amount of ugliness for a problem that is really \nquite rare.\n\n\n\n", "msg_date": "Thu, 3 Feb 2022 14:51:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Replace pg_controldata output fields with macros for better code\n manageability" } ]
[ { "msg_contents": "Hey!\n\nI was investigating a leak reported in the PostGIS issues tracker [1] which\nled me to the Postgres side where the problem really is. The leak is\nreproducible with query from original ticket [1]:\n\nWITH latitudes AS (\n\tSELECT generate_series AS latitude\n\tFROM generate_series(-90, 90, 0.1)\n), longitudes AS (\n\tSELECT generate_series AS longitude\n\tFROM generate_series(-180, 180, 0.1)\n), points AS (\n\tSELECT ST_SetSRID(ST_Point(longitude, latitude), 4326)::geography AS geog\n\tFROM latitudes\n\tCROSS JOIN longitudes\n)\nSELECT\n\tgeog,\n\t(\n\t\tSELECT name\n\t\tFROM ne_110m_admin_0_countries AS ne\n\t\tORDER BY p.geog <-> ne.geog\n\t\tLIMIT 1\n\t)\nFROM points AS p\n;\n\nThe leak is only noticeable when index scan with reorder happens as part of\nsubquery plan which is explained by the fact that heap tuples cloned in\nreorderqueue_push are not freed during flush of reorder queue in\nExecReScanIndex.\n\n[1] https://trac.osgeo.org/postgis/ticket/4720", "msg_date": "Sun, 30 Jan 2022 05:49:34 +0300", "msg_from": "Aliaksandr Kalenik <akalenik@kontur.io>", "msg_from_op": true, "msg_subject": "[PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "Aliaksandr Kalenik <akalenik@kontur.io> writes:\n> The leak is only noticeable when index scan with reorder happens as part of\n> subquery plan which is explained by the fact that heap tuples cloned in\n> reorderqueue_push are not freed during flush of reorder queue in\n> ExecReScanIndex.\n\nHmm ... I see from the code coverage report[1] that that part of\nExecReScanIndexScan isn't even reached by our existing regression\ntests. 
Seems bad :-(\n\n\t\t\tregards, tom lane\n\n[1] https://coverage.postgresql.org/src/backend/executor/nodeIndexscan.c.gcov.html#577\n\n\n", "msg_date": "Sun, 30 Jan 2022 11:02:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "Aliaksandr Kalenik <akalenik@kontur.io> writes:\n> I was investigating a leak reported in the PostGIS issues tracker [1] which\n> led me to the Postgres side where the problem really is. The leak is\n> reproducible with query from original ticket [1]:\n> ...\n> The leak is only noticeable when index scan with reorder happens as part of\n> subquery plan which is explained by the fact that heap tuples cloned in\n> reorderqueue_push are not freed during flush of reorder queue in\n> ExecReScanIndex.\n\nActually, that code has got worse problems than that. I tried to improve\nour regression tests to exercise that code path, as attached. What I got\nwas\n\n+SELECT point(x,x), (SELECT circle_center(f1) FROM gcircle_tbl ORDER BY f1 <-> p\noint(x,x) LIMIT 1) as c FROM generate_series(0,1000,1) x;\n+ERROR: index returned tuples in wrong order\n\n(The error doesn't always appear depending on what generate_series\nparameters you use, but it seems to show up consistently with\na step of 1 and a limit of 1000 or more.)\n\nFixing this is well beyond my knowledge of that code, so I'm punting\nit to the original authors.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 30 Jan 2022 11:24:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "Thanks for your review!\n\nOn Sun, Jan 30, 2022 at 7:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Actually, that code has got worse problems than that. I tried to improve\n> our regression tests to exercise that code path, as attached. 
What I got\n> was\n>\n> +SELECT point(x,x), (SELECT circle_center(f1) FROM gcircle_tbl ORDER BY f1 <-> p\n> oint(x,x) LIMIT 1) as c FROM generate_series(0,1000,1) x;\n> +ERROR: index returned tuples in wrong order\n\nI tried to figure out what is the problem with this query. This error\nhappens when actual distance is less than estimated distance.\nFor this specific query it happened while comparing these values:\n50.263279680219099532223481219262 (actual distance returned by\ndist_cpoint in geo_ops.c) and 50.263279680219113743078196421266\n(bounding box distance returned by computeDistance in gistproc.c).\n\nSo for me it looks like this error is not really related to KNN scan\ncode but to some floating-arithmetic issue in distance calculation\nfunctions for geometry type.Would be great to figure out a fix but\nfor now I didn’t manage to find a better way than comparing the\ndifference of distance with FLT_EPSILON which definitely doesn't seem\nlike the way to fix :(\n\n\n", "msg_date": "Mon, 7 Feb 2022 11:42:37 +0300", "msg_from": "Aliaksandr Kalenik <akalenik@kontur.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "Hi!\n\nOn Mon, Feb 7, 2022 at 11:42 AM Aliaksandr Kalenik <akalenik@kontur.io> wrote:\n> Thanks for your review!\n>\n> On Sun, Jan 30, 2022 at 7:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Actually, that code has got worse problems than that. I tried to improve\n> > our regression tests to exercise that code path, as attached. What I got\n> > was\n> >\n> > +SELECT point(x,x), (SELECT circle_center(f1) FROM gcircle_tbl ORDER BY f1 <-> p\n> > oint(x,x) LIMIT 1) as c FROM generate_series(0,1000,1) x;\n> > +ERROR: index returned tuples in wrong order\n>\n> I tried to figure out what is the problem with this query. 
This error\n> happens when actual distance is less than estimated distance.\n> For this specific query it happened while comparing these values:\n> 50.263279680219099532223481219262 (actual distance returned by\n> dist_cpoint in geo_ops.c) and 50.263279680219113743078196421266\n> (bounding box distance returned by computeDistance in gistproc.c).\n>\n> So for me it looks like this error is not really related to KNN scan\n> code but to some floating-arithmetic issue in distance calculation\n> functions for geometry type.Would be great to figure out a fix but\n> for now I didn’t manage to find a better way than comparing the\n> difference of distance with FLT_EPSILON which definitely doesn't seem\n> like the way to fix :(\n\nProbably, this is caused by some compiler optimization. Could you\nre-check the issue with different compilers and optimization levels?\n\nRegarding the memory leak, could you add a corresponding regression\ntest to the patch (probably similar to Tom's query upthread)?\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 7 Feb 2022 23:20:09 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "On Mon, Feb 7, 2022 at 11:20 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> Regarding the memory leak, could you add a corresponding regression\n> test to the patch (probably similar to Tom's query upthread)?\n\nYes, added similar to Tom's query but with polygons instead of circles.", "msg_date": "Thu, 10 Feb 2022 01:19:15 +0300", "msg_from": "Aliaksandr Kalenik <akalenik@kontur.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "On Thu, Feb 10, 2022 at 1:19 AM Aliaksandr Kalenik <akalenik@kontur.io> wrote:\n> On Mon, Feb 7, 2022 at 11:20 PM Alexander Korotkov <aekorotkov@gmail.com> wrote:\n> > Regarding the memory leak, could you add a corresponding 
regression\n> > test to the patch (probably similar to Tom's query upthread)?\n>\n> Yes, added similar to Tom's query but with polygons instead of circles.\n\nI've rechecked that the now patched code branch is covered by\nregression tests. I think the memory leak issue is independent of the\ncomputational errors we've observed.\n\nSo, I'm going to push and backpatch this if no objections.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Thu, 10 Feb 2022 02:12:37 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "Alexander Korotkov <aekorotkov@gmail.com> writes:\n> I've rechecked that the now patched code branch is covered by\n> regression tests. I think the memory leak issue is independent of the\n> computational errors we've observed.\n> So, I'm going to push and backpatch this if no objections.\n\n+1. We should work on the roundoff-error issue as well, but\nas you say, it's an independent problem.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Feb 2022 18:35:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" }, { "msg_contents": "On Thu, Feb 10, 2022 at 2:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Alexander Korotkov <aekorotkov@gmail.com> writes:\n> > I've rechecked that the now patched code branch is covered by\n> > regression tests. I think the memory leak issue is independent of the\n> > computational errors we've observed.\n> > So, I'm going to push and backpatch this if no objections.\n>\n> +1. 
We should work on the roundoff-error issue as well, but\n> as you say, it's an independent problem.\n\nPushed, thank you.\n\n------\nRegards,\nAlexander Korotkov\n\n\n", "msg_date": "Mon, 14 Feb 2022 04:18:53 +0300", "msg_from": "Alexander Korotkov <aekorotkov@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] nodeindexscan with reorder memory leak" } ]
[ { "msg_contents": "Speaking of buildfarm breakage, seawasp has been failing for the\npast several days. It looks like bleeding-edge LLVM has again\nchanged some APIs we depend on. First failure is here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-01-28%2000%3A17%3A48\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 30 Jan 2022 21:38:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Latest LLVM breaks our code again" }, { "msg_contents": "On Mon, Jan 31, 2022 at 3:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Speaking of buildfarm breakage, seawasp has been failing for the\n> past several days. It looks like bleeding-edge LLVM has again\n> changed some APIs we depend on. First failure is here:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-01-28%2000%3A17%3A48\n\nOops, I missed this message. The patch at\nhttps://www.postgresql.org/message-id/CA%2BhUKG%2BaGPBkBT5_4czkvzMu6-D%2BJgPaVL7mX_WBPXgGRndtXA%40mail.gmail.com\nfixes the first two of these problems, and the third needs some more\nwork:\n\n1. There's a missing #include <new> (must have been transitively\nincluded before).\n2. There's an API change that I'd written about already but hadn't\ncommitted yet because seawasp is running behind my laptop at tracking\nLLVM and I figured we might as well 'nagel' these kinds of commits.\n3. We'll need to look into how to switch LLVMBuildCall -> LLVMBuildCall2 etc.\n\n\n", "msg_date": "Mon, 31 Jan 2022 18:46:42 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Latest LLVM breaks our code again" }, { "msg_contents": "\n> Speaking of buildfarm breakage, seawasp has been failing for the\n> past several days. It looks like bleeding-edge LLVM has again\n> changed some APIs we depend on. 
First failure is here:\n>\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=seawasp&dt=2022-01-28%2000%3A17%3A48\n\nIndeed.\n\nI'm sorry for the late warning: clang could not compile with its previous \nversion at some point so my weekly recompilation got stuck and I did not \nnotice for some months. I repaired by switching compiling clang with gcc \ninstead. ::sigh::\n\n-- \nFabien.\n\n\n", "msg_date": "Tue, 1 Feb 2022 20:25:50 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Latest LLVM breaks our code again" }, { "msg_contents": "Hi,\n\nOn 2022-01-30 21:38:16 -0500, Tom Lane wrote:\n> Speaking of buildfarm breakage, seawasp has been failing for the\n> past several days. It looks like bleeding-edge LLVM has again\n> changed some APIs we depend on. First failure is here:\n\nI'm doubtful that tracking development branches of LLVM is a good\ninvestment. Their development model is to do changes in-tree much more than we\ndo. Adjusting to API changes the moment they're made will often end up with\nfurther changes to the same / related lines. Once they open the relevant\nrelease-NN branch, it's a different story.\n\nMaybe it'd make sense to disable --with-llvm on seawasp and have a separate\nanimal that tracks the newest release branch?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 1 Feb 2022 12:02:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Latest LLVM breaks our code again" }, { "msg_contents": "Hello Andres,\n\n> I'm doubtful that tracking development branches of LLVM is a good\n> investment. Their development model is to do changes in-tree much more than we\n> do. Adjusting to API changes the moment they're made will often end up with\n> further changes to the same / related lines. 
Once they open the relevant\n> release-NN branch, it's a different story.\n>\n> Maybe it'd make sense to disable --with-llvm on seawasp>\n> and have ppppppa separate animal that tracks the newest release branch?\n\nThe point of seawasp is somehow to have an early warning of upcoming \nchanges, especially the --with-llvm part. LLVM indeed is a moving target, \nbut they very rarely back down on their API changes… As pg version are \nexpected to run for 5 years, they will encounter newer compiler versions, \nso being as ready as possible seems worthwhile.\n\nISTM that there is no actual need to fix LLVM breakage on the spot, but I \nthink it is pertinent to be ok near release time. This is why there is a \n\"EXPERIMENTAL\" marker in the system description. There can be some noise \nwhen LLVM development version is broken, this has happened in the past \nwith seawasp (llvm) and moonjelly (gcc), but not often.\n\nAbout tracking releases: this means more setups and runs and updates, and \nas I think they do care about compatibility *within* a release we would \nnot see breakages so it would not be very useful, and we would only get \nthe actual breakages at LLVM release time, which may be much later.\n\nFor these reasons, I'm inclined to let seawasp as it is.\n\n-- \nFabien.", "msg_date": "Thu, 3 Feb 2022 10:44:11 +0100 (CET)", "msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>", "msg_from_op": false, "msg_subject": "Re: Latest LLVM breaks our code again" }, { "msg_contents": "Hi,\n\nOn 2022-02-03 10:44:11 +0100, Fabien COELHO wrote:\n> > I'm doubtful that tracking development branches of LLVM is a good\n> > investment. Their development model is to do changes in-tree much more than we\n> > do. Adjusting to API changes the moment they're made will often end up with\n> > further changes to the same / related lines. 
Once they open the relevant\n> > release-NN branch, it's a different story.\n> >\n> > Maybe it'd make sense to disable --with-llvm on seawasp>\n> > and have ppppppa separate animal that tracks the newest release branch?\n>\n> The point of seawasp is somehow to have an early warning of upcoming\n> changes, especially the --with-llvm part. LLVM indeed is a moving target,\n> but they very rarely back down on their API changes…\n\nThe point is less backing down, but making iterative changes that require\nchanging the same lines repeatedly...\n\n\n> About tracking releases: this means more setups and runs and updates, and as\n> I think they do care about compatibility *within* a release we would not see\n> breakages so it would not be very useful, and we would only get the actual\n> breakages at LLVM release time, which may be much later.\n\nLLVM's release branches are created well before the release. E.g 13 was afaics\nbranched off 2021-07-27 [1], released 2021-09-30 [2].\n\n\n> For these reasons, I'm inclined to let seawasp as it is.\n\nI find seawasp tracking the development trunk compilers useful. Just not for\n--with-llvm. The latter imo *reduces* seawasp's, because once there's an API\nchange we can't see whether there's e.g. 
a compiler codegen issue leading to\ncrashes or whatnot.\n\nWhat I was proposing was to remove --with-llvm from seawasp, and have a\nseparate animal tracking the newest llvm release branch (I can run/host that\nif needed).\n\nGreetings,\n\nAndres Freund\n\n[1] git show $(git merge-base origin/release/13.x origin/main)\n[2] git show llvmorg-13.0.0\n\n\n", "msg_date": "Thu, 3 Feb 2022 11:12:00 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Latest LLVM breaks our code again" }, { "msg_contents": "On Fri, Feb 4, 2022 at 8:12 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-02-03 10:44:11 +0100, Fabien COELHO wrote:\n> > For these reasons, I'm inclined to let seawasp as it is.\n\nIt might be easier to use the nightly packages at\nhttps://apt.llvm.org/. You could update daily and still save so much\nCPU that ...\n\n> I find seawasp tracking the development trunk compilers useful. Just not for\n> --with-llvm. The latter imo *reduces* seawasp's, because once there's an API\n> change we can't see whether there's e.g. a compiler codegen issue leading to\n> crashes or whatnot.\n>\n> What I was proposing was to remove --with-llvm from seawasp, and have a\n> separate animal tracking the newest llvm release branch (I can run/host that\n> if needed).\n\n... you could do a couple of variations like that ^ on the same budget :-)\n\nFWIW 14 just branched. Vive LLVM 15.\n\n\n", "msg_date": "Sat, 5 Feb 2022 21:29:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Latest LLVM breaks our code again" } ]
[ { "msg_contents": "Hello, \n\nI made a wiki page to describe a research on scale-out solutions\nand an interview on scale-out solutions to a few IT companies in\nJapan. \nhttps://wiki.postgresql.org/wiki/PGECons_CR_Scaleout_2021\n\nThis research and interview were conducted by the community\nrelation team of PostgreSQL Enterprise Consortium [1]. We want\nto share the results and hope that our feedback would be\nbeneficial to both PostgreSQL users and developers.\n\n[1] PGECons is a non profit organization comprised of companies\nin Japan to promote PostgreSQL (https://www.pgecons.org). \nSRA OSS, Inc. is one of the members.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Tue, 1 Feb 2022 10:29:13 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Research and Interview on scale-out solutions in Japan" } ]
[ { "msg_contents": "Hi,\n\nI think emitting DBState (staring up, shut down, shut down in\nrecovery, shutting down, in crash recovery, in archive recovery, in\nproduction) via the pg_control_system function would help know the\ndatabase state, especially during PITR/archive recovery. During\narchive recovery, the server can open up for read-only connections\neven before the archive recovery finishes. Having, pg_control_system\nemit database state would help the users/service layers know it and so\nthey can take some actions based on it.\n\nAttaching a patch herewith.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Tue, 1 Feb 2022 09:52:26 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Add DBState to pg_control_system function" }, { "msg_contents": "\n\nOn 2022/02/01 13:22, Bharath Rupireddy wrote:\n> Hi,\n> \n> I think emitting DBState (staring up, shut down, shut down in\n> recovery, shutting down, in crash recovery, in archive recovery, in\n> production) via the pg_control_system function would help know the\n> database state, especially during PITR/archive recovery. During\n> archive recovery, the server can open up for read-only connections\n> even before the archive recovery finishes. Having, pg_control_system\n> emit database state would help the users/service layers know it and so\n> they can take some actions based on it.\n> \n> Attaching a patch herewith.\n> \n> Thoughts?\n\nThis information seems not so helpful because only \"in production\" and \"archive recovery\" can be reported during normal running and recovery, respectively. No?
In the state other than them, we cannot connect to the server and execute pg_control_system().\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 22 Feb 2022 03:01:23 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Add DBState to pg_control_system function" } ]
[ { "msg_contents": "I got a syntax error when using the command according to the existing\ndocumentation. The output_plugin parameter needs to be passed too.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 01 Feb 2022 11:16:48 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Doc: CREATE_REPLICATION_SLOT command requires the plugin name" }, { "msg_contents": "On Tue, Feb 1, 2022 at 3:44 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> I got a syntax error when using the command according to the existing\n> documentation. The output_plugin parameter needs to be passed too.\n>\n\nWhy do we need it for physical slots? The syntax in repl_gram.y is as follows:\n/* CREATE_REPLICATION_SLOT slot TEMPORARY PHYSICAL [options] */\nK_CREATE_REPLICATION_SLOT IDENT opt_temporary K_PHYSICAL create_slot_options\n...\n/* CREATE_REPLICATION_SLOT slot TEMPORARY LOGICAL plugin [options] */\n| K_CREATE_REPLICATION_SLOT IDENT opt_temporary K_LOGICAL IDENT\ncreate_slot_options\n\nThe logical slots do need output_plugin but not physical ones.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Tue, 1 Feb 2022 16:58:51 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc: CREATE_REPLICATION_SLOT command requires the plugin name" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Feb 1, 2022 at 3:44 PM Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > I got a syntax error when using the command according to the existing\n> > documentation. The output_plugin parameter needs to be passed too.\n> >\n> \n> Why do we need it for physical slots?\n\nSure we don't, the missing curly brackets seem to be the problem.
I used the\nother form of the command for reference, which therefore might need a minor\nfix too.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 01 Feb 2022 13:15:29 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Doc: CREATE_REPLICATION_SLOT command requires the plugin name" }, { "msg_contents": "On Tue, Feb 1, 2022 at 5:43 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > On Tue, Feb 1, 2022 at 3:44 PM Antonin Houska <ah@cybertec.at> wrote:\n> > >\n> > > I got a syntax error when using the command according to the existing\n> > > documentation. The output_plugin parameter needs to be passed too.\n> > >\n> >\n> > Why do we need it for physical slots?\n>\n> Sure we don't, the missing curly brackets seem to be the problem. I used the\n> other form of the command for reference, which therefore might need a minor\n> fix too.\n>\n\nOkay, I see that it is not very clear from the documentation.\n\n <varlistentry>\n- <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable\nclass=\"parameter\">slot_name</replaceable> [\n<literal>TEMPORARY</literal> ] { <literal>PHYSICAL</literal> [\n<literal>RESERVE_WAL</literal> ] | <literal>LOGICAL</literal>\n<replaceable class=\"parameter\">output_plugin</replaceable> [\n<literal>EXPORT_SNAPSHOT</literal> |\n<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>\n| <literal>TWO_PHASE</literal> ] }\n+ <term><literal>CREATE_REPLICATION_SLOT</literal> <replaceable\nclass=\"parameter\">slot_name</replaceable> [\n<literal>TEMPORARY</literal> ] { <literal>PHYSICAL</literal> [\n<literal>RESERVE_WAL</literal> ] | { <literal>LOGICAL</literal>\n<replaceable class=\"parameter\">output_plugin</replaceable>\n+} [ <literal>EXPORT_SNAPSHOT</literal> |\n<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>\n| <literal>TWO_PHASE</literal> ] }\n\nInstead of adding additional '{}', 
can't we simply use:\n{ <literal>PHYSICAL</literal> [ <literal>RESERVE_WAL</literal> ] |\n<literal>LOGICAL</literal> <replaceable\nclass=\"parameter\">output_plugin</replaceable> } [\n<literal>EXPORT_SNAPSHOT</literal> |\n<literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>\n| <literal>TWO_PHASE</literal> ]\n\nI am not sure change to other syntax is useful as that is kept for\nbackward compatibility. I think we can keep that as it is.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Wed, 2 Feb 2022 09:03:09 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc: CREATE_REPLICATION_SLOT command requires the plugin name" }, { "msg_contents": "Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Tue, Feb 1, 2022 at 5:43 PM Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > > On Tue, Feb 1, 2022 at 3:44 PM Antonin Houska <ah@cybertec.at> wrote:\n> > > >\n> > > > I got a syntax error when using the command according to the existing\n> > > > documentation. The output_plugin parameter needs to be passed too.\n> > > >\n> > >\n> > > Why do we need it for physical slots?\n> >\n> > Sure we don't, the missing curly brackets seem to be the problem. 
I used the\n> > other form of the command for reference, which therefore might need a minor\n> > fix too.\n> >\n> \n> Instead of adding additional '{}', can't we simply use:\n> { <literal>PHYSICAL</literal> [ <literal>RESERVE_WAL</literal> ] |\n> <literal>LOGICAL</literal> <replaceable\n> class=\"parameter\">output_plugin</replaceable> } [\n> <literal>EXPORT_SNAPSHOT</literal> |\n> <literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>\n> | <literal>TWO_PHASE</literal> ]\n\nDo you mean changing\n\nCREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT | TWO_PHASE ] }\n\nto\n\nCREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin } [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT | TWO_PHASE ]\n\n?\n\nI'm not sure, IMHO that would still allow for commands like\n\nCREATE_REPLICATION_SLOT slot_name PHYSICAL output_plugin\n\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 02 Feb 2022 08:13:39 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Doc: CREATE_REPLICATION_SLOT command requires the plugin name" }, { "msg_contents": "On Wed, Feb 2, 2022 at 12:41 PM Antonin Houska <ah@cybertec.at> wrote:\n>\n> Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> > On Tue, Feb 1, 2022 at 5:43 PM Antonin Houska <ah@cybertec.at> wrote:\n> > >\n> > > Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > >\n> > > > On Tue, Feb 1, 2022 at 3:44 PM Antonin Houska <ah@cybertec.at> wrote:\n> > > > >\n> > > > > I got a syntax error when using the command according to the existing\n> > > > > documentation. The output_plugin parameter needs to be passed too.\n> > > > >\n> > > >\n> > > > Why do we need it for physical slots?\n> > >\n> > > Sure we don't, the missing curly brackets seem to be the problem. 
I used the\n> > > other form of the command for reference, which therefore might need a minor\n> > > fix too.\n> > >\n> >\n> > Instead of adding additional '{}', can't we simply use:\n> > { <literal>PHYSICAL</literal> [ <literal>RESERVE_WAL</literal> ] |\n> > <literal>LOGICAL</literal> <replaceable\n> > class=\"parameter\">output_plugin</replaceable> } [\n> > <literal>EXPORT_SNAPSHOT</literal> |\n> > <literal>NOEXPORT_SNAPSHOT</literal> | <literal>USE_SNAPSHOT</literal>\n> > | <literal>TWO_PHASE</literal> ]\n>\n> Do you mean changing\n>\n> CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT | TWO_PHASE ] }\n>\n> to\n>\n> CREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [ RESERVE_WAL ] | LOGICAL output_plugin } [ EXPORT_SNAPSHOT | NOEXPORT_SNAPSHOT | USE_SNAPSHOT | TWO_PHASE ]\n>\n> ?\n>\n> I'm not sure, IMHO that would still allow for commands like\n>\n> CREATE_REPLICATION_SLOT slot_name PHYSICAL output_plugin\n>\n\nI don't think so. Symbol '|' differentiates that, check its usage at\nother places like Create Table doc page [1]. Considering that, it\nseems to me that the way it is currently mentioned in docs is correct.\n\nCREATE_REPLICATION_SLOT slot_name [ TEMPORARY ] { PHYSICAL [\nRESERVE_WAL ] | LOGICAL output_plugin [ EXPORT_SNAPSHOT |\nNOEXPORT_SNAPSHOT | USE_SNAPSHOT | TWO_PHASE ] }\n\nAccording to me, this means output_plugin and all other options\nrelated to snapshot can be mentioned only for logical slots.\n\n[1] - https://www.postgresql.org/docs/devel/sql-createtable.html\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n", "msg_date": "Thu, 3 Feb 2022 12:04:05 +0530", "msg_from": "Amit Kapila <amit.kapila16@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Doc: CREATE_REPLICATION_SLOT command requires the plugin name" } ]
[ { "msg_contents": "Hi,\n\nI noticed a minor memleak in pg_dump. ReadStr() returns a malloc'ed pointer which\nshould then be freed. While reading the Table of Contents, it was called as an argument\nwithin a function call, leading to a memleak.\n\nPlease accept the attached as a proposed fix.\n\nCheers,\n\n//Georgios", "msg_date": "Tue, 01 Feb 2022 13:36:05 +0000", "msg_from": "gkokolatos@pm.me", "msg_from_op": true, "msg_subject": "Plug minor memleak in pg_dump" }, { "msg_contents": "On Tue, Feb 1, 2022 at 7:06 PM <gkokolatos@pm.me> wrote:\n>\n> Hi,\n>\n> I noticed a minor memleak in pg_dump. ReadStr() returns a malloc'ed pointer which\n> should then be freed. While reading the Table of Contents, it was called as an argument\n> within a function call, leading to a memleak.\n>\n> Please accept the attached as a proposed fix.\n\n+1. IMO, having \"restoring tables WITH OIDS is not supported anymore\"\ntwice doesn't look good, how about as shown in [1]?\n\n[1]\ndiff --git a/src/bin/pg_dump/pg_backup_archiver.c\nb/src/bin/pg_dump/pg_backup_archiver.c\nindex 49bf0907cd..777ff6fcfe 100644\n--- a/src/bin/pg_dump/pg_backup_archiver.c\n+++ b/src/bin/pg_dump/pg_backup_archiver.c\n@@ -2494,6 +2494,7 @@ ReadToc(ArchiveHandle *AH)\n int depIdx;\n int depSize;\n TocEntry *te;\n+ bool is_supported = true;\n\n AH->tocCount = ReadInt(AH);\n AH->maxDumpId = 0;\n@@ -2574,7 +2575,20 @@ ReadToc(ArchiveHandle *AH)\n te->tableam = ReadStr(AH);\n\n te->owner = ReadStr(AH);\n- if (AH->version < K_VERS_1_9 || strcmp(ReadStr(AH),\n\"true\") == 0)\n+\n+ if (AH->version < K_VERS_1_9)\n+ is_supported = false;\n+ else\n+ {\n+ tmp = ReadStr(AH);\n+\n+ if (strcmp(tmp, \"true\") == 0)\n+ is_supported = false;\n+\n+ pg_free(tmp);\n+ }\n+\n+ if (!is_supported)\n pg_log_warning(\"restoring tables WITH OIDS is\nnot supported anymore\");\n\n /* Read TOC entry dependencies */\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Tue, 1 Feb 2022 19:48:01 +0530", "msg_from": "Bharath Rupireddy 
<bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "At Tue, 1 Feb 2022 19:48:01 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Feb 1, 2022 at 7:06 PM <gkokolatos@pm.me> wrote:\n> >\n> > Hi,\n> >\n> > I noticed a minor memleak in pg_dump. ReadStr() returns a malloc'ed pointer which\n> > should then be freed. While reading the Table of Contents, it was called as an argument\n> > within a function call, leading to a memleak.\n> >\n> > Please accept the attached as a proposed fix.\n\nIt is freed in other temporary use of the result of ReadStr(). So\nfreeing it sounds sensible at a glance.\n\n> +1. IMO, having \"restoring tables WITH OIDS is not supported anymore\"\n> twice doesn't look good, how about as shown in [1]?\n\nMaybe [2] is smaller :)\n\n--- a/src/bin/pg_dump/pg_backup_archiver.c\n+++ b/src/bin/pg_dump/pg_backup_archiver.c\n@@ -2494,6 +2494,7 @@ ReadToc(ArchiveHandle *AH)\n int depIdx;\n int depSize;\n TocEntry *te;\n+ char *tmpstr = NULL;\n \n AH->tocCount = ReadInt(AH);\n AH->maxDumpId = 0;\n@@ -2574,8 +2575,14 @@ ReadToc(ArchiveHandle *AH)\n te->tableam = ReadStr(AH);\n \n te->owner = ReadStr(AH);\n- if (AH->version < K_VERS_1_9 || strcmp(ReadStr(AH), \"true\") == 0)\n+ if (AH->version < K_VERS_1_9 ||\n+ strcmp((tmpstr = ReadStr(AH)), \"true\") == 0)\n pg_log_warning(\"restoring tables WITH OIDS is not supported anymore\");\n+ if (tmpstr)\n+ {\n+ pg_free(tmpstr);\n+ tmpstr = NULL;\n+ }\n \n /* Read TOC entry dependencies */\n\n\nThus.. I came to doubt of its worthiness to the complexity. 
The\namount of the leak is (perhaps) negligible.\n\n\nSo, I would just write a comment there.\n\n+++ b/src/bin/pg_dump/pg_backup_archiver.c\n@@ -2574,6 +2574,8 @@ ReadToc(ArchiveHandle *AH)\n te->tableam = ReadStr(AH);\n \n te->owner = ReadStr(AH);\n+\n+ /* deliberately leak the result of ReadStr for simplicity */\n if (AH->version < K_VERS_1_9 || strcmp(ReadStr(AH), \"true\") == 0)\n pg_log_warning(\"restoring tables WITH OIDS is not supported anymore\");\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 02 Feb 2022 17:29:57 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "> On 2 Feb 2022, at 09:29, Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote:\n> \n> At Tue, 1 Feb 2022 19:48:01 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n>> On Tue, Feb 1, 2022 at 7:06 PM <gkokolatos@pm.me> wrote:\n>>> \n>>> Hi,\n>>> \n>>> I noticed a minor memleak in pg_dump. ReadStr() returns a malloc'ed pointer which\n>>> should then be freed. While reading the Table of Contents, it was called as an argument\n>>> within a function call, leading to a memleak.\n>>> \n>>> Please accept the attached as a proposed fix.\n> \n> It is freed in other temporary use of the result of ReadStr(). So\n> freeing it sounds sensible at a glance.\n> \n>> +1. IMO, having \"restoring tables WITH OIDS is not supported anymore\"\n>> twice doesn't look good, how about as shown in [1]?\n> \n> Maybe [2] is smaller :)\n\nIt is smaller, but I think Bharath's version wins in terms of readability.\n\n> Thus.. I came to doubt of its worthiness to the complexity. 
The\n> amount of the leak is (perhaps) negligible.\n> \n> So, I would just write a comment there.\n\nThe leak itself is clearly not something to worry about wrt memory pressure.\nWe do read into tmp and free it in other places in the same function though (as\nyou note above), so for code consistency alone this is worth doing IMO (and it\nreduces the risk of static analyzers flagging this).\n\nUnless objected to I will go ahead with getting this committed.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 10:06:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "On Wed, Feb 02, 2022 at 10:06:13AM +0100, Daniel Gustafsson wrote:\n> The leak itself is clearly not something to worry about wrt memory pressure.\n> We do read into tmp and free it in other places in the same function though (as\n> you note above), so for code consistency alone this is worth doing IMO (and it\n> reduces the risk of static analyzers flagging this).\n> \n> Unless objected to I will go ahead with getting this committed.\n\nLooks like you forgot to apply that?\n--\nMichael", "msg_date": "Wed, 9 Feb 2022 11:56:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "On Wed, Feb 9, 2022 at 8:26 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Feb 02, 2022 at 10:06:13AM +0100, Daniel Gustafsson wrote:\n> > The leak itself is clearly not something to worry about wrt memory pressure.\n> > We do read into tmp and free it in other places in the same function though (as\n> > you note above), so for code consistency alone this is worth doing IMO (and it\n> > reduces the risk of static analyzers flagging this).\n> >\n> > Unless objected to I will go ahead with getting this committed.\n>\n> Looks like you forgot to apply that?\n\nAttaching the patch
that I suggested above, also the original patch\nproposed by Georgios is at [1], leaving the decision to the committer\nto pick up the best one.\n\n[1] https://www.postgresql.org/message-id/oZwKiUxFsVaetG2xOJp7Hwao8F1AKIdfFDQLNJrnwoaxmjyB-45r_aYmhgXHKLcMI3GT24m9L6HafSi2ns7WFxXe0mw2_tIJpD-Z3vb_eyI%3D%40pm.me\n\nRegards,\nBharath Rupireddy.", "msg_date": "Wed, 9 Feb 2022 10:32:59 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "> On 9 Feb 2022, at 03:56, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Feb 02, 2022 at 10:06:13AM +0100, Daniel Gustafsson wrote:\n>> The leak itself is clearly not something to worry about wrt memory pressure.\n>> We do read into tmp and free it in other places in the same function though (as\n>> you note above), so for code consistency alone this is worth doing IMO (and it\n>> reduces the risk of static analyzers flagging this).\n>> \n>> Unless objected to I will go ahead with getting this committed.\n> \n> Looks like you forgot to apply that?\n\nNo, but I was distracted by other things leaving this on the TODO list. It's\nbeen pushed now.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 14:14:50 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" } ]
[ { "msg_contents": "This patch adds to database objects the same version tracking that \ncollation objects have. There is a new pg_database column \ndatcollversion that stores the version, a new function \npg_database_collation_actual_version() to get the version from the \noperating system, and a new subcommand ALTER DATABASE ... REFRESH \nCOLLATION VERSION.\n\nThis was not originally added together with pg_collation.collversion, \nsince originally version tracking was only supported for ICU, and ICU on \na database-level is not currently supported. But we now have version \ntracking for glibc (since PG13), FreeBSD (since PG14), and Windows \n(since PG13), so this is useful to have now. And of course ICU on \ndatabase-level is being worked on at the moment as well.\n\nThis patch is pretty much complete AFAICT. One bonus thing would be to \nadd a query to the ALTER DATABASE ref page similar to the one on the \nALTER COLLATION ref page that identifies the objects that are affected \nby outdated collations. The database version of that might just show \nall collation-using objects or something like that. Suggestions welcome.\n\nAlso, one curious behavior is that if you get to a situation where the \ncollation version is mismatched, every time an autovacuum worker \nlaunches you get the collation version mismatch warning in the log. 
\nMaybe that's actually correct, but maybe we want to dial it down a bit \nfor non-interactive sessions.", "msg_date": "Tue, 1 Feb 2022 16:20:14 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Database-level collation version tracking" }, { "msg_contents": "On Tue, Feb 01, 2022 at 04:20:14PM +0100, Peter Eisentraut wrote:\n> This patch adds to database objects the same version tracking that collation\n> objects have.\n\nThis version conflicts with 87669de72c2 (Some cleanup for change of collate and\nctype fields to type text), so I'm attaching a simple rebase of your patch to\nmake the cfbot happy, no other changes.\n\n> There is a new pg_database column datcollversion that stores\n> the version, a new function pg_database_collation_actual_version() to get\n> the version from the operating system, and a new subcommand ALTER DATABASE\n> ... REFRESH COLLATION VERSION.\n> \n> This was not originally added together with pg_collation.collversion, since\n> originally version tracking was only supported for ICU, and ICU on a\n> database-level is not currently supported. But we now have version tracking\n> for glibc (since PG13), FreeBSD (since PG14), and Windows (since PG13), so\n> this is useful to have now. And of course ICU on database-level is being\n> worked on at the moment as well.\n> This patch is pretty much complete AFAICT.\n\nAgreed. 
Here's a review of the patch:\n\n- there should be a mention to the need for a catversion bump in the message\n comment\n- there is no test\n- it's missing some updates in create_database.sgml, and psql tab completion\n for CREATE DATABASE with the new collation_version defelem.\n\n- that's not really something new with this patch, but should we output the\n collation version info or mismatch info in \\l / \\dO?\n\n+ if (!actual_versionstr)\n+ ereport(ERROR,\n+ (errmsg(\"database \\\"%s\\\" has no actual collation version, but a version was specified\",\n+ name)));-\n\nthis means you can't connect on such a database anymore. The level is probably\nok for collation version check, but for db isn't that too much?\n\n+Oid\n+AlterDatabaseRefreshColl(AlterDatabaseRefreshCollStmt *stmt)\n+{\n+ Relation rel;\n+ Oid dboid;\n+ HeapTuple tup;\n+ Datum datum;\n+ bool isnull;\n+ char *oldversion;\n+ char *newversion;\n+\n+ rel = table_open(DatabaseRelationId, RowExclusiveLock);\n+ dboid = get_database_oid(stmt->dbname, false);\n+\n+ if (!pg_database_ownercheck(dboid, GetUserId()))\n+ aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,\n+ stmt->dbname);\n+\n+ tup = SearchSysCacheCopy1(DATABASEOID, ObjectIdGetDatum(dboid));\n+ if (!HeapTupleIsValid(tup))\n+ elog(ERROR, \"cache lookup failed for database %u\", dboid);\n\nIs that ok to not obtain a lock on the database when refreshing the collation?\nI guess it's not worth bothering as it can only lead to spurious messages for\nconnection done concurrently, but there should be a comment to clarify it.\nAlso, it means that someone can drop the database concurrently. So it's\nshould be a \"database does not exist\" rather than a cache lookup failed error\nmessage.\n\n+ /*\n+ * Check collation version. 
See similar code in\n+ * pg_newlocale_from_collation().\n+ */\n+ datum = SysCacheGetAttr(DATABASEOID, tup, Anum_pg_database_datcollversion,\n+ &isnull);\n+ if (!isnull)\n+ {\n\nThis (and pg_newlocale_from_collation()) reports a problem if a recorded\ncollation version is found but there's no reported collation version.\nShouldn't it also complain if it's the opposite? It's otherwise a backdoor to\nmake sure there won't be any check about the version anymore, and while it can\nprobably happen if you mess with the catalogs it still doesn't look great.\n\n+ /*\n+ * template0 shouldn't have any collation-dependent objects, so unset\n+ * the collation version. This avoids warnings when making a new\n+ * database from it.\n+ */\n+ \"UPDATE pg_database SET datcollversion = NULL WHERE datname = 'template0';\\n\\n\",\n\nI'm not opposed, but shouldn't there indeed be a warning in case of discrepancy\nin the source database (whether template or not)?\n\n# update pg_database set datcollversion = 'meh' where datname in ('postgres', 'template1');\nUPDATE 2\n\n# create database test1 template postgres;\nCREATE DATABASE\n\n# create database test2 template template1;\nCREATE DATABASE\n\n# \\c test2\nWARNING: database \"test2\" has a collation version mismatch\nDETAIL: The database was created using collation version meh, but the operating system provides version 2.34.\nHINT: Rebuild all objects affected by collation in this database and run ALTER DATABASE test2 REFRESH COLLATION VERSION, or build PostgreSQL with the right library version.\nYou are now connected to database \"test2\" as user \"rjuju\".\n\nThe rest of the patch looks good to me. There's notably pg_dump and pg_upgrade\nsupport so it indeed looks complete.\n\n> One bonus thing would be to add\n> a query to the ALTER DATABASE ref page similar to the one on the ALTER\n> COLLATION ref page that identifies the objects that are affected by outdated\n> collations. 
The database version of that might just show all\n> collation-using objects or something like that. Suggestions welcome.\n\nThat would be nice, but that's something quite hard to do and the resulting\nquery would be somewhat unreadable.\nAlso, you need to look at custom data types, expression and quals at least, so\nI'm not even sure that you can actually do it in pure SQL without additional\ninfrastructure.\n\n> Also, one curious behavior is that if you get to a situation where the\n> collation version is mismatched, every time an autovacuum worker launches\n> you get the collation version mismatch warning in the log. Maybe that's\n> actually correct, but maybe we want to dial it down a bit for\n> non-interactive sessions.\n\nI'm +0.5 to keep it that way. Index corruption is a real danger, so if you\nhave enough autovacuum worker launched to have a real problem with that, you\nclearly should take care of the problem even faster.", "msg_date": "Mon, 7 Feb 2022 18:29:47 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 07.02.22 11:29, Julien Rouhaud wrote:\n> - there should be a mention to the need for a catversion bump in the message\n> comment\n\ndone\n\n> - there is no test\n\nSuggestions where to put it? We don't really have tests for the \ncollation-level versioning either, do we?\n\n> - it's missing some updates in create_database.sgml, and psql tab completion\n> for CREATE DATABASE with the new collation_version defelem.\n\nAdded to create_database.sgml, but not to psql. We don't have \ncompletion for the collation option either, since it's only meant to be \nused by pg_upgrade, not interactively.\n\n> - that's not really something new with this patch, but should we output the\n> collation version info or mismatch info in \\l / \\dO?\n\nIt's a possibility. 
Perhaps there is a question of performance if we \nshow it in \\l and people have tons of databases and they have to make a \nlocale call for each one. As you say, it's more an independent feature, \nbut it's worth looking into.\n\n> + if (!actual_versionstr)\n> + ereport(ERROR,\n> + (errmsg(\"database \\\"%s\\\" has no actual collation version, but a version was specified\",\n> + name)));-\n> \n> this means you can't connect on such a database anymore. The level is probably\n> ok for collation version check, but for db isn't that too much?\n\nRight, changed to warning.\n\n> +Oid\n> +AlterDatabaseRefreshColl(AlterDatabaseRefreshCollStmt *stmt)\n> +{\n> + Relation rel;\n> + Oid dboid;\n> + HeapTuple tup;\n> + Datum datum;\n> + bool isnull;\n> + char *oldversion;\n> + char *newversion;\n> +\n> + rel = table_open(DatabaseRelationId, RowExclusiveLock);\n> + dboid = get_database_oid(stmt->dbname, false);\n> +\n> + if (!pg_database_ownercheck(dboid, GetUserId()))\n> + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,\n> + stmt->dbname);\n> +\n> + tup = SearchSysCacheCopy1(DATABASEOID, ObjectIdGetDatum(dboid));\n> + if (!HeapTupleIsValid(tup))\n> + elog(ERROR, \"cache lookup failed for database %u\", dboid);\n> \n> Is that ok to not obtain a lock on the database when refreshing the collation?\n\nThat code takes a RowExclusiveLock on pg_database. Did you have \nsomething else in mind?\n\n> + /*\n> + * Check collation version. See similar code in\n> + * pg_newlocale_from_collation().\n> + */\n> + datum = SysCacheGetAttr(DATABASEOID, tup, Anum_pg_database_datcollversion,\n> + &isnull);\n> + if (!isnull)\n> + {\n> \n> This (and pg_newlocale_from_collation()) reports a problem if a recorded\n> collation version is found but there's no reported collation version.\n> Shouldn't it also complain if it's the opposite? 
It's otherwise a backdoor to\n> make sure there won't be any check about the version anymore, and while it can\n> probably happen if you mess with the catalogs it still doesn't look great.\n\nget_collation_actual_version() always returns either null or not null \nfor a given installation. So the situation that the stored version is \nnull and the actual version is not null can only happen as part of a \nsoftware upgrade. In that case, all uses of collations after an upgrade \nwould immediately start complaining about missing versions, which seems \nlike a bad experience. Users can explicitly opt in to version tracking \nby running REFRESH VERSION once.\n\n> + /*\n> + * template0 shouldn't have any collation-dependent objects, so unset\n> + * the collation version. This avoids warnings when making a new\n> + * database from it.\n> + */\n> + \"UPDATE pg_database SET datcollversion = NULL WHERE datname = 'template0';\\n\\n\",\n> \n> I'm not opposed, but shouldn't there indeed be a warning in case of discrepancy\n> in the source database (whether template or not)?\n> \n> # update pg_database set datcollversion = 'meh' where datname in ('postgres', 'template1');\n> UPDATE 2\n> \n> # create database test1 template postgres;\n> CREATE DATABASE\n> \n> # create database test2 template template1;\n> CREATE DATABASE\n> \n> # \\c test2\n> WARNING: database \"test2\" has a collation version mismatch\n\nI don't understand what the complaint is here. It seems to work ok?", "msg_date": "Mon, 7 Feb 2022 16:44:24 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On Mon, Feb 07, 2022 at 04:44:24PM +0100, Peter Eisentraut wrote:\n> On 07.02.22 11:29, Julien Rouhaud wrote:\n> \n> > - there is no test\n> \n> Suggestions where to put it? 
We don't really have tests for the\n> collation-level versioning either, do we?\n\nThere's so limited testing in collate.* regression tests, so I thought it would\nbe ok to add it there. At least some ALTER DATABASE ... REFRESH VERSION would\nbe good, similarly to collation-level versioning.\n\n> > - it's missing some updates in create_database.sgml, and psql tab completion\n> > for CREATE DATABASE with the new collation_version defelem.\n> \n> Added to create_database.sgml, but not to psql. We don't have completion\n> for the collation option either, since it's only meant to be used by\n> pg_upgrade, not interactively.\n\nOk.\n\n> \n> > - that's not really something new with this patch, but should we output the\n> > collation version info or mismatch info in \\l / \\dO?\n> \n> It's a possibility. Perhaps there is a question of performance if we show\n> it in \\l and people have tons of databases and they have to make a locale\n> call for each one. As you say, it's more an independent feature, but it's\n> worth looking into.\n\nAgreed.\n\n> > +Oid\n> > +AlterDatabaseRefreshColl(AlterDatabaseRefreshCollStmt *stmt)\n> > +{\n> > + Relation rel;\n> > + Oid dboid;\n> > + HeapTuple tup;\n> > + Datum datum;\n> > + bool isnull;\n> > + char *oldversion;\n> > + char *newversion;\n> > +\n> > + rel = table_open(DatabaseRelationId, RowExclusiveLock);\n> > + dboid = get_database_oid(stmt->dbname, false);\n> > +\n> > + if (!pg_database_ownercheck(dboid, GetUserId()))\n> > + aclcheck_error(ACLCHECK_NOT_OWNER, OBJECT_DATABASE,\n> > + stmt->dbname);\n> > +\n> > + tup = SearchSysCacheCopy1(DATABASEOID, ObjectIdGetDatum(dboid));\n> > + if (!HeapTupleIsValid(tup))\n> > + elog(ERROR, \"cache lookup failed for database %u\", dboid);\n> > \n> > Is that ok to not obtain a lock on the database when refreshing the collation?\n> \n> That code takes a RowExclusiveLock on pg_database. 
Did you have something\n> else in mind?\n\nBut that lock won't prevent a concurrent DROP DATABASE, so it's totally\nexpected to hit that cache lookup failed error. There should either be a\nshdepLockAndCheckObject(), or changing the error message to some errmsg(\"data\nxxx does not exist\").\n\n> > + /*\n> > + * Check collation version. See similar code in\n> > + * pg_newlocale_from_collation().\n> > + */\n> > + datum = SysCacheGetAttr(DATABASEOID, tup, Anum_pg_database_datcollversion,\n> > + &isnull);\n> > + if (!isnull)\n> > + {\n> > \n> > This (and pg_newlocale_from_collation()) reports a problem if a recorded\n> > collation version is found but there's no reported collation version.\n> > Shouldn't it also complain if it's the opposite? It's otherwise a backdoor to\n> > make sure there won't be any check about the version anymore, and while it can\n> > probably happen if you mess with the catalogs it still doesn't look great.\n> \n> get_collation_actual_version() always returns either null or not null for a\n> given installation. So the situation that the stored version is null and\n> the actual version is not null can only happen as part of a software\n> upgrade. In that case, all uses of collations after an upgrade would\n> immediately start complaining about missing versions, which seems like a bad\n> experience. Users can explicitly opt in to version tracking by running\n> REFRESH VERSION once.\n\nAh right, I do remember that point which was also discussed in the collation\nversion tracking. Sorry about the noise.\n\n> > + /*\n> > + * template0 shouldn't have any collation-dependent objects, so unset\n> > + * the collation version. 
This avoids warnings when making a new\n> > + * database from it.\n> > + */\n> > + \"UPDATE pg_database SET datcollversion = NULL WHERE datname = 'template0';\\n\\n\",\n> > \n> > I'm not opposed, but shouldn't there indeed be a warning in case of discrepancy\n> > in the source database (whether template or not)?\n> > \n> > # update pg_database set datcollversion = 'meh' where datname in ('postgres', 'template1');\n> > UPDATE 2\n> > \n> > # create database test1 template postgres;\n> > CREATE DATABASE\n> > \n> > # create database test2 template template1;\n> > CREATE DATABASE\n> > \n> > # \\c test2\n> > WARNING: database \"test2\" has a collation version mismatch\n> \n> I don't understand what the complaint is here. It seems to work ok?\n\nThe comment says that you explicitly set a NULL collation version to avoid\nwarning when creating a database using template0 as a template, presumably\nafter a collation lib upgrade.\n\nBut as far as I can see the source database collation version is not checked\nwhen creating a new database, so it seems to me that either the comment is\nwrong, or we need another check for that. The latter seems preferable to me.\n\n\n", "msg_date": "Tue, 8 Feb 2022 00:08:14 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 07.02.22 17:08, Julien Rouhaud wrote:\n> There's so limited testing in collate.* regression tests, so I thought it would\n> be ok to add it there. At least some ALTER DATABASE ... REFRESH VERSION would\n> be good, similarly to collation-level versioning.\n\nI don't think you can run ALTER DATABASE from the regression test \nscripts, since the database name is not fixed. You'd have to paste the \ncommand together using psql tricks or something. I guess it could be \ndone, but maybe there is a better solution. We could put it into the \ncreatedb test suite, or write a new TAP test suite somewhere. 
There is \nno good precedent for this.\n\n>> That code takes a RowExclusiveLock on pg_database. Did you have something\n>> else in mind?\n> \n> But that lock won't prevent a concurrent DROP DATABASE, so it's totally\n> expected to hit that cache lookup failed error. There should either be a\n> shdepLockAndCheckObject(), or changing the error message to some errmsg(\"data\n> xxx does not exist\").\n\nI was not familiar with that function. AFAICT, it is only used for \ndatabase and role settings (AlterDatabaseSet(), AlterRoleSet()), but not \nwhen just updating the role or database catalog (e.g., AlterRole(), \nRenameRole(), RenameDatabase()). So I don't think it is needed here. \nMaybe I'm missing something.\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:14:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On Tue, Feb 08, 2022 at 12:14:02PM +0100, Peter Eisentraut wrote:\n> On 07.02.22 17:08, Julien Rouhaud wrote:\n> > There's so limited testing in collate.* regression tests, so I thought it would\n> > be ok to add it there. At least some ALTER DATABASE ... REFRESH VERSION would\n> > be good, similarly to collation-level versioning.\n> \n> I don't think you can run ALTER DATABASE from the regression test scripts,\n> since the database name is not fixed. You'd have to paste the command\n> together using psql tricks or something. I guess it could be done, but\n> maybe there is a better solution. We could put it into the createdb test\n> suite, or write a new TAP test suite somewhere. There is no good precedent\n> for this.\n\nI was thinking using a simple DO command, as there are already some other usage\nfor that for non deterministic things (like TOAST tables). If it's too\nproblematic I'm fine with not having tests for that for now.\n\n> > > That code takes a RowExclusiveLock on pg_database. 
Did you have something\n> > > else in mind?\n> > \n> > But that lock won't prevent a concurrent DROP DATABASE, so it's totally\n> > expected to hit that cache lookup failed error. There should either be a\n> > shdepLockAndCheckObject(), or changing the error message to some errmsg(\"data\n> > xxx does not exist\").\n> \n> I was not familiar with that function. AFAICT, it is only used for database\n> and role settings (AlterDatabaseSet(), AlterRoleSet()), but not when just\n> updating the role or database catalog (e.g., AlterRole(), RenameRole(),\n> RenameDatabase()). So I don't think it is needed here. Maybe I'm missing\n> something.\n\nI'm just saying that without such a lock you can easily trigger the \"cache\nlookup\" error, and that's something that's supposed to happen with normal\nusage I think. So it should be a better message saying that the database has\nbeen concurrently dropped, or actually simply does not exist like it's done in\nAlterDatabaseOwner() for the same pattern:\n\n[...]\n\ttuple = systable_getnext(scan);\n\tif (!HeapTupleIsValid(tuple))\n\t\tereport(ERROR,\n\t\t\t\t(errcode(ERRCODE_UNDEFINED_DATABASE),\n\t\t\t\t errmsg(\"database \\\"%s\\\" does not exist\", dbname)));\n[...]\n\nApart from that I still think that we should check the collation version of the\nsource database when creating a new database. It won't cost much but will give\nthe DBA a chance to recreate the indexes before risking invalid index usage.\n\n\n", "msg_date": "Tue, 8 Feb 2022 20:55:18 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On Mon, Feb 07, 2022 at 04:44:24PM +0100, Peter Eisentraut wrote:\n> On 07.02.22 11:29, Julien Rouhaud wrote:\n> > - that's not really something new with this patch, but should we output the\n> > collation version info or mismatch info in \\l / \\dO?\n> \n> It's a possibility. 
Perhaps there is a question of performance if we show\n> it in \\l and people have tons of databases and they have to make a locale\n> call for each one. As you say, it's more an independent feature, but it's\n> worth looking into.\n\nOk, but \\l+ shows among others the database size, so I guess at that\nlevel also showing this should be fine? (or is that already the case?)\n\n\nMichael\n\n\n", "msg_date": "Tue, 8 Feb 2022 15:08:43 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 08.02.22 13:55, Julien Rouhaud wrote:\n> I'm just saying that without such a lock you can easily trigger the \"cache\n> lookup\" error, and that's something that's supposed to happen with normal\n> usage I think. So it should be a better message saying that the database has\n> been concurrently dropped, or actually simply does not exist like it's done in\n> AlterDatabaseOwner() for the same pattern:\n> \n> [...]\n> \ttuple = systable_getnext(scan);\n> \tif (!HeapTupleIsValid(tuple))\n> \t\tereport(ERROR,\n> \t\t\t\t(errcode(ERRCODE_UNDEFINED_DATABASE),\n> \t\t\t\t errmsg(\"database \\\"%s\\\" does not exist\", dbname)));\n> [...]\n\nIn my code, the existence of the database is checked by\n\n dboid = get_database_oid(stmt->dbname, false);\n\nThis also issues an appropriate user-facing error message if the \ndatabase does not exist.\n\nThe flow in AlterDatabaseOwner() is a bit different, it looks up the \npg_database tuple directly from the name. I think both are correct. 
My \ncode has been copied from the analogous code in AlterCollation().\n\n\n", "msg_date": "Wed, 9 Feb 2022 12:48:35 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On Wed, Feb 09, 2022 at 12:48:35PM +0100, Peter Eisentraut wrote:\n> On 08.02.22 13:55, Julien Rouhaud wrote:\n> > I'm just saying that without such a lock you can easily trigger the \"cache\n> > lookup\" error, and that's something that's supposed to happen with normal\n> > usage I think. So it should be a better message saying that the database has\n> > been concurrently dropped, or actually simply does not exist like it's done in\n> > AlterDatabaseOwner() for the same pattern:\n> > \n> > [...]\n> > \ttuple = systable_getnext(scan);\n> > \tif (!HeapTupleIsValid(tuple))\n> > \t\tereport(ERROR,\n> > \t\t\t\t(errcode(ERRCODE_UNDEFINED_DATABASE),\n> > \t\t\t\t errmsg(\"database \\\"%s\\\" does not exist\", dbname)));\n> > [...]\n> \n> In my code, the existence of the database is checked by\n> \n> dboid = get_database_oid(stmt->dbname, false);\n> \n> This also issues an appropriate user-facing error message if the database\n> does not exist.\n\nYes but if you run a DROP DATABASE concurrently you will either get a\n\"database does not exist\" or \"cache lookup failed\" depending on whether the\nDROP is processed before or after the get_database_oid().\n\nI agree that it's not worth trying to make it concurrent-drop safe, but I also\nthought that throwing plain elog(ERROR) should be avoided when reasonably\ndoable. And in that situation we know it can happen in normal situation, so\nhaving a real error message looks like a cost-free improvement. 
Now if it's\nbetter to have a cache lookup error even in that situation just for safety or\nsomething ok, it's not like trying to refresh a db collation and having someone\nelse dropping it at the same time is going to be a common practice anway.\n\n> The flow in AlterDatabaseOwner() is a bit different, it looks up the\n> pg_database tuple directly from the name. I think both are correct. My\n> code has been copied from the analogous code in AlterCollation().\n\nI also think it would be better to have a \"collation does not exist\" in the\nsyscache failure message, but same here dropping collation is probably even\nless frequent than dropping database, let alone while refreshing the collation\nversion.\n\n\n", "msg_date": "Wed, 9 Feb 2022 21:31:18 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 08.02.22 13:55, Julien Rouhaud wrote:\n> Apart from that I still think that we should check the collation version of the\n> source database when creating a new database. It won't cost much but will give\n> the DBA a chance to recreate the indexes before risking invalid index usage.\n\nA question on this: In essence, this would be putting code into \ncreatedb() similar to the code in postinit.c:CheckMyDatabase(). But \nwhat should we make it do and say exactly?\n\nAfter thinking about this for a bit, I suggest: If the actual collation \nversion of the newly created database does not match the recorded \ncollation version of the template database, we should error out and make \nthe user fix the template database (by reindexing there etc.).\n\nThe alternative is to warn, as it does now in postinit.c. But then the \nphrasing of the message becomes complicated: Should we make the user fix \nthe new database or the template database or both? And if they don't \nfix the template database, they will have the same problem again. 
So \nmaking it a hard failure seems better to me.\n\n\n", "msg_date": "Wed, 9 Feb 2022 17:12:41 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 2022-Feb-08, Julien Rouhaud wrote:\n\n> On Tue, Feb 08, 2022 at 12:14:02PM +0100, Peter Eisentraut wrote:\n\n> > I don't think you can run ALTER DATABASE from the regression test scripts,\n> > since the database name is not fixed. You'd have to paste the command\n> > together using psql tricks or something. I guess it could be done, but\n> > maybe there is a better solution. We could put it into the createdb test\n> > suite, or write a new TAP test suite somewhere. There is no good precedent\n> > for this.\n> \n> I was thinking using a simple DO command, as there are already some other usage\n> for that for non deterministic things (like TOAST tables). If it's too\n> problematic I'm fine with not having tests for that for now.\n\nYou can do this:\n\nselect current_database() as datname \\gset\nalter database :\"datname\" owner to foo;\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 9 Feb 2022 13:25:08 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On Wed, Feb 09, 2022 at 05:12:41PM +0100, Peter Eisentraut wrote:\n> On 08.02.22 13:55, Julien Rouhaud wrote:\n> > Apart from that I still think that we should check the collation version of the\n> > source database when creating a new database. It won't cost much but will give\n> > the DBA a chance to recreate the indexes before risking invalid index usage.\n> \n> A question on this: In essence, this would be putting code into createdb()\n> similar to the code in postinit.c:CheckMyDatabase(). 
But what should we\n> make it do and say exactly?\n> \n> After thinking about this for a bit, I suggest: If the actual collation\n> version of the newly created database does not match the recorded collation\n> version of the template database, we should error out and make the user fix\n> the template database (by reindexing there etc.).\n\nI'm not sure what you really mean by \"actual collation version of the newly\ncreated database\". Is it really a check after having done all the work or just\nchecking a discrepancy when computing the to-be-created database version from\nthe source database, ie. something like\n\n if (dbcollversion == NULL)\n+ {\n dbcollversion = src_collversion;\n+ if src_collversion != get_collation_actual_version(the source db)\n+ // raise error or warning\n\n> The alternative is to warn, as it does now in postinit.c. But then the\n> phrasing of the message becomes complicated: Should we make the user fix the\n> new database or the template database or both? And if they don't fix the\n> template database, they will have the same problem again. So making it a\n> hard failure seems better to me.\n\nAgreed, I'm in favor of a hard error. 
Maybe a message like:\n\nerrmsg(cannot create database %s)\nerrdetail(the template database %s was created using collation version %s, but\nthe operating system provides version %s)\n\nAlso, that check shouldn't be done when using the COLLATION_VERSION option of\ncreate database, since it's there for pg_upgrade usage?\n\n\n", "msg_date": "Thu, 10 Feb 2022 10:54:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "New patch that fixes all reported issues, I think:\n\n- Added test for ALTER DATABASE / REFRESH COLLATION VERSION\n\n- Rewrote AlterDatabaseRefreshCollVersion() with better locking\n\n- Added version checking in createdb()", "msg_date": "Thu, 10 Feb 2022 09:57:59 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On Thu, Feb 10, 2022 at 09:57:59AM +0100, Peter Eisentraut wrote:\n> New patch that fixes all reported issues, I think:\n> \n> - Added test for ALTER DATABASE / REFRESH COLLATION VERSION\n> \n> - Rewrote AlterDatabaseRefreshCollVersion() with better locking\n> \n> - Added version checking in createdb()\n\nThanks! All issues are indeed fixed. 
I just have a few additional comments:\n\n\n\n> From 290ebb9ca743a2272181f435d5ea76d8a7280a0a Mon Sep 17 00:00:00 2001\n> From: Peter Eisentraut <peter@eisentraut.org>\n> Date: Thu, 10 Feb 2022 09:44:20 +0100\n> Subject: [PATCH v4] Database-level collation version tracking\n\n\n\n> +\t * collation version was specified explicitly as an statement option; that\n\ntypo, should be \"as a statement\"\n\n> +\t\tactual_versionstr = get_collation_actual_version(COLLPROVIDER_LIBC, dbcollate);\n> +\t\tif (!actual_versionstr)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errmsg(\"template database \\\"%s\\\" has a collation version, but no actual collation version could be determined\",\n> +\t\t\t\t\t\t\tdbtemplate)));\n> +\n> +\t\tif (strcmp(actual_versionstr, src_collversion) != 0)\n> +\t\t\tereport(ERROR,\n> +\t\t\t\t\t(errmsg(\"template database \\\"%s\\\" has a collation version mismatch\",\n> +\t\t\t\t\t\t\tdbtemplate),\n> +\t\t\t\t\t errdetail(\"The template database was created using collation version %s, \"\n> +\t\t\t\t\t\t\t \"but the operating system provides version %s.\",\n> +\t\t\t\t\t\t\t src_collversion, actual_versionstr),\n> +\t\t\t\t\t errhint(\"Rebuild all objects affected by collation in the template database and run \"\n> +\t\t\t\t\t\t\t \"ALTER DATABASE %s REFRESH COLLATION VERSION, \"\n> +\t\t\t\t\t\t\t \"or build PostgreSQL with the right library version.\",\n> +\t\t\t\t\t\t\t quote_identifier(dbtemplate))));\n\nAfter a second read I think the messages are slightly ambiguous. 
What do you\nthink about specifying the problematic collation name and provider?\n\nFor now we only support libc default collation so users will probably have to\nreindex almost everything on that database (not sure if the versioning is more\nfine grained on Windows), but we should probably still specify \"affected by\nlibc collation\" in the errhint so they have a chance to avoid unnecessary\nreindex.\n\nAnd this will hopefully become more important to have those information, when\nICU default collations will land :)\n\n> +/*\n> + * ALTER DATABASE name REFRESH COLLATION VERSION\n> + */\n> +ObjectAddress\n> +AlterDatabaseRefreshColl(AlterDatabaseRefreshCollStmt *stmt)\n\nI'm wondering why you changed this function to return an ObjectAddress rather\nthan an Oid? There's no event trigger support for ALTER DATABASE, and the rest\nof similar utility commands also returns Oid.\n\nOther than that it all looks good to me!\n\n\n", "msg_date": "Thu, 10 Feb 2022 19:08:02 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 10.02.22 12:08, Julien Rouhaud wrote:\n>> +\t\t\t\t\t errhint(\"Rebuild all objects affected by collation in the template database and run \"\n>> +\t\t\t\t\t\t\t \"ALTER DATABASE %s REFRESH COLLATION VERSION, \"\n>> +\t\t\t\t\t\t\t \"or build PostgreSQL with the right library version.\",\n>> +\t\t\t\t\t\t\t quote_identifier(dbtemplate))));\n> \n> After a second read I think the messages are slightly ambiguous. 
What do you\n> think about specifying the problematic collation name and provider?\n> \n> For now we only support libc default collation so users will probably have to\n> reindex almost everything on that database (not sure if the versioning is more\n> fine grained on Windows), but we should probably still specify \"affected by\n> libc collation\" in the errhint so they have a chance to avoid unnecessary\n> reindex.\n\nI think accurate would be something like \"objects using the default \ncollation\", since objects using a specific collation are not meant, even \nif they use the same provider.\n\n>> +/*\n>> + * ALTER DATABASE name REFRESH COLLATION VERSION\n>> + */\n>> +ObjectAddress\n>> +AlterDatabaseRefreshColl(AlterDatabaseRefreshCollStmt *stmt)\n> \n> I'm wondering why you changed this function to return an ObjectAddress rather\n> than an Oid? There's no event trigger support for ALTER DATABASE, and the rest\n> of similar utility commands also returns Oid.\n\nHmm, I was looking at RenameDatabase() and AlterDatabaseOwner(), which \nreturn ObjectAddress.\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:07:02 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On Fri, Feb 11, 2022 at 12:07:02PM +0100, Peter Eisentraut wrote:\n> On 10.02.22 12:08, Julien Rouhaud wrote:\n> > > +\t\t\t\t\t errhint(\"Rebuild all objects affected by collation in the template database and run \"\n> > > +\t\t\t\t\t\t\t \"ALTER DATABASE %s REFRESH COLLATION VERSION, \"\n> > > +\t\t\t\t\t\t\t \"or build PostgreSQL with the right library version.\",\n> > > +\t\t\t\t\t\t\t quote_identifier(dbtemplate))));\n> > \n> > After a second read I think the messages are slightly ambiguous. 
What do you\n> > think about specifying the problematic collation name and provider?\n> > \n> > For now we only support libc default collation so users will probably have to\n> > reindex almost everything on that database (not sure if the versioning is more\n> > fine grained on Windows), but we should probably still specify \"affected by\n> > libc collation\" in the errhint so they have a chance to avoid unnecessary\n> > reindex.\n> \n> I think accurate would be something like \"objects using the default\n> collation\", since objects using a specific collation are not meant, even if\n> they use the same provider.\n\nTechnically is the objects explicitly use the same collation as the default\ncollation they should be impacted the same way, but agreed.\n\n> > > +/*\n> > > + * ALTER DATABASE name REFRESH COLLATION VERSION\n> > > + */\n> > > +ObjectAddress\n> > > +AlterDatabaseRefreshColl(AlterDatabaseRefreshCollStmt *stmt)\n> > \n> > I'm wondering why you changed this function to return an ObjectAddress rather\n> > than an Oid? There's no event trigger support for ALTER DATABASE, and the rest\n> > of similar utility commands also returns Oid.\n> \n> Hmm, I was looking at RenameDatabase() and AlterDatabaseOwner(), which\n> return ObjectAddress.\n\nApparently I managed to only check AlterDatabase and AlterDatabaseSet, which\nboth return an Oid. Maybe we could also update those two to also return an\nObjectAddress, for consistency?\n\n\n", "msg_date": "Fri, 11 Feb 2022 20:51:10 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 11.02.22 13:51, Julien Rouhaud wrote:\n>>> I'm wondering why you changed this function to return an ObjectAddress rather\n>>> than an Oid? 
There's no event trigger support for ALTER DATABASE, and the rest\n>>> of similar utility commands also returns Oid.\n>>\n>> Hmm, I was looking at RenameDatabase() and AlterDatabaseOwner(), which\n>> return ObjectAddress.\n> \n> Apparently I managed to only check AlterDatabase and AlterDatabaseSet, which\n> both return an Oid. Maybe we could also update those two to also return an\n> ObjectAddress, for consistency?\n\nI have committed this patch.\n\nI didn't address the above issue. I looked at it a bit, but I also \nfound other (non-database) object types that had a mix of different \nreturn types. It's not clear to me what this is all supposed to mean. \nIf no one is checking the return, they should really all be turned into \nvoid, IMO. Maybe this should be a separate discussion.\n\n\n", "msg_date": "Mon, 14 Feb 2022 09:55:19 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "Hi,\n\nOn Mon, Feb 14, 2022 at 09:55:19AM +0100, Peter Eisentraut wrote:\n> I have committed this patch.\n\nGreat! Do you plan to send a rebased version of the ICU default collation\nsoon or should I start looking at the current v4?\n\n> I didn't address the above issue. I looked at it a bit, but I also found\n> other (non-database) object types that had a mix of different return types.\n> It's not clear to me what this is all supposed to mean. If no one is\n> checking the return, they should really all be turned into void, IMO. 
Maybe\n> this should be a separate discussion.\n\nAgreed.\n\n\n", "msg_date": "Mon, 14 Feb 2022 17:14:35 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 14.02.22 10:14, Julien Rouhaud wrote:\n> Do you plan to send a rebased version of the ICU default collation\n> soon or should I start looking at the current v4?\n\nI will send an updated patch in the next few days.\n\n\n\n", "msg_date": "Mon, 14 Feb 2022 16:50:03 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Database-level collation version tracking" }, { "msg_contents": "On 2022-Feb-14, Peter Eisentraut wrote:\n\n> On 11.02.22 13:51, Julien Rouhaud wrote:\n> > > > I'm wondering why you changed this function to return an ObjectAddress rather\n> > > > than an Oid? There's no event trigger support for ALTER DATABASE, and the rest\n> > > > of similar utility commands also returns Oid.\n> > > \n> > > Hmm, I was looking at RenameDatabase() and AlterDatabaseOwner(), which\n> > > return ObjectAddress.\n> > \n> > Apparently I managed to only check AlterDatabase and AlterDatabaseSet, which\n> > both return an Oid. Maybe we could also update those two to also return an\n> > ObjectAddress, for consistency?\n\n> I didn't address the above issue. I looked at it a bit, but I also found\n> other (non-database) object types that had a mix of different return types.\n> It's not clear to me what this is all supposed to mean. If no one is\n> checking the return, they should really all be turned into void, IMO. Maybe\n> this should be a separate discussion.\n\nIIRC we changed the return types of all DDL back when we were doing the\nevent triggers work (first to OIDs and then to ObjectAddress), but we\ndidn't realize at the time that shared objects such as databases etc\nwere not going to be supported by event triggers.
So those particular\nchanges were for naught, but we never reverted them.\n\nMaybe it's OK to have all the functions supporting databases and\ntablespaces return void.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n<inflex> really, I see PHP as like a strange amalgamation of C, Perl, Shell\n<crab> inflex: you know that \"amalgam\" means \"mixture with mercury\",\n more or less, right?\n<crab> i.e., \"deadly poison\"\n\n\n", "msg_date": "Thu, 31 Mar 2022 11:14:29 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Database-level collation version tracking" } ]
[ { "msg_contents": "I chased down the failure that kittiwake has been showing since\n02b8048ba [1]. It's not hard to reproduce if you have an older\nDebian release and you build --with-libedit-preferred.\nManual experimentation shows that when these versions of libedit\ncomplete a string containing double quotes, they insist on\nbackslashing all the double quotes :-(. That is, if you have\na table \"foobar\" and you type\n\nselect * from \"foo<TAB>\n\nwhat you'll get is\n\nselect * from \\\"foobar\\\"\n\nDigging into the source code, I find that libedit versions prior\nto 3.1-20210522 are unshakably convinced that anything you are\ncompleting should be escaped per shell quoting rules :-(. AFAICT\nthere is no way to turn that off without replacing the *entire*\ntab completion infrastructure.\n\n3.1-20210522 changed this to the extent of disabling quoting\nif the application supplies a rl_attempted_completion_function,\nas we do. (Fine for us, sucks for anybody who did want shell\nquoting.)\n\nI'm not too sure about the relationship of Debian's version of\nlibedit to anyone else's. Apple's version, for one, lacks this\nbug. But it does appear that you don't want to use libedit on\nDebian unless you have a very late-model release.\n\nWe can paper over the regression failure by making the test\ncases allow backslashes, but I wonder if we need something\nin the documentation discouraging people from choosing these\nversions of libedit over readline.
They're pretty broken for\ncompletions involving double-quoted names even before 02b8048ba.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=kittiwake&dt=2022-01-31%2016%3A24%3A48\n\n\n", "msg_date": "Tue, 01 Feb 2022 16:30:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "psql tab completion versus Debian's libedit" }, { "msg_contents": "Hi,\n\nOn 2022-02-01 16:30:11 -0500, Tom Lane wrote:\n> I chased down the failure that kittiwake has been showing since\n> 02b8048ba [1].\n\nI just rebased my meson branch across the commit d33a81203e9. And on freebsd\nthe meson based build failed in the expanded tests, while autoconf succeeded.\n\nThe failure is:\n\nnot ok 22 - complete schema-qualified name\n# Failed test 'complete schema-qualified name'\n# at /tmp/cirrus-ci-build/src/bin/psql/t/010_tab_completion.pl line 236.\n# Actual output was \"tab \"\n# Did not match \"(?^:tab1 )\"\n\n\nI think this is caused by the feature flag detection being broken in the meson\nbranch - unrelated to your commit - ending up with falsely believing that none\nof the rl_* variables exist (below for more on that aspect).\n\nDo we care that the tests would fail when using a readline without any of the\nrl_* variables? I don't know if those even exist.\n\n\nThe reason for meson not detecting the variables is either an \"andres\" or\nfreebsd / readline issue. The tests fail with:\n\n/usr/local/include/readline/rltypedefs.h:71:36: error: unknown type name 'FILE'\ntypedef int rl_getc_func_t PARAMS((FILE *));\n ^\napparently the readline header on freebsd somehow has a dependency on stdio.h\nbeing included.\n\nLooks like it's easy enough to work around.
My local copy of readline.h (8.1\non debian sid) has an explicit stdio.h include, but it looks like that's a\ndebian addition...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 2 Feb 2022 18:10:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: psql tab completion versus Debian's libedit" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I think this is caused by the feature flag detection being broken in the meson\n> branch - unrelated to your commit - ending up with falsely believing that none\n> of the rl_* variables exist (below for more on that aspect).\n> Do we care that the tests would fail when using a readline without any of the\n> rl_* variables? I don't know if those even exist.\n\nHm, interesting. I reproduced this by #undef'ing\nHAVE_RL_COMPLETION_APPEND_CHARACTER in pg_config.h. It's not too\nsurprising, because without that, we have no way to prevent readline\nfrom appending a space to a completion. In the test at hand, that\ncauses two failures:\n\n1. After we do \"pub<TAB>\", we get \"public. \" not just \"public.\";\nand now, that is no longer part of the word-to-be-completed.\nThat breaks the test because FROM is no longer the previous word\nbut the one before that, so we won't try to do table completion.\n\n2. After we do \"tab<TAB>\", we normally get \"tab1 \", but since no\ncompletion is attempted, we should get just \"tab\". But we\nget \"tab \" because the dummy-completion code at the bottom\nof psql_completion fails to suppress adding a space. In general,\n*any* failed completion would nonetheless append a space.\n\nEach of these is sufficiently bad to prompt user complaints,\nI think, but we've seen none.
And I observe that every buildfarm\nanimal reports having rl_completion_append_character.\n\nI conclude that there are no extant versions of readline/libedit\nthat don't have rl_completion_append_character, so we could\ndrop that configure test and save a cycle or two.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Feb 2022 22:28:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: psql tab completion versus Debian's libedit" }, { "msg_contents": "On 2022-02-02 22:28:31 -0500, Tom Lane wrote:\n> I conclude that there are no extant versions of readline/libedit\n> that don't have rl_completion_append_character, so we could\n> drop that configure test and save a cycle or two.\n\nSounds good to me!\n\n\n", "msg_date": "Wed, 2 Feb 2022 19:33:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: psql tab completion versus Debian's libedit" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-02 22:28:31 -0500, Tom Lane wrote:\n>> I conclude that there are no extant versions of readline/libedit\n>> that don't have rl_completion_append_character, so we could\n>> drop that configure test and save a cycle or two.\n\n> Sounds good to me!\n\nOn it now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Feb 2022 22:40:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: psql tab completion versus Debian's libedit" } ]
[ { "msg_contents": "Hi,\n\nThis patch adds a new option (-J num, --jobs-per-disk=num) in \npg_upgrade to speed up copy mode.
This generates upto ${num} \n> processes per tablespace to copy segments of the same relfilenode \n> in parallel.\n> \n> This can help when you have many multi gigabyte tables (each segment \n> is 1GB by default) in different tablespaces (each tablespace in a \n> different disk) and multiple processors.\n> \n> In a customer's database (~20Tb) it went down from 6h to 4h 45min.\n> \n> It lacks documentation and I need help with WIN32 part of it, I created\n> this new mail to put the patch on the next commitfest.\n\nThe patch currently fails on cfbot due to warnings, likely related due to the\nwin32 issue: https://cirrus-ci.com/task/4566046517493760?logs=mingw_cross_warning#L388\n\nAs it's a new patch submitted to the last CF, hasn't gotten any review yet and\nmisses some platform support, it seems like there's no chance it can make it\ninto 15?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:34:31 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [WIP] Allow pg_upgrade to copy segments of the same relfilenode\n in parallel" }, { "msg_contents": "On Mon, Mar 21, 2022 at 05:34:31PM -0700, Andres Freund wrote:\n> Hi,\n> \n> On 2022-02-01 21:57:00 -0500, Jaime Casanova wrote:\n> > This patch adds a new option (-J num, --jobs-per-disk=num) in \n> > pg_upgrade to speed up copy mode. 
This generates upto ${num} \n> > processes per tablespace to copy segments of the same relfilenode \n> > in parallel.\n> > \n> > This can help when you have many multi gigabyte tables (each segment \n> > is 1GB by default) in different tablespaces (each tablespace in a \n> > different disk) and multiple processors.\n> > \n> > In a customer's database (~20Tb) it went down from 6h to 4h 45min.\n> > \n> > It lacks documentation and I need help with WIN32 part of it, I created\n> > this new mail to put the patch on the next commitfest.\n> \n> The patch currently fails on cfbot due to warnings, likely related due to the\n> win32 issue: https://cirrus-ci.com/task/4566046517493760?logs=mingw_cross_warning#L388\n> \n> As it's a new patch submitted to the last CF, hasn't gotten any review yet and\n> misses some platform support, it seems like there's no chance it can make it\n> into 15?\n> \n\nHi,\n\nBecause I have zero experience on the windows side of this, I will take\nsome time to complete that part.\n\nShould we move this to the next commitfest (and make 16 the target for\nthis)?\n\n-- \nJaime Casanova\nDirector de Servicios Profesionales\nSystemGuards - Consultores de PostgreSQL\n\n\n", "msg_date": "Sun, 27 Mar 2022 11:07:27 -0500", "msg_from": "Jaime Casanova <jcasanov@systemguards.com.ec>", "msg_from_op": true, "msg_subject": "Re: [WIP] Allow pg_upgrade to copy segments of the same relfilenode\n in parallel" }, { "msg_contents": "On Sun, Mar 27, 2022 at 11:07:27AM -0500, Jaime Casanova wrote:\n> > > It lacks documentation and I need help with WIN32 part of it, I created\n> > > this new mail to put the patch on the next commitfest.\n> > \n> > The patch currently fails on cfbot due to warnings, likely related due to the\n> > win32 issue: https://cirrus-ci.com/task/4566046517493760?logs=mingw_cross_warning#L388\n> > \n> > As it's a new patch submitted to the last CF, hasn't gotten any review yet and\n> > misses some platform support, it seems like there's no chance it 
can make it\n> > into 15?\n> \n> Because I have zero experience on the windows side of this, I will take\n> some time to complete that part.\n> \n> Should we move this to the next commitfest (and make 16 the target for\n> this)?\n\nDone.\n\nsrc/tools/ci/README may help test this under windows, but that's probably not enough\nto allow writing the win-specific parts.\n\nI guess you'll need to write tests for this..unfortunately that requires files\n>1GB in size, unless you recompile postgres :(\n\nIt may be good enough to write an 0002 patch meant for CI only, but not\nintended to be merged. That can create a 2300MB table in src/test/regress, and\nchange pg_upgrade to run with (or default to) multiple jobs per tablespace.\nMake sure it fails if the loop around relfilenodes doesn't work.\n\nI can't help with win32, but that would be enough to verify it if someone else\nfills in the windows parts.\n\n-- \nJustin\n\n\n", "msg_date": "Mon, 28 Mar 2022 09:33:19 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [WIP] Allow pg_upgrade to copy segments of the same relfilenode\n in parallel" }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3525/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". (Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:58:06 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [WIP] Allow pg_upgrade to copy segments of the same relfilenode\n in parallel" } ]
[ { "msg_contents": "I noticed $subject while looking at something involving SubLinks and\nSubPlans. It seems eab6b8b27eb removed the \"plan\" field from the\nSubPlan node struct definition, but the following line from\nexpression_tree_mutator():\n\n /* but not the sub-Plan itself, which is referenced as-is */\n\nand the following from expression_tree_walker():\n\n /* recurse into the testexpr, but not into the Plan */\n\nboth of which I think refer to that no-longer-existent field, appear\nto have survived multiple commits that moved the SubPlan expression\nprocessing code around.\n\nAttached patch removes those.\n\n-- \nAmit Langote\nEDB: http://www.enterprisedb.com", "msg_date": "Wed, 2 Feb 2022 17:07:39 +0900", "msg_from": "Amit Langote <amitlangote09@gmail.com>", "msg_from_op": true, "msg_subject": "obsolete reference to a SubPlan field" }, { "msg_contents": "On Wed, Feb 2, 2022 at 3:08 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> I noticed $subject while looking at something involving SubLinks and\n> SubPlans. It seems eab6b8b27eb removed the \"plan\" field from the\n> SubPlan node struct definition, but the following line from\n> expression_tree_mutator():\n>\n> /* but not the sub-Plan itself, which is referenced as-is */\n>\n> and the following from expression_tree_walker():\n>\n> /* recurse into the testexpr, but not into the Plan */\n>\n> both of which I think refer to that no-longer-existent field, appear\n> to have survived multiple commits that moved the SubPlan expression\n> processing code around.\n>\n> Attached patch removes those.\n\nLooks right to me.
Tom, any comments?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 13:31:15 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: obsolete reference to a SubPlan field" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Feb 2, 2022 at 3:08 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Attached patch removes those.\n\n> Looks right to me. Tom, any comments?\n\nI'm pretty sure I left those comments alone on purpose back in 2007,\nand I don't find simply removing them to be an improvement.\n\nIn principle, readers might expect that tree walkers/mutators\nwould descend to a SubPlan's query, as they do for a SubLink's\nquery. Calling out the fact that that doesn't happen seems\nuseful to me. If you don't like this particular wording of those\ncomments, we can discuss better wordings ...
but I doubt that\n> nothing-at-all is better.\n\nOK, well if it was intentional, then I do think some rewording is\njustified. It seems to me that those comments, and especially the one\nin expression_tree_mutator(), seem like they are talking about an\nactual pointer, rather than some other kind of logical link. Otherwise\nwhy is it sort of referenced along the way as we go through other\nthings that are all actual pointers? It certainly leads one to expect\na Plan * that isn't there.\n\nI think it's a fine idea to explain that, on a conceptual level, we're\njust walking the Plan data structure and the pointers which it\ncontains, and therefore won't reach down into a sub-Plan that is not\nsomething we actually point to. Or whatever words clarify the intent.\nBut a passing reference that sounds like a pointer when it isn't is\nworse than nothing at all.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Mar 2022 14:27:32 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: obsolete reference to a SubPlan field" } ]
[ { "msg_contents": "As part of the NSS patchset (and Secure Transport before that), I had to\nrefactor the SSL tests to handle different SSL libraries. The current tests\nand test module is quite tied to how OpenSSL works wrt setting up the server,\nthe attached refactors this and abstracts the OpenSSL specifics more like how\nthe rest of the codebase is set up.\n\nThe tight coupling is of course not a problem right now, but I think this patch\nhas more benefits making it a candidate for going in regardless of the fate of\nthe NSS patchset. This is essentially the 0002 patch from that patchset with\nadditional cleanup and documentation:\n\n* switch_server_cert takes a set of named parameters rather than a fixed set\nwith defaults depending on each other, which made adding ssl_passphrase to it\ncumbersome. It also adds readability IMO.\n\n* SSLServer is renamed SSL::Server, which in turn use SSL::Backend::X where X\nis the backend pointed to by with_ssl. Each backend will implement its own\nmodule which is responsible for setting up keys/certs and to resolve sslkey\nvalues to their full paths. The idea is that the namespace will also allow for\nan SSL::Client in the future when we implment running client tests against\ndifferent servers etc.\n\n* The modules are POD documented.\n\n* While not related to the refactor per se, the hardcoded number of planned\ntests is removed in favor of calling done_testing().\n\nWith this, adding a new SSL library is quite straightforward, I've done the\nlegwork to test that =)\n\nI opted for avoiding too invasive changes leaving the tests somewhat easy to\ncompare to back branches.\n\nThoughts?
I'm fairly sure there are many crimes against Perl in this patch,\nI'm happy to take pointers on how to improve that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 2 Feb 2022 14:26:02 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Refactoring SSL tests" }, { "msg_contents": "\nOn 2/2/22 08:26, Daniel Gustafsson wrote:\n> Thoughts? I'm fairly sure there are many crimes against Perl in this patch,\n> I'm happy to take pointers on how to improve that.\n\n\nIt feels a bit odd to me from a perl POV. I think it needs to more along\nthe lines of standard OO patterns. I'll take a stab at that based on\nthis, might be a few days.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 11:09:35 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Refactoring SSL tests" }, { "msg_contents": "> On 2 Feb 2022, at 17:09, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2/2/22 08:26, Daniel Gustafsson wrote:\n\n>> Thoughts? I'm fairly sure there are many crimes against Perl in this patch,\n>> I'm happy to take pointers on how to improve that.\n> \n> It feels a bit odd to me from a perl POV. I think it needs to more along\n> the lines of standard OO patterns. I'll take a stab at that based on\n> this, might be a few days.\n\nThat would be great, thanks!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 20:50:05 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Refactoring SSL tests" }, { "msg_contents": "On 2/2/22 14:50, Daniel Gustafsson wrote:\n>> On 2 Feb 2022, at 17:09, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> On 2/2/22 08:26, Daniel Gustafsson wrote:\n>>> Thoughts?
I'm fairly sure there are many crimes against Perl in this patch,\n>>> I'm happy to take pointers on how to improve that.\n>> It feels a bit odd to me from a perl POV. I think it needs to more along\n>> the lines of standard OO patterns. I'll take a stab at that based on\n>> this, might be a few days.\n> That would be great, thanks!\n>\n\nHere's the result of that surgery.  It's a little incomplete in that it\nneeds some POD adjustment, but I think the code is right - it passes\ntesting for me.\n\nOne of the advantages of this, apart from being more idiomatic, is that\nby avoiding the use of package level variables you can have two\nSSL::Server objects, one for OpenSSL and (eventually) one for NSS. This\nwas the original motivation for the recent install_path additions to\nPostgreSQL::Test::Cluster, so it complements that work nicely.\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com", "msg_date": "Mon, 7 Feb 2022 11:29:54 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Refactoring SSL tests" }, { "msg_contents": "> On 7 Feb 2022, at 17:29, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2/2/22 14:50, Daniel Gustafsson wrote:\n\n>>> On 2 Feb 2022, at 17:09, Andrew Dunstan <andrew@dunslane.net> wrote:\n>>> On 2/2/22 08:26, Daniel Gustafsson wrote:\n>>>> Thoughts? I'm fairly sure there are many crimes against Perl in this patch,\n>>>> I'm happy to take pointers on how to improve that.\n>>> It feels a bit odd to me from a perl POV. I think it needs to more along\n>>> the lines of standard OO patterns. I'll take a stab at that based on\n>>> this, might be a few days.\n>> That would be great, thanks!\n> \n> Here's the result of that surgery.
It's a little incomplete in that it\n> needs some POD adjustment, but I think the code is right - it passes\n> testing for me.\n\nConfirmed, it passes all tests for me as well.\n\n> One of the advantages of this, apart from being more idiomatic, is that\n> by avoiding the use of package level variables you can have two\n> SSL::Server objects, one for OpenSSL and (eventually) one for NSS. This\n> was the original motivation for the recent install_path additions to\n> PostgreSQL::Test::Cluster, so it complements that work nicely.\n\nAgreed, this version is a clear improvement over my attempt. Thanks!\n\nThe attached v2 takes a stab at fixing up the POD sections.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Tue, 8 Feb 2022 15:24:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Refactoring SSL tests" }, { "msg_contents": "\nOn 2/8/22 09:24, Daniel Gustafsson wrote:\n>\n> The attached v2 takes a stab at fixing up the POD sections.\n\n\nThere a capitalization typo in SSL/Backend/OpenSSL.pm - looks like\nthat's my fault:\n\n\n+  my $backend = SSL::backend::OpenSSL->new();\n\n\nAlso, I think we should document that SSL::Server::new() takes an\noptional flavor parameter, in the absence of which it uses\n$ENV{with_ssl}, and that 'openssl' is the only currently supported value.\n\n\ncheers\n\n\nandrew\n\n\n-- \n\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Tue, 8 Feb 2022 10:46:29 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Refactoring SSL tests" }, { "msg_contents": "> On 8 Feb 2022, at 16:46, Andrew Dunstan <andrew@dunslane.net> wrote:\n\n> There a capitalization typo in SSL/Backend/OpenSSL.pm - looks like\n> that's my fault:\n> \n> + my $backend = SSL::backend::OpenSSL->new();\n\nFixed.\n\n> Also, I think we should document that SSL::Server::new() takes an\n> optional flavor parameter, in the absence of which it
uses\n> $ENV{with_ssl}, and that 'openssl' is the only currently supported value.\n\nGood point, done in the attached.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 9 Feb 2022 14:11:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Refactoring SSL tests" }, { "msg_contents": "\nOn 2/9/22 08:11, Daniel Gustafsson wrote:\n>> On 8 Feb 2022, at 16:46, Andrew Dunstan <andrew@dunslane.net> wrote:\n>> There a capitalization typo in SSL/Backend/OpenSSL.pm - looks like\n>> that's my fault:\n>>\n>> + my $backend = SSL::backend::OpenSSL->new();\n> Fixed.\n>\n>> Also, I think we should document that SSL::Server::new() takes an\n>> optional flavor parameter, in the absence of which it uses\n>> $ENV{with_ssl}, and that 'openssl' is the only currently supported value.\n> Good point, done in the attached.\n\n\n\nLGTM\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 08:28:59 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Refactoring SSL tests" }, { "msg_contents": "> On 9 Feb 2022, at 14:28, Andrew Dunstan <andrew@dunslane.net> wrote:\n> On 2/9/22 08:11, Daniel Gustafsson wrote:\n\n>> Good point, done in the attached.\n> \n> LGTM\n\nNow that the recent changes to TAP and SSL tests have settled, I took another\npass at this. After rebasing and fixing and polishing and taking it for\nmultiple spins on the CI I've pushed this today to master.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 26 Mar 2022 22:08:25 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Refactoring SSL tests" } ]
[ { "msg_contents": "As part of the NSS patchset, quite a few bugs (and NSS quirks) were found by\ninspecting STDERR in connect_ok and require it to be empty. This is not really\nNSS specific, and could help find issues in other libraries as well so I\npropose to apply it regardless of the fate of the NSS patchset.\n\n(The change in the SCRAM tests stems from this now making all testruns have the\nsame number of tests.
While I prefer to not plan at all and instead run\ndone_testing(), doing that consistently is for another patch, keeping this with\nthe remainder of the suites.)\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 2 Feb 2022 15:40:39 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "On 2022-Feb-02, Daniel Gustafsson wrote:\n\n> As part of the NSS patchset, quite a few bugs (and NSS quirks) were found by\n> inspecting STDERR in connect_ok and require it to be empty. This is not really\n> NSS specific, and could help find issues in other libraries as well so I\n> propose to apply it regardless of the fate of the NSS patchset.\n>\n> (The change in the SCRAM tests stems from this now making all testruns have the\n> same number of tests.
While I prefer to not plan at all and instead run\n> done_testing(), doing that consistently is for another patch, keeping this with\n> the remainder of the suites.)\n\nSince commit 405f32fc4960 we can rely on subtests for this, so perhaps\nwe should group all the tests in connect_ok() (including your new one)\ninto a subtest; then each connect_ok() calls count as a single test, and\nwe can add more tests in it without having to change the callers.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\nAl principio era UNIX, y UNIX habló y dijo: \"Hello world\\n\".\nNo dijo \"Hello New Jersey\\n\", ni \"Hello USA\\n\".\n\n\n", "msg_date": "Wed, 2 Feb 2022 12:01:14 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> As part of the NSS patchset, quite a few bugs (and NSS quirks) were found by\n> inspecting STDERR in connect_ok and require it to be empty. This is not really\n> NSS specific, and could help find issues in other libraries as well so I\n> propose to apply it regardless of the fate of the NSS patchset.\n\n+1\n\n> (The change in the SCRAM tests stems from this now making all testruns have the\n> same number of tests. While I prefer to not plan at all and instead run\n> done_testing(), doing that consistently is for another patch, keeping this with\n> the remainder of the suites.)\n\n+1 to that too, counting the tests is a pretty useless exercise.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 02 Feb 2022 10:01:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "> On 2 Feb 2022, at 16:01, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> On 2022-Feb-02, Daniel Gustafsson wrote:\n> \n>> As part of the NSS patchset, quite a few bugs (and NSS quirks) were found by\n>> inspecting STDERR in connect_ok and require it to be empty. This is not really\n>> NSS specific, and could help find issues in other libraries as well so I\n>> propose to apply it regardless of the fate of the NSS patchset.\n>> \n>> (The change in the SCRAM tests stems from this now making all testruns have the\n>> same number of tests. While I prefer to not plan at all and instead run\n>> done_testing(), doing that consistently is for another patch, keeping this with\n>> the remainder of the suites.)\n> \n> Since commit 405f32fc4960 we can rely on subtests for this, so perhaps\n> we should group all the tests in connect_ok() (including your new one)\n> into a subtest; then each connect_ok() calls count as a single test, and\n> we can add more tests in it without having to change the callers.\n\nDisclaimer: longer term I would prefer to remove test plan counting like above\nand Toms comment downthread. This is just to make this patch more palatable on\nthe way there.\n\nMaking this a subtest in order to not having to change the callers, turns the\npatch into the attached.
For this we must group the new test with one already\nexisting test, if we group more into it (which would make more sense) then we\nneed to change callers as that reduce the testcount across the tree.\n\nThis makes the patch smaller, but not more readable IMO (given that we can't\nmake it fully use subtests), so my preference is still the v1 patch.\n\nOr did I misunderstand you?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 2 Feb 2022 16:42:09 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "On 2022-Feb-02, Daniel Gustafsson wrote:\n\n> Making this a subtest in order to not having to change the callers, turns the\n> patch into the attached. For this we must group the new test with one already\n> existing test, if we group more into it (which would make more sense) then we\n> need to change callers as that reduce the testcount across the tree.\n\nWell, it wasn't my intention that this patch would not have to change\nthe callers. What I was thinking about is making connect_ok() have a\nsubtest for *all* the tests it contains (and changing the callers to\nmatch), so that *in the future* we can add more tests there without\nhaving to change the callers *again*.\n\n> Or did I misunderstand you?\n\nI think you did.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Cómo ponemos nuestros dedos en la arcilla del otro. Eso es la amistad; jugar\nal alfarero y ver qué formas se pueden sacar del otro\" (C. Halloway en\nLa Feria de las Tinieblas, R. 
Bradbury)\n\n\n", "msg_date": "Wed, 2 Feb 2022 12:48:22 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>\n>> While I prefer to not plan at all and instead run done_testing(),\n>> doing that consistently is for another patch, keeping this with the\n>> remainder of the suites.\n>\n> +1 to that too, counting the tests is a pretty useless exercise.\n\nRather than waiting for Someone™ to find a suitably-shaped tuit to do a\nwhole sweep converting everything to done_testing(), I think we should\nmake a habit of converting individual scripts whenever a change breaks\nthe count.\n\n- ilmari\n\n\n", "msg_date": "Wed, 02 Feb 2022 16:01:18 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "\nOn 2/2/22 11:01, Dagfinn Ilmari Mannsåker wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>\n>>> While I prefer to not plan at all and instead run done_testing(),\n>>> doing that consistently is for another patch, keeping this with the\n>>> remainder of the suites.\n>> +1 to that too, counting the tests is a pretty useless exercise.\n> Rather than waiting for Someone™ to find a suitably-shaped tuit to do a\n> whole sweep converting everything to done_testing(), I think we should\n> make a habit of converting individual scripts whenever a change breaks\n> the count.\n\n\nOr when anyone edits a script, even if the number of tests doesn't get\nbroken.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 11:12:27 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: Ensure 
that STDERR is empty during connect_ok" }, { "msg_contents": "On 2022-Feb-02, Andrew Dunstan wrote:\n\n> On 2/2/22 11:01, Dagfinn Ilmari Mannsåker wrote:\n\n> > Rather than waiting for Someone™ to find a suitably-shaped tuit to do a\n> > whole sweep converting everything to done_testing(), I think we should\n> > make a habit of converting individual scripts whenever a change breaks\n> > the count.\n\n+1.\n\n> Or when anyone edits a script, even if the number of tests doesn't get\n> broken.\n\nSure, if the committer is up to doing it, but let's not make that a hard\nrequirement for any commit that touches a test script.\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"Ni aún el genio muy grande llegaría muy lejos\nsi tuviera que sacarlo todo de su propio interior\" (Goethe)\n\n\n", "msg_date": "Wed, 2 Feb 2022 13:24:09 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "On Wed, Feb 02, 2022 at 04:42:09PM +0100, Daniel Gustafsson wrote:\n> Disclaimer: longer term I would prefer to remove test plan counting like above\n> and Toms comment downthread.\n\nThat would be really nice.\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 15:52:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "On Wed, Feb 02, 2022 at 03:40:39PM +0100, Daniel Gustafsson wrote:\n> As part of the NSS patchset, quite a few bugs (and NSS quirks) were found by\n> inspecting STDERR in connect_ok and require it to be empty. This is not really\n> NSS specific, and could help find issues in other libraries as well so I\n> propose to apply it regardless of the fate of the NSS patchset.\n\nSounds sensible from here. 
Thanks!\n\nAll the code paths seem to be covered, at quick glance.\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 16:00:58 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" }, { "msg_contents": "> On 6 Feb 2022, at 08:00, Michael Paquier <michael@paquier.xyz> wrote:\n> \n> On Wed, Feb 02, 2022 at 03:40:39PM +0100, Daniel Gustafsson wrote:\n>> As part of the NSS patchset, quite a few bugs (and NSS quirks) were found by\n>> inspecting STDERR in connect_ok and require it to be empty. This is not really\n>> NSS specific, and could help find issues in other libraries as well so I\n>> propose to apply it regardless of the fate of the NSS patchset.\n> \n> Sounds sensible from here. Thanks!\n> \n> All the code paths seem to be covered, at quick glance.\n\nApplied, minus the changes to the test plans which are no longer required after\n549ec201d6.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 15 Feb 2022 12:52:00 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Ensure that STDERR is empty during connect_ok" } ]
[ { "msg_contents": "Hi,\n\nHere are some improvements we can make to pg_receivewal that were\nemanated after working with it in production environments:\n\n1) As a user, I, sometimes, want my pg_receivewal to start streaming\nfrom the LSN that I provide as an input i.e. startpos instead of it\ncalculating the stream start position 1) from its target directory or\n2) from its replication slot's restart_lsn or 3) after sending\nIDENTIFY_SYSTEM on to the primary.\nThis will particularly be useful when the primary is down for some\ntime (for whatever reasons) and the WAL files that are required by the\npg_receivewal may have been removed by it (I know this situation is a\nbit messy, but it is quite possible in production environments). Then,\nthe pg_receivewal will\ncalculate the start position from its target directory and request the\nprimary with it, which the primary may not have. I have to intervene\nand manually delete/move the WAL files in the pg_receivewal target\ndirectory and restart the pg_receivewal so that it can continue.\nInstead, if pg_receivewal can accept a startpos as an option, it can\njust go ahead and stream from the primary.\n\n2) Currently, RECONNECT_SLEEP_TIME is 5sec - but I may want to have\nmore reconnect time as I know that the primary can go down at any time\nfor whatever reasons in production environments which can take some\ntime till I bring up primary and I don't want to waste compute cycles\nin the node on which pg_receivewal is running and I should be able to\njust set it to a higher value, say 5 min or so, after which\npg_receivewal can try to perform StreamLog(); and attempt connection\nto primary.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 2 Feb 2022 20:53:13 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "pg_receivewal - couple of improvements" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 02, 2022 at 08:53:13PM +0530, Bharath Rupireddy 
wrote:\n> \n> Here are some improvements we can make to pg_receivewal that were\n> emanated after working with it in production environments:\n> \n> 1) As a user, I, sometimes, want my pg_receivewal to start streaming\n> from the LSN that I provide as an input i.e. startpos instead of it\n> calculating the stream start position 1) from its target directory or\n> 2) from its replication slot's restart_lsn or 3) after sending\n> IDENTIFY_SYSTEM on to the primary.\n\nThis is already being discussed (at least part of) as of\nhttps://www.postgresql.org/message-id/flat/18708360.4lzOvYHigE%40aivenronan.\n\n\n", "msg_date": "Wed, 2 Feb 2022 23:35:27 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Wed, Feb 2, 2022 at 9:05 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Hi,\n>\n> On Wed, Feb 02, 2022 at 08:53:13PM +0530, Bharath Rupireddy wrote:\n> >\n> > Here are some improvements we can make to pg_receivewal that were\n> > emanated after working with it in production environments:\n> >\n> > 1) As a user, I, sometimes, want my pg_receivewal to start streaming\n> > from the LSN that I provide as an input i.e. 
startpos instead of it\n> > calculating the stream start position 1) from its target directory or\n> > 2) from its replication slot's restart_lsn or 3) after sending\n> > IDENTIFY_SYSTEM on to the primary.\n>\n> This is already being discussed (at least part of) as of\n> https://www.postgresql.org/message-id/flat/18708360.4lzOvYHigE%40aivenronan.\n\nFYI that thread is closed, it committed the change (f61e1dd [1]) that\npg_receivewal can read from its replication slot restart lsn.\n\nI know that providing the start pos as an option came up there [2],\nbut I wanted to start the discussion fresh as that thread got closed.\n\n[1]\ncommit f61e1dd2cee6b1a1da75c2bb0ca3bc72f18748c1\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: Tue Oct 26 09:30:37 2021 +0900\n\n Allow pg_receivewal to stream from a slot's restart LSN\n\n Author: Ronan Dunklau\n Reviewed-by: Kyotaro Horiguchi, Michael Paquier, Bharath Rupireddy\n Discussion: https://postgr.es/m/18708360.4lzOvYHigE@aivenronan\n\n[2] https://www.postgresql.org/message-id/20210902.144554.1303720268994714850.horikyota.ntt%40gmail.com\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Wed, 2 Feb 2022 21:14:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Wed, Feb 02, 2022 at 09:14:03PM +0530, Bharath Rupireddy wrote:\n> \n> FYI that thread is closed, it committed the change (f61e1dd [1]) that\n> pg_receivewal can read from its replication slot restart lsn.\n> \n> I know that providing the start pos as an option came up there [2],\n> but I wanted to start the discussion fresh as that thread got closed.\n\nAh sorry I misunderstood your email.\n\nI'm not sure it's a good idea. 
If you have missing WALs in your target\ndirectory but have an alternative backup location, you will have to restore the\nWAL from that alternative location anyway, so I'm not sure how accepting a\ndifferent start position is going to help in that scenario. On the other hand\nallowing a position at the command line can also lead to accepting a bogus\nposition, which could possibly make things worse.\n\n> 2) Currently, RECONNECT_SLEEP_TIME is 5sec - but I may want to have\n> more reconnect time as I know that the primary can go down at any time\n> for whatever reasons in production environments which can take some\n> time till I bring up primary and I don't want to waste compute cycles\n> in the node on which pg_receivewal is running\n\nI don't think that attempting a connection is really costly. Also, increasing\nthis retry time also increases the amount of time you're not streaming WALs,\nand thus the amount of data you can lose so I'm not sure that's actually a good\nidea. But you might also want to make it more aggressive, so no objection to\nmake it configurable.\n\n\n", "msg_date": "Wed, 2 Feb 2022 23:57:51 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Wed, Feb 2, 2022 at 9:28 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Feb 02, 2022 at 09:14:03PM +0530, Bharath Rupireddy wrote:\n> >\n> > FYI that thread is closed, it committed the change (f61e1dd [1]) that\n> > pg_receivewal can read from its replication slot restart lsn.\n> >\n> > I know that providing the start pos as an option came up there [2],\n> > but I wanted to start the discussion fresh as that thread got closed.\n>\n> Ah sorry I misunderstood your email.\n>\n> I'm not sure it's a good idea. 
If you have missing WALs in your target\n> directory but have an alternative backup location, you will have to restore the\n> WAL from that alternative location anyway, so I'm not sure how accepting a\n> different start position is going to help in that scenario. On the other hand\n> allowing a position at the command line can also lead to accepting a bogus\n> position, which could possibly make things worse.\n\nIsn't complex for anyone to go to the archive location which involves\nextra steps - getting authentication tokens, searching there for the\nrequired WAL file, downloading it, unzipping it, copying back to\npg_receivewal node etc. in production environments? You know, this\nwill just be problematic and adds more time for bringing up the\npg_receivewal. Instead if I know that the latest checkpoint LSN and\narchived WAL file from the primary, I can just provide the startpos\n(probably the last checkpoint LSN) to pg_receivewal so that it can\ncontinue getting the WAL records from primary, avoiding the whole\nbunch of the manual work that I had to do.\n\n> > 2) Currently, RECONNECT_SLEEP_TIME is 5sec - but I may want to have\n> > more reconnect time as I know that the primary can go down at any time\n> > for whatever reasons in production environments which can take some\n> > time till I bring up primary and I don't want to waste compute cycles\n> > in the node on which pg_receivewal is running\n>\n> I don't think that attempting a connection is really costly. Also, increasing\n> this retry time also increases the amount of time you're not streaming WALs,\n> and thus the amount of data you can lose so I'm not sure that's actually a good\n> idea. 
But you might also want to make it more aggressive, so no objection to\n> make it configurable.\n\nYeah, making it configurable helps tune the reconnect time as per the\nrequirements.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Thu, 3 Feb 2022 18:40:55 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Thu, Feb 03, 2022 at 06:40:55PM +0530, Bharath Rupireddy wrote:\n> \n> Isn't complex for anyone to go to the archive location which involves\n> extra steps - getting authentication tokens, searching there for the\n> required WAL file, downloading it, unzipping it, copying back to\n> pg_receivewal node etc. in production environments? You know, this\n> will just be problematic and adds more time for bringing up the\n> pg_receivewal. Instead if I know that the latest checkpoint LSN and\n> archived WAL file from the primary, I can just provide the startpos\n> (probably the last checkpoint LSN) to pg_receivewal so that it can\n> continue getting the WAL records from primary, avoiding the whole\n> bunch of the manual work that I had to do.\n\nI don't get it. If you're missing WAL it means that will you have to do that\ntedious manual work to retrieve them no matter what. So on top of that tedious\nwork, you also have to make sure that you don't provide a bogus start position.\n\nAlso, doesn't this scenario implies that you have both archive_command and\npg_receivewal for storing your WALs elsewhere? 
In my experience this isn't\nreally common.\n\nIf you want to make sure you won't have to do that tedious work of retrieving\nthe WALs from a different location, you should probably rely on the facilities\nto make sure it won't happen, like using a replication slots and monitoring the\npg_wal usage.\n\nMaybe that's a good idea but I'm still having a hard time imagining a scenario\nwhere it would actually be a good idea.\n\n\n", "msg_date": "Thu, 3 Feb 2022 22:01:42 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Thu, Feb 03, 2022 at 10:01:42PM +0800, Julien Rouhaud wrote:\n> I don't get it. If you're missing WAL it means that will you have to do that\n> tedious manual work to retrieve them no matter what. So on top of that tedious\n> work, you also have to make sure that you don't provide a bogus start position.\n\nI may be wrong in saying that, but the primary use case I have seen\nfor pg_receivewal is a service integration for archiving.\n\n> Maybe that's a good idea but I'm still having a hard time imagining a scenario\n> where it would actually be a good idea.\n\nWith the defaults that we have now in place (current LSN location,\ncurrent slot's location or the archive location), I am not really\nconvinced that we need more control in this area with the proposed\noption.\n--\nMichael", "msg_date": "Sun, 6 Feb 2022 15:46:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Sun, Feb 6, 2022 at 12:16 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Feb 03, 2022 at 10:01:42PM +0800, Julien Rouhaud wrote:\n> > I don't get it. If you're missing WAL it means that will you have to do that\n> > tedious manual work to retrieve them no matter what. 
So on top of that tedious\n> > work, you also have to make sure that you don't provide a bogus start position.\n>\n> I may be wrong in saying that, but the primary use case I have seen\n> for pg_receivewal is a service integration for archiving.\n\nNot only for archiving, but it can also be used as a\nsynchronous/asynchronous standby.\n\n> > Maybe that's a good idea but I'm still having a hard time imagining a scenario\n> > where it would actually be a good idea.\n>\n> With the defaults that we have now in place (current LSN location,\n> current slot's location or the archive location), I am not really\n> convinced that we need more control in this area with the proposed\n> option.\n\nHaving the start position as an option for pg_receivewal can be useful\nin scenarios where the LSN/WAL file calculated from the pg_receivwal's\ntarget directory is removed by the primary (for whatever reasons). In\nsuch scenarios, we have to manually remove the WAL files(a risky thing\nIMO) in the pg_receivwal's target directory so that the pg_receivewal\nwill calculate the next start position from the slot info and then\nfrom the server position via IDENTIFY_SYSTEM command.\n\nWith the start position as an option, users can just provide the LSN\nfrom which they want to stream the WAL, in the above case, it can be\nfrom primary's latest checkpoint LSN.\n\nI still feel that the start position as an option would be a good\naddition here, so that users can choose the start position.\n\nIf others don't agree with the start position as an option, at least\nhaving the current start position calculation (first, from\npg_receivewal's target directory, if not, from slot's restart_lsn, if\nnot, from server's identifiy_system command) as a user specified\noption will be of great help. 
Users can say, 'calculate start position\nfrom target directory or slot's restart_lsn or server's\nidentify_system command).\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 6 Feb 2022 13:01:41 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Sun, Feb 06, 2022 at 01:01:41PM +0530, Bharath Rupireddy wrote:\n> With the start position as an option, users can just provide the LSN\n> from which they want to stream the WAL, in the above case, it can be\n> from primary's latest checkpoint LSN.\n\nThis still strikes me as a dangerous thing to have, prone to errors\nwith a bunch of base backups wasted if one does a mistake as it would\nbe very easy to cause holes in the WAL stored, until one has to\nredeploy from a base backup in urgency. This kind of control is\nprovided by replication slots for new locations, and the current\narchive location if anything is stored, so I would leave it at that.\n\nOn top of that, this kind of control is prone to more race conditions\nwith the backend, as a concurrent checkpoint could make the LSN you\nare looking for irrelevant.\n--\nMichael", "msg_date": "Mon, 7 Feb 2022 11:53:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "On Mon, Feb 7, 2022 at 8:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sun, Feb 06, 2022 at 01:01:41PM +0530, Bharath Rupireddy wrote:\n> > With the start position as an option, users can just provide the LSN\n> > from which they want to stream the WAL, in the above case, it can be\n> > from primary's latest checkpoint LSN.\n>\n> This still strikes me as a dangerous thing to have, prone to errors\n> with a bunch of base backups wasted if one does a mistake as it would\n> be very easy to cause holes in the WAL stored, until one has to\n> 
redeploy from a base backup in urgency. This kind of control is\n> provided by replication slots for new locations, and the current\n> archive location if anything is stored, so I would leave it at that.\n\nWhat if someone doesn't use pg_receivewal as an archive location? The\npg_receivewal can also be used for synchronous replication quorum\nright? In this situation, I don't mind if some of the WAL files are\nmissing in pg_receivewal's target directory, but I don't want to do\nthe manual work of getting the WAL files to the pg_receivewal's target\ndirectory from my archive location to make the pg_receivewal up and\nconnect with primary again? The primary can still remove the WAL files\nneeded by pg_receivewal (after the max_slot_wal_keep_size limit). If I\ncan tell pg_receivewal where to get the start position, then that will\nbe a good idea.\n\n> On top of that, this kind of control is prone to more race conditions\n> with the backend, as a concurrent checkpoint could make the LSN you\n> are looking for irrelevant.\n\nI understand that having a start position as an option is more\nerror-prone and creates holes in the WAL file. Why can't we allow\nusers to choose the current start position calculation of the\npg_receivewal? Users can choose to tell pg_receivewal either to get\nstart position from its target directory or from its slot's\nrestart_lsn or from the server's identify_system command. Thoughts?\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 7 Feb 2022 12:03:03 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal - couple of improvements" }, { "msg_contents": "Hi,\n\nOn Mon, Feb 07, 2022 at 12:03:03PM +0530, Bharath Rupireddy wrote:\n> \n> What if someone doesn't use pg_receivewal as an archive location? The\n> pg_receivewal can also be used for synchronous replication quorum\n> right? 
In this situation, I don't mind if some of the WAL files are\n> missing in pg_receivewal's target directory\n\nThose two seem entirely incompatible, why would you have a synchronous\npg_receivewal that would ask for records removed by the primary, even if part\nof synchronous quorum, apart from inadequate (and dangerous) configuration?\n\nAlso, in which scenario exactly would you be willing to pay a huge overhead to\nmake sure that all the WAL records are safely transferred to one or multiple\nalternative location but at the same time don't mind if you're missing some WAL\nsegments?\n\n\n", "msg_date": "Mon, 7 Feb 2022 14:45:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal - couple of improvements" } ]
[ { "msg_contents": "Hi,\n\nOn windows cfbot currently regularly hangs / times out. Presumably this is due\nto the issues discussed in https://postgr.es/m/CA%2BhUKG%2BG5DUNJfdE-qusq5pcj6omYTuWmmFuxCvs%3Dq1jNjkKKA%40mail.gmail.com\nwhich lead to reverting [1] some networking related changes everywhere but\nmaster.\n\nBut it's hard to tell - because the entire test task times out, we don't get\nto see debugging information.\n\nIn earlier versions of the CI script I had tests run under a timeout command,\nthat killed the entire test run. I found that to be helpful when working on\nAIO. But I removed that, in an attempt to simplify things, before\nsubmitting. Turns out it was needed complexity.\n\nThe attached test adds a timeout (using git's timeout binary) to all vcregress\ninvocations. I've not re-added it to the other OSs, but I'm on the fence about\ndoing so.\n\nThe diff is a bit larger than one might think necessary: Yaml doesn't like % -\nfrom the windows command variable syntax - at the start of an unquoted\nstring...\n\n\nSeparately, we should probably make Cluster.pm::psql() etc always use a\n\"fallback\" timeout (rather than just when the test writer thought it's\nnecessary). Or perhaps Utils.pm's INIT should set up a timer after which an\nindividual test is terminated?\n\n\nGreetings,\n\nAndres Freund\n\n[1]\ncommit 75674c7ec1b1607e7013b5cebcb22d9c8b4b2cb6\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2022-01-25 12:17:40 -0500\n\n Revert \"graceful shutdown\" changes for Windows, in back branches only.\n\n This reverts commits 6051857fc and ed52c3707, but only in the back\n branches. Further testing has shown that while those changes do fix\n some things, they also break others; in particular, it looks like\n walreceivers fail to detect walsender-initiated connection close\n reliably if the walsender shuts down this way. 
We'll keep trying to\n improve matters in HEAD, but it now seems unwise to push these changes\n into stable releases.\n\n Discussion: https://postgr.es/m/CA+hUKG+OeoETZQ=Qw5Ub5h3tmwQhBmDA=nuNO3KG=zWfUypFAw@mail.gmail.com", "msg_date": "Wed, 2 Feb 2022 10:31:07 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "ci/cfbot: run windows tests under a timeout" }, { "msg_contents": "Hi,\n\nOn 2022-02-02 10:31:07 -0800, Andres Freund wrote:\n> The attached test adds a timeout (using git's timeout binary) to all vcregress\n> invocations. I've not re-added it to the other OSs, but I'm on the fence about\n> doing so.\n\nI've pushed this now.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 2 Feb 2022 17:38:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: ci/cfbot: run windows tests under a timeout" } ]
[ { "msg_contents": "The Postgres community is great at diagnosing problems and giving users\nfeedback. In most cases, we can either diagnose a problem and give a\nfix, or at least give users a hint at finding the cause.\n\nHowever, there is a class of problems that are very hard to help with,\nand I have perhaps seen an increasing number of them recently, e.g.:\n\n\thttps://www.postgresql.org/message-id/17384-f50f2eedf541e512%40postgresql.org\n\thttps://www.postgresql.org/message-id/CALTnk7uOrMztNfzjNOZe3TdquAXDPD3vZKjWFWj%3D-Fv-gmROUQ%40mail.gmail.com\n\nI consider these as problems that need digging to find the cause, and\nusers are usually unable to do sufficient digging, and we don't have\ntime to give them instructions, so they never get a reply.\n\nIs there something we can do to improve this situation? Should we just\ntell them they need to hire a Postgres expert? I assume these are users\nwho do not already have access to such experts.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Wed, 2 Feb 2022 19:35:36 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Unclear problem reports" }, { "msg_contents": "On Wed, Feb 2, 2022 at 5:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> I consider these as problems that need digging to find the cause, and\n> users are usually unable to do sufficient digging, and we don't have\n> time to give them instructions, so they never get a reply.\n>\n> Is there something we can do to improve this situation? Should we just\n> tell them they need to hire a Postgres expert? I assume these are users\n> who do not already have access to such experts.\n>\n>\nHave pg_lister queue up a check for, say, two or three days after the bug\nreporting form is filled out. 
If the report hasn't been responded to by\nsomeone other than the OP send out a reply that basically says:\n\nWe're sorry your message hasn't yet attracted a response. If your report\nfalls into the category of \"tech support for a malfunctioning server\", and\nthis includes error messages that are difficult or impossible to replicate\nin a development environment (maybe give some examples), you may wish to\nconsider eliciting paid professional help. Please see this page on our\nwebsite for a directory of companies that provide such services. The\nPostgreSQL Core Project itself refrains from making recommendations since\nmany, if not all, of these companies contribute back to the project in\norder to keep it both free and open source.\n\nI would also consider adding a form that directs messages to pg_admin\ninstead of pg_bugs. On that form we could put the above message upfront -\nbasically saying that here is a place to ask for help but the core project\n(this website) doesn't directly provide such services: so if you don't\nreceive a reply, or your needs are immediate, consider enlisting a paid\nsupport from our directory.\n\nThe problem with the separate form is that we would need users to\nself-select to report their problem to the support list instead of using a\nbug reporting form. Neither of the mentioned emails are good examples of\nbug reports. Either we make it easier and hope self-selection works, or\njust resign ourselves to seeing these messages on -bugs and use the first\noption to at least be a bit more responsive. 
The risks related to having a\nrote response, namely it not really being applicable to the situation, seem\nminimal and others seeing that response (assuming we'd send it to the list\nand not just the OP) would likely encourage someone to at least give better\nsuggestions for next steps should that be necessary.\n\nDavid J.\n\nOn Wed, Feb 2, 2022 at 5:35 PM Bruce Momjian <bruce@momjian.us> wrote:\nI consider these as problems that need digging to find the cause, and\nusers are usually unable to do sufficient digging, and we don't have\ntime to give them instructions, so they never get a reply.\n\nIs there something we can do to improve this situation?  Should we just\ntell them they need to hire a Postgres expert?  I assume these are users\nwho do not already have access to such experts.
", "msg_date": "Wed, 2 Feb 2022 19:21:19 -0700", "msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 02, 2022 at 07:35:36PM -0500, Bruce Momjian wrote:\n> The Postgres community is great at diagnosing problems and giving users\n> feedback. 
In most cases, we can either diagnose a problem and give a\n> fix, or at least give users a hint at finding the cause.\n> \n> However, there is a class of problems that are very hard to help with,\n> and I have perhaps seen an increasing number of them recently, e.g.:\n> \n> \thttps://www.postgresql.org/message-id/17384-f50f2eedf541e512%40postgresql.org\n> \thttps://www.postgresql.org/message-id/CALTnk7uOrMztNfzjNOZe3TdquAXDPD3vZKjWFWj%3D-Fv-gmROUQ%40mail.gmail.com\n> \n> I consider these as problems that need digging to find the cause, and\n> users are usually unable to do sufficient digging, and we don't have\n> time to give them instructions, so they never get a reply.\n> \n> Is there something we can do to improve this situation? Should we just\n> tell them they need to hire a Postgres expert? I assume these are users\n> who do not already have access to such experts.\n\nThere's also only so much you can do to help interacting by mail without\naccess to the machine, even if you're willing to spend time helping people for\nfree.\n\nOne thing that might help would be to work on a troubleshooting section on the\nwiki with some common problems and some clear instructions on either how to try\nto fix the problem or provide needed information for further debugging. At\nleast it would save the time for those steps and one could quickly respond with\nthat link, similarly to how we do for the slow query questions wiki entry.\n\n\n", "msg_date": "Thu, 3 Feb 2022 12:28:05 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Wed, Feb 2, 2022 at 07:21:19PM -0700, David G. 
Johnston wrote:\n> On Wed, Feb 2, 2022 at 5:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n> \n> I consider these as problems that need digging to find the cause, and\n> users are usually unable to do sufficient digging, and we don't have\n> time to give them instructions, so they never get a reply.\n> \n> Is there something we can do to improve this situation?  Should we just\n> tell them they need to hire a Postgres expert?  I assume these are users\n> who do not already have access to such experts.\n> \n> \n> \n> Have pg_lister queue up a check for, say, two or three days after the bug\n> reporting form is filled out.  If the report hasn't been responded to by\n> someone other than the OP send out a reply that basically says:\n> \n> We're sorry your message hasn't yet attracted a response.  If your report falls\n> into the category of \"tech support for a malfunctioning server\", and this\n> includes error messages that are difficult or impossible to replicate in a\n> development environment (maybe give some examples), you may wish to consider\n> eliciting paid professional help.  Please see this page on our website for a\n> directory of companies that provide such services.  The PostgreSQL Core Project\n> itself refrains from making recommendations since many, if not all, of these\n> companies contribute back to the project in order to keep it both free and open\n> source.\n\nYes, that is an idea. I have canned email responses for common issues\nlike PGAdmin questions on the bugs list, but for these cases, I don't\nknow if someone might actually know the answer, and I don't know how\nlong to wait for an answer. 
Should we be going the other way and state\non the bugs submission page, https://www.postgresql.org/account/submitbug/:\n\n\tIf you are having a serious problem with the software and do not\n\treceive a reply, consider additional support channels, including\n\tprofessional support (https://www.postgresql.org/support/).\n\nI have been hesitant to recommend professional support myself since I\nwork for a professional support company, but in these cases it is\nclearly something the user should consider.\n\n> I would also consider adding a form that directs messages to pg_admin instead\n> of pg_bugs.  On that form we could put the above message upfront - basically\n> saying that here is a place to ask for help but the core project (this website)\n> doesn't directly provide such services: so if you don't receive a reply, or\n> your needs are immediate, consider enlisting a paid support from our directory.\n\nYes, like I stated above. Not sure how pg_admin fits in here because we\nget lots of questions that aren't \"server not starting\".\n\n> The problem with the separate form is that we would need users to self-select\n> to report their problem to the support list instead of using a bug reporting\n> form.  Neither of the mentioned emails are good examples of bug reports. \n\nYes, given the experience of many of the bugs posters, I don't think\nself-selecting would help.\n\n> Either we make it easier and hope self-selection works, or just resign\n> ourselves to seeing these messages on -bugs and use the first option to at\n> least be a bit more responsive.  
The risks related to having a rote response,\n> namely it not really being applicable to the situation, seem minimal and others\n> seeing that response (assuming we'd send it to the list and not just the OP)\n> would likely encourage someone to at least give better suggestions for next\n> steps should that be necessary.\n\nThe problem is that I don't even know how long to wait for someone to\nreply, so doing it up front makes more sense.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:36:46 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Thu, Feb 3, 2022 at 12:28:05PM +0800, Julien Rouhaud wrote:\n> Hi,\n> \n> On Wed, Feb 02, 2022 at 07:35:36PM -0500, Bruce Momjian wrote:\n> > The Postgres community is great at diagnosing problems and giving users\n> > feedback. In most cases, we can either diagnose a problem and give a\n> > fix, or at least give users a hint at finding the cause.\n> > \n> > However, there is a class of problems that are very hard to help with,\n> > and I have perhaps seen an increasing number of them recently, e.g.:\n> > \n> > \thttps://www.postgresql.org/message-id/17384-f50f2eedf541e512%40postgresql.org\n> > \thttps://www.postgresql.org/message-id/CALTnk7uOrMztNfzjNOZe3TdquAXDPD3vZKjWFWj%3D-Fv-gmROUQ%40mail.gmail.com\n> > \n> > I consider these as problems that need digging to find the cause, and\n> > users are usually unable to do sufficient digging, and we don't have\n> > time to give them instructions, so they never get a reply.\n> > \n> > Is there something we can do to improve this situation? Should we just\n> > tell them they need to hire a Postgres expert? 
I assume these are users\n> > who do not already have access to such experts.\n> \n> There's also only so much you can do to help interacting by mail without\n> access to the machine, even if you're willing to spend time helping people for\n> free.\n\nAgreed.\n\n> One thing that might help would be to work on a troubleshooting section on the\n> wiki with some common problems and some clear instructions on either how to try\n> to fix the problem or provide needed information for further debugging. At\n> least it would save the time for those steps and one could quickly respond with\n> that link, similarly to how we do for the slow query questions wiki entry.\n\nI think the challenge there is that the problems are all different, and\nif we had a pattern we could at least supply better errors and hints,\nunless I am missing something obvious. Does someone want to go back\nthrough the bugs@ archive and see if they can find a pattern?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:38:14 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": true, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Fri, Feb 04, 2022 at 01:38:14PM -0500, Bruce Momjian wrote:\n> On Thu, Feb 3, 2022 at 12:28:05PM +0800, Julien Rouhaud wrote:\n> \n> > One thing that might help would be to work on a troubleshooting section on the\n> > wiki with some common problems and some clear instructions on either how to try\n> > to fix the problem or provide needed information for further debugging. 
At\n> > least it would save the time for those steps and one could quickly respond with\n> > that link, similarly to how we do for the slow query questions wiki entry.\n> \n> I think the challenge there is that the problems are all different, and\n> if we had a pattern we could at least supply better errors and hints,\n> unless I am missing something obvious. Does someone want to go back\n> through the bugs@ archive and see if they can find a pattern?\n\nYes, we definitely don't want to have a massive page for every single problem,\nbut there are probably some really frequent reports for which we can't really\nimprove the error, and for which we tend to ask the same questions every time,\nwhich could benefit from that.\n\nFor instance for index corruption, we can suggest using enable_* and checking\nthe query again to highlight a problem with the index, and hint about system\ncollation library upgrade and so on. I probably answered a few reports about\nthis over the last couple of months, and others also did.\n\nSome immediate errors when trying to start the instance could also be covered\n(like the no valid checkpoint record problem), at least to warn to *not* use\npg_resetwal.\n\n\n", "msg_date": "Sat, 5 Feb 2022 13:17:09 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Fri, Feb 04, 2022 at 01:36:46PM -0500, Bruce Momjian wrote:\n> On Wed, Feb 2, 2022 at 07:21:19PM -0700, David G. 
Johnston wrote:\n> > On Wed, Feb 2, 2022 at 5:35 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > Have pg_lister queue up a check for, say, two or three days after the bug\n> > reporting form is filled out. If the report hasn't been responded to by\n> > someone other than the OP send out a reply that basically says:\n> > \n> > We're sorry your message hasn't yet attracted a response. If your report falls\n> > into the category of \"tech support for a malfunctioning server\", and this\n> > includes error messages that are difficult or impossible to replicate in a\n> > development environment (maybe give some examples), you may wish to consider\n> > eliciting paid professional help. Please see this page on our website for a\n> > directory of companies that provide such services. The PostgreSQL Core Project\n> > itself refrains from making recommendations since many, if not all, of these\n> > companies contribute back to the project in order to keep it both free and open\n> > source.\n> \n> Yes, that is an idea. I have canned email responses for common issues\n> like PGAdmin questions on the bugs list, but for these cases, I don't\n> know if someone might actually know the answer, and I don't know how\n> long to wait for an answer. 
I think that's reasonable, though I'm not sure it's a net win.\nOne could also edit the page that appears after submitting a bug. Editing a\nweb page is superior to using canned email responses, for two reasons:\n\n- Canned responses are noise to every list reader other than the OP.\n- Canned responses make the list feel like a sales funnel rather than a normal\n free software mailing list.\n\n\nIt's safe to say $SUBJECT won't have a general solution. There will always be\nsome vanguard of tough user experiences, each one implying a separate project.\nFor example, Julien replied to the \"could not locate a valid checkpoint\nrecord\" -bugs thread with some followup questions, but one could improve the\nlog messages to answer those questions. Where we don't know how to improve\nthe product enough, wiki pages (Julien suggested this) seem good.\n\n\n", "msg_date": "Sat, 5 Feb 2022 15:02:31 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "Hi,\n\nOn 2022-02-02 19:35:36 -0500, Bruce Momjian wrote:\n> The Postgres community is great at diagnosing problems and giving users\n> feedback. 
In most cases, we can either diagnose a problem and give a\n> fix, or at least give users a hint at finding the cause.\n>\n> However, there is a class of problems that are very hard to help with,\n> and I have perhaps seen an increasing number of them recently, e.g.:\n>\n> \thttps://www.postgresql.org/message-id/17384-f50f2eedf541e512%40postgresql.org\n> \thttps://www.postgresql.org/message-id/CALTnk7uOrMztNfzjNOZe3TdquAXDPD3vZKjWFWj%3D-Fv-gmROUQ%40mail.gmail.com\n>\n> I consider these as problems that need digging to find the cause, and\n> users are usually unable to do sufficient digging, and we don't have\n> time to give them instructions, so they never get a reply.\n>\n> Is there something we can do to improve this situation?\n\nI think some of this can be improved measurably, with moderate effort, by\nimproving our error messages / diagnostics.\n\n\nTaking e.g. the second report:\n\n[7804] LOG: starting PostgreSQL 13.1, compiled by Visual C++ build 1914, 64-bit\n[8812] LOG: invalid primary checkpoint record\n[8812] PANIC: could not locate a valid checkpoint record\n[7804] LOG: startup process (PID 8812) was terminated by exception 0xC0000409\n[7804] HINT: See C include file \"ntstatus.h\" for a description of the\nhexadecimal value.\n[7804] LOG: aborting startup due to startup process failure\n[7804] LOG: database system is shut down\n\nWe don't provide any details in this log report that allow us to diagnose the\nproblem:\n\"invalid primary checkpoint record\" doesn't say *why* it's invalid, nor does\nit say the location at which the checkpoint record was, nor whether the\ncluster was shut down gracefully before. It could be that the permissions on\nthe WAL files were wrong, it could be missing files, actually invalid WAL, ...\n\nThe whole following bit about \"startup process ... was terminated\" and its\nHINT is useless information, because we actually \"knowingly\" caused that\nPANIC. 
Even disregarding that, it certainly doesn't help to list numerical\nexception codes and references to ntstatus.h.\n\n\nThe first report is similar, although it's a bit harder to improve:\nERROR: missing chunk number 0 for toast value 152073604 in pg_toast_2619\n\nEasy to fix: We don't mention the table that pg_toast_2619 corresponds to -\nsomething we can determine, rather than forcing the user to figure out how to do\nso.\n\nHarder to fix: We don't provide information about where the reference to the\ntoast value comes from. In general we don't necessarily know that, because\nDatums are handed around fairly widely. But in a lot of cases we actually\ncould know.\n\n\n> Should we just tell them they need to hire a Postgres expert? I assume\n> these are users who do not already have access to such experts.\n\nI think these are more our fault than our users :(\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 5 Feb 2022 15:20:45 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Sun, Feb 6, 2022 at 12:02 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Fri, Feb 04, 2022 at 01:36:46PM -0500, Bruce Momjian wrote:\n> > On Wed, Feb 2, 2022 at 07:21:19PM -0700, David G. Johnston wrote:\n> > > Have pg_lister queue up a check for, say, two or three days after the bug\n> > > reporting form is filled out. If the report hasn't been responded to by\n> > > someone other than the OP send out a reply that basically says:\n> > >\n> > > We're sorry your message hasn't yet attracted a response. If your report falls\n> > > into the category of \"tech support for a malfunctioning server\", and this\n> > > includes error messages that are difficult or impossible to replicate in a\n> > > development environment (maybe give some examples), you may wish to consider\n> > > eliciting paid professional help. 
Please see this page on our website for a\n> > > directory of companies that provide such services. The PostgreSQL Core Project\n> > > itself refrains from making recommendations since many, if not all, of these\n> > > companies contribute back to the project in order to keep it both free and open\n> > > source.\n> >\n> > Yes, that is an idea. I have canned email responses for common issues\n> > like PGAdmin questions on the bugs list, but for these cases, I don't\n> > know if someone might actually know the answer, and I don't know how\n> > long to wait for an answer. Should we be going the other way and state\n> > on the bugs submission page, https://www.postgresql.org/account/submitbug/:\n> >\n> > If you are having a serious problem with the software and do not\n> > receive a reply, consider additional support channels, including\n> > professional support (https://www.postgresql.org/support/).\n>\n> https://www.postgresql.org/account/submitbug/ does say to \"ensure you have\n> read\" https://www.postgresql.org/docs/current/bug-reporting.html, which says,\n> \"If you need help immediately, consider obtaining a commercial support\n> contract.\" Hence, this expands on an existing message and makes it more\n> prominent. I think that's reasonable, though I'm not sure it's a net win.\n> One could also edit the page that appears after submitting a bug. Editing a\n> web page is superior to using canned email responses, for two reasons:\n>\n> - Canned responses are noise to every list reader other than the OP.\n> - Canned responses make the list feel like a sales funnel rather than a normal\n> free software mailing list.\n\nAll bug reports posted through the bug form gets moderated before\nthey're passed through. Moderators also have access to a set of\npre-defined canned responses (such as the example of where they should\ngo to report pgadmin bugs). 
It will also catch posts made directly to\n-bugs by people who are not subscribed, but people who are subscribed\nwill not have their posts moderated by default.\n\nIf we're talking about people submitting bug reports, perhaps a lot\ncan be done with (1) a couple of more such responses and (2) more\nclear instructions for the moderators of when they are supposed to use\nthem.\n\nErring on the side of letting them through is probably always the best\nchoice, but if it's clear that it can be better handled by a canned\nresponse, we do have an actual system for that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 6 Feb 2022 00:41:52 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Sun, Feb 06, 2022 at 12:41:52AM +0100, Magnus Hagander wrote:\n> On Sun, Feb 6, 2022 at 12:02 AM Noah Misch <noah@leadboat.com> wrote:\n> > On Fri, Feb 04, 2022 at 01:36:46PM -0500, Bruce Momjian wrote:\n> > > On Wed, Feb 2, 2022 at 07:21:19PM -0700, David G. Johnston wrote:\n> > > > Have pg_lister queue up a check for, say, two or three days after the bug\n> > > > reporting form is filled out. If the report hasn't been responded to by\n> > > > someone other than the OP send out a reply that basically says:\n> > > >\n> > > > We're sorry your message hasn't yet attracted a response. If your report falls\n> > > > into the category of \"tech support for a malfunctioning server\", and this\n> > > > includes error messages that are difficult or impossible to replicate in a\n> > > > development environment (maybe give some examples), you may wish to consider\n> > > > eliciting paid professional help. Please see this page on our website for a\n> > > > directory of companies that provide such services. 
The PostgreSQL Core Project\n> > > > itself refrains from making recommendations since many, if not all, of these\n> > > > companies contribute back to the project in order to keep it both free and open\n> > > > source.\n> > >\n> > > Yes, that is an idea. I have canned email responses for common issues\n> > > like PGAdmin questions on the bugs list, but for these cases, I don't\n> > > know if someone might actually know the answer, and I don't know how\n> > > long to wait for an answer. Should we be going the other way and state\n> > > on the bugs submission page, https://www.postgresql.org/account/submitbug/:\n> > >\n> > > If you are having a serious problem with the software and do not\n> > > receive a reply, consider additional support channels, including\n> > > professional support (https://www.postgresql.org/support/).\n> >\n> > https://www.postgresql.org/account/submitbug/ does say to \"ensure you have\n> > read\" https://www.postgresql.org/docs/current/bug-reporting.html, which says,\n> > \"If you need help immediately, consider obtaining a commercial support\n> > contract.\" Hence, this expands on an existing message and makes it more\n> > prominent. I think that's reasonable, though I'm not sure it's a net win.\n> > One could also edit the page that appears after submitting a bug. Editing a\n> > web page is superior to using canned email responses, for two reasons:\n> >\n> > - Canned responses are noise to every list reader other than the OP.\n> > - Canned responses make the list feel like a sales funnel rather than a normal\n> > free software mailing list.\n> \n> All bug reports posted through the bug form gets moderated before\n> they're passed through. Moderators also have access to a set of\n> pre-defined canned responses (such as the example of where they should\n> go to report pgadmin bugs). 
It will also catch posts made directly to\n> -bugs by people who are not subscribed, but people who are subscribed\n> will not have their posts moderated by default.\n> \n> If we're talking about people submitting bug reports, perhaps a lot\n> can be done with (1) a couple of more such responses and (2) more\n> clear instructions for the moderators of when they are supposed to use\n> them.\n> \n> Erring on the side of letting them through is probably always the best\n> choice, but if it's clear that it can be better handled by a canned\n> response, we do have an actual system for that.\n\nI was referring to David G. Johnston's proposal to send canned email responses\nwhen a thread (presumably containing a not-obviously-unreasonable PostgreSQL\nquestion or report) gets no replies in N days. The proposed email directed\nthe user toward paid professional services. Sending such a message at\nmoderation time would solve the noise problem, but it would make the \"sales\nfunnel\" problem worse. (Also, I figure moderators won't know at moderation\ntime which messages will get zero replies.) For which part of $SUBJECT did\nyou envision using new moderator responses?\n\n\n", "msg_date": "Sat, 5 Feb 2022 15:58:06 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Sun, Feb 6, 2022 at 5:28 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> > > - Canned responses are noise to every list reader other than the OP.\n> > > - Canned responses make the list feel like a sales funnel rather than a normal\n> > > free software mailing list.\n> >\n> > All bug reports posted through the bug form gets moderated before\n> > they're passed through. Moderators also have access to a set of\n> > pre-defined canned responses (such as the example of where they should\n> > go to report pgadmin bugs). 
It will also catch posts made directly to\n> > -bugs by people who are not subscribed, but people who are subscribed\n> > will not have their posts moderated by default.\n> >\n> > If we're talking about people submitting bug reports, perhaps a lot\n> > can be done with (1) a couple of more such responses and (2) more\n> > clear instructions for the moderators of when they are supposed to use\n> > them.\n> >\n> > Erring on the side of letting them through is probably always the best\n> > choice, but if it's clear that it can be better handled by a canned\n> > response, we do have an actual system for that.\n>\n> I was referring to David G. Johnston's proposal to send canned email responses\n> when a thread (presumably containing a not-obviously-unreasonable PostgreSQL\n> question or report) gets no replies in N days. The proposed email directed\n> the user toward paid professional services. Sending such a message at\n> moderation time would solve the noise problem, but it would make the \"sales\n> funnel\" problem worse. (Also, I figure moderators won't know at moderation\n> time which messages will get zero replies.) For which part of $SUBJECT did\n> you envision using new moderator responses?\n\nIMO, having a postgresql wiki page describing the steps to debug or\nperhaps fix the most common problems/bugs is a good idea. Of course,\nadding the steps to the wiki and maintaining it takes some effort from the\nexperts here. But this is better than having canned email responses\nfor bug reports, which requires more effort than a wiki page. Today,\nthere are a lot of outside resources listing the steps to fix/debug a\npostgres related issue. 
The users can always refer to the wiki page\nand try out things on their own; if the steps don't fix the issue,\nthey can always reach out via the postgresql bugs list.\n\nI also agree with the point about improving some of the most common\nerror messages with hints on what to do next to fix the issues.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Sun, 6 Feb 2022 11:37:57 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" }, { "msg_contents": "On Sun, Feb 6, 2022 at 12:58 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> On Sun, Feb 06, 2022 at 12:41:52AM +0100, Magnus Hagander wrote:\n> > On Sun, Feb 6, 2022 at 12:02 AM Noah Misch <noah@leadboat.com> wrote:\n> > > On Fri, Feb 04, 2022 at 01:36:46PM -0500, Bruce Momjian wrote:\n> > > > On Wed, Feb 2, 2022 at 07:21:19PM -0700, David G. Johnston wrote:\n> > > > > Have pg_lister queue up a check for, say, two or three days after the bug\n> > > > > reporting form is filled out. If the report hasn't been responded to by\n> > > > > someone other than the OP send out a reply that basically says:\n> > > > >\n> > > > > We're sorry your message hasn't yet attracted a response. If your report falls\n> > > > > into the category of \"tech support for a malfunctioning server\", and this\n> > > > > includes error messages that are difficult or impossible to replicate in a\n> > > > > development environment (maybe give some examples), you may wish to consider\n> > > > > eliciting paid professional help. Please see this page on our website for a\n> > > > > directory of companies that provide such services. The PostgreSQL Core Project\n> > > > > itself refrains from making recommendations since many, if not all, of these\n> > > > > companies contribute back to the project in order to keep it both free and open\n> > > > > source.\n> > > >\n> > > > Yes, that is an idea. 
I have canned email responses for common issues\n> > > > like PGAdmin questions on the bugs list, but for these cases, I don't\n> > > > know if someone might actually know the answer, and I don't know how\n> > > > long to wait for an answer. Should we be going the other way and state\n> > > > on the bugs submission page, https://www.postgresql.org/account/submitbug/:\n> > > >\n> > > > If you are having a serious problem with the software and do not\n> > > > receive a reply, consider additional support channels, including\n> > > > professional support (https://www.postgresql.org/support/).\n> > >\n> > > https://www.postgresql.org/account/submitbug/ does say to \"ensure you have\n> > > read\" https://www.postgresql.org/docs/current/bug-reporting.html, which says,\n> > > \"If you need help immediately, consider obtaining a commercial support\n> > > contract.\" Hence, this expands on an existing message and makes it more\n> > > prominent. I think that's reasonable, though I'm not sure it's a net win.\n> > > One could also edit the page that appears after submitting a bug. Editing a\n> > > web page is superior to using canned email responses, for two reasons:\n> > >\n> > > - Canned responses are noise to every list reader other than the OP.\n> > > - Canned responses make the list feel like a sales funnel rather than a normal\n> > > free software mailing list.\n> >\n> > All bug reports posted through the bug form gets moderated before\n> > they're passed through. Moderators also have access to a set of\n> > pre-defined canned responses (such as the example of where they should\n> > go to report pgadmin bugs). 
It will also catch posts made directly to\n> > -bugs by people who are not subscribed, but people who are subscribed\n> > will not have their posts moderated by default.\n> >\n> > If we're talking about people submitting bug reports, perhaps a lot\n> > can be done with (1) a couple of more such responses and (2) more\n> > clear instructions for the moderators of when they are supposed to use\n> > them.\n> >\n> > Erring on the side of letting them through is probably always the best\n> > choice, but if it's clear that it can be better handled by a canned\n> > response, we do have an actual system for that.\n>\n> I was referring to David G. Johnston's proposal to send canned email responses\n> when a thread (presumably containing a not-obviously-unreasonable PostgreSQL\n> question or report) gets no replies in N days. The proposed email directed\n> the user toward paid professional services. Sending such a message at\n> moderation time would solve the noise problem, but it would make the \"sales\n> funnel\" problem worse. (Also, I figure moderators won't know at moderation\n> time which messages will get zero replies.) For which part of $SUBJECT did\n> you envision using new moderator responses?\n\nIt depends. For some of them it would, and I believe they are used for\nthat as well. But yes, it will only be able to replace those cases\nwhere people send a canned response today (\"report pgadmin bugs over\nhere instead\", \"you need to talk to your cloud vendor\" and things like\nthat). It won't help for anything more advanced than that.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Sun, 6 Feb 2022 14:10:51 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: Unclear problem reports" } ]
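[Editor's aside: Andres Freund's message in the thread above notes that the "missing chunk number 0 for toast value ... in pg_toast_2619" error never names the table that owns the TOAST relation, even though "something we can determine" from the catalogs. A minimal sketch of that lookup, not posted by anyone in the thread; the relation name pg_toast_2619 is taken from the quoted error message:]

```sql
-- Map a TOAST relation back to its owning table.
-- pg_class.reltoastrelid points from an ordinary table to its TOAST
-- table, so we search pg_class for the row whose reltoastrelid matches.
SELECT n.nspname AS schema_name,
       c.relname AS owning_table
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.reltoastrelid = 'pg_toast.pg_toast_2619'::regclass;
```

[On stock catalogs this should resolve to pg_catalog.pg_statistic, whose OID is 2619 -- exactly the kind of hint the error message itself could carry.]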
[ { "msg_contents": "Hi,\n\nI'm following up from Jim's POC for adding MODULE to PostgreSQL. [1]\n\nMy proposal implements modules as schema objects to be stored in a new\nsystem catalog pg_module with new syntax for CREATE [OR REPLACE] MODULE,\nALTER MODULE, DROP MODULE and for GRANT and REVOKE for privileges on\nmodules and module routines. I am attempting to follow the SQL spec.\nHowever, for right now, I'm proposing to support only routines as module\ncontents, with local temporary tables and path specifications as defined\nin the SQL spec, to be supported in a future submission. We could also\ninclude support for variables depending on its status. [2]\n\nFollowing are some examples of what the new module syntax would look\nlike. The attached patch has detailed documentation.\n\nCREATE MODULE mtest1 CREATE FUNCTION m1testa() RETURNS text\n LANGUAGE sql\n RETURN '1x';\nSELECT mtest1.m1testa();\nALTER MODULE mtest1 CREATE FUNCTION m1testd() RETURNS text\n LANGUAGE sql\n RETURN 'm1testd';\nSELECT mtest1.m1testd();\nALTER MODULE mtest1 RENAME TO mtest1renamed;\nSELECT mtest1renamed.m1testd();\nREVOKE ON MODULE mtest1 REFERENCES ON FUNCTION m1testa() FROM public;\nGRANT ON MODULE mtest1 REFERENCES ON FUNCTION m1testa() TO\nregress_priv_user1;\n\nI am new to the PostgreSQL community and would really appreciate your\ninput and feedback.\n\nThanks,\nSwaha Miller\nAmazon Web Services\n\n[1]\nhttps://www.postgresql.org/message-id/CAB_5SRebSCjO12%3DnLsaLCBw2vnkiNH7jcNchirPc0yQ2KmiknQ%40mail.gmail.com\n\n[2]\nhttps://www.postgresql.org/message-id/flat/CAFj8pRD053CY_N4%3D6SvPe7ke6xPbh%3DK50LUAOwjC3jm8Me9Obg%40mail.gmail.com", "msg_date": "Wed, 2 Feb 2022 18:28:30 -0800", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "support for CREATE MODULE" }, { "msg_contents": "Hi\n\nčt 3. 2. 2022 v 3:28 odesílatel Swaha Miller <swaha.miller@gmail.com>\nnapsal:\n\n> Hi,\n>\n> I'm following up from Jim's POC for adding MODULE to PostgreSQL. 
[1]\n>\n> My proposal implements modules as schema objects to be stored in a new\n> system catalog pg_module with new syntax for CREATE [OR REPLACE] MODULE,\n> ALTER MODULE, DROP MODULE and for GRANT and REVOKE for privileges on\n> modules and module routines. I am attempting to follow the SQL spec.\n> However, for right now, I'm proposing to support only routines as module\n> contents, with local temporary tables and path specifications as defined\n> in the SQL spec, to be supported in a future submission. We could also\n> include support for variables depending on its status. [2]\n>\n> Following are some examples of what the new module syntax would look\n> like. The attached patch has detailed documentation.\n>\n> CREATE MODULE mtest1 CREATE FUNCTION m1testa() RETURNS text\n>     LANGUAGE sql\n>     RETURN '1x';\n> SELECT mtest1.m1testa();\n> ALTER MODULE mtest1 CREATE FUNCTION m1testd() RETURNS text\n>     LANGUAGE sql\n>     RETURN 'm1testd';\n> SELECT mtest1.m1testd();\n> ALTER MODULE mtest1 RENAME TO mtest1renamed;\n> SELECT mtest1renamed.m1testd();\n> REVOKE ON MODULE mtest1 REFERENCES ON FUNCTION m1testa() FROM public;\n> GRANT ON MODULE mtest1 REFERENCES ON FUNCTION m1testa() TO\n> regress_priv_user1;\n>\n> I am new to the PostgreSQL community and would really appreciate your\n> input and feedback.\n>\n\nI dislike this feature. The modules are partially redundant to schemas and\nto extensions in Postgres, and I am sure, so there is no reason to\nintroduce this.\n\nWhat is the benefit against schemas and extensions?\n\nRegards\n\nPavel\n\n\n>\n> Thanks,\n> Swaha Miller\n> Amazon Web Services\n>\n> [1]\n> https://www.postgresql.org/message-id/CAB_5SRebSCjO12%3DnLsaLCBw2vnkiNH7jcNchirPc0yQ2KmiknQ%40mail.gmail.com\n>\n> [2]\n> https://www.postgresql.org/message-id/flat/CAFj8pRD053CY_N4%3D6SvPe7ke6xPbh%3DK50LUAOwjC3jm8Me9Obg%40mail.gmail.com\n>\n>\n", "msg_date": "Thu, 3 Feb 2022 05:25:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 03, 2022 at 05:25:27AM +0100, Pavel Stehule wrote:\n> \n> čt 3. 2. 2022 v 3:28 odesílatel Swaha Miller <swaha.miller@gmail.com>\n> napsal:\n> \n> > Hi,\n> >\n> > I'm following up from Jim's POC for adding MODULE to PostgreSQL. [1]\n> >\n> > My proposal implements modules as schema objects to be stored in a new\n> > system catalog pg_module with new syntax for CREATE [OR REPLACE] MODULE,\n> > ALTER MODULE, DROP MODULE and for GRANT and REVOKE for privileges on\n> > modules and module routines. I am attempting to follow the SQL spec.\n> > However, for right now, I'm proposing to support only routines as module\n> > contents, with local temporary tables and path specifications as defined\n> > in the SQL spec, to be supported in a future submission. We could also\n> > include support for variables depending on its status. [2]\n> \n> I dislike this feature. The modules are partially redundant to schemas and\n> to extensions in Postgres, and I am sure, so there is no reason to\n> introduce this.\n> \n> What is the benefit against schemas and extensions?\n\nI agree with Pavel. 
It seems that it's mainly adding another namespacing layer\nbetween schemas and objects, and it's likely going to create a mess.\nThat's also going to be problematic if you want to add support for module\nvariables, as you won't be able to use e.g.\ndbname.schemaname.modulename.variablename.fieldname.\n\nAlso, my understanding was that the major interest of modules (at least for the\nroutines part) was the ability to make some of them private to the module, but\nit doesn't look like it's doing that, so I also don't see any real benefit\ncompared to schemas and extensions.\n\n\n", "msg_date": "Thu, 3 Feb 2022 12:46:26 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "čt 3. 2. 2022 v 5:46 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:\n\n> Hi,\n>\n> On Thu, Feb 03, 2022 at 05:25:27AM +0100, Pavel Stehule wrote:\n> >\n> > čt 3. 2. 2022 v 3:28 odesílatel Swaha Miller <swaha.miller@gmail.com>\n> > napsal:\n> >\n> > > Hi,\n> > >\n> > > I'm following up from Jim's POC for adding MODULE to PostgreSQL. [1]\n> > >\n> > > My proposal implements modules as schema objects to be stored in a new\n> > > system catalog pg_module with new syntax for CREATE [OR REPLACE]\n> MODULE,\n> > > ALTER MODULE, DROP MODULE and for GRANT and REVOKE for privileges on\n> > > modules and module routines. I am attempting to follow the SQL spec.\n> > > However, for right now, I'm proposing to support only routines as\n> module\n> > > contents, with local temporary tables and path specifications as\n> defined\n> > > in the SQL spec, to be supported in a future submission. We could also\n> > > include support for variables depending on its status. [2]\n> >\n> > I dislike this feature. 
The modules are partially redundant to schemas\n> and\n> > to extensions in Postgres, and I am sure, so there is no reason to\n> > introduce this.\n> >\n> > What is the benefit against schemas and extensions?\n>\n> I agree with Pavel. It seems that it's mainly adding another namespacing\n> layer\n> between schemas and objects, and it's likely going to create a mess.\n> That's also going to be problematic if you want to add support for module\n> variables, as you won't be able to use e.g.\n> dbname.schemaname.modulename.variablename.fieldname.\n>\n> Also, my understanding was that the major interest of modules (at least\n> for the\n> routines part) was the ability to make some of them private to the module,\n> but\n> it doesn't look like it's doing that, so I also don't see any real benefit\n> compared to schemas and extensions.\n>\n\nThe biggest problem is the coexistence of Postgres's SEARCH_PATH object\nidentification and the local and public scopes used in MODULEs or in Oracle's\npackages.\n\nI can imagine MODULES as a third level of database unit object grouping with\nthe following functionality:\n\n1. It should support all database objects like schemas\n2. all public objects should be accessed directly when the outer schema is in\nSEARCH_PATH\n3. private objects cannot be accessed from other modules\n4. modules should be movable between schemas and databases without a loss of\nfunctionality\n5. modules should support renaming without loss of functionality\n6. some visibility rules should be redefined, because there can be\nnew identifier collisions and ambiguities\n7. the relation between module objects and schema objects should be\ndefined. Maybe an introduction of the default module can be a good idea.\n\nI had the opportunity to see a man who modified routines in pgAdmin. 
It can\nbe hell, but if we introduce a new concept (and it is an important\nconcept), then there should be strong benefits - for example - possibility\nof strong encapsulation of code inside modules (or some units - the name is\nnot important).\n\nThe problem with pgAdmin maybe can be solved better by adding some\nannotations to database objects that allows more user friendly organization\nin the object tree in pgAdmin (and similar tools). Maybe it can be useful\nto have more tries (defined by locality, semantic, quality, ...).\n\nRegards\n\nPavel\n", "msg_date": "Thu, 3 Feb 2022 08:22:16 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "Thank you for the feedback Pavel and Julien. I'll try to explain some of\nthe issues and points you raise to the best of my understanding.\n\nThe reason for modules is that it would serve as an organizational unit\nthat can allow setting permissions on those units. So, for example, all\nfunctions in a module can be subject to setting access permissions on for\nsome user(s) or group(s). I didn't explain it well in the sgml docs, but\nalong with module syntax, I'm proposing introducing privileges to\ngrant/revoke on modules and routines in modules. And why modules for this\npurpose? Because its in the SQL spec so seems like a way to do it.\n\nI'm adding comments inline for the list of functionality you mentioned. I\nlook forward to discussing this more and figuring out how to make a useful\ncontribution to the community.\n\nOn Wed, Feb 2, 2022 at 11:22 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> čt 3. 2. 2022 v 5:46 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n> napsal:\n>\n>> Hi,\n>>\n>> On Thu, Feb 03, 2022 at 05:25:27AM +0100, Pavel Stehule wrote:\n>> >\n>> > čt 3. 2. 2022 v 3:28 odesílatel Swaha Miller <swaha.miller@gmail.com>\n>> > napsal:\n>> >\n>> > > Hi,\n>> > >\n>> > > I'm following up from Jim's POC for adding MODULE to PostgreSQL. 
I am attempting to follow the SQL spec.\n>> > > However, for right now, I'm proposing to support only routines as\n>> module\n>> > > contents, with local temporary tables and path specifications as\n>> defined\n>> > > in the SQL spec, to be supported in a future submission. We could also\n>> > > include support for variables depending on its status. [2]\n>> >\n>> > I dislike this feature. The modules are partially redundant to schemas\n>> and\n>> > to extensions in Postgres, and I am sure, so there is no reason to\n>> > introduce this.\n>> >\n>> > What is the benefit against schemas and extensions?\n>>\n>> I agree with Pavel. It seems that it's mainly adding another namespacing\n>> layer\n>> between schemas and objects, and it's likely going to create a mess.\n>> That's also going to be problematic if you want to add support for module\n>> variables, as you won't be able to use e.g.\n>> dbname.schemaname.modulename.variablename.fieldname.\n>>\n>>\nI haven't yet added support for variables so will need to look into the\nproblems with this if we're going to do that.\n\n\n> Also, my understanding was that the major interest of modules (at least\n>> for the\n>> routines part) was the ability to make some of them private to the\n>> module, but\n>> it doesn't look like it's doing that, so I also don't see any real benefit\n>> compared to schemas and extensions.\n>>\n>\n>\nYes, that is indeed the goal/use-case with setting permissions with grant\nand revoke. Right now, I have proposed create and reference as the kinds of\naccess that can be controlled on modules, and reference as the kind of\naccess that can be controlled on routines inside modules.\n\n\n> The biggest problem is coexistence of Postgres's SEARCH_PATH object\n> identification, and local and public scopes used in MODULEs or in Oracle's\n> packages.\n>\n>\nI am not extremely familiar with Oracle's packages, but do know of them.\nI'm wondering if local and public scopes for MODULE is in the SQL spec? 
(I\nwill check for that...) My thinking was to implement functionality that\nconforms to the SQL spec, not try to match Oracle's package which differs\nfrom the spec in some ways.\n\n\n> I can imagine MODULES as third level of database unit object grouping with\n> following functionality\n>\n> 1. It should support all database objects like schemas\n>\n\nDo you mean that schemas should be groupable under modules? My thinking was\nto follow what the SQL spec says about what objects should be in modules,\nand I started with routines as one of the objects that there are use cases\nfor. Such a controlling access permissions on routines at some granularity\nthat is not an entire schema and not individual functions/procedures.\n\n\n> 2. all public objects should be accessed directly when outer schema is in\n> SEARCH_PATH\n>\n\nYes, an object inside a module is in a schema and can be accessed with\nschemaname.func() as well as modulename.func() as well as\nschemaname.modulename.func(). I think you are saying it should be\naccessible with func() without a qualifying schemaname or modulename if the\nschemaname is in the search path, and that sounds reasonable too. Unless,\nof course, func() was created in a module, in which case access permissions\nfor the module and module contents will determine whether func() should be\ndirectly accessible. In my current proposal, a previously created func()\ncan't be added to a module created later. The purpose of creating routines\ninside a module (either when the module is created or after the module is\ncreated) would be with the intent of setting access permissions on those\nroutines differently than for the outer schema.\n\n\n> 3. private objects cannot be accessed from other modules\n>\n\nYes, I hope that is going to be the case with setting permissions with\ngrant and revoke. 
Right now, I have proposed create and reference as the\nkinds of access that can be controlled on modules, and reference as the\nkind of access that can be controlled on routines inside modules.\n\n\n> 4. modules should be movable between schemas, databases without a loss of\n> functionality\n>\n\npg_dump will dump modules so that can provide ways of moving them between\ndatabases. I hadn't envisioned moving modules between schemas, but can\nthink of ways that can be supported. Would the objects within the modules\nalso move implicitly to the new schema?\n\n\n> 5. modules should to support renaming without loss of functionality\n>\n\nyes renaming of modules is supported in my proposal\n\n\n> 6. there should be redefined some rules of visibility, because there can\n> be new identifier's collisions and ambiguities\n>\n\nI'm not sure I understand this point. Can you please explain more?\n\n\n>\n> 7. there should be defined relation of modules's objects and schema's\n> objects. Maybe an introduction of the default module can be a good idea.\n>\n\nI was thinking of module as a unit of organization (with the goal of\ncontrolling access to it) of objects that are still in some schema, and the\nmodule itself as an object that is also in a schema.\n\n\n>\n> I had the opportunity to see a man who modified routines in pgAdmin. It\n> can be hell, but if we introduce a new concept (and it is an important\n> concept), then there should be strong benefits - for example - possibility\n> of strong encapsulation of code inside modules (or some units - the name is\n> not important).\n>\n> The problem with pgAdmin maybe can be solved better by adding some\n> annotations to database objects that allows more user friendly organization\n> in the object tree in pgAdmin (and similar tools). Maybe it can be useful\n> to have more tries (defined by locality, semantic, quality, ...).\n>\n> Regards\n>\n> Pavel\n>\n\nBest regards,\nSwaha\n\nThank you for the feedback Pavel and Julien. 
I'll try to explain some of the issues and points you raise to the best of my understanding.The reason for modules is that it would serve as an organizational unit that can allow setting permissions on those units. So, for example, all functions in a module can be subject to setting access permissions on for some user(s) or group(s). I didn't explain it well in the sgml docs, but along with module syntax, I'm proposing introducing privileges to grant/revoke on modules and routines in modules. And why modules for this purpose? Because its in the SQL spec so seems like a way to do it.I'm adding comments inline for the list of functionality you mentioned. I look forward to discussing this more and figuring out how to make a useful contribution to the community.On Wed, Feb 2, 2022 at 11:22 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:čt 3. 2. 2022 v 5:46 odesílatel Julien Rouhaud <rjuju123@gmail.com> napsal:Hi,\n\nOn Thu, Feb 03, 2022 at 05:25:27AM +0100, Pavel Stehule wrote:\n> \n> čt 3. 2. 2022 v 3:28 odesílatel Swaha Miller <swaha.miller@gmail.com>\n> napsal:\n> \n> > Hi,\n> >\n> > I'm following up from Jim's POC for adding MODULE to PostgreSQL. [1]\n> >\n> > My proposal implements modules as schema objects to be stored in a new\n> > system catalog pg_module with new syntax for CREATE [OR REPLACE] MODULE,\n> > ALTER MODULE, DROP MODULE and for GRANT and REVOKE for privileges on\n> > modules and module routines. I am attempting to follow the SQL spec.\n> > However, for right now, I'm proposing to support only routines as module\n> > contents, with local temporary tables and path specifications as defined\n> > in the SQL spec, to be supported in a future submission. We could also\n> > include support for variables depending on its status. [2]\n> \n> I dislike this feature. 
The modules are partially redundant to schemas and\n> to extensions in Postgres, and I am sure, so there is no reason to\n> introduce this.\n> \n> What is the benefit against schemas and extensions?\n\nI agree with Pavel.  It seems that it's mainly adding another namespacing layer\nbetween schemas and objects, and it's likely going to create a mess.\nThat's also going to be problematic if you want to add support for module\nvariables, as you won't be able to use e.g.\ndbname.schemaname.modulename.variablename.fieldname.\nI haven't yet added support for variables so will need to look into the problems with this if we're going to do that. \nAlso, my understanding was that the major interest of modules (at least for the\nroutines part) was the ability to make some of them private to the module, but\nit doesn't look like it's doing that, so I also don't see any real benefit\ncompared to schemas and extensions.Yes, that is indeed the goal/use-case with setting permissions with grant and revoke. Right now, I have proposed create and reference as the kinds of access that can be controlled on modules, and reference as the kind of access that can be controlled on routines inside modules. The biggest problem is coexistence of Postgres's SEARCH_PATH  object identification, and local and public scopes used in MODULEs or in Oracle's packages.I am not extremely familiar with Oracle's packages, but do know of them. I'm wondering if local and public scopes for MODULE is in the SQL spec? (I will check for that...) My thinking was to implement functionality that conforms to the SQL spec, not try to match Oracle's package which differs from the spec in some ways. I can imagine MODULES as third level of database unit object grouping with following functionality1. It should support all database objects like schemasDo you mean that schemas should be groupable under modules? 
My thinking was to follow what the SQL spec says about what objects should be in modules, and I started with routines as one of the objects that there are use cases for. Such a controlling access permissions on routines at some granularity that is not an entire schema and not individual functions/procedures. 2. all public objects should be accessed directly when outer schema is in SEARCH_PATHYes, an object inside a module is in a schema and can be accessed with schemaname.func() as well as modulename.func() as well as schemaname.modulename.func(). I think you are saying it should be accessible with func() without a qualifying schemaname or modulename if the schemaname is in the search path, and that sounds reasonable too. Unless, of course, func() was created in a module, in which case access permissions for the module and module contents will determine whether func() should be directly accessible. In my current proposal, a previously created func() can't be added to a module created later. The purpose of creating routines inside a module (either when the module is created or after the module is created) would be with the intent of setting access permissions on those routines differently than for the outer schema. 3. private objects cannot be accessed from other modulesYes, I hope that is going to be the case with setting permissions with grant and revoke. Right now, I have proposed create and reference as the kinds of access that can be controlled on modules, and reference as the kind of access that can be controlled on routines inside modules. 4. modules should be movable between schemas, databases without a loss of functionalitypg_dump will dump modules so that can provide ways of moving them between databases. I hadn't envisioned moving modules between schemas, but can think of ways that can be supported. Would the objects within the modules also move implicitly to the new schema? 5. 
modules should to support renaming without loss of functionalityyes renaming of modules is supported in my proposal 6. there should be redefined some rules of visibility, because there can be new identifier's collisions and ambiguitiesI'm not sure I understand this point. Can you please explain more?  7. there should be defined relation of modules's objects and schema's objects. Maybe an introduction of the default module can be a good idea.I was thinking of module as a unit of organization (with the goal of controlling access to it) of objects that are still in some schema, and the module itself as an object that is also in a schema. I had the opportunity to see a man who modified routines in pgAdmin. It can be hell, but if we introduce a new concept (and it is an important concept), then there should be strong benefits - for example - possibility of strong encapsulation of code inside modules (or some units - the name is not important).The problem with pgAdmin maybe can be solved better by adding some annotations to database objects that allows more user friendly organization in the object tree in pgAdmin (and similar tools). Maybe it can be useful to have more tries (defined by locality, semantic, quality, ...). RegardsPavelBest regards,Swaha", "msg_date": "Thu, 3 Feb 2022 11:21:03 -0800", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "čt 3. 2. 2022 v 20:21 odesílatel Swaha Miller <swaha.miller@gmail.com>\nnapsal:\n\n> Thank you for the feedback Pavel and Julien. I'll try to explain some of\n> the issues and points you raise to the best of my understanding.\n>\n> The reason for modules is that it would serve as an organizational unit\n> that can allow setting permissions on those units. So, for example, all\n> functions in a module can be subject to setting access permissions on for\n> some user(s) or group(s). 
I didn't explain it well in the sgml docs, but\n> along with module syntax, I'm proposing introducing privileges to\n> grant/revoke on modules and routines in modules. And why modules for this\n> purpose? Because its in the SQL spec so seems like a way to do it.\n>\n\nThis part of the standard is dead - there is no strong reason to implement\nit.\n\n\n>\n> I'm adding comments inline for the list of functionality you mentioned. I\n> look forward to discussing this more and figuring out how to make a useful\n> contribution to the community.\n>\n> On Wed, Feb 2, 2022 at 11:22 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>>\n>> čt 3. 2. 2022 v 5:46 odesílatel Julien Rouhaud <rjuju123@gmail.com>\n>> napsal:\n>>\n>>> Hi,\n>>>\n>>> On Thu, Feb 03, 2022 at 05:25:27AM +0100, Pavel Stehule wrote:\n>>> >\n>>> > čt 3. 2. 2022 v 3:28 odesílatel Swaha Miller <swaha.miller@gmail.com>\n>>> > napsal:\n>>> >\n>>> > > Hi,\n>>> > >\n>>> > > I'm following up from Jim's POC for adding MODULE to PostgreSQL. [1]\n>>> > >\n>>> > > My proposal implements modules as schema objects to be stored in a\n>>> new\n>>> > > system catalog pg_module with new syntax for CREATE [OR REPLACE]\n>>> MODULE,\n>>> > > ALTER MODULE, DROP MODULE and for GRANT and REVOKE for privileges on\n>>> > > modules and module routines. I am attempting to follow the SQL spec.\n>>> > > However, for right now, I'm proposing to support only routines as\n>>> module\n>>> > > contents, with local temporary tables and path specifications as\n>>> defined\n>>> > > in the SQL spec, to be supported in a future submission. We could\n>>> also\n>>> > > include support for variables depending on its status. [2]\n>>> >\n>>> > I dislike this feature. The modules are partially redundant to schemas\n>>> and\n>>> > to extensions in Postgres, and I am sure, so there is no reason to\n>>> > introduce this.\n>>> >\n>>> > What is the benefit against schemas and extensions?\n>>>\n>>> I agree with Pavel. 
It seems that it's mainly adding another\n>>> namespacing layer\n>>> between schemas and objects, and it's likely going to create a mess.\n>>> That's also going to be problematic if you want to add support for module\n>>> variables, as you won't be able to use e.g.\n>>> dbname.schemaname.modulename.variablename.fieldname.\n>>>\n>>>\n> I haven't yet added support for variables so will need to look into the\n> problems with this if we're going to do that.\n>\n>\n>> Also, my understanding was that the major interest of modules (at least\n>>> for the\n>>> routines part) was the ability to make some of them private to the\n>>> module, but\n>>> it doesn't look like it's doing that, so I also don't see any real\n>>> benefit\n>>> compared to schemas and extensions.\n>>>\n>>\n>>\n> Yes, that is indeed the goal/use-case with setting permissions with grant\n> and revoke. Right now, I have proposed create and reference as the kinds of\n> access that can be controlled on modules, and reference as the kind of\n> access that can be controlled on routines inside modules.\n>\n>\n>> The biggest problem is coexistence of Postgres's SEARCH_PATH object\n>> identification, and local and public scopes used in MODULEs or in Oracle's\n>> packages.\n>>\n>>\n> I am not extremely familiar with Oracle's packages, but do know of them.\n> I'm wondering if local and public scopes for MODULE is in the SQL spec? (I\n> will check for that...) My thinking was to implement functionality that\n> conforms to the SQL spec, not try to match Oracle's package which differs\n> from the spec in some ways.\n>\n>\n>> I can imagine MODULES as third level of database unit object grouping\n>> with following functionality\n>>\n>> 1. It should support all database objects like schemas\n>>\n>\n> Do you mean that schemas should be groupable under modules? 
My thinking\n> was to follow what the SQL spec says about what objects should be in\n> modules, and I started with routines as one of the objects that there are\n> use cases for. Such as controlling access permissions on routines at some\n> granularity that is not an entire schema and not individual\n> functions/procedures.\n>\n\nThe SQL spec says there can be just temporary tables and routines. It is\nuseless. Unfortunately SQL/PSM came too late and there has been no progress in\nthe last 20 years. It is a dead horse.\n\n\n>\n>> 2. all public objects should be accessed directly when outer schema is in\n>> SEARCH_PATH\n>>\n>\n> Yes, an object inside a module is in a schema and can be accessed with\n> schemaname.func() as well as modulename.func() as well as\n> schemaname.modulename.func(). I think you are saying it should be\n> accessible with func() without a qualifying schemaname or modulename if the\n> schemaname is in the search path, and that sounds reasonable too. Unless,\n> of course, func() was created in a module, in which case access permissions\n> for the module and module contents will determine whether func() should be\n> directly accessible. In my current proposal, a previously created func()\n> can't be added to a module created later. The purpose of creating routines\n> inside a module (either when the module is created or after the module is\n> created) would be with the intent of setting access permissions on those\n> routines differently than for the outer schema.\n>\n>\n>> 3. private objects cannot be accessed from other modules\n>>\n>\n> Yes, I hope that is going to be the case with setting permissions with\n> grant and revoke. 
Right now, I have proposed create and reference as the\n> kinds of access that can be controlled on modules, and reference as the\n> kind of access that can be controlled on routines inside modules.\n>\n\nPermissions alone are not a sufficient strategy - if I implement some private\nobjects in the module, and I push this module to the schema on the search\npath, the private objects need to be invisible. I don't want to allow\nshadowing of public objects by private objects.\n\n\n>\n>\n>> 4. modules should be movable between schemas, databases without a loss of\n>> functionality\n>>\n>\n> pg_dump will dump modules so that can provide ways of moving them between\n> databases. I hadn't envisioned moving modules between schemas, but can\n> think of ways that can be supported. Would the objects within the modules\n> also move implicitly to the new schema?\n>\n\nI thought more about extending the CREATE EXTENSION command to support\nmodules.\n\n\n>\n>\n>> 5. modules should support renaming without loss of functionality\n>>\n>\n> yes renaming of modules is supported in my proposal\n>\n\nBut if I call a module function from the same module, this should work\nafter renaming. That means there should be some mechanism to\nimplement a routine call without the necessity of using an absolute path.\n\n\n>\n>> 6. there should be redefined some rules of visibility, because there can\n>> be new identifier's collisions and ambiguities\n>>\n>\n> I'm not sure I understand this point. Can you please explain more?\n>\n\nI can have function fx in schema s, and then I can have module s in public\nschema with function fx. What will be called when I write s.fx() ?\n\n\n>\n>\n>>\n>> 7. there should be defined relation of modules's objects and schema's\n>> objects. 
Maybe an introduction of the default module can be a good idea.\n>>\n>\n> I was thinking of module as a unit of organization (with the goal of\n> controlling access to it) of objects that are still in some schema, and the\n> module itself as an object that is also in a schema.\n>\n\nI understand, but just this is not enough benefit for implementation, when\nPostgres supports schemas and extensions already. The benefit can be better\nencapsulation or better isolation than we have with schemas.\n\n\n>\n>\n>>\n>> I had the opportunity to see a man who modified routines in pgAdmin. It\n>> can be hell, but if we introduce a new concept (and it is an important\n>> concept), then there should be strong benefits - for example - possibility\n>> of strong encapsulation of code inside modules (or some units - the name is\n>> not important).\n>>\n>> The problem with pgAdmin maybe can be solved better by adding some\n>> annotations to database objects that allows more user friendly organization\n>> in the object tree in pgAdmin (and similar tools). Maybe it can be useful\n>> to have more tries (defined by locality, semantic, quality, ...).\n>>\n>> Regards\n>>\n>> Pavel\n>>\n>\n> Best regards,\n> Swaha\n>
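Pavel's name-collision question above (a function fx in schema s versus a module s, living in schema public, that also contains an fx) can be written out concretely. This is only a sketch: CREATE MODULE is the syntax being proposed in this thread, not something any released PostgreSQL accepts, and the way a routine gets placed inside the module is assumed.

```sql
-- Sketch only: CREATE MODULE and module-level routines are proposed
-- syntax under discussion, not implemented in released PostgreSQL.
CREATE SCHEMA s;
CREATE FUNCTION s.fx() RETURNS int LANGUAGE sql AS $$ SELECT 1 $$;

CREATE MODULE s;   -- a module named s, created in schema public
-- ...assume a second fx() is then created inside module s...

-- The ambiguity: is s.fx() the function in schema s,
-- or the function in module s (that is, public.s's fx)?
SELECT s.fx();
```

One possible resolution rule, mentioned later in the thread, is to always give the schema interpretation precedence, so s.fx() here would call the schema-qualified function.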
", "msg_date": "Thu, 3 Feb 2022 21:35:27 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On 2022-Feb-03, Pavel Stehule wrote:\n\n> The biggest problem is coexistence of Postgres's SEARCH_PATH object\n> identification, and local and public scopes used in MODULEs or in Oracle's\n> packages.\n> \n> I can imagine MODULES as third level of database unit object grouping with\n> following functionality\n> \n> 1. It should support all database objects like schemas\n\nI proposed a way for modules to coexist with schemas that got no reply,\nhttps://postgr.es/m/202106021908.ddmebx7qfdld@alvherre.pgsql\n\nI still think that that idea is valuable; it would let us create\n\"private\" routines, for example, which are good for encapsulation.\nBut the way it interacts with schemas means we don't end up with a total\nmess in the namespace resolution rules. I argued that modules would\nonly have functions, and maybe a few other useful object types, but not\n*all* object types, because we don't need all object types to become\nprivate. 
For example, I don't think I would like to have data types or\ncasts to be private, so they can only be in a schema and they cannot be\nin a module.\n\nOf course, that idea of modules would also ease porting large DB-based\napplications from other database systems.\n\nWhat do others think?\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 3 Feb 2022 22:42:32 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "Hi,\n\nOn Thu, Feb 03, 2022 at 10:42:32PM -0300, Alvaro Herrera wrote:\n> On 2022-Feb-03, Pavel Stehule wrote:\n> \n> > The biggest problem is coexistence of Postgres's SEARCH_PATH object\n> > identification, and local and public scopes used in MODULEs or in Oracle's\n> > packages.\n> > \n> > I can imagine MODULES as third level of database unit object grouping with\n> > following functionality\n> > \n> > 1. It should support all database objects like schemas\n> \n> I proposed a way for modules to coexist with schemas that got no reply,\n> https://postgr.es/m/202106021908.ddmebx7qfdld@alvherre.pgsql\n\nAh, sorry I missed this one.\n\n> I still think that that idea is valuable; it would let us create\n> \"private\" routines, for example, which are good for encapsulation.\n> But the way it interacts with schemas means we don't end up with a total\n> mess in the namespace resolution rules.\n\n>I argued that modules would\n> only have functions, and maybe a few other useful object types, but not\n> *all* object types, because we don't need all object types to become\n> private. 
For example, I don't think I would like to have data types or\n> casts to be private, so they can only be in a schema and they cannot be\n> in a module.\n> \n> Of course, that idea of modules would also ease porting large DB-based\n> applications from other database systems.\n> \n> What do others think?\n\nThis approach seems way better as it indeed fixes the qualification issues with\nthe patch proposed in this thread.\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:50:10 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Wed, Feb 2, 2022 at 06:28:30PM -0800, Swaha Miller wrote:\n> Hi,\n> \n> I'm following up from Jim's POC for adding MODULE to PostgreSQL. [1]\n> \n> My proposal implements modules as schema objects to be stored in a new\n> system catalog pg_module with new syntax for CREATE [OR REPLACE] MODULE,\n\nYou might want to consider the steps that are most successful at getting\nPostgres patches accepted:\n\n\thttps://wiki.postgresql.org/wiki/Todo\n\tDesirability -> Design -> Implement -> Test -> Review -> Commit\n\nIn this case, you have jumped right to Implement. 
Asking about\nDesirability first can avoid a lot of effort.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Fri, 4 Feb 2022 13:46:01 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Thu, Feb 3, 2022 at 5:42 PM Alvaro Herrera <alvherre@alvh.no-ip.org>\nwrote:\n\n> On 2022-Feb-03, Pavel Stehule wrote:\n>\n> > The biggest problem is coexistence of Postgres's SEARCH_PATH object\n> > identification, and local and public scopes used in MODULEs or in\n> Oracle's\n> > packages.\n> >\n> > I can imagine MODULES as third level of database unit object grouping\n> with\n> > following functionality\n> >\n> > 1. It should support all database objects like schemas\n>\n> I proposed a way for modules to coexist with schemas that got no reply,\n> https://postgr.es/m/202106021908.ddmebx7qfdld@alvherre.pgsql\n>\n\nYes, I arrived a little after that thread, and used that as my starting\npoint.\n\nThe POC patch Jim Mlodgenski had on that thread was similar to your\nproposed\nway - modules were rows in pg_namespace, with the addition of a new column\nin pg_namespace for the nspkind (module or schema.)\n\nJim had asked about two options -- the above approach and an alternative\none\nof having a pg_module system catalog and got some input\nhttps://www.postgresql.org/message-id/2897116.1622642280%40sss.pgh.pa.us\n\nPicking up from there, I am exploring the alternative approach. And what\nqualified\nnames would look like and how they get interpreted unambiguously, when\nschemas and modules co-exist. 
(Also, being new to PostgreSQL, it has been a\ngreat learning exercise for me on some of the internals of PostgreSQL.)\n\nWith modules being their own type of object stored in a pg_module system\ncatalog, deconstructing a qualified path could always give precedence to\nschema over module. So when there is function f() in schema s and another\nfunction f() in module s in schema public, then s.f() would invoke the\nformer.\n\nWhat are some other directions we might want to take this patch?\n\nI still think that that idea is valuable; it would let us create\n> \"private\" routines, for example, which are good for encapsulation.\n> But the way it interacts with schemas means we don't end up with a total\n> mess in the namespace resolution rules. I argued that modules would\n> only have functions, and maybe a few other useful object types, but not\n> *all* object types, because we don't need all object types to become\n> private. For example, I don't think I would like to have data types or\n> casts to be private, so they can only be in a schema and they cannot be\n> in a module.\n>\n> Of course, that idea of modules would also ease porting large DB-based\n> applications from other database systems.\n>\n\nIndeed. Looking at commercial databases Oracle and Microsoft SQLServer,\nthey both have implemented module-like structures.\n\nSwaha Miller
", "msg_date": "Fri, 4 Feb 2022 12:52:08 -0800", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Fri, Feb 4, 2022 at 10:46 AM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Wed, Feb 2, 2022 at 06:28:30PM -0800, Swaha Miller wrote:\n> > Hi,\n> >\n> > I'm following up from Jim's POC for adding MODULE to PostgreSQL. [1]\n> >\n> > My proposal implements modules as schema objects to be stored in a new\n> > system catalog pg_module with new syntax for CREATE [OR REPLACE] MODULE,\n>\n> You might want to consider the steps that are most successful at getting\n> Postgres patches accepted:\n>\n> https://wiki.postgresql.org/wiki/Todo\n> Desirability -> Design -> Implement -> Test -> Review -> Commit\n>\n> In this case, you have jumped right to Implement. Asking about\n> Desirability first can avoid a lot of effort.\n>\n\nThanks Bruce, that's really helpful. I was building on the discussion in\nJim's\noriginal thread, which is why I went ahead with another POC implementation,\nbut I do consider this implementation as part of the desirability/design\naspect\nand am hoping to get input from the community to shape this proposal/patch.\n\nSwaha Miller
", "msg_date": "Fri, 4 Feb 2022 13:04:27 -0800", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On 2022-Feb-04, Swaha Miller wrote:\n\n> The POC patch Jim Mlodgenski had on that thread was similar to your\n> proposed way - modules were rows in pg_namespace, with the addition of\n> a new column in pg_namespace for the nspkind (module or schema.)\n\nI don't agree that what he proposed was similar to my proposal.  The\nmain problem I saw in his proposal is that he was saying that modules\nwould be *within* schemas, which is where I think the whole thing\nderailed completely.\n\nHe said:\n\n> [ This patch ] [...] allows for 3-part (or 4 with the database name)\n> naming of objects within the module. \n\nHe then showed the following example:\n\n> CREATE SCHEMA foo;\n> CREATE MODULE foo.bar\n> CREATE FUNCTION hello() [...]\n> SELECT foo.bar.hello();\n\nNotice the three-part name there.  That's a disaster, because the name\nresolution rules become very complicated or ambiguous. 
What I describe\navoids that disaster, by forcing there to be two-part names only: a\nmodule lives on its own, not in a schema, so a function name always has\nat most two parts (never three), and the first part can always be\nresolved down to a pg_namespace row of some kind.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\"La conclusión que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusión de ellos\" (Tanenbaum)\n\n\n", "msg_date": "Fri, 4 Feb 2022 18:48:11 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "Alvaro Herrera <alvherre@alvh.no-ip.org> writes:\n> He said:\n\n>> [ This patch ] [...] allows for 3-part (or 4 with the database name)\n>> naming of objects within the module. \n\n> He then showed the following example:\n\n>> CREATE SCHEMA foo;\n>> CREATE MODULE foo.bar\n>> CREATE FUNCTION hello() [...]\n>> SELECT foo.bar.hello();\n\n> Notice the three-part name there. That's a disaster, because the name\n> resolution rules become very complicated or ambiguous.\n\nRight. We've looked into that before --- when I made pg_namespace,\nI called it that because I thought we might be able to support\nnested namespaces --- but it'd really create a mess. In particular,\nthe SQL standard says what a three-part name means, and this ain't it.\n\nIf we invent modules I think they need to work more like extensions\nnaming-wise, ie they group objects but have no effect for naming.\nAlternatively, you could insist that a module *is* a schema for naming\npurposes, with some extra properties but acting exactly like a schema\nfor naming. But I don't see what that buys you that you can't get\nfrom an extension that owns a schema that contains all its objects.\n\nOn the whole I'm kind of allergic to inventing an entire new concept\nthat has as much overlap with extensions as modules seem to. 
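For comparison, the grouping plus access control that Tom refers to is available with plain schemas and privileges in standard PostgreSQL today; the schema, function, and role names below are purely illustrative.

```sql
-- Standard PostgreSQL: a schema as the grouping unit, privileges as
-- the access control on the group.
CREATE SCHEMA payroll;
CREATE FUNCTION payroll.net_pay(gross numeric) RETURNS numeric
    LANGUAGE sql AS $$ SELECT gross * 0.7 $$;

-- Hide the whole group, then expose it to a single role.
REVOKE ALL ON SCHEMA payroll FROM PUBLIC;
REVOKE EXECUTE ON ALL FUNCTIONS IN SCHEMA payroll FROM PUBLIC;
GRANT USAGE ON SCHEMA payroll TO hr_app;
GRANT EXECUTE ON FUNCTION payroll.net_pay(numeric) TO hr_app;
```

What schemas alone do not give is true privacy from name resolution, only privilege-based denial, which is the encapsulation gap the module proposals in this thread are aiming at.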
I'd\nrather try to understand what functional requirements we're missing\nand see if we can add them to extensions.  Yeah, we won't end up being\nbug-compatible with Oracle's feature, but that's not a project goal\nanyway --- and where we have tried to emulate Oracle closely, it's\noften not worked out well (poster child: to_date).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Feb 2022 17:12:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Fri, Feb 4, 2022 at 5:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>\n> On the whole I'm kind of allergic to inventing an entire new concept\n> that has as much overlap with extensions as modules seem to. I'd\n> rather try to understand what functional requirements we're missing\n> and see if we can add them to extensions. Yeah, we won't end up being\n> bug-compatible with Oracle's feature, but that's not a project goal\n> anyway --- and where we have tried to emulate Oracle closely, it's\n> often not worked out well (poster child: to_date).\n>\n>\nDevelopers need a way to group related objects in some fashion so\nthat they can be more easily reasoned about. Modules are just the\nway to do this in the spec, but if we want to leverage extensions,\nthat will work too. Many users who need this only have access through\na database connection. They wouldn't have access to the file system\nto add a control file nor a script to add the objects. Enhancing\nCREATE EXTENSION to be able to create some sort of empty extension\nand then having the ability to add and remove objects from that\nextension may be the minimum amount of functionality we would need\nto give users the ability to group their objects.
", "msg_date": "Fri, 4 Feb 2022 18:25:31 -0500", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Fri, Feb 04, 2022 at 05:12:43PM -0500, Tom Lane wrote:\n> If we invent modules I think they need to work more like extensions\n> naming-wise, ie they group objects but have no effect for naming.\n> Alternatively, you could insist that a module *is* a schema for naming\n> purposes, with some extra properties but acting exactly like a schema\n> for naming.  But I don't see what that buys you that you can't get\n> from an extension that owns a schema that contains all its objects.\n> \n> On the whole I'm kind of allergic to inventing an entire new concept\n> that has as much overlap with extensions as modules seem to.  I'd\n> rather try to understand what functional requirements we're missing\n> and see if we can add them to extensions. 
Yeah, we won't end up being\n> bug-compatible with Oracle's feature, but that's not a project goal\n> anyway --- and where we have tried to emulate Oracle closely, it's\n> often not worked out well (poster child: to_date).\n\nIf I'm understanding correctly, you are suggesting that CREATE MODULE would\nbe more like creating an extension without a control file, installation\nscript, etc.  Objects would be added as members with something like ALTER\nMODULE ADD, and members could share properties such as access control.  And\nthis might be possible to do by enhancing CREATE EXTENSION instead of\ncreating a new catalog, dependency type, etc.\n\nI think this could be a nice way to sidestep the naming resolution problems\ndiscussed upthread while still allowing folks to group objects together in\nsome meaningful way.  Also, while it might be nice to have separate CREATE\nEXTENSION and CREATE MODULE commands, perhaps they would use roughly the\nsame code paths behind the scenes.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:33:48 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Fri, Feb 04, 2022 at 05:12:43PM -0500, Tom Lane wrote:\n>> On the whole I'm kind of allergic to inventing an entire new concept\n>> that has as much overlap with extensions as modules seem to.  I'd\n>> rather try to understand what functional requirements we're missing\n>> and see if we can add them to extensions. 
Yeah, we won't end up being\n>> bug-compatible with Oracle's feature, but that's not a project goal\n>> anyway --- and where we have tried to emulate Oracle closely, it's\n>> often not worked out well (poster child: to_date).\n\n> If I'm understanding correctly, you are suggesting that CREATE MODULE would\n> be more like creating an extension without a control file, installation\n> script, etc. Objects would be added as members with something like ALTER\n> MODULE ADD, and members could share properties such as access control. And\n> this might be possible to do by enhancing CREATE EXTENSION instead of\n> creating a new catalog, dependency type, etc.\n\n> I think this could be a nice way to sidestep the naming resolution problems\n> discussed upthread while still allowing folks to group objects together in\n> some meaningful way. Also, while it might be nice to have separate CREATE\n> EXTENSION and CREATE MODULE commands, perhaps they would use roughly the\n> same code paths behind the scenes.\n\nHm. If the functional requirement is \"group objects without needing\nany out-in-the-filesystem infrastructure\", then I could see defining\na module as being exactly like an extension except there's no such\ninfrastructure --- and hence no concept of versions, plus pg_dump\nneeds to act differently. That's probably enough semantic difference\nto justify using a separate word, even if we can share a lot of\ncode infrastructure.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Feb 2022 18:51:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "\nOn 04.02.22 23:12, Tom Lane wrote:\n> Right. We've looked into that before --- when I made pg_namespace,\n> I called it that because I thought we might be able to support\n> nested namespaces --- but it'd really create a mess.
In particular,\n> the SQL standard says what a three-part name means, and this ain't it.\n\nModules are part of the SQL standard, so there is surely some \nname-resolution system specified there as well.\n\n\n", "msg_date": "Mon, 7 Feb 2022 16:54:29 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Fri, Feb 4, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Hm. If the functional requirement is \"group objects without needing\n> any out-in-the-filesystem infrastructure\", then I could see defining\n> a module as being exactly like an extension except there's no such\n> infrastructure --- and hence no concept of versions, plus pg_dump\n> needs to act differently. That's probably enough semantic difference\n> to justify using a separate word, even if we can share a lot of\n> code infrastructure.\n>\n\nThen as a first cut for modules, could we add CREATE MODULE\nsyntax which adds an entry to pg_extension like CREATE EXTENSION\ndoes? And also add a new column to pg_extension to distinguish\nmodules from extensions.\n\nThe three-part path name resolution for functions would remain the\nsame, nothing would need to change there because of modules.\n\nWould that be an acceptable direction to go?\n\nSwaha", "msg_date": "Thu, 10 Feb 2022 08:53:15 -0800", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Thu, Feb 10, 2022 at 08:53:15AM -0800, Swaha Miller wrote:\n> On Fri, Feb 4, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n>     Hm. If the functional requirement is \"group objects without needing\n>     any out-in-the-filesystem infrastructure\", then I could see defining\n>     a module as being exactly like an extension except there's no such\n>     infrastructure --- and hence no concept of versions, plus pg_dump\n>     needs to act differently.  That's probably enough semantic difference\n>     to justify using a separate word, even if we can share a lot of\n>     code infrastructure.\n> \n> Then as a first cut for modules, could we add CREATE MODULE\n> syntax which adds an entry to pg_extension like CREATE EXTENSION\n> does? And also add a new column to pg_extension to distinguish \n> modules from extensions.
\n> \n> The three-part path name resolution for functions would remain the \n> same, nothing would need to change there because of modules.\n> \n> Would that be an acceptable direction to go?\n\nWell, that would allow us to have CREATE EXTENSION syntax, but what\nwould it do that CREATE SCHEMA does not?\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 16:06:08 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On 2022-Feb-04, Tom Lane wrote:\n\n> If we invent modules I think they need to work more like extensions\n> naming-wise, ie they group objects but have no effect for naming.\n\nI think modules are an orthogonal concept to extensions, and trying to\nmix both is messy.\n\nThe way I see modules working is as a \"logical\" grouping of objects --\nthey provide encapsulated units of functionality. A module has private\nfunctions, which cannot be called except from other functions in the\nsame module. You can abstract them out of the database design, leaving\nyou with only the exposed functions, the public API.\n\nAn extension is a way to distribute or package objects. An extension\ncan contain a module, and of course you should be able to use an\nextension to distribute a module, or even several modules. 
In fact, I\nthink it should be possible to have several extensions contribute\ndifferent objects to the same module.\n\nBut things like name resolution rules (search path) are not affected by\nhow the objects are distributed, whereas the search path is critical in\nhow you think about the objects in a module.\n\n\nIf modules are just going to be extensions, I see no point.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Thu, 10 Feb 2022 18:16:56 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Thu, Feb 10, 2022 at 4:17 PM Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> On 2022-Feb-04, Tom Lane wrote:\n> > If we invent modules I think they need to work more like extensions\n> > naming-wise, ie they group objects but have no effect for naming.\n>\n> I think modules are an orthogonal concept to extensions, and trying to\n> mix both is messy.\n>\n> The way I see modules working is as a \"logical\" grouping of objects --\n> they provide encapsulated units of functionality. A module has private\n> functions, which cannot be called except from other functions in the\n> same module. You can abstract them out of the database design, leaving\n> you with only the exposed functions, the public API.\n>\n> An extension is a way to distribute or package objects. An extension\n> can contain a module, and of course you should be able to use an\n> extension to distribute a module, or even several modules. 
In fact, I\n> think it should be possible to have several extensions contribute\n> different objects to the same module.\n>\n> But things like name resolution rules (search path) are not affected by\n> how the objects are distributed, whereas the search path is critical in\n> how you think about the objects in a module.\n>\n> If modules are just going to be extensions, I see no point.\n\n+1.\n\nI think that extensions, as we have them in PostgreSQL today, are\nbasically feature-complete. In my experience, they do all the things\nthat we need them to do, and pretty well. I think Tom was correct when\nhe predicted many years ago that getting the extension feature into\nPostgreSQL would be remembered as one the biggest improvements of that\nera. (I do not recall his exact words.)\n\nBut the same is clearly not true of schemas. Schemas are missing\nfeatures that people expect to have, private objects being one of\nthem, and variables another, and I think there are other things\nmissing, too. Also, search_path is an absolutely wretched design and\nI'd propose ripping it out if I had any sensible idea what would\nreplace it.\n\nIMHO, if we're going to do anything at all in this area, it ought to\nbe targeting the rather-large weaknesses of the schema system, rather\nthan anything about the extension system.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:46:45 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Thu, Feb 10, 2022 at 1:06 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Thu, Feb 10, 2022 at 08:53:15AM -0800, Swaha Miller wrote:\n> > On Fri, Feb 4, 2022 at 3:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Hm. 
If the functional requirement is \"group objects without needing\n> > any out-in-the-filesystem infrastructure\", then I could see defining\n> > a module as being exactly like an extension except there's no such\n> > infrastructure --- and hence no concept of versions, plus pg_dump\n> > needs to act differently. That's probably enough semantic difference\n> > to justify using a separate word, even if we can share a lot of\n> > code infrastructure.\n> >\n> > Then as a first cut for modules, could we add CREATE MODULE\n> > syntax which adds an entry to pg_extension like CREATE EXTENSION\n> > does? And also add a new column to pg_extension to distinguish\n> > modules from extensions.\n> >\n> > The three-part path name resolution for functions would remain the\n> > same, nothing would need to change there because of modules.\n> >\n> > Would that be an acceptable direction to go?\n>\n> Well, that would allow us to have CREATE EXTENSION syntax, but what\n> would it do that CREATE SCHEMA does not?\n>\n\nA prominent use case for grouping functions into modules would\nbe access control on the group as a whole, with one command\nfor an entire module instead of many individual functions. One reason\nfor such a grouping is to set ACLs. Users migrating their database from\ncommercial databases to PostgreSQL wanting to set ACLs on\nthousands or hundreds of thousands of functions would benefit from\na grouping concept like modules.\n\nIf users use schemas to group functions, then they have to break up\nfunctions that may have all been in one namespace into a bunch of new\nschemas. So we'd like to have a different grouping mechanism that can\ngroup functions within a schema. So we're looking at a new construct like\nmodules that can serve that purpose without introducing sub-schemas.\n\nAlso a module would be a grouping of primarily functions. Schemas can\nhave many other object types. 
Setting ACLs on groups of functions in a\nschema, which is what users could do with a module, would require\nbreaking up a schema into other schemas based on ACLs. But schemas\nallow setting ACLs on other objects in it, not only functions. If we\ncreate groupings based on functions, what happens to the other types\nof objects, what schema do they get grouped into? If modules are\nsupposed to solve setting ACLs on groups of functions, does using\nschemas conflate setting ACLs for those other types of objects with\nsetting ACLs for functions?\n\nModules could also, in the future, serve as a way of allowing for private\nfunctions within the grouping. Doing this in schemas would require\nthose kinds of changes in the schema construct.\n\nSwaha", "msg_date": "Mon, 14 Feb 2022 15:23:07 -0800", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Mon, Feb 14, 2022 at 03:23:07PM -0800, Swaha Miller wrote:\n> A prominent use case for grouping functions into modules would\n> be access control on the group as a whole, with one command\n> for an entire module instead of many individual functions. One reason\n> for such a grouping is to set ACLs. Users migrating their database from\n> commercial databases to PostgreSQL wanting to set ACLs on\n> thousands or hundreds of thousands of functions would benefit from\n> a grouping concept like modules.\n> \n> If users use schemas to group functions, then they have to break up\n> functions that may have all been in one namespace into a bunch of new\n> schemas. So we'd like to have a different grouping mechanism that can\n> group functions within a schema. So we're looking at a new construct like\n> modules that can serve that purpose without introducing sub-schemas.\n\nI was working on a talk about microservices today and decided to create\ntwo schemas --- a public one that has USAGE permission for other services\nwith views and SECURITY DEFINER functions that referenced a private\nschema that can only be accessed by the owning service.
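A minimal sketch of the two-schema pattern just described (one public interface schema, one private schema reachable only through views and SECURITY DEFINER functions). All names here (svc_public, svc_private, the roles, get_balance) are illustrative and are not taken from the example attached to the original message:

```sql
-- Illustrative only: roles, schemas, and objects are invented for this
-- sketch and are not from the attachment referenced in the thread.
CREATE ROLE svc_owner;
CREATE ROLE other_service;

-- Private schema: no USAGE granted to other services.
CREATE SCHEMA svc_private AUTHORIZATION svc_owner;
CREATE TABLE svc_private.accounts (id int PRIMARY KEY, balance numeric);

-- Public schema: the service's exposed interface.
CREATE SCHEMA svc_public AUTHORIZATION svc_owner;
GRANT USAGE ON SCHEMA svc_public TO other_service;

-- A view over the private data; access is checked with the view
-- owner's privileges, so callers need no rights on svc_private.
CREATE VIEW svc_public.account_ids AS
    SELECT id FROM svc_private.accounts;
GRANT SELECT ON svc_public.account_ids TO other_service;

-- A SECURITY DEFINER function runs with its owner's privileges;
-- pinning search_path guards against hijacking via the caller's path.
CREATE FUNCTION svc_public.get_balance(acct int) RETURNS numeric
    LANGUAGE sql SECURITY DEFINER SET search_path = svc_private
    AS 'SELECT balance FROM accounts WHERE id = acct';
GRANT EXECUTE ON FUNCTION svc_public.get_balance(int) TO other_service;
```

With this in place, other_service can read svc_public.account_ids and call svc_public.get_balance(), but any direct reference to svc_private fails with a permission error.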
Is that a usage\npattern that requires modules since it already works with schemas and\njust uses schema permissions to designate public/private schema\ninterfaces.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Mon, 14 Feb 2022 19:42:21 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Mon, Feb 14, 2022 at 07:42:21PM -0500, Bruce Momjian wrote:\n> On Mon, Feb 14, 2022 at 03:23:07PM -0800, Swaha Miller wrote:\n> > A prominent use case for grouping functions into modules would\n> > be access control on the group as a whole, with one command\n> > for an entire module instead of many individual functions. One reason\n> > for such a grouping is to set ACLs. Users migrating their database from\n> > commercial databases to PostgreSQL wanting to set ACLs on\n> > thousands or hundreds of thousands of functions would benefit from\n> > a grouping concept like modules.\n> > \n> > If users use schemas to group functions, then they have to break up\n> > functions that may have all been in one namespace into a bunch of new\n> > schemas. So we'd like to have a different grouping mechanism that can\n> > group functions within a schema. So we're looking at a new construct like\n> > modules that can serve that purpose without introducing sub-schemas.\n> \n> I was working on a talk about microservices today and decided to create\n> two schemas --- a public one that has USAGE permission for other services\n> with views and SECURITY DEFINER functions that referenced a private\n> schema that can only be accessed by the owning service. 
Is that a usage\n> pattern that requires modules since it already works with schemas and\n> just uses schema permissions to designate public/private schema\n> interfaces.\n\nAttached is an example.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.", "msg_date": "Mon, 14 Feb 2022 19:58:22 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Mon, Feb 14, 2022 at 4:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n\n> On Mon, Feb 14, 2022 at 07:42:21PM -0500, Bruce Momjian wrote:\n> > On Mon, Feb 14, 2022 at 03:23:07PM -0800, Swaha Miller wrote:\n> > > A prominent use case for grouping functions into modules would\n> > > be access control on the group as a whole, with one command\n> > > for an entire module instead of many individual functions. One reason\n> > > for such a grouping is to set ACLs. Users migrating their database from\n> > > commercial databases to PostgreSQL wanting to set ACLs on\n> > > thousands or hundreds of thousands of functions would benefit from\n> > > a grouping concept like modules.\n> > >\n> > > If users use schemas to group functions, then they have to break up\n> > > functions that may have all been in one namespace into a bunch of new\n> > > schemas. So we'd like to have a different grouping mechanism that can\n> > > group functions within a schema. So we're looking at a new construct\n> like\n> > > modules that can serve that purpose without introducing sub-schemas.\n> >\n> > I was working on a talk about microservices today and decided to create\n> > two schemas --- a public one that has USAGE permission for other services\n> > with views and SECURITY DEFINER functions that referenced a private\n> > schema that can only be accessed by the owning service. 
Is that a usage\n> > pattern that requires modules since it already works with schemas and\n> > just uses schema permissions to designate public/private schema\n> > interfaces.\n>\n\nI think the reason for modules would be to make it a better experience for\nPostgreSQL users to do what they need. Yours is a great example\nfor a talk. A practical scenario would require many schemas following this\npattern of X and X_private. In a more expressive programming language,\nthe user would express modular design with classes X1, X2...Xn that\nhave both public interface calls and private functions. The modularity\nwould\nreflect what they are trying to do and not be restricted by how they are\nallowed to do so. In this context, I am going to link to a scenario Jim had\ndescribed before\n\nhttps://www.postgresql.org/message-id/CAB_5SRebSCjO12%3DnLsaLCBw2vnkiNH7jcNchirPc0yQ2KmiknQ%40mail.gmail.com\n\nIn your example, the only way a developer would know that X and\nX_private are related is that they are named consistently. With another\ngrouping construct like Modules, they would both have been in the same\nschema, but now they are not. From a long term maintainability of the code\nperspective, that becomes an issue because people make simple mistakes.\nWhat if in the future a new developer added a 3rd grouping, x_utils. Is it\nrelated to the other groupings? This has a lower case x instead of the X.\nHow is one to know? What if in the future a new developer added a 3rd\ngrouping X_utils but without having paid attention to the already existing\nX\nand X_private? Now naming becomes the means of expressing\nclassification and modular design. It is a relatively weak construct\ncompared to a language construct.\n\nYes, anything a user may want to do with modules is likely possible to\ndo already with schemas. But just because it is possible doesn't mean\nit is not tedious and awkward because of the mechanisms available to do\nthem now. 
And I would advocate for more expressive constructs to enable\nusers of PostgreSQL to focus and reason about more of the \"what\" than\nthe \"how\" of the systems they are trying to build or migrate.\n\nSwaha", "msg_date": "Tue, 15 Feb 2022 12:29:54 -0800", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "Hi\n\n\n> Yes, anything a user may want to do with modules is likely possible to\n> do already with schemas.
But just because it is possible doesn't mean\n> it is not tedious and awkward because of the mechanisms available to do\n> them now. And I would advocate for more expressive constructs to enable\n> users of PostgreSQL to focus and reason about more of the \"what\" than\n> the \"how\" of the systems they are trying to build or migrate.\n>\n\nNobody will talk against better modularization. But it is not coming with\nyour proposal.\n\nThe main issue in this case is fact, so plpgsql is fully integrated to\nPostgres (on second hand this integration is a big performance win). It is\npretty different from PL/SQL. In Oracle you have a package, or any other\nsimilar features, because PL/SQL is an \"independent\" environment to the\ndatabase engine. You cannot do the same with PL/pgSQL. And if you try to\nimplement some enhancement of object hierarchy for PL/pgSQL, then you have\nto do it in the PostgreSQL core engine first. I'm 100% for enhancing stored\nprocedures about full modularization, but this feature cannot be\nimplemented step by step because you can break compatibility in any step.\nWe need a robust solution. The solution, that helps with something, but it\nis not robust, it is not progress.\n\nRegards\n\nPavel\n\n\n\n\n\n> Swaha\n>", "msg_date": "Wed, 16 Feb 2022 06:20:15 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Wed, Feb 16, 2022 at 12:20 AM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n> The main issue in this case is fact, so plpgsql is fully integrated to\n> Postgres (on second hand this integration is a big performance win). It is\n> pretty different from PL/SQL. In Oracle you have a package, or any other\n> similar features, because PL/SQL is an \"independent\" environment to the\n> database engine. You cannot do the same with PL/pgSQL. And if you try to\n> implement some enhancement of object hierarchy for PL/pgSQL, then you have\n> to do it in the PostgreSQL core engine first. I'm 100% for enhancing stored\n> procedures about full modularization, but this feature cannot be\n> implemented step by step because you can break compatibility in any step.\n> We need a robust solution. The solution, that helps with something, but it\n> is not robust, it is not progress.\n>\n>\nI'm not sure I understand your feedback. The proposed design for modules\nis implemented in the engine and is orthogonal to PL/pgSQL.
A module can\ncontain a mix of PL/pgSQL, PL/perl and PL/TCL functions if one wants\nto do something like that.\n\nDo you have any thoughts on how some sort of modularization or grouping\nshould be implemented? The Module route was the first thought on this\nbecause it's the way the spec tells us we should solve this problem. We\ncan invent something new if we have a better way of solving this.", "msg_date": "Wed, 16 Feb 2022 08:17:13 -0500", "msg_from": "Jim Mlodgenski <jimmy76@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "st 16. 2.
2022 v 14:17 odesílatel Jim Mlodgenski <jimmy76@gmail.com> napsal:\n\n>\n>\n> On Wed, Feb 16, 2022 at 12:20 AM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n>\n>>\n>> The main issue in this case is fact, so plpgsql is fully integrated to\n>> Postgres (on second hand this integration is a big performance win). It is\n>> pretty different from PL/SQL. In Oracle you have a package, or any other\n>> similar features, because PL/SQL is an \"independent\" environment to the\n>> database engine. You cannot do the same with PL/pgSQL. And if you try to\n>> implement some enhancement of object hierarchy for PL/pgSQL, then you have\n>> to do it in the PostgreSQL core engine first. I'm 100% for enhancing stored\n>> procedures about full modularization, but this feature cannot be\n>> implemented step by step because you can break compatibility in any step.\n>> We need a robust solution. The solution, that helps with something, but it\n>> is not robust, it is not progress.\n>>\n>>\n> I'm not sure I understand your feedback. The proposed design for modules\n> is implemented in the engine and is orthogonal to PL/pgSQL. A module can\n> contain a mix of PL/pgSQL, PL/perl and PL/TCL functions if one wants\n> to do something like that.\n>\n> Do you have any thoughts on how some sort of modularization or grouping\n> should be implemented? The Module route was the first thought on this\n> because it's the way the spec tells us we should solve this problem. We\n> can invent something new if we have a better way of solving this.\n>\n>\nThe proposal doesn't help with isolation PL/pgSQL code from outside. This\nis just secondary collecting, and I agree with other people that say so\nmaybe extending extensions can be good enough for this case.\n\nRegards\n\nPavel", "msg_date": "Wed, 16 Feb 2022 16:04:35 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Tue, Feb 15, 2022 at 12:29:54PM -0800, Swaha Miller wrote:\n> On Mon, Feb 14, 2022 at 4:58 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I was working on a talk about microservices today and decided to create\n> > two schemas --- a public one that has USAGE permission for other services\n> > with views and SECURITY DEFINER functions that referenced a private\n> > schema that can only be accessed by the owning service.  Is that a usage\n> > pattern that requires modules since it already works with schemas and\n> > just uses schema permissions to designate public/private schema\n> > interfaces.\n> \n> \n> I think the reason for modules would be to make it a better experience for\n> PostgreSQL users to do what they need. Yours is a great example\n\n...\n\n> Yes, anything a user may want to do with modules is likely possible to\n> do already with schemas. But just because it is possible doesn't mean\n> it is not tedious and awkward because of the mechanisms available to do \n> them now. 
And I would advocate for more expressive constructs to enable\n> users of PostgreSQL to focus and reason about more of the \"what\" than \n> the \"how\" of the systems they are trying to build or migrate.\n\nWell, I think there are two downsides of having modules that do\nsome things like schemas, and some not:\n\n* You now have two ways of controlling namespaces and permissions\n* It is unclear how the two methods would interact\n\nSome people might find that acceptable, but historically more people have\nrejected that, partly due to user API complexity and partly due to\nserver code complexity.\n\nGiven what you can do with Postgres already, what is trying to be\naccomplished? Having public and private objects in the same schema? \nHaving public objects in a schema as visible to some users and private\nobjects invisible to them? The visibility control is currently not\npossible without using two schemas, but access control is possible.\n\nNow, if we could throw away are current schema/permission controls and\njust use one based on modules, that might be more acceptable, but it\nwould cause too much breakage and be non-standard, and also be less\nflexible than what we already support.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n If only the physical world exists, free will is an illusion.\n\n\n\n", "msg_date": "Thu, 17 Feb 2022 14:02:51 -0500", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "It seems unlikely that this will be committed for v15. 
Swaha, should the\ncommitfest entry be adjusted to v16 and moved to the next commitfest?\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Mar 2022 16:16:43 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Thu, Mar 17, 2022 at 4:16 PM Nathan Bossart <nathandbossart@gmail.com>\nwrote:\n\n> It seems unlikely that this will be committed for v15. Swaha, should the\n> commitfest entry be adjusted to v16 and moved to the next commitfest?\n>\n>\nYes please, thank you Nathan", "msg_date": "Thu, 17 Mar 2022 16:26:31 -0700", "msg_from": "Swaha Miller <swaha.miller@gmail.com>", "msg_from_op": true, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Thu, Mar 17, 2022 at 04:26:31PM -0700, Swaha Miller wrote:\n> On Thu, Mar 17, 2022 at 4:16 PM Nathan Bossart <nathandbossart@gmail.com>\n> wrote:\n> \n>> It seems unlikely that this will be committed for v15. Swaha, should the\n>> commitfest entry be adjusted to v16 and moved to the next commitfest?\n>>\n>>\n> Yes please, thank you Nathan\n\nDone!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 17 Mar 2022 16:30:43 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" }, { "msg_contents": "On Thu, Mar 17, 2022 at 04:30:43PM -0700, Nathan Bossart wrote:\n> On Thu, Mar 17, 2022 at 04:26:31PM -0700, Swaha Miller wrote:\n>> On Thu, Mar 17, 2022 at 4:16 PM Nathan Bossart <nathandbossart@gmail.com>\n>> wrote:\n>>> It seems unlikely that this will be committed for v15. 
Swaha, should the\n>>> commitfest entry be adjusted to v16 and moved to the next commitfest?\n>>>\n>>>\n>> Yes please, thank you Nathan\n> \n> Done!\n\nI spoke with Swaha off-list, and we agreed that this commitfest entry can\nbe closed for now. I'm going to mark it as returned-with-feedback.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 18 Jul 2022 11:06:11 -0700", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: support for CREATE MODULE" } ]
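
[Editorial note: the schema-based pattern Bruce Momjian describes in the thread above — a public schema exposed with USAGE, backed by a private schema reachable only through views and SECURITY DEFINER functions — can be sketched roughly as follows. All object names here are invented for illustration and do not come from the thread:]

```sql
-- Private schema: reachable only by the owning service's role.
CREATE SCHEMA internal;
REVOKE ALL ON SCHEMA internal FROM PUBLIC;

CREATE TABLE internal.accounts (id int PRIMARY KEY, balance numeric);

-- Public interface schema: other services get USAGE only.
CREATE SCHEMA api;
GRANT USAGE ON SCHEMA api TO PUBLIC;

-- A view and a SECURITY DEFINER function form the "public interface".
CREATE VIEW api.account_ids AS
    SELECT id FROM internal.accounts;
GRANT SELECT ON api.account_ids TO PUBLIC;

-- SECURITY DEFINER lets callers reach internal objects indirectly;
-- pinning search_path avoids resolving names in a caller-chosen schema.
CREATE FUNCTION api.get_balance(acct int) RETURNS numeric
    LANGUAGE sql SECURITY DEFINER
    SET search_path = internal
    AS $$ SELECT balance FROM accounts WHERE id = acct $$;
GRANT EXECUTE ON FUNCTION api.get_balance(int) TO PUBLIC;
```

[As the thread notes, this gives access control today; what it does not give is a single-schema public/private split, which is part of what CREATE MODULE was meant to address.]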
[ { "msg_contents": "Hi hackers,\n\nUnlike xid, xid8 increases monotonically and cannot be reused.\nThis trait makes it possible to support min() and max() aggregate \nfunctions for xid8.\nI thought they would be useful for monitoring.\n\nSo I made a patch for this.\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Thu, 03 Feb 2022 16:45:17 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "[PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "\n\nOn 2022/02/03 16:45, Ken Kato wrote:\n> Hi hackers,\n> \n> Unlike xid, xid8 increases monotonically and cannot be reused.\n> This trait makes it possible to support min() and max() aggregate functions for xid8.\n> I thought they would be useful for monitoring.\n> \n> So I made a patch for this.\n\nThanks for the patch! +1 with this feature.\n\n+\tPG_RETURN_FULLTRANSACTIONID((U64FromFullTransactionId(fxid1) > U64FromFullTransactionId(fxid2)) ? fxid1 : fxid2);\n\nShouldn't we use FullTransactionIdFollows() to compare those two fxid values here, instead?\n\n+\tPG_RETURN_FULLTRANSACTIONID((U64FromFullTransactionId(fxid1) < U64FromFullTransactionId(fxid2)) ? fxid1 : fxid2);\n\nShouldn't we use FullTransactionIdPrecedes() to compare those two fxid values here, instead?\n\nCould you add the regression tests for those min() and max() functions for xid8?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Sat, 5 Feb 2022 00:30:26 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "> +\tPG_RETURN_FULLTRANSACTIONID((U64FromFullTransactionId(fxid1) >\n> U64FromFullTransactionId(fxid2)) ? 
fxid1 : fxid2);\n> \n> Shouldn't we use FullTransactionIdFollows() to compare those two fxid\n> values here, instead?\n> \n> +\tPG_RETURN_FULLTRANSACTIONID((U64FromFullTransactionId(fxid1) <\n> U64FromFullTransactionId(fxid2)) ? fxid1 : fxid2);\n> \n> Shouldn't we use FullTransactionIdPrecedes() to compare those two fxid\n> values here, instead?\n> \n> Could you add the regression tests for those min() and max() functions \n> for xid8?\n\nThank you for the comments.\nI sent my old version of patch by mistake.\nThis is the updated one.\n\nBest wishes\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Sat, 05 Feb 2022 10:46:05 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "\n\nOn 2022/02/05 10:46, Ken Kato wrote:\n> Thank you for the comments.\n> I sent my old version of patch by mistake.\n> This is the updated one.\n\nThanks!\n\n+\tPG_RETURN_FULLTRANSACTIONID((FullTransactionIdFollowsOrEquals(fxid1, fxid2)) ? fxid1 : fxid2);\n\nBasically it's better to use less 80 line length for readability. 
So how about change the format of this to the following?\n\nif (FullTransactionIdFollows(fxid1, fxid2))\n PG_RETURN_FULLTRANSACTIONID(fxid1);\nelse\n PG_RETURN_FULLTRANSACTIONID(fxid2);\n\n+insert into xid8_tab values ('0'::xid8), ('18446744073709551615'::xid8);\n\nIsn't it better to use '0xffffffffffffffff'::xid8 instead of '18446744073709551615'::xid8, to more easily understand that this test uses maximum number allowed as xid8?\n\nIn addition to those two xid8 values, IMO it's better to insert also the xid8 value neither minimum nor maximum xid8 ones, for example, '42'::xid8.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Mon, 7 Feb 2022 10:45:34 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "Thank you for the comments!\n\n> if (FullTransactionIdFollows(fxid1, fxid2))\n> PG_RETURN_FULLTRANSACTIONID(fxid1);\n> else\n> PG_RETURN_FULLTRANSACTIONID(fxid2);\n\n> Isn't it better to use '0xffffffffffffffff'::xid8 instead of\n> '18446744073709551615'::xid8, to more easily understand that this test\n> uses maximum number allowed as xid8?\n\nI updated these two parts as you suggested.\n\n\n> In addition to those two xid8 values, IMO it's better to insert also\n> the xid8 value neither minimum nor maximum xid8 ones, for example,\n> '42'::xid8.\n\nI added '010'::xid8, '42'::xid8, and '-1'::xid8\nin addition to '0'::xid8 and '0xffffffffffffffff'::xid8\njust to have more varieties.\n\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 08 Feb 2022 13:23:12 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "On 
2022/02/08 13:23, Ken Kato wrote:\n> \n> Thank you for the comments!\n> \n>> if (FullTransactionIdFollows(fxid1, fxid2))\n>>     PG_RETURN_FULLTRANSACTIONID(fxid1);\n>> else\n>>     PG_RETURN_FULLTRANSACTIONID(fxid2);\n> \n>> Isn't it better to use '0xffffffffffffffff'::xid8 instead of\n>> '18446744073709551615'::xid8, to more easily understand that this test\n>> uses maximum number allowed as xid8?\n> \n> I updated these two parts as you suggested.\n> \n> \n>> In addition to those two xid8 values, IMO it's better to insert also\n>> the xid8 value neither minimum nor maximum xid8 ones, for example,\n>> '42'::xid8.\n> \n> I added '010'::xid8, '42'::xid8, and '-1'::xid8\n> in addition to '0'::xid8 and '0xffffffffffffffff'::xid8\n> just to have more varieties.\n\nThanks for updating the patch! It basically looks good to me. I applied the following small changes to the patch. Updated version of the patch attached. Could you review this version?\n\n+\tif (FullTransactionIdFollowsOrEquals(fxid1, fxid2))\n+\t\tPG_RETURN_FULLTRANSACTIONID(fxid1);\n\nI used FullTransactionIdFollows() and FullTransactionIdPrecedes() in xid8_larger() and xid8_smaller() because other xxx_larger() and xxx_smaller() functions also use \">\" operator instead of \">=\".\n\n+create table xid8_tab (x xid8);\n+insert into xid8_tab values ('0'::xid8), ('010'::xid8),\n+('42'::xid8), ('0xffffffffffffffff'::xid8), ('-1'::xid8);\n\nSince \"::xid8\" is not necessary here, I got rid of it from the above query.\n\nI also merged this xid8_tab and the existing xid8_t1 table, to reduce the number of table creation.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 8 Feb 2022 15:34:21 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "On 2022-02-08 15:34, Fujii Masao wrote:\n> 
Thanks for updating the patch! It basically looks good to me. I\n> applied the following small changes to the patch. Updated version of\n> the patch attached. Could you review this version?\n\nThank you for the patch!\n\nIt looks good to me!\n\nI'm not sure how strict coding conventions are, but the following line \nis over 80 characters.\n+insert into xid8_t1 values ('0'), ('010'), ('42'), \n('0xffffffffffffffff'), ('-1');\nTherefore, I made a patch which removed ('010') just to fit in 80 \ncharacters.\n\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION", "msg_date": "Tue, 08 Feb 2022 18:43:18 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "\n\nOn 2022/02/08 18:43, Ken Kato wrote:\n> On 2022-02-08 15:34, Fujii Masao wrote:\n>> Thanks for updating the patch! It basically looks good to me. I\n>> applied the following small changes to the patch. Updated version of\n>> the patch attached. Could you review this version?\n> \n> Thank you for the patch!\n> \n> It looks good to me!\n\nThanks for the review!\n\n\n> I'm not sure how strict coding conventions are, but the following line is over 80 characters.\n> +insert into xid8_t1 values ('0'), ('010'), ('42'), ('0xffffffffffffffff'), ('-1');\n> Therefore, I made a patch which removed ('010') just to fit in 80 characters.\n\nIf you want to avoid the line longer than 80 columns, you should break it into two or more rather than remove the test code, I think. What to test is more important than formatting.\n\nAlso the following descriptions about formatting would be helpful.\n\n---------------------------\nhttps://www.postgresql.org/docs/devel/source-format.html\n\nLimit line lengths so that the code is readable in an 80-column window.\n(This doesn't mean that you must never go past 80 columns. 
For instance,\nbreaking a long error message string in arbitrary places just to keep\nthe code within 80 columns is probably not a net gain in readability.)\n---------------------------\n\nTherefore I'm ok with the patch that I posted upthread. Also I'm ok if you will break that longer line into two and post new patch. Or if the value '010' is really useless for the test purpose, I'm also ok if you remove it. Thought?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 8 Feb 2022 23:16:38 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "On 2022-02-08 23:16, Fujii Masao wrote:\n> If you want to avoid the line longer than 80 columns, you should break\n> it into two or more rather than remove the test code, I think. What to\n> test is more important than formatting.\n> \n> Also the following descriptions about formatting would be helpful.\n> \n> ---------------------------\n> https://www.postgresql.org/docs/devel/source-format.html\n> \n> Limit line lengths so that the code is readable in an 80-column window.\n> (This doesn't mean that you must never go past 80 columns. For \n> instance,\n> breaking a long error message string in arbitrary places just to keep\n> the code within 80 columns is probably not a net gain in readability.)\n> ---------------------------\n> \n> Therefore I'm ok with the patch that I posted upthread. Also I'm ok if\n> you will break that longer line into two and post new patch. Or if the\n> value '010' is really useless for the test purpose, I'm also ok if you\n> remove it. 
Thought?\n\nThank you for the explanation!\n\nEven though the line is over 80 characters, it makes more sense to put \nin one line and it enhances readability IMO.\nAlso, '010' is good to have since it is the only octal value in the \ntest.\n\nTherefore, I think min_max_aggregates_for_xid8_v4.patch is the best one \nto go.\n\nBest wishes,\n\n-- \nKen Kato\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 09 Feb 2022 08:49:21 +0900", "msg_from": "Ken Kato <katouknl@oss.nttdata.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "\n\nOn 2022/02/09 8:49, Ken Kato wrote:\n> On 2022-02-08 23:16, Fujii Masao wrote:\n>> If you want to avoid the line longer than 80 columns, you should break\n>> it into two or more rather than remove the test code, I think. What to\n>> test is more important than formatting.\n>>\n>> Also the following descriptions about formatting would be helpful.\n>>\n>> ---------------------------\n>> https://www.postgresql.org/docs/devel/source-format.html\n>>\n>> Limit line lengths so that the code is readable in an 80-column window.\n>> (This doesn't mean that you must never go past 80 columns. For instance,\n>> breaking a long error message string in arbitrary places just to keep\n>> the code within 80 columns is probably not a net gain in readability.)\n>> ---------------------------\n>>\n>> Therefore I'm ok with the patch that I posted upthread. Also I'm ok if\n>> you will break that longer line into two and post new patch. Or if the\n>> value '010' is really useless for the test purpose, I'm also ok if you\n>> remove it. 
Thought?\n> \n> Thank you for the explanation!\n> \n> Even though the line is over 80 characters, it makes more sense to put in one line and it enhances readability IMO.\n> Also, '010' is good to have since it is the only octal value in the test.\n> \n> Therefore, I think min_max_aggregates_for_xid8_v4.patch is the best one to go.\n\nAgreed. So barring any objection, I will commit that patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:01:57 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "At Wed, 9 Feb 2022 11:01:57 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> Agreed. So barring any objection, I will commit that patch.\n\nSorry for being late, but I don't like the function names.\n\n+xid8_larger(PG_FUNCTION_ARGS)\n+xid8_smaller(PG_FUNCTION_ARGS)\n\nAren't they named like xid8gt and xid8lt conventionally? At least the\nname lacks a mention to equality.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Feb 2022 12:04:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "At Wed, 09 Feb 2022 12:04:51 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Wed, 9 Feb 2022 11:01:57 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in \n> > Agreed. So barring any objection, I will commit that patch.\n> \n> Sorry for being late, but I don't like the function names.\n> \n> +xid8_larger(PG_FUNCTION_ARGS)\n> +xid8_smaller(PG_FUNCTION_ARGS)\n> \n> Aren't they named like xid8gt and xid8lt conventionally? At least the\n> name lacks a mention to equality.\n\nAh, sorry. 
the function returns larger/smaller one from the\nparameters. Sorry for the noise.\n\nregardes.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Feb 2022 13:04:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" }, { "msg_contents": "\n\nOn 2022/02/09 13:04, Kyotaro Horiguchi wrote:\n> At Wed, 09 Feb 2022 12:04:51 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in\n>> At Wed, 9 Feb 2022 11:01:57 +0900, Fujii Masao <masao.fujii@oss.nttdata.com> wrote in\n>>> Agreed. So barring any objection, I will commit that patch.\n>>\n>> Sorry for being late, but I don't like the function names.\n>>\n>> +xid8_larger(PG_FUNCTION_ARGS)\n>> +xid8_smaller(PG_FUNCTION_ARGS)\n>>\n>> Aren't they named like xid8gt and xid8lt conventionally? At least the\n>> name lacks a mention to equality.\n> \n> Ah, sorry. the function returns larger/smaller one from the\n> parameters. Sorry for the noise.\n\nThanks for the review! I pushed the patch.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Thu, 10 Feb 2022 12:37:10 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add min() and max() aggregate functions for xid8" } ]
[ { "msg_contents": "Hello,\n\nI found a old parameter name 'heapRelation' in the comment\nof CheckIndexCompatible. This parameter was removed by 5f173040.\n\nAttached is a patch to remove it from the comment.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Fri, 4 Feb 2022 01:46:34 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Fix CheckIndexCompatible comment" }, { "msg_contents": "\n\nOn 2022/02/04 1:46, Yugo NAGATA wrote:\n> Hello,\n> \n> I found a old parameter name 'heapRelation' in the comment\n> of CheckIndexCompatible. This parameter was removed by 5f173040.\n> \n> Attached is a patch to remove it from the comment.\n\nThanks for the report! I agree to remove the mention of parameter already dropped, from the comment. OTOH, I found CheckIndexCompatible() now has \"oldId\" parameter but there is no comment about it though there are comments about other parameters. Isn't it better to add the comment about \"oldId\"?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 4 Feb 2022 09:08:22 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix CheckIndexCompatible comment" }, { "msg_contents": "On Fri, Feb 04, 2022 at 09:08:22AM +0900, Fujii Masao wrote:\n> On 2022/02/04 1:46, Yugo NAGATA wrote:\n>> I found a old parameter name 'heapRelation' in the comment\n>> of CheckIndexCompatible. This parameter was removed by 5f173040.\n>> \n>> Attached is a patch to remove it from the comment.\n\nIt looks like this parameter was removed in 5f17304.\n \n> Thanks for the report! I agree to remove the mention of parameter already dropped, from the comment. OTOH, I found CheckIndexCompatible() now has \"oldId\" parameter but there is no comment about it though there are comments about other parameters. 
Isn't it better to add the comment about \"oldId\"?\n\n+1\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 16:14:53 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix CheckIndexCompatible comment" }, { "msg_contents": "Hello, Fujii-san,\n\nOn Fri, 4 Feb 2022 09:08:22 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2022/02/04 1:46, Yugo NAGATA wrote:\n> > Hello,\n> > \n> > I found a old parameter name 'heapRelation' in the comment\n> > of CheckIndexCompatible. This parameter was removed by 5f173040.\n> > \n> > Attached is a patch to remove it from the comment.\n> \n> Thanks for the report! I agree to remove the mention of parameter already dropped, from the comment. OTOH, I found CheckIndexCompatible() now has \"oldId\" parameter but there is no comment about it though there are comments about other parameters. Isn't it better to add the comment about \"oldId\"?\n\nAgreed. I updated the patch to add a comment about 'oldId'.\n\nRegards,\nYugo Nagata\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>", "msg_date": "Mon, 7 Feb 2022 19:14:18 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix CheckIndexCompatible comment" }, { "msg_contents": "\n\nOn 2022/02/07 19:14, Yugo NAGATA wrote:\n> Agreed. I updated the patch to add a comment about 'oldId'.\n\nThanks for updating the patch! 
I slightly modified the patch and pushed it.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 18 Feb 2022 12:22:32 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Fix CheckIndexCompatible comment" }, { "msg_contents": "On Fri, 18 Feb 2022 12:22:32 +0900\nFujii Masao <masao.fujii@oss.nttdata.com> wrote:\n\n> \n> \n> On 2022/02/07 19:14, Yugo NAGATA wrote:\n> > Agreed. I updated the patch to add a comment about 'oldId'.\n> \n> Thanks for updating the patch! I slightly modified the patch and pushed it.\n\nThanks!\n\n> \n> Regards,\n> \n> -- \n> Fujii Masao\n> Advanced Computing Technology Center\n> Research and Development Headquarters\n> NTT DATA CORPORATION\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n", "msg_date": "Fri, 18 Feb 2022 15:35:32 +0900", "msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>", "msg_from_op": true, "msg_subject": "Re: Fix CheckIndexCompatible comment" } ]
[ { "msg_contents": "VACUUM's first pass over the heap is implemented by a function called\nlazy_scan_heap(), while the second pass is implemented by a function\ncalled lazy_vacuum_heap_rel(). This seems to imply that the first pass\nis primarily an examination of what is present, while the second pass\ndoes the real work. This used to be more true than it now is. In\nPostgreSQL 7.2, the first release that implemented concurrent vacuum,\nthe first heap pass could set hint bits as a side effect of calling\nHeapTupleSatisfiesVacuum(), and it could freeze old xmins. However,\nneither of those things wrote WAL, and you had a reasonable chance of\nescaping without dirtying the page at all. By the time PostgreSQL 8.2\nwas released, it had been understood that making critical changes to\npages without writing WAL was not a good plan, and so freezing now\nwrote WAL, but no big deal: most vacuums wouldn't freeze anything\nanyway. Things really changed a lot in PostgreSQL 8.3. With the\naddition of HOT, lazy_scan_heap() was made to prune the page, meaning\nthat the first heap pass would likely dirty a large fraction of the\npages that it touched, truncating dead tuples to line pointers and\ndefragmenting the page. The second heap pass would then have to dirty\nthe page again to mark dead line pointers unused. In the absolute\nworst case, that's a very large increase in WAL generation. VACUUM\ncould write full page images for all of those pages while HOT-pruning\nthem, and then a checkpoint could happen, and then VACUUM could write\nfull-page images of all of them again while marking the dead line\npointers unused. I don't know whether anyone spent time and energy\nworrying about this problem, but considering how much HOT improves\nperformance overall, it would be entirely understandable if this\ndidn't seem like a terribly important thing to worry about.\n\nBut maybe we should reconsider. 
What benefit do we get out of dirtying\nthe page twice like this, writing WAL each time? What if we went back\nto the idea of having the first heap pass be read-only? In fact, I'm\nthinking we might want to go even further and try to prevent even hint\nbit changes to the page during the first pass, especially because now\nwe have checksums and wal_log_hints. If our vacuum cost settings are\nto believed (and I am not sure that they are) dirtying a page is 10\ntimes as expensive as reading one from the disk. So on a large table,\nwe're paying 44 vacuum cost units per heap page vacuumed twice, when\nwe could be paying only 24 such cost units. What a bargain! The\ndownside is that we would be postponing, perhaps substantially, the\nwork that can be done immediately, namely freeing up space in the page\nand updating the free space map. The former doesn't seem like a big\nloss, because it can be done by anyone who visits the page anyway, and\nskipped if nobody does. The latter might be a loss, because getting\nthe page into the freespace map sooner could prevent bloat by allowing\nspace to be recycled sooner. I'm not sure how serious a problem this\nis. I'm curious what other people think. Would it be worth the delay\nin getting pages into the FSM if it means we dirty the pages only\nonce? Could we have our cake and eat it too by updating the FSM with\nthe amount of free space that the page WOULD have if we pruned it, but\nnot actually do so?\n\nI'm thinking about this because of the \"decoupling table and index\nvacuuming\" thread, which I was discussing with Dilip this morning. In\na world where table vacuuming and index vacuuming are decoupled, it\nfeels like we want to have only one kind of heap vacuum. It pushes us\nin the direction of unifying the first and second pass, and doing all\nthe cleanup work at once. However, I don't know that we want to use\nthe approach described there in all cases. 
For a small table that is,\nlet's just say, not part of any partitioning hierarchy, I'm not sure\nthat using the conveyor belt approach makes a lot of sense, because\nthe total amount of work we want to do is so small that we should just\nget it over with and not clutter up the disk with more conveyor belt\nforks -- especially for people who have large numbers of small tables,\nthe inode consumption could be a real issue. And we won't really save\nanything either. The value of decoupling operations has to do with\nimproving concurrency and error recovery and allowing global indexes\nand a bunch of stuff that, for a small table, simply doesn't matter.\nSo it would be nice to fall back to an approach more like what we do\nnow. But then you end up with two fairly distinct code paths, one\nwhere you want the heap phases combined and another where you want\nthem separated. If the first pass were a strictly read-only pass, you\ncould do that if there's no conveyor belt, or else read from the\nconveyor belt if there is one, and then the phase where you dirty the\nheap looks about the same either way.\n\nAside from the question of whether this is a good idea at all, I'm\nalso wondering what kinds of experiments we could run to try to find\nout. What would be a best case workload for the current strategy vs.\nthis? What would be a worst case for the current strategy vs. this?\nI'm not sure. If you have ideas, I'd love to hear them.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Feb 2022 12:20:36 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Thu, Feb 3, 2022 at 12:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> But maybe we should reconsider. What benefit do we get out of dirtying\n> the page twice like this, writing WAL each time? 
What if we went back\n> to the idea of having the first heap pass be read-only?\n\nWhat about recovery conflicts? Index vacuuming WAL records don't\nrequire their own latestRemovedXid field, since they can rely on\nearlier XLOG_HEAP2_PRUNE records instead. Since the TIDs that index\nvacuuming removes always point to LP_DEAD items in the heap, it's safe\nto lean on that.\n\n> In fact, I'm\n> thinking we might want to go even further and try to prevent even hint\n> bit changes to the page during the first pass, especially because now\n> we have checksums and wal_log_hints. If our vacuum cost settings are\n> to be believed (and I am not sure that they are) dirtying a page is 10\n> times as expensive as reading one from the disk. So on a large table,\n> we're paying 44 vacuum cost units per heap page vacuumed twice, when\n> we could be paying only 24 such cost units. What a bargain!\n\nIn practice HOT generally works well enough that the number of heap\npages that prune significantly exceeds the subset that are also\nvacuumed during the second pass over the heap -- at least when heap\nfill factor has been tuned (which might be rare). The latter category\nof pages is not reported on by the enhanced autovacuum logging added\nto Postgres 14, so you might be able to get some sense of how this\nworks by looking at that.\n\n> Could we have our cake and eat it too by updating the FSM with\n> the amount of free space that the page WOULD have if we pruned it, but\n> not actually do so?\n\nDid you ever notice that VACUUM records free space after *it* prunes,\nusing its own horizon? With a long running VACUUM operation, where\nunremoved \"recently dead\" tuples are common, it's possible that the\namount of free space that's effectively available (available to every\nother backend) is significantly higher. 
And so we already record\n\"subjective amounts of free space\" -- though not necessarily by\ndesign.\n\n> I'm thinking about this because of the \"decoupling table and index\n> vacuuming\" thread, which I was discussing with Dilip this morning. In\n> a world where table vacuuming and index vacuuming are decoupled, it\n> feels like we want to have only one kind of heap vacuum. It pushes us\n> in the direction of unifying the first and second pass, and doing all\n> the cleanup work at once. However, I don't know that we want to use\n> the approach described there in all cases. For a small table that is,\n> let's just say, not part of any partitioning hierarchy, I'm not sure\n> that using the conveyor belt approach makes a lot of sense, because\n> the total amount of work we want to do is so small that we should just\n> get it over with and not clutter up the disk with more conveyor belt\n> forks -- especially for people who have large numbers of small tables,\n> the inode consumption could be a real issue.\n\nI'm not sure that what you're proposing here is the best way to go\nabout it, but let's assume for a moment that it is. Can't you just\nsimulate the conveyor belt approach, without needing a relation fork?\nJust store the same information in memory, accessed using the same\ninterface, with a spillover path?\n\nIdeally VACUUM will be able to use the conveyor belt for any table.\nWhether or not it actually happens should be decided at the latest\npossible point during VACUUM, based on considerations about the actual\nnumber of dead items that we now need to remove from indexes, as well\nas metadata from any preexisting conveyor belt.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:16:13 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "On Thu, 3 Feb 2022 at 12:21, Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> VACUUM's first pass over the heap is implemented by a function called\n> lazy_scan_heap(), while the second pass is implemented by a function\n> called lazy_vacuum_heap_rel(). This seems to imply that the first pass\n> is primarily an examination of what is present, while the second pass\n> does the real work. This used to be more true than it now is.\n\nI've been out of touch for a while but I'm trying to catch up with the\nprogress of the past few years.\n\nWhatever happened to the idea to \"rotate\" the work of vacuum. So all\nthe work of the second pass would actually be deferred until the first\npass of the next vacuum cycle.\n\nThat would also have the effect of eliminating the duplicate work,\nboth the writes with the wal generation as well as the actual scan.\nThe only heap scan would be \"remove line pointers previously cleaned\nfrom indexes and prune dead tuples recording them to clean from\nindexes in future\". The index scan would remove line pointers and\nrecord them to be removed from the heap in a future heap scan.\n\nThe downside would mainly be in the latency before the actual tuples\nget cleaned up from the table. That is not so much of an issue as far\nas space these days with tuple pruning but is more and more of an\nissue with xid wraparound. Also, having to record the line pointers\nthat have been cleaned from indexes somewhere on disk for the\nsubsequent vacuum would be extra state on disk and we've learned that\nmeans extra complexity.\n\n-- \ngreg\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:05:06 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Feb 4, 2022 at 2:16 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Thu, Feb 3, 2022 at 12:20 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > But maybe we should reconsider. 
What benefit do we get out of dirtying\n> > the page twice like this, writing WAL each time? What if we went back\n> > to the idea of having the first heap pass be read-only?\n>\n> What about recovery conflicts? Index vacuuming WAL records don't\n> require their own latestRemovedXid field, since they can rely on\n> earlier XLOG_HEAP2_PRUNE records instead. Since the TIDs that index\n> vacuuming removes always point to LP_DEAD items in the heap, it's safe\n> to lean on that.\n\nOh, that's an interesting consideration.\n\n> > In fact, I'm\n> > thinking we might want to go even further and try to prevent even hint\n> > bit changes to the page during the first pass, especially because now\n> > we have checksums and wal_log_hints. If our vacuum cost settings are\n> > to be believed (and I am not sure that they are) dirtying a page is 10\n> > times as expensive as reading one from the disk. So on a large table,\n> > we're paying 44 vacuum cost units per heap page vacuumed twice, when\n> > we could be paying only 24 such cost units. What a bargain!\n>\n> In practice HOT generally works well enough that the number of heap\n> pages that prune significantly exceeds the subset that are also\n> vacuumed during the second pass over the heap -- at least when heap\n> fill factor has been tuned (which might be rare). The latter category\n> of pages is not reported on by the enhanced autovacuum logging added\n> to Postgres 14, so you might be able to get some sense of how this\n> works by looking at that.\n\nIs there an extra \"not\" in this sentence? Because otherwise it seems\nlike you're saying that I should look at the information that isn't\nreported, which seems hard.\n\nIn any case, I think this might be a death knell for the whole idea.\nIt might be good to cut down the number of page writes by avoiding\nwriting them twice -- but not at the expense of having the second pass\nhave to visit a large number of pages it could otherwise skip. 
I\nsuppose we could write only those pages in the first pass that we\naren't going to need to write again later, but at that point I can't\nreally see that we're winning anything.\n\n> > Could we have our cake and eat it too by updating the FSM with\n> > the amount of free space that the page WOULD have if we pruned it, but\n> > not actually do so?\n>\n> Did you ever notice that VACUUM records free space after *it* prunes,\n> using its own horizon? With a long running VACUUM operation, where\n> unremoved \"recently dead\" tuples are common, it's possible that the\n> amount of free space that's effectively available (available to every\n> other backend) is significantly higher. And so we already record\n> \"subjective amounts of free space\" -- though not necessarily by\n> design.\n\nYes, I wondered about that. It seems like maybe a running VACUUM\nshould periodically refresh its notion of what cutoff to use.\n\n> I'm not sure that what you're proposing here is the best way to go\n> about it, but let's assume for a moment that it is. Can't you just\n> simulate the conveyor belt approach, without needing a relation fork?\n> Just store the same information in memory, accessed using the same\n> interface, with a spillover path?\n\n(I'm not sure it's best either.)\n\nI think my concern here is about not having too many different code\npaths from heap vacuuming. I agree that if we're going to vacuum\nwithout an on-disk conveyor belt we can use an in-memory substitute.\nHowever, to Greg's point, if we're using the conveyor belt, it seems\nlike we want to merge the second pass of one VACUUM into the first\npass of the next one. That is, if we start up a heap vacuum already\nhaving a list of TIDs that can be marked unused, we want to do that\nduring the same pass of the heap that we prune and search for\nnewly-discovered dead TIDs. 
But we can't do that in the case where the\nconveyor belt is only simulated, because our in-memory data structure\ncan't contain leftovers from a previous vacuum the way the on-disk\nconveyor belt can. So it seems like the whole algorithm has to be\ndifferent. I'd like to find a way to avoid that.\n\nIf this isn't entirely making sense, it may well be because I'm a\nlittle fuzzy on all of it myself. But I hope it's clear enough that\nyou can figure out what it is that I'm worrying about. If not, I'll\nkeep trying to explain until we both reach a sufficiently non-fuzzy\nstate.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:18:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Feb 4, 2022 at 3:05 PM Greg Stark <stark@mit.edu> wrote:\n> Whatever happened to the idea to \"rotate\" the work of vacuum. So all\n> the work of the second pass would actually be deferred until the first\n> pass of the next vacuum cycle.\n>\n> That would also have the effect of eliminating the duplicate work,\n> both the writes with the wal generation as well as the actual scan.\n> The only heap scan would be \"remove line pointers previously cleaned\n> from indexes and prune dead tuples recording them to clean from\n> indexes in future\". The index scan would remove line pointers and\n> record them to be removed from the heap in a future heap scan.\n\nI vaguely remember previous discussions of this, but only vaguely, so\nif there are threads on list feel free to send pointers. It seems to\nme that in order to do this, we'd need some kind of way of storing the\nTIDs that were found to be dead in one VACUUM so that they can be\nmarked unused in the next VACUUM - and the conveyor belt patches on\nwhich Dilip's work is based provide exactly that machinery, which his\npatches then leverage to do exactly that thing. 
But it feels like a\nbig, sudden change from the way things work now, and I'm trying to\nthink of ways to make it more incremental, and thus hopefully less\nrisky.\n\n> The downside would mainly be in the latency before the actual tuples\n> get cleaned up from the table. That is not so much of an issue as far\n> as space these days with tuple pruning but is more and more of an\n> issue with xid wraparound. Also, having to record the line pointers\n> that have been cleaned from indexes somewhere on disk for the\n> subsequent vacuum would be extra state on disk and we've learned that\n> means extra complexity.\n\nI don't think there's any XID wraparound issue here. Phase 1 does a\nHOT prune, after which only dead line pointers remain, not dead\ntuples. And those contain no XIDs. Phase 2 is only setting those dead\nline pointers back to unused.\n\nAs for the other part, that's pretty much exactly the complexity that\nI'm worrying about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:23:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Feb 4, 2022 at 3:18 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > What about recovery conflicts? Index vacuuming WAL records don't\n> > require their own latestRemovedXid field, since they can rely on\n> > earlier XLOG_HEAP2_PRUNE records instead. 
Since the TIDs that index\n> > vacuuming removes always point to LP_DEAD items in the heap, it's safe\n> > to lean on that.\n>\n> Oh, that's an interesting consideration.\n\nYou'd pretty much have to do \"fake pruning\", performing the same\ncomputation as pruning without actually pruning.\n\n> > In practice HOT generally works well enough that the number of heap\n> > pages that prune significantly exceeds the subset that are also\n> > vacuumed during the second pass over the heap -- at least when heap\n> > fill factor has been tuned (which might be rare). The latter category\n> > of pages is not reported on by the enhanced autovacuum logging added\n> > to Postgres 14, so you might be able to get some sense of how this\n> > works by looking at that.\n>\n> Is there an extra \"not\" in this sentence? Because otherwise it seems\n> like you're saying that I should look at the information that isn't\n> reported, which seems hard.\n\nSorry, yes. I meant \"now\" (as in, as of Postgres 14).\n\n> In any case, I think this might be a death knell for the whole idea.\n> It might be good to cut down the number of page writes by avoiding\n> writing them twice -- but not at the expense of having the second pass\n> have to visit a large number of pages it could otherwise skip. I\n> suppose we could write only those pages in the first pass that we\n> aren't going to need to write again later, but at that point I can't\n> really see that we're winning anything.\n\nRight. I think that we *can* be more aggressive about deferring heap\npage vacuuming until another VACUUM operation with the conveyor belt\nstuff. You may well end up getting almost the same benefit that way.\n\n> Yes, I wondered about that. It seems like maybe a running VACUUM\n> should periodically refresh its notion of what cutoff to use.\n\nYeah, Andres said something about this a few months ago. 
Shouldn't be\nvery difficult.\n\n> I think my concern here is about not having too many different code\n> paths from heap vacuuming. I agree that if we're going to vacuum\n> without an on-disk conveyor belt we can use an in-memory substitute.\n\nAvoiding special cases in vacuumlazy.c seems really important to me.\n\n> However, to Greg's point, if we're using the conveyor belt, it seems\n> like we want to merge the second pass of one VACUUM into the first\n> pass of the next one.\n\nBut it's only going to be safe to do that with those dead TIDs (or\ndistinct generations of dead TIDs) that are known to already be\nremoved from all indexes, including indexes that have the least need\nfor vacuuming (often no direct need at all). I had imagined that we'd\nwant to do heap vacuuming in the same way as today with the dead TID\nconveyor belt stuff -- it just might take several VACUUM operations\nuntil we are ready to do a round of heap vacuuming.\n\nFor those indexes that use bottom-up index deletion effectively, the\nindex structure itself never really needs to be vacuumed to avoid\nindex bloat. We must nevertheless vacuum these indexes at some point,\njust to be able to vacuum heap pages with LP_DEAD items safely.\n\nOverall, I think that there will typically be stark differences among\nindexes on the table, in terms of how much vacuuming each index\nrequires. And so the thing that drives us to perform heap vacuuming\nwill probably be heap vacuuming itself, and not the fact that each and\nevery index has become \"sufficiently bloated\".\n\n> If this isn't entirely making sense, it may well be because I'm a\n> little fuzzy on all of it myself.\n\nI'm in no position to judge. :-)\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 4 Feb 2022 16:11:54 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "On Fri, Feb 4, 2022 at 4:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I had imagined that we'd\n> want to do heap vacuuming in the same way as today with the dead TID\n> conveyor belt stuff -- it just might take several VACUUM operations\n> until we are ready to do a round of heap vacuuming.\n\nI am trying to understand exactly what you are imagining here. Do you\nmean we'd continue to lazy_scan_heap() at the start of every vacuum,\nand lazy_vacuum_heap_rel() at the end? I had assumed that we didn't\nwant to do that, because we might already know from the conveyor belt\nthat there are some dead TIDs that could be marked unused, and it\nseems strange to just ignore that knowledge at a time when we're\nscanning the heap anyway. However, on reflection, that approach has\nsomething to recommend it, because it would be somewhat simpler to\nunderstand what's actually being changed. We could just:\n\n1. Teach lazy_scan_heap() that it should add TIDs to the conveyor\nbelt, if we're using one, unless they're already there, but otherwise\nwork as today.\n\n2. Teach lazy_vacuum_heap_rel() that, if there is a conveyor belt,\nit should try to clear from the indexes all of the dead TIDs that are\neligible.\n\n3. If there is a conveyor belt, use some kind of magic to decide when\nto skip vacuuming some or all indexes. When we skip one or more\nindexes, the subsequent lazy_vacuum_heap_rel() can't possibly mark as\nunused any of the dead TIDs we found this time, so we should just skip\nit, unless somehow there are TIDs on the conveyor belt that were\nalready ready to be marked unused at the start of this VACUUM, in\nwhich case we can still handle those.\n\nIs this the kind of thing you had in mind?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 11:36:37 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "Yes, that's what I meant. That's always how I thought that it would work,\nfor over a year now. I might have jumped to the conclusion that that's what\nyou had in mind all along. Oops.\n\nAlthough this design is simpler, which is an advantage, that's not really\nthe point. The point is that it makes sense, and that extra heap vacuuming\nconcurrent with pruning doesn't seem useful at all.\n-- \nPeter Geoghegan", "msg_date": "Mon, 7 Feb 2022 13:55:21 -0500", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Mon, Feb 7, 2022 at 10:06 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Feb 4, 2022 at 4:12 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > I had imagined that we'd\n> > want to do heap vacuuming in the same way as today with the dead TID\n> > conveyor belt stuff -- it just might take several VACUUM operations\n> > until we are ready to do a round of heap vacuuming.\n>\n> I am trying to understand exactly what you are imagining here. Do you\n> mean we'd continue to lazy_scan_heap() at the start of every vacuum,\n> and lazy_vacuum_heap_rel() at the end? I had assumed that we didn't\n> want to do that, because we might already know from the conveyor belt\n> that there are some dead TIDs that could be marked unused, and it\n> seems strange to just ignore that knowledge at a time when we're\n> scanning the heap anyway. 
However, on reflection, that approach has\n> something to recommend it, because it would be somewhat simpler to\n> understand what's actually being changed. We could just:\n>\n> 1. Teach lazy_scan_heap() that it should add TIDs to the conveyor\n> belt, if we're using one, unless they're already there, but otherwise\n> work as today.\n>\n> 2. Teach lazy_vacuum_heap_rel() that, if there is a conveyor belt,\n> it should try to clear from the indexes all of the dead TIDs that are\n> eligible.\n>\n> 3. If there is a conveyor belt, use some kind of magic to decide when\n> to skip vacuuming some or all indexes. When we skip one or more\n> indexes, the subsequent lazy_vacuum_heap_rel() can't possibly mark as\n> unused any of the dead TIDs we found this time, so we should just skip\n> it, unless somehow there are TIDs on the conveyor belt that were\n> already ready to be marked unused at the start of this VACUUM, in\n> which case we can still handle those.\n\nBased on this discussion, IIUC, we are saying that now we will do the\nlazy_scan_heap every time like we are doing now. And we will\nconditionally skip the index vacuum for all or some of the indexes and\nthen based on how much index vacuum is done we will conditionally do\nthe lazy_vacuum_heap_rel(). Is my understanding correct?\n\nIMHO, if we are doing the heap scan every time and then we are going\nto get the same dead items again which we had previously collected in\nthe conveyor belt. 
I agree that we will not add them again into the\nconveyor belt but why do we want to store them in the conveyor belt\nwhen we want to redo the whole scanning again?\n\nI think (without global indexes) the main advantage of using the\nconveyor belt is that if we skip the index scan for some of the\nindexes then we can save the dead item somewhere so that without\nscanning the heap again we have those dead items to do the index\nvacuum sometime in future but if you are going to rescan the heap\nagain next time before doing any index vacuuming then why we want to\nstore them anyway.\n\nIMHO, what we should do is, if there are not many new dead tuples in\nthe heap (total dead tuple based on the statistic - existing items in\nthe conveyor belt) then we should conditionally skip the heap scanning\n(first pass) and directly jump to the index vacuuming for some or all\nthe indexes based on the index size bloat.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 25 Feb 2022 18:35:57 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Feb 25, 2022 at 5:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> Based on this discussion, IIUC, we are saying that now we will do the\n> lazy_scan_heap every time like we are doing now. And we will\n> conditionally skip the index vacuum for all or some of the indexes and\n> then based on how much index vacuum is done we will conditionally do\n> the lazy_vacuum_heap_rel(). Is my understanding correct?\n\nI can only speak for myself, but that sounds correct to me. IMO what\nwe really want here is to create lots of options for VACUUM. To do the\nwork of index vacuuming when it is most convenient, based on very\nrecent information about what's going on in each index. There are some\nspecific obvious ways that it might help. 
For example, it would be\nnice if the failsafe could not really skip index vacuuming -- it could\njust put it off until later, after relfrozenxid has been advanced to a\nsafe value.\n\nBear in mind that the cost of lazy_scan_heap is often vastly less than\nthe cost of vacuuming all indexes -- and so doing a bit more work\nthere than theoretically necessary is not necessarily a problem.\nEspecially if you have something like UUID indexes, where there is no\nnatural locality. Many tables have 10+ indexes. Even large tables.\n\n> IMHO, if we are doing the heap scan every time and then we are going\n> to get the same dead items again which we had previously collected in\n> the conveyor belt. I agree that we will not add them again into the\n> conveyor belt but why do we want to store them in the conveyor belt\n> when we want to redo the whole scanning again?\n\nI don't think we want to, exactly. Maybe it's easier to store\nredundant TIDs than to avoid storing them in the first place (we can\nprobably just accept some redundancy). There is a trade-off to be made\nthere. I'm not at all sure of what the best trade-off is, though.\n\n> I think (without global indexes) the main advantage of using the\n> conveyor belt is that if we skip the index scan for some of the\n> indexes then we can save the dead item somewhere so that without\n> scanning the heap again we have those dead items to do the index\n> vacuum sometime in future\n\nGlobal indexes are important in their own right, but ISTM that they\nhave similar needs to other things anyway. Having this flexibility is\neven more important with global indexes, but the concepts themselves\nare similar. We want options and maximum flexibility, everywhere.\n\n> but if you are going to rescan the heap\n> again next time before doing any index vacuuming then why we want to\n> store them anyway.\n\nIt all depends, of course. The decision needs to be made using a cost\nmodel. 
I suspect it will be necessary to try it out, and see.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 25 Feb 2022 09:15:25 -0800", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Feb 25, 2022 at 10:45 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Fri, Feb 25, 2022 at 5:06 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > Based on this discussion, IIUC, we are saying that now we will do the\n> > lazy_scan_heap every time like we are doing now. And we will\n> > conditionally skip the index vacuum for all or some of the indexes and\n> > then based on how much index vacuum is done we will conditionally do\n> > the lazy_vacuum_heap_rel(). Is my understanding correct?\n>\n> Bear in mind that the cost of lazy_scan_heap is often vastly less than\n> the cost of vacuuming all indexes -- and so doing a bit more work\n> there than theoretically necessary is not necessarily a problem.\n> Especially if you have something like UUID indexes, where there is no\n> natural locality. Many tables have 10+ indexes. Even large tables.\n\nCompletely agree with that.\n\n> > IMHO, if we are doing the heap scan every time and then we are going\n> > to get the same dead items again which we had previously collected in\n> > the conveyor belt. I agree that we will not add them again into the\n> > conveyor belt but why do we want to store them in the conveyor belt\n> > when we want to redo the whole scanning again?\n>\n> I don't think we want to, exactly. Maybe it's easier to store\n> redundant TIDs than to avoid storing them in the first place (we can\n> probably just accept some redundancy). There is a trade-off to be made\n> there. 
I'm not at all sure of what the best trade-off is, though.\n\nYeah we can think of that.\n\n> > I think (without global indexes) the main advantage of using the\n> > conveyor belt is that if we skip the index scan for some of the\n> > indexes then we can save the dead item somewhere so that without\n> > scanning the heap again we have those dead items to do the index\n> > vacuum sometime in future\n>\n> Global indexes are important in their own right, but ISTM that they\n> have similar needs to other things anyway. Having this flexibility is\n> even more important with global indexes, but the concepts themselves\n> are similar. We want options and maximum flexibility, everywhere.\n\n+1\n\n> > but if you are going to rescan the heap\n> > again next time before doing any index vacuuming then why we want to\n> > store them anyway.\n>\n> It all depends, of course. The decision needs to be made using a cost\n> model. I suspect it will be necessary to try it out, and see.\n\nYeah right. But I still think that we should be thinking about\nskipping the first vacuum pass also conditionally. I mean if there\nare not many new dead tuples which we realize before even starting\nthe heap scan then why not directly jump to the index vacuuming if\nsome of the indexes need vacuuming. But I agree based on some testing and\ncost model we can decide what is the best way forward.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 26 Feb 2022 10:45:51 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "[ returning to this thread after a bit of a hiatus ]\n\nOn Fri, Feb 25, 2022 at 12:15 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I can only speak for myself, but that sounds correct to me. IMO what\n> we really want here is to create lots of options for VACUUM.\n\nI agree. 
That seems like a good way of thinking about it.\n\n> I don't think we want to, exactly. Maybe it's easier to store\n> redundant TIDs than to avoid storing them in the first place (we can\n> probably just accept some redundancy). There is a trade-off to be made\n> there. I'm not at all sure of what the best trade-off is, though.\n\nMy intuition is that storing redundant TIDs will turn out to be a very\nbad idea. I think that if we do that, the system will become prone to\na new kind of vicious cycle (to try to put this in words similar to\nthe ones you've been using to write about similar effects). Imagine\nthat we vacuum a table which contains N dead tuples a total of K times\nin succession, but each time we either deliberately decide against\nindex vacuuming or get killed before we can complete it. If we don't\nhave logic to prevent duplicate entries from being added to the\nconveyor belt, we will have N*K TIDs in the conveyor belt rather than\nonly N, and I think it's not hard to imagine situations where K could\nbe big enough for that to be hugely painful. Moreover, at some point,\nwe're going to need to deduplicate anyway, because each of those dead\nTIDs can only be marked unused once. Letting the data pile up on the\nconveyor belt in the hopes of sorting it out later seems almost certain to be a\nlosing proposition. The more data we collect on the conveyor belt, the\nharder it's going to be when we eventually need to deduplicate.\n\nAlso, I think it's possible to deduplicate at a very reasonable cost\nso long as (1) we enter each batch of TIDs into the conveyor belt as a\ndistinguishable \"run\" and (2) we never accumulate so many of these\nruns at the same time that we can't do a single merge pass to turn\nthem into a sorted output stream. We're always going to discover dead\nTIDs in sorted order, so as we make our pass over the heap, we can\nsimultaneously be doing an on-the-fly merge pass over the existing\nruns that are on the conveyor belt. 
That means we never need to store\nall the dead TIDs in memory at once. We just fetch enough data from\neach \"run\" to cover the block numbers for which we're performing the\nheap scan, and when the heap scan advances we throw away data for\nblocks that we've passed and fetch data for the blocks we're now\nreaching.\n\n> > but if you are going to rescan the heap\n> > again next time before doing any index vacuuming then why we want to\n> > store them anyway.\n>\n> It all depends, of course. The decision needs to be made using a cost\n> model. I suspect it will be necessary to try it out, and see.\n\nBut having said that, coming back to this with fresh eyes, I think\nDilip has a really good point here. If the first thing we do at the\nstart of every VACUUM is scan the heap in a way that is guaranteed to\nrediscover all of the dead TIDs that we've previously added to the\nconveyor belt plus maybe also new ones, we may as well just forget the\nwhole idea of having a conveyor belt at all. At that point we're just\ntalking about a system for deciding when to skip index vacuuming, and\nthe conveyor belt is a big complicated piece of machinery that stores\ndata we don't really need for anything because if we threw it out the\nnext vacuum would reconstruct it anyhow and we'd get exactly the same\nresults with less code. The only way the conveyor belt system has any\nvalue is if we think that there is some set of circumstances where the\nheap scan is separated in time from the index vacuum, such that we\nmight sometimes do an index vacuum without having done a heap scan\njust before.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 16:25:26 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Thu, Mar 31, 2022 at 1:25 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I don't think we want to, exactly. 
Maybe it's easier to store\n> > redundant TIDs than to avoid storing them in the first place (we can\n> > probably just accept some redundancy). There is a trade-off to be made\n> > there. I'm not at all sure of what the best trade-off is, though.\n>\n> My intuition is that storing redundant TIDs will turn out to be a very\n> bad idea. I think that if we do that, the system will become prone to\n> a new kind of vicious cycle (to try to put this in words similar to\n> the ones you've been using to write about similar effects).\n\nI don't feel very strongly about it either way. I definitely think\nthat there are workloads for which that will be true, and in general\nI am no fan of putting off work that cannot possibly turn out to be\nunnecessary in the end. I am in favor of \"good laziness\", not \"bad\nlaziness\".\n\n> The more data we collect on the conveyor belt, the\n> harder it's going to be when we eventually need to deduplicate.\n>\n> Also, I think it's possible to deduplicate at a very reasonable cost\n> so long as (1) we enter each batch of TIDs into the conveyor belt as a\n> distinguishable \"run\"\n\nI definitely think that's the way to go, in general (regardless of\nwhat we do about deduplicating TIDs).\n\n> and (2) we never accumulate so many of these\n> runs at the same time that we can't do a single merge pass to turn\n> them into a sorted output stream. We're always going to discover dead\n> TIDs in sorted order, so as we make our pass over the heap, we can\n> simultaneously be doing an on-the-fly merge pass over the existing\n> runs that are on the conveyor belt. That means we never need to store\n> all the dead TIDs in memory at once.\n\nThat's a good idea IMV. 
I am vaguely reminded of an LSM tree.\n\n> We just fetch enough data from\n> each \"run\" to cover the block numbers for which we're performing the\n> heap scan, and when the heap scan advances we throw away data for\n> blocks that we've passed and fetch data for the blocks we're now\n> reaching.\n\nI wonder if you've thought about exploiting the new approach to\nskipping pages using the visibility map from my patch series (which\nyou reviewed recently). I think that putting that in scope here could\nbe very helpful. As a way of making the stuff we already want to do\nwith [global] indexes easier, but also as independently useful work\nbased on the conveyor belt. The visibility map is very underexploited\nas a source of accurate information about what VACUUM should be doing\nIMV (in autovacuum.c's scheduling logic, but also in vacuumlazy.c\nitself).\n\nImagine a world in which we decide up-front what pages we're going to\nscan (i.e. our final vacrel->scanned_pages), by first scanning the\nvisibility map, and serializing it in local memory, or sometimes in\ndisk using the conveyor belt. Now you'll have a pretty good idea how\nmuch TID deduplication will be necessary when you're done (to give one\nexample). In general \"locking in\" a plan of action for VACUUM seems\nlike it would be very useful. It will tend to cut down on the number\nof \"dead but not yet removable\" tuples that VACUUM encounters right\nnow -- you at least avoid visiting concurrently modified heap pages\nthat were all-visible back when OldestXmin was first established.\n\nWhen all skippable ranges are known in advance, we can reorder things\nin many different ways -- since the work of vacuuming can be\ndecomposed on the lazy_scan_heap side, too. 
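As a toy illustration of "locking in" scanned_pages up front (only a sketch; the function and threshold constant are hypothetical, loosely modeled on vacuumlazy.c's heuristic of skipping only sufficiently long all-visible page runs):

```python
SKIP_PAGES_THRESHOLD = 32   # roughly mirrors vacuumlazy.c's skip heuristic

def plan_scanned_pages(all_visible):
    """Given a snapshot of the visibility map (one bool per heap page),
    decide up front exactly which pages this VACUUM will scan.  Runs of
    all-visible pages are skipped only when they are long enough for
    skipping to beat sequential-read momentum."""
    scanned, n, i = [], len(all_visible), 0
    while i < n:
        if not all_visible[i]:
            scanned.append(i)
            i += 1
            continue
        j = i
        while j < n and all_visible[j]:
            j += 1
        if j - i < SKIP_PAGES_THRESHOLD:
            scanned.extend(range(i, j))   # run too short: scan it anyway
        i = j
    return scanned

vm = [False] * 100
for p in range(10, 15):
    vm[p] = True          # short all-visible run: still scanned
for p in range(30, 81):
    vm[p] = True          # long all-visible run: skipped
plan = plan_scanned_pages(vm)
print(len(plan))          # 49: everything except pages 30..80
```

With the plan serialized like this, the work list is fixed at the start, which is what would let later stages be reordered or handed to other workers.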
The only ordering\ndependency among heap pages (that I can think of offhand) is FSM\nvacuuming, which seems like it could be addressed without great\ndifficulty.\n\nIdeally everything can be processed in whatever order is convenient as\nof right now, based on costs and benefits. We could totally decompose\nthe largest VACUUM operations into individually processable unit of\nwork (heap and index units), so that individual autovacuum workers no\nlonger own particular VACUUM operations at all. Autovacuum workers\nwould instead multiplex all of the units of work from all pending\nVACUUMs, that are scheduled centrally, based on the current load of\nthe system. We can probably afford to be much more sensitive to the\nnumber of pages we'll dirty relative to what the system can take right\nnow, and so on.\n\nCancelling an autovacuum worker may just make the system temporarily\nsuspend the VACUUM operation for later with this design (in fact\ncancelling an autovacuum may not really be meaningful at all). A\ndesign like this could also enable things like changing our mind about\nadvancing relfrozenxid: if we decided to skip all-visible pages in\nthis VACUUM operation, and come to regret that decision by the end\n(because lots of XIDs are consumed in the interim), maybe we can go\nback and get the all-visible pages we missed first time around.\n\nI don't think that all of these ideas will turn out to be winners, but\nI'm just trying to stimulate discussion. Breaking almost all\ndependencies on the order that we process things in seems like it has\nreal promise.\n\n> > It all depends, of course. The decision needs to be made using a cost\n> > model. I suspect it will be necessary to try it out, and see.\n>\n> But having said that, coming back to this with fresh eyes, I think\n> Dilip has a really good point here. 
If the first thing we do at the\n> start of every VACUUM is scan the heap in a way that is guaranteed to\n> rediscover all of the dead TIDs that we've previously added to the\n> conveyor belt plus maybe also new ones, we may as well just forget the\n> whole idea of having a conveyor belt at all.\n\nI definitely agree that that's bad, and would be the inevitable result\nof being lazy about deduplicating consistently.\n\n> The only way the conveyor belt system has any\n> value is if we think that there is some set of circumstances where the\n> heap scan is separated in time from the index vacuum, such that we\n> might sometimes do an index vacuum without having done a heap scan\n> just before.\n\nI agree.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Mar 2022 15:42:48 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Thu, Mar 31, 2022 at 6:43 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > The only way the conveyor belt system has any\n> > value is if we think that there is some set of circumstances where the\n> > heap scan is separated in time from the index vacuum, such that we\n> > might sometimes do an index vacuum without having done a heap scan\n> > just before.\n>\n> I agree.\n\nBut in http://postgr.es/m/CA+Tgmoa6kVEeurtyeOi3a+rA2XuynwQmJ_s-h4kUn6-bKMMDRw@mail.gmail.com\n(and the messages just before and just after it) we seemed to be\nagreeing on a design where that's exactly what happens. It seemed like\na good idea to me at the time, but now it seems like it's a bad idea,\nbecause it involves using the conveyor belt in a way that adds no\nvalue.\n\nAm I confused here?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 31 Mar 2022 20:30:54 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "On Thu, Mar 31, 2022 at 5:31 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I agree.\n>\n> But in http://postgr.es/m/CA+Tgmoa6kVEeurtyeOi3a+rA2XuynwQmJ_s-h4kUn6-bKMMDRw@mail.gmail.com\n> (and the messages just before and just after it) we seemed to be\n> agreeing on a design where that's exactly what happens. It seemed like\n> a good idea to me at the time, but now it seems like it's a bad idea,\n> because it involves using the conveyor belt in a way that adds no\n> value.\n\nThere are two types of heap scans (in my mind, at least): those that\nprune, and those that VACUUM. While there has traditionally been a 1:1\ncorrespondence between these two scans (barring cases with no LP_DEAD\nitems whatsoever), that's no longer the case in Postgres 14, which\nadded the \"bypass index scan in the event of few LP_DEAD items left\nby pruning\" optimization (or Postgres 12 if you count\nINDEX_CLEANUP=off).\n\nWhen I said \"I agree\" earlier today, I imagined that I was pretty much\naffirming everything else that I'd said up until that point of the\nemail. Which is that the conveyor belt is interesting as a way of\nbreaking (or even just loosening) dependencies on the *order* in which\nwe perform work within a given \"VACUUM cycle\". 
Things can be much\nlooser than they are today, with indexes (which we've discussed a lot\nalready), and even with heap pruning (which I brought up for the first\ntime just today).\n\nHowever, I don't see any way that it will be possible to break one\nparticular ordering dependency, even with the conveyor belt stuff: The\n\"basic invariant\" described in comments above lazy_scan_heap(), which\ndescribes rules about TID recycling -- we can only recycle TIDs when a\nfull \"VACUUM cycle\" completes, just like today.\n\nThat was a point I was making in the email from back in February:\nobviously it's unsafe to do lazy_vacuum_heap_page() processing of a\npage until we're already 100% sure that the LP_DEAD items are not\nreferenced by any indexes, even indexes that have very little bloat\n(that don't really need to be vacuumed for their own sake). However,\nthe conveyor belt can add value by doing much more frequent processing\nin lazy_scan_prune() (of different pages each time, or perhaps even\nrepeat processing of the same heap pages), and much more frequent\nindex vacuuming for those indexes that seem to need it.\n\nSo the lazy_scan_prune() work (pruning and freezing) can and probably\nshould be separated in time from the index vacuuming (compared to the\ncurrent design). Maybe not for all of the indexes -- typically for the\nmajority, maybe 8 out of 10. We can do much less index vacuuming in\nthose indexes that don't really need it, in order to be able to do\nmuch more in those that do. At some point we must \"complete a whole\ncycle of heap vacuuming\" by processing all the heap pages using\nlazy_vacuum_heap_page() that need it.\n\nSeparately, the conveyor belt seems to have promise as a way of\nbreaking up work for multiplexing, or parallel processing.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Mar 2022 19:05:12 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "On Fri, Apr 1, 2022 at 1:55 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n\n> But having said that, coming back to this with fresh eyes, I think\n> Dilip has a really good point here. If the first thing we do at the\n> start of every VACUUM is scan the heap in a way that is guaranteed to\n> rediscover all of the dead TIDs that we've previously added to the\n> conveyor belt plus maybe also new ones, we may as well just forget the\n> whole idea of having a conveyor belt at all. At that point we're just\n> talking about a system for deciding when to skip index vacuuming, and\n> the conveyor belt is a big complicated piece of machinery that stores\n> data we don't really need for anything because if we threw it out the\n> next vacuum would reconstruct it anyhow and we'd get exactly the same\n> results with less code.\n\nAfter thinking more about this I see there is some value of\nremembering the dead tids in the conveyor belt. Basically, the point\nis if there are multiple indexes and we do the index vacuum for some\nof the indexes and skip for others. And now when we again do the\ncomplete vacuum cycle that time we will again get all the old dead\ntids + the new dead tids but without conveyor belt we might need to\nperform the multiple cycle of the index vacuum even for the indexes\nfor which we had done the vacuum in previous vacuum cycle (if all tids\nare not fitting in maintenance work mem). 
But with the conveyor belt\nwe remember the conveyor belt pageno upto which we have done the index\nvacuum and then we only need to do vacuum for the remaining tids which\nwill definitely reduce the index vacuuming passes, right?\n\nSo my stand is, a) for the global indexes we must need the conveyor\nbelt for remembering the partition wise dead tids (because after\nvacuuming certain partitions when we go for global index vacuuming we\ndon't want to rescan all the partitions to get the same dead items) b)\nand even without global indexes there are advantages of storing dead\nitems in the conveyor belt as explained in my previous paragraph. So\nI think it is worth adding the conveyor belt infrastructure.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 09:38:00 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Thu, Mar 31, 2022 at 9:08 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> But with the conveyor belt\n> we remember the conveyor belt pageno upto which we have done the index\n> vacuum and then we only need to do vacuum for the remaining tids which\n> will definitely reduce the index vacuuming passes, right?\n\nRight, exactly -- the index or two that really need to be vacuumed a\nlot can have relatively small dead_items arrays.\n\nOther indexes (when eventually vacuumed) will need a larger dead_items\narray, with everything we need to get rid of from the index in one big\narray. Hopefully this won't matter much. Vacuuming these indexes\nshould be required infrequently (compared to the bloat-prone indexes).\n\nAs I said upthread, when we finally have to perform heap vacuuming\n(not heap pruning), it'll probably happen because the heap itself\nneeds heap vacuuming. 
We could probably get away with *never* vacuuming\ncertain indexes on tables prone to non-HOT updates, without that ever\ncausing index bloat. But heap line pointer bloat is eventually going\nto become a real problem with non-HOT updates, no matter what.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Thu, 31 Mar 2022 21:30:00 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Apr 1, 2022 at 12:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> After thinking more about this I see there is some value of\n> remembering the dead tids in the conveyor belt. Basically, the point\n> is if there are multiple indexes and we do the index vacuum for some\n> of the indexes and skip for others. And now when we again do the\n> complete vacuum cycle that time we will again get all the old dead\n> tids + the new dead tids but without conveyor belt we might need to\n> perform the multiple cycle of the index vacuum even for the indexes\n> for which we had done the vacuum in previous vacuum cycle (if all tids\n> are not fitting in maintenance work mem). But with the conveyor belt\n> we remember the conveyor belt pageno upto which we have done the index\n> vacuum and then we only need to do vacuum for the remaining tids which\n> will definitely reduce the index vacuuming passes, right?\n\nI guess you're right, and it's actually a little bit better than that,\nbecause even if the data does fit into shared memory, we'll have to\npass fewer TIDs to the worker to be removed from the heap, which might\nsave a few CPU cycles. But I think both of those are very small\nbenefits. If that's all we're going to do with the conveyor belt\ninfrastructure, I don't think it's worth the effort. 
I am completely\nin agreement with Peter's comments to the effect that the goal here is\nto create flexibility, but it feels to me like the particular\ndevelopment plan we discussed back in late February isn't going to\ncreate enough flexibility to make it worth the effort it takes. It\nseems to me we need to find a way to do better than that.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 14:04:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Apr 1, 2022 at 11:04 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I guess you're right, and it's actually a little bit better than that,\n> because even if the data does fit into shared memory, we'll have to\n> pass fewer TIDs to the worker to be removed from the heap, which might\n> save a few CPU cycles. But I think both of those are very small\n> benefits.\n\nI'm not following. It seems like you're saying that the ability to\nvacuum indexes on their own schedule (based on their own needs) is not\nsufficiently compelling. I think it's very compelling, with enough\nindexes (and maybe not very many).\n\nThe conveyor belt doesn't just save I/O from repeated scanning of the\nheap. It may also save on repeated pruning (or just dirtying) of the\nsame heap pages again and again, for very little benefit.\n\nImagine an append-only table where 1% of transactions that insert are\naborts. You really want to be able to constantly VACUUM such a table,\nso that its pages are proactively frozen and set all-visible in the\nvisibility map -- it's not that different to a perfectly append-only\ntable, without any garbage tuples. 
And so it would be very useful if\nwe could delay index vacuuming for much longer than the current 2% of\nrel_pages heuristics seems to allow.\n\nThat heuristic has to conservatively assume that it might be some time\nbefore the next vacuum is launched, and has the opportunity to\nreconsider index vacuuming. What if it was a more or less independent\nquestion instead? To put it another way, it would be great if the\nscheduling code for autovacuum could make inferences about what\ngeneral strategy works best for a given table over time. In order to\nbe able to do that sensibly, the algorithm needs more context, so that\nit can course correct without paying much of a cost for being wrong.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 Apr 2022 11:26:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Apr 1, 2022 at 2:27 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> I'm not following. It seems like you're saying that the ability to\n> vacuum indexes on their own schedule (based on their own needs) is not\n> sufficiently compelling. I think it's very compelling, with enough\n> indexes (and maybe not very many).\n>\n> The conveyor belt doesn't just save I/O from repeated scanning of the\n> heap. It may also save on repeated pruning (or just dirtying) of the\n> same heap pages again and again, for very little benefit.\n\nI'm also not following. In order to get that benefit, we would have to\nsometimes decide to not perform lazy_scan_heap() at the startup of a\nvacuum. And in this email I asked you whether it was your idea that we\nshould always start a vacuum operation with lazy_scan_heap(), and you\nsaid \"yes\":\n\nhttp://postgr.es/m/CA+Tgmoa6kVEeurtyeOi3a+rA2XuynwQmJ_s-h4kUn6-bKMMDRw@mail.gmail.com\n\nSo I'm completely confused here. 
If we always start a vacuum with\nlazy_scan_heap(), as you said you wanted, then we will not save any\nheap scanning.\n\nWhat am I missing?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 14:39:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Apr 1, 2022 at 11:39 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> So I'm completely confused here. If we always start a vacuum with\n> lazy_scan_heap(), as you said you wanted, then we will not save any\n> heap scanning.\n\nThe term \"start a VACUUM\" becomes ambiguous with the conveyor belt.\n\nWhat I addressed in a nearby email back in February [1] was the\nidea of doing heap vacuuming of the last run (or several runs) of dead\nTIDs on top of heap pruning to create the next run/runs of dead TIDs.\n\n> What am I missing?\n\nThere is a certain sense in which we are bound to always \"start a\nvacuum\" in lazy_scan_prune(), with any design based on the current\none. How else are we ever going to make a basic initial determination\nabout which heap LP_DEAD items need their TIDs deleted from indexes,\nsooner or later? Obviously that information must always have\noriginated in lazy_scan_prune (or in lazy_scan_noprune).\n\nWith the conveyor belt, and a non-HOT-update heavy workload, we'll\neventually need to exhaustively do index vacuuming of all indexes\n(even those that don't need it for their own sake) to make it safe to\nremove heap line pointer bloat (to set heap LP_DEAD items to\nLP_UNUSED). This will happen least often of all, and is the one\ndependency the conveyor belt can't help with.\n\nTo answer your question: when heap vacuuming does finally happen, we\nat least don't need to call lazy_scan_prune for any pages first\n(neither the pages we're vacuuming, nor any other heap pages). 
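That invariant can be stated compactly: a heap LP_DEAD item may only become LP_UNUSED once every index has been vacuumed past the point where its TID was recorded. As a sketch (hypothetical names, only a model of the rule, not PostgreSQL code):

```python
def can_set_unused(tid_pos, index_done_upto):
    """tid_pos is the conveyor-belt position at which this dead TID was
    recorded; index_done_upto maps each index to the belt position it
    has been vacuumed up to.  Until *every* index has moved past the
    TID, some index tuple may still point at it, so the line pointer
    must stay LP_DEAD rather than be recycled as LP_UNUSED."""
    return all(done > tid_pos for done in index_done_upto.values())

done = {"idx_a": 100, "idx_b": 40}
print(can_set_unused(30, done))   # True: every index is already past it
print(can_set_unused(60, done))   # False: idx_b could still reference it
```

The laggard index (idx_b here) is what gates heap vacuuming, which is why one rarely vacuumed index is enough to hold back line pointer recycling for the whole table.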
Plus\nthe decision to finally clean up line pointer bloat can be made based\non known facts about line pointer bloat, without tying that to other\nprocessing done by lazy_scan_prune() -- so there's greater separation\nof concerns.\n\nThat having been said...maybe it would make sense to also call\nlazy_scan_prune() right after these relatively rare calls to\nlazy_vacuum_heap_page(), opportunistically (since we already dirtied\nthe page once). But that would be an additional optimization, at best; it\nwouldn't be the main way that we call lazy_scan_prune().\n\n[1] https://www.postgresql.org/message-id/CAH2-WzmG%3D_vYv0p4bhV8L73_u%2BBkd0JMWe2zHH333oEujhig1g%40mail.gmail.com\n--\nPeter Geoghegan\n\n\n", "msg_date": "Fri, 1 Apr 2022 12:06:52 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Fri, Apr 1, 2022 at 11:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Apr 1, 2022 at 12:08 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > After thinking more about this I see there is some value of\n> > remembering the dead tids in the conveyor belt. Basically, the point\n> > is if there are multiple indexes and we do the index vacuum for some\n> > of the indexes and skip for others. And now when we again do the\n> > complete vacuum cycle that time we will again get all the old dead\n> > tids + the new dead tids but without conveyor belt we might need to\n> > perform the multiple cycle of the index vacuum even for the indexes\n> > for which we had done the vacuum in previous vacuum cycle (if all tids\n> > are not fitting in maintenance work mem). 
But with the conveyor belt\n> > we remember the conveyor belt pageno upto which we have done the index\n> > vacuum and then we only need to do vacuum for the remaining tids which\n> > will definitely reduce the index vacuuming passes, right?\n>\n> I guess you're right, and it's actually a little bit better than that,\n> because even if the data does fit into shared memory, we'll have to\n> pass fewer TIDs to the worker to be removed from the heap, which might\n> save a few CPU cycles. But I think both of those are very small\n> benefits. If that's all we're going to do with the conveyor belt\n> infrastructure, I don't think it's worth the effort.\n\nI don't think that saving extra index passes is really a small gain.\nI think this will save a lot of IO if indexes pages are not in shared\nbuffers because here we are talking about we can completely avoid the\nindex passes for some of the indexes if it is already done. And if\nthis is the only advantage then it might not be worth adding this\ninfrastructure but what about global indexes?\n\nBecause if we have global indexes then we must need this\ninfrastructure to store the dead items for the partition because for\nexample after vacuuming 1000 partitions while vacuuming the 1001st\npartition if we need to vacuum the global index then we don't want to\nrescan all the previous 1000 partitions to regenerate those old dead\nitems right? So I think this is the actual use case where we\nindirectly skip the heap vacuuming for some of the partitions before\nperforming the index vacuum.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 15:49:09 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "On Tue, Apr 5, 2022 at 6:19 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> I don't think that saving extra index passes is really a small gain.\n> I think this will save a lot of IO if indexes pages are not in shared\n> buffers because here we are talking about we can completely avoid the\n> index passes for some of the indexes if it is already done. And if\n> this is the only advantage then it might not be worth adding this\n> infrastructure but what about global indexes?\n\nSure, I agree that the gain is large when the situation arises -- but\nin practice I think it's pretty rare that the dead TID array can't fit\nin maintenance_work_mem. In ten years of doing PostgreSQL support,\nI've seen only a handful of cases where # of index scans > 1, and\nthose were solved by just increasing maintenance_work_mem until the\nproblem went away. AFAICT, there's pretty much nobody who can't fit\nthe dead TID list in main memory. They just occasionally don't\nconfigure enough memory for it to happen. It makes sense if you think\nabout the math. Say you run with maintenance_work_mem=64MB. That's\nenough for 10 million dead TIDs. With default settings, the table\nbecomes eligible for vacuuming when the number of updates and deletes\nexceeds 20% of the table. So to fill up that amount of memory, you\nneed the table to have more than 50 million tuples. If you estimate\n(somewhat randomly) 100 tuples per page, that's 500,000 pages, or\nabout 4GB. If you have a 4GB table, you don't have a problem with using\n64MB of memory to vacuum it. And similarly if you have a 64GB table,
And similarly if you have a 640GB table,\nyou don't have a problem with using 1GB of memory to vacuum it.\nPractically speaking, if we made work memory for autovacuum unlimited,\nand allocated on demand as much as we need, I bet almost nobody would\nhave an issue.\n\n> Because if we have global indexes then we must need this\n> infrastructure to store the dead items for the partition because for\n> example after vacuuming 1000 partitions while vacuuming the 1001st\n> partition if we need to vacuum the global index then we don't want to\n> rescan all the previous 1000 partitions to regenerate those old dead\n> items right? So I think this is the actual use case where we\n> indirectly skip the heap vacuuming for some of the partitions before\n> performing the index vacuum.\n\nWell I agree. But the problem is what development path we should\npursue in terms of getting there. We want to do something that's going\nto make sense if and when we eventually get global indexes, but which\nis going to give us a good amount of benefit in the meanwhile, and\nalso doesn't involve having to make too many changes to the code at\nthe same time. I liked the idea of keeping VACUUM basically as it is\ntoday -- two heap passes with an index pass in the middle, but now\nwith the conveyor injected -- because it keeps the code changes as\nsimple as possible. And perhaps we should start by doing just that\nmuch. But now that I've realized that the benefit of doing only that\nmuch is so little, I'm a lot less convinced that it is a good first\nstep. Any hope of getting a more significant benefit out of the\nconveyor belt stuff relies on our ability to get more decoupling, so\nthat we for example collect dead TIDs on Tuesday, vacuum the indexes\non Wednesday, and set the dead TIDs unused on Thursday, doing other\nthings meanwhile.\n\nAnd from that point of view I see two problems. One problem is that I\ndo not think we want to force all vacuuming through the conveyor belt\nmodel. 
It doesn't really make sense for a small table with no\nassociated global indexes. And so then there is a code structure\nissue: how do we set things up so that we can vacuum as we do today,\nor alternatively vacuum in completely separate stages, without filling\nthe code up with a million \"if\" statements? The other problem is\nunderstanding whether it's really feasible to postpone the index\nvacuuming and the second heap pass in realistic scenarios. Postponing\nindex vacuuming and the second heap pass means that dead line pointers\nremain in the heap, and that can drive bloat via line pointer\nexhaustion. The whole idea of decoupling table and index vacuum\nsupposes that there are situations in which it's worth performing the\nfirst heap pass where we gather the dead line pointers but where it's\nnot necessary to follow that up as quickly as possible with a second\nheap pass to mark dead line pointers unused. I think Peter and I are\nin agreement that there are situations in which some indexes need to\nbe vacuumed much more often than others -- but that doesn't matter if\nthe heap needs to be vacuumed more frequently than anything else,\nbecause you can't do that without first vacuuming all the indexes.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 08:44:27 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Tue, Apr 5, 2022 at 5:44 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> The whole idea of decoupling table and index vacuum\n> supposes that there are situations in which it's worth performing the\n> first heap pass where we gather the dead line pointers but where it's\n> not necessary to follow that up as quickly as possible with a second\n> heap pass to mark dead line pointers unused. 
I think Peter and I are\n> in agreement that there are situations in which some indexes need to\n> be vacuumed much more often than others -- but that doesn't matter if\n> the heap needs to be vacuumed more frequently than anything else,\n> because you can't do that without first vacuuming all the indexes.\n\nIt's not just an enabler of more frequent index vacuuming (for those\nindexes that need it the most), though. It's also an enabler of more\nfrequent lazy_scan_prune processing (in particular setting hint bits\nand freezing), which is probably even more likely to benefit from the\ndecoupling you'd be enabling. I can imagine this having great value in\na world where autovacuum scheduling eagerly keeps up with inserts into\nan append-mostly table, largely avoiding repeating dirtying within\nlazy_scan_prune, with dynamic feedback. You just need to put off the\nwork of index/heap vacuuming to be able to do that kind of thing.\n\nPostgres 14 split the WAL record previously shared by both pruning and\nvacuuming (called XLOG_HEAP2_CLEAN) into two separate WAL records\n(called XLOG_HEAP2_PRUNE and XLOG_HEAP2_VACUUM). That made it easier\nto spot the fact that we usually have far fewer of the latter WAL\nrecords during VACUUM by using pg_waldump. Might be worth doing your\nown experiments on this.\n\nOther instrumentation changes in 14 also helped here. 
In particular\nthe \"%u pages from table (%.2f%% of total) had %lld dead item\nidentifiers removed\" line that was added to autovacuum's log output\nmade it easy to spot how little heap vacuuming might really be needed.\nWith some tables it is roughly the opposite way around (as much or\neven more heap vacuuming than pruning/freezing) -- you'll tend to see\nthat in tables where opportunistic pruning leaves behind a lot of\nLP_DEAD stubs that only VACUUM can make LP_UNUSED.\n\nBut, these same LP_DEAD-heavy tables *also* have a very decent\nchance of benefiting from a better index vacuuming strategy, something\n*also* enabled by the conveyor belt design. So overall, in either scenario,\nVACUUM concentrates on problems that are particular to a given table\nand workload, without being hindered by implementation-level\nrestrictions.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Apr 2022 12:21:39 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Tue, Apr 5, 2022 at 3:22 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> It's not just an enabler of more frequent index vacuuming (for those\n> indexes that need it the most), though. It's also an enabler of more\n> frequent lazy_scan_prune processing (in particular setting hint bits\n> and freezing), which is probably even more likely to benefit from the\n> decoupling you'd be enabling. I can imagine this having great value in\n> a world where autovacuum scheduling eagerly keeps up with inserts into\n> an append-mostly table, largely avoiding repeating dirtying within\n> lazy_scan_prune, with dynamic feedback. 
You just need to put off the\n> work of index/heap vacuuming to be able to do that kind of thing.\n\nI had assumed that this would not be the case, because if the page is\nbeing accessed by the workload, it can be pruned - and probably frozen\ntoo, if we wanted to write code for that and spend the cycles on it -\nand if it isn't, pruning and freezing probably aren't needed.\n\n> But, these same LP_DEAD-heavy tables *also* have a very decent\n> chance of benefiting from a better index vacuuming strategy, something\n> *also* enabled by the conveyor belt design. So overall, in either scenario,\n> VACUUM concentrates on problems that are particular to a given table\n> and workload, without being hindered by implementation-level\n> restrictions.\n\nWell this is what I'm not sure about. We need to demonstrate that\nthere are at least some workloads where retiring the LP_DEAD line\npointers doesn't become the dominant concern.\n\nAlso, just to be clear, I'm not arguing against my own idea. I'm\ntrying to figure out what the first, smallest unit of work that\nproduces a committable patch with a meaningful benefit is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 16:10:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "On Tue, Apr 5, 2022 at 1:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> I had assumed that this would not be the case, because if the page is\n> being accessed by the workload, it can be pruned - and probably frozen\n> too, if we wanted to write code for that and spend the cycles on it -\n> and if it isn't, pruning and freezing probably aren't needed.\n\nVACUUM has a top-down structure, and so seems to me to be the natural\nplace to think about the high level needs of the table as a whole,\nespecially over time.\n\nI don't think we actually need to scan the pages that we left some\nLP_DEAD items in previous VACUUM operations. It seems possible to\nfreeze newly appended pages quite often, without needlessly revisiting\nthe pages from previous batches (even those with LP_DEAD items left\nbehind). Maybe we need to rethink the definition of \"VACUUM operation\"\na little to do that, but it seems relatively tractable.\n\nAs I said upthread recently, I am excited about the potential of\n\"locking in\" a set of scanned_pages using a local/private version of\nthe visibility map (a copy from just after OldestXmin is initially\nestablished), that VACUUM can completely work off of. Especially if\ncombined with the conveyor belt, which could make VACUUM operations\nsuspendable and resumable.\n\nI don't see any reason why it wouldn't be possible to \"lock in\" an\ninitial scanned_pages, and then use that data structure (which could\nbe persisted) to avoid revisiting the pages that we know we already\nvisited (and left LP_DEAD items in). 
We could \"resume the VACUUM\noperation that was suspended earlier\" a bit later (not have several\ntechnically unrelated VACUUM operations together in close succession).\nThe later rounds of processing could even use new cutoffs for both\npruning and freezing, despite being from \"the same VACUUM operation\".\nThey could have an \"expanded rel_pages\" that covers the newly appended\npages that we want to quickly freeze tuples on.\n\nAFAICT the only thing that we need to do to make this safe is to carry\nforward our original vacrel->NewRelfrozenXid (which can never be later\nthan our original vacrel->OldestXmin). Under this architecture, we\ndon't really \"skip index vacuuming\" at all. Rather, we redefine VACUUM\noperations in a way that makes the final rel_pages provisional, at\nleast when run in autovacuum.\n\nVACUUM itself can notice that it might be a good idea to \"expand\nrel_pages\" and expand the scope of the work it ultimately does, based\non the observed characteristics of the table. No heap pages get repeat\nprocessing per \"VACUUM operation\" (relative to the current definition\nof the term). Some indexes will get \"extra, earlier index vacuuming\",\nwhich we've already said is the right way to think about all this (we\nshould think of it as extra index vacuuming, not less index\nvacuuming).\n\n> > But, these same LP_DEAD-heavy tables *also* have a very decent\n> > chance of benefiting from a better index vacuuming strategy, something\n> > *also* enabled by the conveyor belt design. So overall, in either scenario,\n> > VACUUM concentrates on problems that are particular to a given table\n> > and workload, without being hindered by implementation-level\n> > restrictions.\n>\n> Well this is what I'm not sure about. We need to demonstrate that\n> there are at least some workloads where retiring the LP_DEAD line\n> pointers doesn't become the dominant concern.\n\nIt will eventually become the dominant concern. 
But that could take a\nwhile, compared to the growth in indexes.\n\nAn LP_DEAD line pointer stub in a heap page is 4 bytes. The smallest\npossible B-Tree index tuple is 20 bytes on mainstream platforms (16\nbytes + 4 byte line pointer). Granted deduplication makes this less\ntrue, but that's far from guaranteed to help. Also, many tables have\nway more than one index.\n\nOf course it isn't nearly as simple as comparing the bytes of bloat in\neach case. More generally, I don't claim that it's easy to\ncharacterize which factor is more important, even in the abstract,\neven under ideal conditions -- it's very hard. But I'm sure that there\nare routinely very large differences among indexes and the heap\nstructure.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Apr 2022 13:29:38 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Tue, Apr 5, 2022 at 4:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On Tue, Apr 5, 2022 at 1:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I had assumed that this would not be the case, because if the page is\n> > being accessed by the workload, it can be pruned - and probably frozen\n> > too, if we wanted to write code for that and spend the cycles on it -\n> > and if it isn't, pruning and freezing probably aren't needed.\n>\n> [ a lot of things ]\n\nI don't understand what any of this has to do with the point I was raising here.\n\n> > > But, these same LP_DEAD-heavy tables *also* have a very decent\n> > > chance of benefiting from a better index vacuuming strategy, something\n> > > *also* enabled by the conveyor belt design. So overall, in either scenario,\n> > > VACUUM concentrates on problems that are particular to a given table\n> > > and workload, without being hindered by implementation-level\n> > > restrictions.\n> >\n> > Well this is what I'm not sure about. 
We need to demonstrate that\n> > there are at least some workloads where retiring the LP_DEAD line\n> > pointers doesn't become the dominant concern.\n>\n> It will eventually become the dominant concern. But that could take a\n> while, compared to the growth in indexes.\n>\n> An LP_DEAD line pointer stub in a heap page is 4 bytes. The smallest\n> possible B-Tree index tuple is 20 bytes on mainstream platforms (16\n> bytes + 4 byte line pointer). Granted deduplication makes this less\n> true, but that's far from guaranteed to help. Also, many tables have\n> way more than one index.\n>\n> Of course it isn't nearly as simple as comparing the bytes of bloat in\n> each case. More generally, I don't claim that it's easy to\n> characterize which factor is more important, even in the abstract,\n> even under ideal conditions -- it's very hard. But I'm sure that there\n> are routinely very large differences among indexes and the heap\n> structure.\n\nYeah, I think we need to better understand how this works out.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 17:52:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" 
}, { "msg_contents": "On Tue, Apr 5, 2022 at 2:53 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Tue, Apr 5, 2022 at 4:30 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> > On Tue, Apr 5, 2022 at 1:10 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> > > I had assumed that this would not be the case, because if the page is\n> > > being accessed by the workload, it can be pruned - and probably frozen\n> > > too, if we wanted to write code for that and spend the cycles on it -\n> > > and if it isn't, pruning and freezing probably aren't needed.\n> >\n> > [ a lot of things ]\n>\n> I don't understand what any of this has to do with the point I was raising here.\n\nWhy do you assume that we'll ever have an accurate idea of how many\nLP_DEAD items there are, before we've looked? And if we're wrong about\nthat, persistently, why should anything else we think about it really\nmatter? This is an inherently dynamic and cyclic process. Statistics\ndon't really work here. That was how my remarks were related to yours.\nThat should be in scope -- getting better information about what work\nwe need to do by blurring the boundaries between deciding what to do,\nand executing that plan.\n\nOn a long enough timeline the LP_DEAD items in heap pages are bound to\nbecome the dominant concern in almost any interesting case for the\nconveyor belt, for the obvious reason: you can't do anything about\nLP_DEAD items without also doing every other piece of processing\ninvolving those same heap pages. So in that sense, yes, they will be\nthe dominant problem at times, for sure.\n\nOn the other hand it seems very hard to imagine an interesting\nscenario in which LP_DEAD items are the dominant problem from the\nearliest stage of processing by VACUUM. But even if it was somehow\npossible, would it matter? That would mean that there'd be occasional\ninstances of the conveyor belt being ineffective -- hardly the end of\nthe world. What has it cost us to keep it as an option that wasn't\nused? 
I don't think we'd have to do any extra work, other than\nin-memory bookkeeping.\n\n-- \nPeter Geoghegan\n\n\n", "msg_date": "Tue, 5 Apr 2022 15:25:50 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Tue, Apr 5, 2022 at 6:26 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> On a long enough timeline the LP_DEAD items in heap pages are bound to\n> become the dominant concern in almost any interesting case for the\n> conveyor belt, for the obvious reason: you can't do anything about\n> LP_DEAD items without also doing every other piece of processing\n> involving those same heap pages. So in that sense, yes, they will be\n> the dominant problem at times, for sure.\n>\n> On the other hand it seems very hard to imagine an interesting\n> scenario in which LP_DEAD items are the dominant problem from the\n> earliest stage of processing by VACUUM. But even if it was somehow\n> possible, would it matter? That would mean that there'd be occasional\n> instances of the conveyor belt being ineffective -- hardly the end of\n> the world. What has it cost us to keep it as an option that wasn't\n> used? I don't think we'd have to do any extra work, other than\n> in-memory bookkeeping.\n\nWell, OK, here's what I don't understand. Let's say I insert a tuple\nand then I delete a tuple. Then time goes by and other things happen,\nbut those things do not include a heap vacuum. However,\nduring that time, all transactions that were in progress at the time\nof the insert-then-delete have now completed. At the end of that time,\nthe number of things that need to be cleaned out of the heap is\nexactly 1: there is either a dead line pointer, or if the page hasn't\nbeen pruned yet, there is a dead tuple. 
The number of things that need\nto be cleaned out of the index is <= 1, because the index tuple could\nhave gotten nuked by kill_prior_tuple or bottom-up index deletion, or\nit might still be there. It follows that the number of dead line\npointers (or tuples that can be truncated to dead line pointers) in\nthe heap is always greater than or equal to the number in the index.\n\nAll things being equal, that means the heap is always in trouble\nbefore the index is in trouble. Maybe all things are not equal, but I\ndon't know why that should be so. It feels like the index has\nopportunistic cleanup mechanisms that can completely eliminate index\ntuples, while the heap can at best replace dead tuples with dead line\npointers which still consume some resources.\n\nAnd if that's the case then doing more index vacuum cycles than we do\nheap vacuum cycles really isn't a sensible thing to do. You seem to\nthink it is, though... what am I missing?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:45:17 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: should vacuum's first heap pass be read-only?" }, { "msg_contents": "On Thu, Apr 7, 2022 at 6:45 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> Well, OK, here's what I don't understand. Let's say I insert a tuple\n> and then I delete a tuple. Then time goes by and other things happen,\n> including but those things do not include a heap vacuum. However,\n> during that time, all transactions that were in progress at the time\n> of the insert-then-delete have now completed. At the end of that time,\n> the number of things that need to be cleaned out of the heap is\n> exactly 1: there is either a dead line pointer, or if the page hasn't\n> been pruned yet, there is a dead tuple. 
The number of things that need\n> to be cleaned out of the index is <= 1,\n\nI don't think it's useful to talk about only 1 or 2 tuple (or\ntuple-like) units of bloat in isolation.\n\n> because the index tuple could\n> have gotten nuked by kill_prior_tuple or bottom-up index deletion, or\n> it might still be there. It follows that the number of dead line\n> pointers (or tuples that can be truncated to dead line pointers) in\n> the heap is always greater than or equal to the number in the index.\n\nThese techniques are effective because they limit the *concentration*\nof garbage index tuples in any part of the index's key space (or the\nconcentration on individual leaf pages, if you prefer). There is a\nhuge difference between 100 dead tuples that are obsolete versions of\n100 logical rows, and 100 dead tuples that are obsolete versions of\nonly 2 or 3 logical rows. In general just counting the number of dead\nindex tuples can be highly misleading.\n\n> All things being equal, that means the heap is always in trouble\n> before the index is in trouble. Maybe all things are not equal, but I\n> don't know why that should be so. It feels like the index has\n> opportunistic cleanup mechanisms that can completely eliminate index\n> tuples, while the heap can at best replace dead tuples with dead line\n> pointers which still consume some resources.\n\nThe problem with line pointer bloat is not really that you're wasting\nthese 4 byte units of space. The problem comes from second order\neffects. By allowing line pointer bloat in the short term, there is a\ntendency for heap fragmentation to build in the long term. I am\nreferring to a situation in which heap tuples tend to get located in\nrandom places over time, leaving logically related heap tuples (those\noriginally inserted around the same time, by the same transaction)\nstrewn all over the place (not packed together).\n\nFragmentation will persist after VACUUM runs and makes all LP_DEAD\nitems LP_UNUSED. 
It can be thought of as a degenerative process. If it\nwas enough to just VACUUM and then reclaim the LP space periodically,\nthen there wouldn't be much of a problem to fix here. I don't expect\nthat you'll find this explanation of things satisfactory, since it is\nvery complicated, and admits a lot of uncertainty. That's just what\nexperience has led me to believe.\n\nThe fact that all of this is so complicated makes techniques that\nfocus on lowering costs and bounding the uncertainty/worst case seem\nso compelling to me -- focussing on keeping costs low (without being\noverly concerned about the needs of the workload) is underexploited\nright now. Even if you could come up with a 100% needs driven (and 0%\ncost-of-cleanup driven) model that really worked, it would still be\nvery sensitive to having accurate parameters -- but accurate current\ninformation is hard to come by right now. And so I just don't see it\never adding much value on its own.\n\n> And if that's the case then doing more index vacuum cycles than we do\n> heap vacuum cycles really isn't a sensible thing to do. You seem to\n> think it is, though... what am I missing?\n\nBottom-up index deletion is only effective with logically unchanged\nB-Tree indexes and non-HOT updates. The kill_prior_tuple technique is\nonly effective when index scans happen to read the same part of the\nkey space for a B-Tree, GiST, or hash index. That leaves a lot of\nother cases unaddressed.\n\nI just can't imagine a plausible workload in which there are real\nproblems, which manifest themselves as line pointer bloat first, with\nany problems in indexes coming up only much later, if at all\n(admittedly I could probably contrive such a case if I wanted to).\nAbsence of evidence isn't evidence of absence, though. Just giving you\nmy opinion.\n\nAgain, though, I must ask: why does it matter either way? 
Even if such\na scenario were reasonably common, it wouldn't necessarily make life\nharder for you here.\n\n--\nPeter Geoghegan\n\n\n", "msg_date": "Thu, 7 Apr 2022 09:15:44 -0700", "msg_from": "Peter Geoghegan <pg@bowt.ie>", "msg_from_op": false, "msg_subject": "Re: should vacuum's first heap pass be read-only?" } ]
[ { "msg_contents": "Hi All,\npg_walfile_name() returns the WAL file name corresponding to the given\nWAL location. Per\nhttps://www.postgresql.org/docs/14/functions-admin.html\n---\npg_walfile_name ( lsn pg_lsn ) → text\n\nConverts a write-ahead log location to the name of the WAL file\nholding that location.\n---\n\nThe function uses XLByteToPrevSeg() which gives the name of the previous\nWAL file if the location falls on the boundary of a WAL segment. I find\nit misleading since the given LSN will fall into the segment provided\nby XLByteToSeg() and not XLByteToPrevSeg().\n\nAnd it gives some surprising results as well\n---\n#select pg_walfile_name('0/0'::pg_lsn);\n pg_walfile_name\n--------------------------\n 00000001FFFFFFFF000000FF\n(1 row)\n----\n\nComment in the code says\n---\n/*\n * Compute an xlog file name given a WAL location,\n * such as is returned by pg_stop_backup() or pg_switch_wal().\n */\nDatum\npg_walfile_name(PG_FUNCTION_ARGS)\n---\nXLByteToPrevSeg() may be in line with the comment but I don't think\nthat's what is conveyed by the documentation at least.\n\n--\nBest Wishes,\nAshutosh\n\n\n", "msg_date": "Fri, 4 Feb 2022 19:35:05 +0530", "msg_from": "Ashutosh Bapat <ashutosh.bapat.oss@gmail.com>", "msg_from_op": true, "msg_subject": "pg_walfile_name uses XLByteToPrevSeg" }, { "msg_contents": "On Fri, Feb 4, 2022 at 9:05 AM Ashutosh Bapat\n<ashutosh.bapat.oss@gmail.com> wrote:\n> And it gives some surprising results as well\n> ---\n> #select pg_walfile_name('0/0'::pg_lsn);\n> pg_walfile_name\n> --------------------------\n> 00000001FFFFFFFF000000FF\n> (1 row)\n> ----\n\nYeah, that seems wrong.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 09:17:54 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walfile_name uses XLByteToPrevSeg" }, { "msg_contents": "On Fri, Feb 04, 2022 at 09:17:54AM -0500, Robert Haas wrote:\n> On Fri, Feb 4, 2022 at 9:05 AM Ashutosh 
Bapat\n> <ashutosh.bapat.oss@gmail.com> wrote:\n>> And it gives some surprising results as well\n>> ---\n>> #select pg_walfile_name('0/0'::pg_lsn);\n>> pg_walfile_name\n>> --------------------------\n>> 00000001FFFFFFFF000000FF\n>> (1 row)\n>> ----\n> \n> Yeah, that seems wrong.\n\nIt looks like it's been this way for a while (704ddaa).\npg_walfile_name_offset() has the following comment:\n\n * Note that a location exactly at a segment boundary is taken to be in\n * the previous segment. This is usually the right thing, since the\n * expected usage is to determine which xlog file(s) are ready to archive.\n\nI see a couple of discussions about this as well [0] [1].\n\n[0] https://www.postgresql.org/message-id/flat/1154384790.3226.21.camel%40localhost.localdomain\n[1] https://www.postgresql.org/message-id/flat/15952.1154827205%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:50:57 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walfile_name uses XLByteToPrevSeg" }, { "msg_contents": "At Fri, 4 Feb 2022 14:50:57 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Fri, Feb 04, 2022 at 09:17:54AM -0500, Robert Haas wrote:\n> > On Fri, Feb 4, 2022 at 9:05 AM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> >> And it gives some surprising results as well\n> >> ---\n> >> #select pg_walfile_name('0/0'::pg_lsn);\n> >> pg_walfile_name\n> >> --------------------------\n> >> 00000001FFFFFFFF000000FF\n> >> (1 row)\n> >> ----\n> > \n> > Yeah, that seems wrong.\n> \n> It looks like it's been this way for a while (704ddaa).\n> pg_walfile_name_offset() has the following comment:\n> \n> * Note that a location exactly at a segment boundary is taken to be in\n> * the previous segment. 
This is usually the right thing, since the\n * expected usage is to determine which xlog file(s) are ready to archive.\n\nI see a couple of discussions about this as well [0] [1].\n\n[0] https://www.postgresql.org/message-id/flat/1154384790.3226.21.camel%40localhost.localdomain\n[1] https://www.postgresql.org/message-id/flat/15952.1154827205%40sss.pgh.pa.us\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 4 Feb 2022 14:50:57 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walfile_name uses XLByteToPrevSeg" }, { "msg_contents": "At Fri, 4 Feb 2022 14:50:57 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> On Fri, Feb 04, 2022 at 09:17:54AM -0500, Robert Haas wrote:\n> > On Fri, Feb 4, 2022 at 9:05 AM Ashutosh Bapat\n> > <ashutosh.bapat.oss@gmail.com> wrote:\n> >> And it gives some surprising results as well\n> >> ---\n> >> #select pg_walfile_name('0/0'::pg_lsn);\n> >> pg_walfile_name\n> >> --------------------------\n> >> 00000001FFFFFFFF000000FF\n> >> (1 row)\n> >> ----\n> > \n> > Yeah, that seems wrong.\n> \n> It looks like it's been this way for a while (704ddaa).\n> pg_walfile_name_offset() has the following comment:\n> \n> * Note that a location exactly at a segment boundary is taken to be in\n> * the previous segment. This is usually the right thing, since the\n> * expected usage is to determine which xlog file(s) are ready to archive.\n> \n> I see a couple of discussions about this as well [0] [1].\n> \n> [0] https://www.postgresql.org/message-id/flat/1154384790.3226.21.camel%40localhost.localdomain\n> [1] https://www.postgresql.org/message-id/flat/15952.1154827205%40sss.pgh.pa.us\n\nYes, it's the deliberate choice of design, or a kind of\nquestionable-but-unoverturnable decision. I think there are many\nexternal tools conscious of this behavior.\n\nIt is also described in the documentation.\n\nhttps://www.postgresql.org/docs/current/functions-admin.html\n> When the given write-ahead log location is exactly at a write-ahead\n> log file boundary, both these functions return the name of the\n> preceding write-ahead log file. This is usually the desired behavior\n> for managing write-ahead log archiving behavior, since the preceding\n> file is the last one that currently needs to be archived.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Feb 2022 13:21:53 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walfile_name uses XLByteToPrevSeg" }, { "msg_contents": "At Mon, 07 Feb 2022 13:21:53 +0900 (JST), Kyotaro Horiguchi <horikyota.ntt@gmail.com> wrote in \n> At Fri, 4 Feb 2022 14:50:57 -0800, Nathan Bossart <nathandbossart@gmail.com> wrote in \n> > On Fri, Feb 04, 2022 at 09:17:54AM -0500, Robert Haas wrote:\n> > > On Fri, Feb 4, 2022 at 9:05 AM Ashutosh Bapat\n> > > <ashutosh.bapat.oss@gmail.com> wrote:\n> > >> And it gives some surprising results as well\n> > >> ---\n> > >> #select pg_walfile_name('0/0'::pg_lsn);\n> > >> pg_walfile_name\n> > >> --------------------------\n> > >> 00000001FFFFFFFF000000FF\n> > >> (1 row)\n> > >> ----\n> > > \n> > > Yeah, that seems wrong.\n> > \n> > It looks like it's been this way for a while (704ddaa).\n> > 
pg_walfile_name_offset() has the following comment:\n> > \n> > * Note that a location exactly at a segment boundary is taken to be in\n> > * the previous segment. This is usually the right thing, since the\n> > * expected usage is to determine which xlog file(s) are ready to archive.\n> > \n> > I see a couple of discussions about this as well [0] [1].\n> > \n> > [0] https://www.postgresql.org/message-id/flat/1154384790.3226.21.camel%40localhost.localdomain\n> > [1] https://www.postgresql.org/message-id/flat/15952.1154827205%40sss.pgh.pa.us\n> \n> Yes, it's the deliberate choice of design, or a kind of\n> questionable-but-unoverturnable decision. I think there are many\n> external tools conscious of this behavior.\n> \n> It is also described in the documentation.\n> \n> https://www.postgresql.org/docs/current/functions-admin.html\n> > When the given write-ahead log location is exactly at a write-ahead\n> > log file boundary, both these functions return the name of the\n> > preceding write-ahead log file. This is usually the desired behavior\n> > for managing write-ahead log archiving behavior, since the preceding\n> > file is the last one that currently needs to be archived.\n\nI forgot to mention, but I don't think we need to handle the\nwrap-around case of the function.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 07 Feb 2022 13:24:51 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pg_walfile_name uses XLByteToPrevSeg" } ]
[ { "msg_contents": "Hi,\n\nI've been doing some experiments with compilers, and I've noticed that \nwhen using clang configure still ends up setting\n\nGCC='yes'\n\nwhich seems somewhat strange. FWIW I've noticed because when building \nwith \"-Ofast\" (yeah, I'm just playing with stuff) it fails like this:\n\nchecking whether the C compiler still works... yes\nconfigure: error: do not put -ffast-math in CFLAGS\n\nwhich is in the $GCC=yes check, and clang doesn't even have such an option.\n\nNot a huge issue, but I wonder if this might confuse configure in other \nplaces, or something.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 4 Feb 2022 20:16:53 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "configure sets GCC=yes for clang" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I've been doing some experiments with compilers, and I've noticed that \n> when using clang configure still ends up setting\n> GCC='yes'\n> which seems somewhat strange.\n\nI'm pretty sure that's intentional. clang tries to be a gcc-alike,\nand mostly succeeds, other than variations in command line options.\nIt's better to set GCC=yes and deal with the small discrepancies\nthan not set it and have to deal with clang as a whole separate case.\n\n> FWIW I've noticed because when building \n> with \"-Ofast\" (yeah, I'm just playing with stuff) it fails like this:\n\n> checking whether the C compiler still works... yes\n> configure: error: do not put -ffast-math in CFLAGS\n\n> which is in the $GCC=yes check, and clang doesn't even have such an option.\n\nWell, the error is correct, even if clang doesn't spell the switch the\nsame way: you enabled fast-math optimizations and we don't want that.\n\n[ wanders away wondering how meson handles this stuff ... 
]\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Feb 2022 14:41:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: configure sets GCC=yes for clang" } ]
[ { "msg_contents": "I've pushed the first draft for $SUBJECT at\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ab22eea83169c8d0eb15050ce61cbe3d7dae4de6\n\nPlease send comments/corrections by Sunday.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Feb 2022 14:58:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Release notes for February minor releases" }, { "msg_contents": "On Fri, Feb 04, 2022 at 02:58:59PM -0500, Tom Lane wrote:\n> I've pushed the first draft for $SUBJECT at\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ab22eea83169c8d0eb15050ce61cbe3d7dae4de6\n> \n> Please send comments/corrections by Sunday.\n\n+ <para>\n+ Build extended statistics for partitioned tables (Justin Pryzby)\n+ </para>\n+ <para>\n+ A previous bug fix disabled building of extended statistics for\n+ old-style inheritance trees, but it also prevented building them for\n+ partitioned tables, which was an unnecessary restriction.\n+ If you have created statistics objects for partitioned tables, you\n+ may wish to explicitly <command>ANALYZE</command> those tables after\n+ installing this update, rather than waiting for auto-analyze to do it.\n\nSince autoanalyze still doesn't process partitioned tables, the last part\nshould be removed.\n\nProbably it should say \"..you *should* explicitly ANALYZE these tables..\".\n\n+ <para>\n+ Ignore extended statistics for inheritance trees (Justin Pryzby)\n+ </para>\n+ <para>\n+ A previous bug fix disabled building of extended statistics for\n+ old-style inheritance trees, but any existing statistics data was\n+ not removed, and that data would become more and more out-of-date\n+ over time. Adjust the planner to ignore such data. Extended\n+ statistics for the individual child tables are still built and used,\n+ however.\n+ </para>\n\nThe issue here isn't that old stats were never updated. 
For inheritance, they\n*were* updated with non-inherited stats (for SELECT FROM ONLY). But then\n\"SELECT FROM tbl*\" used the stats anyway...\n\n+ <para>\n+ Fix failure of SP-GiST indexes when indexed column's data type is\n+ binary-compatible with the declared input type of the operator class\n+ (Tom Lane)\n+ </para>\n\nmaybe: when *the*\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:02:14 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "On Fri, Feb 04, 2022 at 02:58:59PM -0500, Tom Lane wrote:\n> I've pushed the first draft for $SUBJECT at\n> \n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ab22eea83169c8d0eb15050ce61cbe3d7dae4de6\n> \n> Please send comments/corrections by Sunday.\n\n> + <para>\n> + Fix corruption of HOT chains when a RECENTLY_DEAD tuple changes\n> + state to fully DEAD during page pruning (Andres Freund)\n> + </para>\n> +\n> + <para>\n> + This happens when the last transaction that could <quote>see</quote>\n> + the tuple ends while the page is being pruned. It was then possible\n> + to remove a tuple that is pointed to by a redirect item elsewhere on\n> + the page. While that causes no immediate problem, when the item slot\n> + is re-used by some new tuple, that tuple would be thought to be part\n> + of the pre-existing HOT chain, creating a form of index corruption.\n\nWell, ouchy.\n\n> + If this seems to have affected a table, <command>REINDEX</command>\n> + should repair the damage.\n\nI don't think this is very helpful to the reader, are their indexes\ncorrupt or not? If we can't tell them a specific command to run to\ncheck, can we at least mention that running amcheck would detect that\n(if it actually does)? Otherwise, I guess the only way to be sure is to\njust reindex every index? 
Or is this at least specific to b-trees?\n\n> + <para>\n> + Enforce standard locking protocol for TOAST table updates, to prevent\n> + problems with <command>REINDEX CONCURRENTLY</command> (Michael Paquier)\n> + </para>\n> +\n> + <para>\n> + If applied to a TOAST table or TOAST table's index, <command>REINDEX\n> + CONCURRENTLY</command> tended to produce a corrupted index. This\n> + happened because sessions updating TOAST entries released\n> + their <literal>ROW EXCLUSIVE</literal> locks immediately, rather\n> + than holding them until transaction commit as all other updates do.\n> + The fix is to make TOAST updates hold the table lock according to the\n> + normal rule. Any existing corrupted indexes can be repaired by\n> + reindexing again.\n> + </para>\n> + </listitem>\n\nSame, but at least here the admin can cut it down to only those indexes\nwhich were added concurrently.\n\n\nMichael\n\n-- \nMichael Banck\nTeamleiter PostgreSQL-Team\nProjektleiter\nTel.: +49 2166 9901-171\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Geoff Richardson, Peter Lilley\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Fri, 4 Feb 2022 22:27:54 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Feb 04, 2022 at 02:58:59PM -0500, Tom Lane wrote:\n>> Please send comments/corrections by Sunday.\n\n[ assorted comments ]\n\nThanks for the corrections.\n\n> + A previous bug fix disabled building of extended statistics for\n> + old-style inheritance trees, but any existing statistics data was\n> + not removed, and that data would become more and more out-of-date\n> + over time.
Adjust the planner to ignore such data. Extended\n> + statistics for the individual child tables are still built and used,\n> + however.\n\n> The issue here isn't that old stats were never updated. For inheritance, they\n> *were* updated with non-inherited stats (for SELECT FROM ONLY). But then\n> \"SELECT FROM tbl*\" used the stats anyway...\n\nI'm confused about this bit. Are we still building bogus stats for\ninheritance parents, or has that stopped?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Feb 2022 16:29:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Michael Banck <michael.banck@credativ.de> writes:\n> On Fri, Feb 04, 2022 at 02:58:59PM -0500, Tom Lane wrote:\n>> + If this seems to have affected a table, <command>REINDEX</command>\n>> + should repair the damage.\n\n> I don't think this is very helpful to the reader, are their indexes\n> corrupt or not? If we can't tell them a specific command to run to\n> check, can we at least mention that running amcheck would detect that\n> (if it actually does)? Otherwise, I guess the only way to be sure is to\n> just reindex every index? Or is this at least specific to b-trees?\n\nYeah, I wasn't too happy with that advice either, but it seems like the\nbest we can do [1]. 
I don't think we should advise blindly reindexing\nyour whole installation, because it's a very low-probability bug.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/20220204041935.gf4uwbdxddq6rffh%40alap3.anarazel.de\n\n\n", "msg_date": "Fri, 04 Feb 2022 16:35:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "On Fri, Feb 04, 2022 at 04:29:19PM -0500, Tom Lane wrote:\n> > + A previous bug fix disabled building of extended statistics for\n> > + old-style inheritance trees, but any existing statistics data was\n> > + not removed, and that data would become more and more out-of-date\n> > + over time. Adjust the planner to ignore such data. Extended\n> > + statistics for the individual child tables are still built and used,\n> > + however.\n> \n> > The issue here isn't that old stats were never updated. For inheritance, they\n> > *were* updated with non-inherited stats (for SELECT FROM ONLY). But then\n> > \"SELECT FROM tbl*\" used the stats anyway...\n> \n> I'm confused about this bit. Are we still building bogus stats for\n> inheritance parents, or has that stopped?\n\nTo make a long story long:\n- before 859b3003de, an ERROR occurred when a stats object was created on an\n  inheritance parent.\n- To avoid the error, 859b3003de changed to no longer build \"whole tree\" stats\n  on the table hierarchy.
Non-inherited stats were still collected.\n- However, the stats were *also* applied to inherited queries (FROM tbl*).\n\n36c4bc6 stops applying stats that shouldn't be applied (and doesn't change\ntheir collection during ANALYZE).\n\n20b9fa3 then changes to collect inherited stats on partitioned tables, since\nthey have no non-inherited stats, and since extended stats on partitioned\ntables were intended to work since v10, and did work until 859b3003de stopped\ncollecting them.\n\nIn back branches, pg_statistic has inherited stats for partitioned tables, and\nnon-inherited stats otherwise.\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 4 Feb 2022 15:41:03 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Feb 04, 2022 at 04:29:19PM -0500, Tom Lane wrote:\n>> I'm confused about this bit. Are we still building bogus stats for\n>> inheritance parents, or has that stopped?\n\n> To make a long story long:\n> - before 859b3003de, an ERROR occurred when a stats object was created on an\n> inheritance parent.\n> - To avoid the error, 859b3003de changed to no longer build \"whole tree\" stats\n> on the table hierarchy. Non-inherited stats were still collected.\n> - However, the stats were *also* applied to inherited queries (FROM tbl*).\n\n> 36c4bc6 stops applying stats that shouldn't be applied (and doesn't change\n> their collection during ANALYZE).\n\nGot it.
So we collected (and still do collect) non-inherited stats\nfor inheritance parents, but prior to 36c4bc6 those were mistakenly\napplied in estimating both inheritance and non-inheritance queries.\nNow we only do the latter.\n\n(Since 269b532ae, this is all better in HEAD, but that's not\nrelevant for the back-branch release notes.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 04 Feb 2022 16:49:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "On Fri, Feb 04, 2022 at 04:35:07PM -0500, Tom Lane wrote:\n> Michael Banck <michael.banck@credativ.de> writes:\n> > On Fri, Feb 04, 2022 at 02:58:59PM -0500, Tom Lane wrote:\n> >> + If this seems to have affected a table, <command>REINDEX</command>\n> >> + should repair the damage.\n> \n> > I don't think this is very helpful to the reader, are their indexes\n> > corrupt or not? If we can't tell them a specific command to run to\n> > check, can we at least mention that running amcheck would detect that\n> > (if it actually does)? Otherwise, I guess the only way to be sure is to\n> > just reindex every index? Or is this at least specific to b-trees?\n> \n> Yeah, I wasn't too happy with that advice either, but it seems like the\n> best we can do [1]. I don't think we should advise blindly reindexing\n> your whole installation, because it's a very low-probability bug.\n\nRight ok. 
I wonder whether it makes sense to hint at the low\nprobability then; I guess if you know Postgres well you can deduce from\nthe \"when the last transaction that could see the tuple ends while the\npage is being pruned\" that it is a low-probability corner-case, but\nI fear lots of users will be unable to gauge the chances they got hit by\nthis bug and just blindly assume they are affected (and/or ask around).\n\nI just woke up, so I don't have any good wording suggestions yet.\n\n\nMichael\n\n-- \nMichael Banck\nTeamleiter PostgreSQL-Team\nProjektleiter\nTel.: +49 2166 9901-171\nEmail: michael.banck@credativ.de\n\ncredativ GmbH, HRB Mönchengladbach 12080\nUSt-ID-Nummer: DE204566209\nTrompeterallee 108, 41189 Mönchengladbach\nGeschäftsführung: Dr. Michael Meskes, Geoff Richardson, Peter Lilley\n\nUnser Umgang mit personenbezogenen Daten unterliegt\nfolgenden Bestimmungen: https://www.credativ.de/datenschutz\n\n\n", "msg_date": "Sat, 5 Feb 2022 09:28:11 +0100", "msg_from": "Michael Banck <michael.banck@credativ.de>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Hi,\n\nOn 2022-02-04 22:27:54 +0100, Michael Banck wrote:\n> On Fri, Feb 04, 2022 at 02:58:59PM -0500, Tom Lane wrote:\n> > I've pushed the first draft for $SUBJECT at\n> > \n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ab22eea83169c8d0eb15050ce61cbe3d7dae4de6\n> > \n> > Please send comments/corrections by Sunday.\n> \n> > + <para>\n> > + Fix corruption of HOT chains when a RECENTLY_DEAD tuple changes\n> > + state to fully DEAD during page pruning (Andres Freund)\n> > + </para>\n> > +\n> > + <para>\n> > + This happens when the last transaction that could <quote>see</quote>\n> > + the tuple ends while the page is being pruned. It was then possible\n> > + to remove a tuple that is pointed to by a redirect item elsewhere on\n> > + the page.
While that causes no immediate problem, when the item slot\n> > + is re-used by some new tuple, that tuple would be thought to be part\n> > + of the pre-existing HOT chain, creating a form of index corruption.\n> \n> Well, ouchy.\n\nI don't think the above description is quite accurate / makes it sound much\neasier to hit than it is.\n\nThe time window in which the stars need to align badly is not per-page window,\nbut per-vacuum. And the window is very narrow. Even if that prerequisite was\nfulfilled, one additionally needs to encounter a pretty rare combination of\ntids of very specific xid \"ages\".\n\n\n\n\n> > + If this seems to have affected a table, <command>REINDEX</command>\n> > + should repair the damage.\n> \n> I don't think this is very helpful to the reader, are their indexes\n> corrupt or not? If we can't tell them a specific command to run to\n> check, can we at least mention that running amcheck would detect that\n> (if it actually does)?\n\nIt does not reliably. Unfortunately heap amcheck does not verify HOT chains to\nany meaningful degree. Nor does btree amcheck check whether index tuples point\nto matching heap tuples :(\n\n\n> Otherwise, I guess the only way to be sure is to\n> just reindex every index? 
Or is this at least specific to b-trees?\n\nIt's an issue on the heap side, so unfortunately it is not btree specific.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 5 Feb 2022 14:17:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Hi,\n\nOn 2022-02-04 14:58:59 -0500, Tom Lane wrote:\n> I've pushed the first draft for $SUBJECT at\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ab22eea83169c8d0eb15050ce61cbe3d7dae4de6\n\n+Author: Andres Freund <andres@anarazel.de>\n+Branch: master [18b87b201] 2022-01-13 18:13:41 -0800\n+Branch: REL_14_STABLE [dad1539ae] 2022-01-14 10:56:12 -0800\n+-->\n+ <para>\n+ Fix corruption of HOT chains when a RECENTLY_DEAD tuple changes\n+ state to fully DEAD during page pruning (Andres Freund)\n+ </para>\n\nEven if that happens, it's still pretty unlikely to cause corruption - so\nmaybe s/corruption/chance of corruption/?\n\n\n+ <para>\n+ This happens when the last transaction that could <quote>see</quote>\n+ the tuple ends while the page is being pruned.\n\nThe transaction doesn't need to have ended while the page is vacuumed - the\nhorizon needs to have been \"refined/updated\" while the page is pruned so that\na tuple version that was first considered RECENTLY_DEAD is now considered\nDEAD. Which can only happen if RecentXmin changed after\nvacuum_set_xid_limits(), which only can happen if catalog snapshot\ninvalidations and other invalidations are processed in vac_open_indexes() and\nRecentXmin changed since vacuum_set_xid_limits(). Then a page involving\ntuples in a specific \"arrangement\" need to be encountered.\n\nThat's obviously to complicated for the release notes. 
Trying to make it more\nunderstandable I came up with the following, which still does not seem great:\n\n This can only happen if transactions, some having performed DDL, commit\n within a narrow window at the start of VACUUM. If VACUUM then prunes a\n page containing several tuple version that started to be removable within\n the aforementioned time window, the bug may cause corruption on that page\n (but no further pages). A tuple that is pointed to by a redirect item\n elsewhere on the page can get removed. [...]\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 5 Feb 2022 14:58:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> That's obviously to complicated for the release notes. Trying to make it more\n> understandable I came up with the following, which still does not seem great:\n> ...\n\nHow do you like this wording?\n\n <para>\n Fix corruption of HOT chains when a RECENTLY_DEAD tuple changes\n state to fully DEAD during page pruning (Andres Freund)\n </para>\n\n <para>\n It was possible for <command>VACUUM</command> to remove a\n recently-dead tuple while leaving behind a redirect item that\n pointed to it. When the tuple's item slot is later re-used by\n some new tuple, that tuple would be seen as part of the\n pre-existing HOT chain, creating a form of index corruption.\n If this has happened, reindexing the table should repair the\n damage. 
However, this is an extremely low-probability scenario,\n so we do not recommend reindexing just on the chance that it might\n have happened.\n </para>\n\nI'm also going to swap the order of this item and the TOAST locking\nitem, since that one is seeming like it's much more relevant to\nmost people.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 06 Feb 2022 13:09:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Hi,\n\nOn 2022-02-06 13:09:41 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > That's obviously to complicated for the release notes. Trying to make it more\n> > understandable I came up with the following, which still does not seem great:\n> > ...\n> \n> How do you like this wording?\n\n> I'm also going to swap the order of this item and the TOAST locking\n> item, since that one is seeming like it's much more relevant to\n> most people.\n\n+1\n\nThanks!\n\n\n", "msg_date": "Sun, 6 Feb 2022 10:44:23 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "On 2/6/22 1:44 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-02-06 13:09:41 -0500, Tom Lane wrote:\r\n>> Andres Freund <andres@anarazel.de> writes:\r\n>>> That's obviously to complicated for the release notes. 
Trying to make it more\r\n>>> understandable I came up with the following, which still does not seem great:\r\n>>> ...\r\n>>\r\n>> How do you like this wording?\r\n> \r\n>> I'm also going to swap the order of this item and the TOAST locking\r\n>> item, since that one is seeming like it's much more relevant to\r\n>> most people.\r\n> \r\n> +1\r\n\r\nI'm working on the release announcement and have been following this thread.\r\n\r\nAre there steps we can provide to help a user detect that this occurred, \r\neven though it's a low-probability?\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 6 Feb 2022 14:22:25 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "On Sun, Feb 06, 2022 at 02:22:25PM -0500, Jonathan S. Katz wrote:\n> I'm working on the release announcement and have been following this thread.\n> \n> Are there steps we can provide to help a user detect that this occurred,\n> even though it's a low-probability?\n\nIt's the same question as raised here:\nhttps://www.postgresql.org/message-id/61fd9a5b.1c69fb81.88a14.5556%40mx.google.com\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Feb 2022 14:38:02 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "Hi,\n\nOn 2022-02-06 14:22:25 -0500, Jonathan S. 
Katz wrote:\n> On 2/6/22 1:44 PM, Andres Freund wrote:\n> I'm working on the release announcement and have been following this thread.\n> \n> Are there steps we can provide to help a user detect that this occurred,\n> even though it's a low-probability?\n\nNot reliably currently:\nhttps://postgr.es/m/20220205221742.qylnkze5ykc4mabv%40alap3.anarazel.de\nhttps://www.postgresql.org/message-id/20220204041935.gf4uwbdxddq6rffh@alap3.anarazel.de\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 6 Feb 2022 12:42:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" }, { "msg_contents": "On 2/6/22 3:42 PM, Andres Freund wrote:\r\n> Hi,\r\n> \r\n> On 2022-02-06 14:22:25 -0500, Jonathan S. Katz wrote:\r\n>> On 2/6/22 1:44 PM, Andres Freund wrote:\r\n>> I'm working on the release announcement and have been following this thread.\r\n>>\r\n>> Are there steps we can provide to help a user detect that this occurred,\r\n>> even though it's a low-probability?\r\n> \r\n> Not reliably currently:\r\n> https://postgr.es/m/20220205221742.qylnkze5ykc4mabv%40alap3.anarazel.de\r\n> https://www.postgresql.org/message-id/20220204041935.gf4uwbdxddq6rffh@alap3.anarazel.de\r\n\r\nThanks.\r\n\r\nFor the release announcement then, it seems like the phrasing would be to \r\nindicate the probability of hitting this issue is low, but if you are \r\nconcerned, you should reindex.\r\n\r\nI should have the first release announcement draft later tonight, which \r\nI'll put on its usual separate thread.\r\n\r\nThanks,\r\n\r\nJonathan", "msg_date": "Sun, 6 Feb 2022 17:30:34 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Release notes for February minor releases" } ]
[ { "msg_contents": "Hi,\n\nIt looks to me as if the transforms feature for PLs was added in\ncac7658, and support was added to plperl and plpython at that time,\nwith documentation added for CREATE FUNCTION, CREATE/DROP TRANSFORM,\nand the catalogs, but nothing was added at that time to plhandler.sgml\nto clarify that a PL handler itself must add code to look up and apply\nsuch transforms, or nothing happens.\n\nWithout that information, a reader not in the know (it happened to me)\ncould get the idea from the CREATE TRANSFORM docs that the support\nis more general than it is, and an incoming argument could already\ncontain the 'internal' something-specific-to-the-language-implementation\nreturned by a specified transform. (Which does lead to follow-on questions\nlike \"who says an arbitrary language implementation's transform result\ncould be safely represented in an mmgr-managed argument list?\"—it really\ndoes make better sense upon learning the PL handler has to do the deed.\nBut I haven't spotted any place where the docs *say* that.)\n\nI'm thinking plhandler.sgml is the only place that really needs to be\nsaid; readers looking up CREATE TRANSFORM and using an existing PL that\nsupports it don't need to know how the sausage is made. (Maybe it is\nworth mentioning there, though, that it isn't possible to develop\ntransforms for an arbitrary PL unless that PL applies transforms.)\n\nI noticed the CREATE TRANSFORM docs show the argument list as\n(argument_type [, ...]) even though check_transform_function will reject\nany argument list of length other than 1 or with type other than internal.\nIs that an overly-generic way to show the syntax, or is that a style\nwith precedent elsewhere in the docs? 
(I checked a couple obvious places\nlike CREATE OPERATOR, CREATE FUNCTION, CREATE TYPE but did not see similar\nexamples).\n\nAs long as a PL handler has the sole responsibility for invoking\nits transforms, I wonder if it would make sense to allow a PL to\ndefine what its transforms should look like, maybe replacing\ncheck_transform_function with a transform_validator for the PL.\nI'm not proposing such a patch here, but I am willing to prepare\none for plhandler.sgml and maybe pltemplate.c documenting the current\nsituation, if nobody tells me I'm wrong about something here.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 4 Feb 2022 18:55:42 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Documentation about PL transforms" }, { "msg_contents": "On 05.02.22 00:55, Chapman Flack wrote:\n> I'm thinking plhandler.sgml is the only place that really needs to be\n> said; readers looking up CREATE TRANSFORM and using an existing PL that\n> supports it don't need to know how the sausage is made. 
(Maybe it is\n> worth mentioning there, though, that it isn't possible to develop\n> transforms for an arbitrary PL unless that PL applies transforms.)\n\nmakes sense\n\n> I noticed the CREATE TRANSFORM docs show the argument list as\n> (argument_type [, ...]) even though check_transform_function will reject\n> any argument list of length other than 1 or with type other than internal.\n> Is that an overly-generic way to show the syntax, or is that a style\n> with precedent elsewhere in the docs?\n\nThat could be corrected.\n\n> As long as a PL handler has the sole responsibility for invoking\n> its transforms, I wonder if it would make sense to allow a PL to\n> define what its transforms should look like, maybe replacing\n> check_transform_function with a transform_validator for the PL.\n> I'm not proposing such a patch here, but I am willing to prepare\n> one for plhandler.sgml and maybe pltemplate.c documenting the current\n> situation, if nobody tells me I'm wrong about something here.\n\nMaybe. 
This kind of thing would mostly be driven by what a PL wants in \nactual practice, and then how that could be generalized to many/all PLs.\n\n\n", "msg_date": "Mon, 7 Feb 2022 16:59:48 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "On 02/07/22 10:59, Peter Eisentraut wrote:\n> On 05.02.22 00:55, Chapman Flack wrote:\n>> I'm thinking plhandler.sgml is the only place that really needs to be\n>> ...\n>> worth mentioning there, though, that it isn't possible to develop\n>> transforms for an arbitrary PL unless that PL applies transforms.)\n> \n> makes sense\n> \n>> (argument_type [, ...]) even though check_transform_function will reject\n>> any argument list of length other than 1 or with type other than internal.\n> \n> That could be corrected.\n\nI'll work on some doc patches.\n\n>> As long as a PL handler has the sole responsibility for invoking\n>> its transforms, I wonder if it would make sense to allow a PL to\n>> define what its transforms should look like, maybe replacing\n>> check_transform_function with a transform_validator for the PL.\n> \n> Maybe.
This kind of thing would mostly be driven what a PL wants in actual\n> practice, and then how that could be generalized to many/all PLs.\n\nIt has since occurred to me that another benefit of having a\ntransform_validator per PL would be immediate error reporting\nif someone, for whatever reason, tries out CREATE TRANSFORM\nfor a PL that doesn't grok transforms.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 7 Feb 2022 15:14:43 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "On 02/07/22 15:14, Chapman Flack wrote:\n> It has since occurred to me that another benefit of having a\n> transform_validator per PL would be immediate error reporting\n> if someone, for whatever reason, tries out CREATE TRANSFORM\n> for a PL that doesn't grok transforms.\n\nThe same could be achieved, I guess, by an event trigger, though that\nwould take positive effort to set up, where the benefit of a per-PL\ntransform_validator would be that if a given PL does not bother to\nprovide one, CREATE TRANSFORM for it would automatically fail. (And\nsuch a validator would not have to spend most of its life ignoring\nother DDL.)\n\nThat does reveal another documentation gap: table 40.1 does not show\nCREATE or DROP TRANSFORM being supported for event triggers. I've\nconfirmed they work, though. I'll tack that onto the doc patch.\n\nI notice our transforms lack the named groups of 9075-2. 
With those\n(plus our LANGUAGE clause), I could write, for a made-up example:\n\nCREATE TRANSFORM FOR hstore LANGUAGE plpython3u\n asdict (\n FROM SQL WITH FUNCTION hstore_to_plpython3dict,\n TO SQL WITH FUNCTION plpython3dict_to_hstore)\n asnamedtuple (\n FROM SQL WITH FUNCTION hstore_to_plpython3namedtup,\n TO SQL WITH FUNCTION plpython3namedtup_to_hstore);\n\nCREATE FUNCTION f1(val hstore) RETURNS int\nLANGUAGE plpython3u\nTRANSFORM GROUP asdict FOR TYPE hstore\n...\n\nCREATE FUNCTION f2(val hstore) RETURNS int\nLANGUAGE plpython3u\nTRANSFORM GROUP asnamedtuple FOR TYPE hstore\n...\n\nIt seems to me that could be useful, in cases where a PL offers\nmore than one good choice for representing a PostgreSQL type and\nthe preferred one could depend on the function.\n\nWas that considered and rejected for our transforms, or were ours\njust based on an earlier 9075-2 without the named groups?\n\n\nAlso, I am doubtful of our Compatibility note, \"There is a CREATE\nTRANSFORM command in the SQL standard, but it is for adapting data\ntypes to client languages.\"\n\nIn my reading of 9075-2, I do see the transforms used for client\nlanguages (all the <embedded SQL Foo program>s), but I also see\nthe to-sql and from-sql functions being applied in <routine invocation>\nwhenever \"R is an external routine\" and the type \"is a user-defined type\".\nThe latter doesn't seem much different from our usage. The differences\nI see are (1) our LANGUAGE clause, (2) we don't have a special\n\"user-defined type\" category limiting what types can have transforms\n(and (3), we don't have the named groups). And we are applying them\n/only/ for server-side routines, rather than for server and client code.\n\nThe ISO transforms work by mapping the (\"user-defined\") type to some\nexisting SQL type for which the PL has a standard mapping already. 
Ours\nwork by mapping the type directly to some suitable type in the PL.\n\nAm I reading this accurately?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Tue, 8 Feb 2022 14:03:21 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "On 02/07/22 15:14, Chapman Flack wrote:\n> I'll work on some doc patches.\n\nIt seems a bit of an impedance mismatch that there is a get_func_trftypes\nproducing a C Oid[] (with its length returned separately), while\nget_transform_fromsql and get_transform_tosql both expect a List.\n\nThere is only one in-core caller of get_func_trftypes (it is\nprint_function_trftypes in ruleutils.c), while both in-core PLs that\ncurrently support transforms are reduced to rolling their own List by\ngrabbing the protrftypes Datum and hitting it with oid_array_to_list.\n\nWould it be reasonable to deprecate get_func_trftypes and instead\nprovide a get_call_trftypes (following the naming of get_call_result_type\nwhere _call_ encompasses functions and procedures) that returns a List,\nand use that consistently?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 21 Feb 2022 16:09:52 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "On 02/21/22 16:09, Chapman Flack wrote:\n> On 02/07/22 15:14, Chapman Flack wrote:\n>> I'll work on some doc patches.\n> \n> It seems a bit of an impedance mismatch that there is a get_func_trftypes\n> producing a C Oid[] (with its length returned separately), while\n> get_transform_fromsql and get_transform_tosql both expect a List.\n> ...\n> Would it be reasonable to deprecate get_func_trftypes and instead\n> provide a get_call_trftypes\n\nIt would have been painful to write documentation of get_func_trftypes\nsaying its result isn't what get_transform_{from.to}sql expect, so\npatch 1 does add a get_call_trftypes that returns a 
List *.\n\nPatch 2 then updates the docs as discussed in this thread. It turned out\nplhandler.sgml was kind of a long monolith of text even before adding\ntransform information, so I broke it into sections first. This patch adds\nthe section markup without reindenting, so the changes aren't obscured.\n\nThe chapter had also fallen behind the introduction of procedures, so\nI have changed many instances of 'function' to the umbrella term\n'routine'.\n\nPatch 3 simply reindents for the new section markup and rewraps.\nWhitespace only.\n\nPatch 4 updates src/test/modules/plsample to demonstrate handling of\ntransforms (and to add some more comments generally).\n\nI'll add this to the CF.\n\nRegards,\n-Chap", "msg_date": "Tue, 22 Feb 2022 12:59:06 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": true, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "On 2022-02-22 12:59, Chapman Flack wrote:\n> It would have been painful to write documentation of get_func_trftypes\n> saying its result isn't what get_transform_{from.to}sql expect, so\n> patch 1 does add a get_call_trftypes that returns a List *.\n> \n> Patch 2 then updates the docs as discussed in this thread. It turned \n> out\n> plhandler.sgml was kind of a long monolith of text even before adding\n> transform information, so I broke it into sections first. 
This patch \n> adds\n> the section markup without reindenting, so the changes aren't obscured.\n> \n> The chapter had also fallen behind the introduction of procedures, so\n> I have changed many instances of 'function' to the umbrella term\n> 'routine'.\n> \n> Patch 3 simply reindents for the new section markup and rewraps.\n> Whitespace only.\n> \n> Patch 4 updates src/test/modules/plsample to demonstrate handling of\n> transforms (and to add some more comments generally).\n\nHere is a rebase.", "msg_date": "Thu, 07 Apr 2022 21:22:11 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": false, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "Hi\n\nI am doing a review of this patch and I have two comments\n\n1. I am not sure if get_call_trftypes is a good name - the prefix get_call\nis used when some runtime data is processed. This function just returns\nreformatted data from the system catalogue. Maybe get_func_trftypes_list,\nor just replace function get_func_trftypes (now, the list is an array, so\nthere should not be performance issues). For consistency, the new function\nshould be used in plperl and plpython too. Probably this function is not\nwidely used, so I don't see why we need to have two almost equal functions\nin code.\n\n2.\n\n+ It also contains the OID of the intended procedural language and\nwhether\n+ that procedural language is declared as <literal>TRUSTED</literal>,\nuseful\n+ if a single inline handler is supporting more than one procedural\nlanguage.\n\nI am not sure if this sentence is in the correct place. Maybe can be\nmentioned separately,\nso generally handlers can be used by more than one procedural language. But\nmaybe\nI don't understand this sentence.\n\nRegards\n\nPavel\n\n
", "msg_date": "Wed, 13 Jul 2022 06:26:47 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "On 2022-07-13 00:26, Pavel Stehule wrote:\n\n> 1. I am not sure if get_call_trftypes is a good name - the prefix \n> get_call\n> is used when some runtime data is processed.\n\nI guess I hadn't caught on that the prefix carried that meaning.\nTo me, it appeared to be a prefix used to avoid being specific to\n'function' or 'procedure'.\n\n> This function just returns\n> reformatted data from the system catalogue. Maybe \n> get_func_trftypes_list,\n> or just replace function get_func_trftypes (now, the list is an array, \n> so\n> there should not be performance issues). For consistency, the new \n> function\n> should be used in plperl and plpython too. 
Probably this function is \n> not\n\nIf it is acceptable to replace get_func_trftypes like that, I can \nproduce\nsuch a patch.\n\n> 2.\n> \n> + It also contains the OID of the intended procedural language and\n> whether\n> + that procedural language is declared as \n> <literal>TRUSTED</literal>,\n> useful\n> + if a single inline handler is supporting more than one procedural\n> language.\n> \n> I am not sure if this sentence is in the correct place. Maybe can be\n> mentioned separately,\n> so generally handlers can be used by more than one procedural language. \n> But\n> maybe\n> I don't understand this sentence.\n\nMy point was that, if the structure did /not/ include the OID of\nthe PL and its TRUSTED property, then it would not be possible\nfor a single inline handler to support more than one PL. So that\nis why it is a good thing that those are included in the structure,\nand why it would be a bad thing if they were not.\n\nWould it be clearer to say:\n\nIt also contains the OID of the intended procedural language and whether\nthat procedural language is declared as <literal>TRUSTED</literal>.\nWhile these values are redundant if the inline handler is serving only\na single procedural language, they are necessary to allow one inline\nhandler to serve more than one PL.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Wed, 27 Jul 2022 22:05:58 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": false, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "čt 28. 7. 2022 v 4:06 odesílatel <chap@anastigmatix.net> napsal:\n\n> On 2022-07-13 00:26, Pavel Stehule wrote:\n>\n> > 1. 
I am not sure if get_call_trftypes is a good name - the prefix\n> > get_call\n> > is used when some runtime data is processed.\n>\n> I guess I hadn't caught on that the prefix carried that meaning.\n> To me, it appeared to be a prefix used to avoid being specific to\n> 'function' or 'procedure'.\n>\n\nI am sure this function is in Postgres significantly longer than Postgres\nhas support for 'procedures'.\n\n\n>\n> > This function just returns\n> > reformatted data from the system catalogue. Maybe\n> > get_func_trftypes_list,\n> > or just replace function get_func_trftypes (now, the list is an array,\n> > so\n> > there should not be performance issues). For consistency, the new\n> > function\n> > should be used in plperl and plpython too. Probably this function is\n> > not\n>\n> If it is acceptable to replace get_func_trftypes like that, I can\n> produce\n> such a patch.\n>\n> > 2.\n> >\n> > + It also contains the OID of the intended procedural language and\n> > whether\n> > + that procedural language is declared as\n> > <literal>TRUSTED</literal>,\n> > useful\n> > + if a single inline handler is supporting more than one procedural\n> > language.\n> >\n> > I am not sure if this sentence is in the correct place. Maybe can be\n> > mentioned separately,\n> > so generally handlers can be used by more than one procedural language.\n> > But\n> > maybe\n> > I don't understand this sentence.\n>\n> My point was that, if the structure did /not/ include the OID of\n> the PL and its TRUSTED property, then it would not be possible\n> for a single inline handler to support more than one PL. 
So that\n> is why it is a good thing that those are included in the structure,\n> and why it would be a bad thing if they were not.\n>\n> Would it be clearer to say:\n>\n> It also contains the OID of the intended procedural language and whether\n> that procedural language is declared as <literal>TRUSTED</literal>.\n> While these values are redundant if the inline handler is serving only\n> a single procedural language, they are necessary to allow one inline\n> handler to serve more than one PL.\n>\n\nok\n\nRegards\n\nPavel\n\n>\n> Regards,\n> -Chap\n>\n\n
", "msg_date": "Thu, 28 Jul 2022 07:51:12 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "On 2022-07-28 01:51, Pavel Stehule wrote:\n>> Would it be clearer to say:\n>> \n>> It also contains the OID of the intended procedural language and \n>> whether\n>> that procedural language is declared as <literal>TRUSTED</literal>.\n>> While these values are redundant if the inline handler is serving only\n>> a single procedural language, they are necessary to allow one inline\n>> handler to serve more than one PL.\n> \n> ok\n\nI will probably be unable to produce a new patch for this CF.\nFamily obligation.\n\nRegards,\n-Chap\n\n\n", "msg_date": "Fri, 29 Jul 2022 12:35:20 -0400", "msg_from": "chap@anastigmatix.net", "msg_from_op": false, "msg_subject": "Re: Documentation about PL transforms" }, { "msg_contents": "pá 29. 7. 
2022 v 18:35 odesílatel <chap@anastigmatix.net> napsal:\n\n> On 2022-07-28 01:51, Pavel Stehule wrote:\n> >> Would it be clearer to say:\n> >>\n> >> It also contains the OID of the intended procedural language and\n> >> whether\n> >> that procedural language is declared as <literal>TRUSTED</literal>.\n> >> While these values are redundant if the inline handler is serving only\n> >> a single procedural language, they are necessary to allow one inline\n> >> handler to serve more than one PL.\n> >\n> > ok\n>\n> I will probably be unable to produce a new patch for this CF.\n> Family obligation.\n>\n\nok\n\nRegards\n\nPavel\n\n\n\n>\n> Regards,\n> -Chap\n>\n\n
", "msg_date": "Fri, 29 Jul 2022 20:47:59 +0200", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Documentation about PL transforms" } ]
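For readers skimming the thread above, the transform machinery being documented is driven from SQL-level objects like the following. This is only an illustrative sketch: the conversion-function names follow the pattern of the in-core `hstore_plpython` example in the `CREATE TRANSFORM` documentation, and the sample function body assumes the transform delivers the `hstore` value as a Python dict.

```sql
-- A transform pairs a type with a PL and two conversion functions:
CREATE TRANSFORM FOR hstore LANGUAGE plpython3u (
    FROM SQL WITH FUNCTION hstore_to_plpython(internal),
    TO SQL WITH FUNCTION plpython_to_hstore(internal)
);

-- Each routine opts in per type; the chosen transforms are recorded in
-- pg_proc.protrftypes, which is what get_func_trftypes (and the
-- get_call_trftypes proposed in this thread) reads back for the handler:
CREATE FUNCTION hstore_keys(h hstore) RETURNS text[]
TRANSFORM FOR TYPE hstore
AS $$ return list(h.keys()) $$ LANGUAGE plpython3u;
```

With the transform in place, the PL handler sees a native dict instead of the type's text representation, which is the behavior the documentation patches above describe.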
[ { "msg_contents": "Significant changes:\n\n * don't run TestUpgrade if TAP tests are present in src/bin/pg_upgrade\n * Add proposed new location of pg_upgrade logs in TestUpgrade module\n * Use an HMAC over the whole content as the signature.\n * Quote PROVE_FLAGS in case there are multiple settings\n * tighten failure detection for cross version upgrade\n * be more verbose about git mirror failures\n * Support symlinks on Windows where possible in SCM module\n * Document rm_worktrees setting and make \"on\" the default.\n * Add new branches_to_build keywords STABLE and OLD\n\nBecause of the changes to how pg_upgrade logging works, owners are\nstrongly urged to upgrade to the new release as soon as possible if they\nare running the TestUpgrade module.\n\nThe meaning of the branches_to_build keywords is as follows: STABLE\nmeans all the live branches other than master/HEAD, while OLD means\nthose branches older than STABLE that we are now supporting limited\nbuilds for (currently REL9_2_STABLE through REL9_6_STABLE).\n\nDownloads are available at\n<https://github.com/PGBuildFarm/client-code/releases> and\n<https://buildfarm.postgresql.org/downloads>\n\n\ncheers\n\n\nandrew\n\n\n\n", "msg_date": "Sun, 6 Feb 2022 09:01:24 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": true, "msg_subject": "Announcing Release 14 of the PostgreSQL Buildfarm client" } ]
[ { "msg_contents": "Hi,\r\n\r\nAttached is a draft for the release announcement for the 2022-02-10 \r\ncumulative update release.\r\n\r\nPlease review for technical accuracy or if you believe any items should \r\nbe added/removed. Please provide feedback no later than 2020-02-10 0:00 \r\nAoE[1].\r\n\r\nThanks,\r\n\r\nJonathan\r\n\r\n[1] https://en.wikipedia.org/wiki/Anywhere_on_Earth", "msg_date": "Sun, 6 Feb 2022 20:01:02 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "2022-02-10 release announcement draft" }, { "msg_contents": "On Sun, Feb 06, 2022 at 08:01:02PM -0500, Jonathan S. Katz wrote:\n> Hi,\n> \n> Attached is a draft for the release announcement for the 2022-02-10\n> cumulative update release.\n> \n> Please review for technical accuracy or if you believe any items should be\n> added/removed. Please provide feedback no later than 2020-02-10 0:00 AoE[1].\n\nI guess you mean 2022 ;)\n\n> * The [`psql`](https://www.postgresql.org/docs/current/app-psql.html)\n> `\\password` command now defaults to setting the password for to the role defined\n> by `CURRENT_USER`. Additionally, the role name is now included in the password\n> prompt.\n\ns/for to/for/\n\n> * Build extended statistics for partitioned tables. 
If you previously added\n> extended statistics to a partitioned table, you can fix this by running\n> [`ANALYZE`](https://www.postgresql.org/docs/current/sql-analyze.html).\n> [`autovacuum`](https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM)\n> currently does not process these statistics, so you must periodically run\n> `ANALYZE` on any partitioned tables that are using extended statistics.\n\nInstead of \"you can fix this\" , I suggest to say \"you should run ANALYZE on\nthose tables to collect updated statistics.\"\n\nIt should say \"not process these TABLES\" (not stats), and should not say \"that\nare using extended statistics\" since partitioned tables should be manually\nanalyzed whether or not they have extended stats.\n\nIt's good to say \"..you should periodically run ANALYZE on partitioned tables\nto collect updated statistics.\" (even though this patch doesn't change that).\n\n> * Fix crash with [multiranges](https://www.postgresql.org/docs/current/rangetypes.html)\n> when extracting a variable-length data types.\n\nremove \"a\" or s/types/types/\n\n> * Disallow altering data type of a partitioned table's columns when the\n> partitioned table's row type is used as a composite type elsewhere.\n\nthe data types ?\n\n-- \nJustin\n\n\n", "msg_date": "Sun, 6 Feb 2022 21:20:04 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: 2022-02-10 release announcement draft" }, { "msg_contents": "On 2/6/22 10:20 PM, Justin Pryzby wrote:\r\n> On Sun, Feb 06, 2022 at 08:01:02PM -0500, Jonathan S. Katz wrote:\r\n>> Hi,\r\n>>\r\n>> Attached is a draft for the release announcement for the 2022-02-10\r\n>> cumulative update release.\r\n>>\r\n>> Please review for technical accuracy or if you believe any items should be\r\n>> added/removed. 
Please provide feedback no later than 2020-02-10 0:00 AoE[1].\r\n> \r\n> I guess you mean 2022 ;)\r\n\r\nMaybe ;)\r\n\r\n>> * The [`psql`](https://www.postgresql.org/docs/current/app-psql.html)\r\n>> `\\password` command now defaults to setting the password for to the role defined\r\n>> by `CURRENT_USER`. Additionally, the role name is now included in the password\r\n>> prompt.\r\n> \r\n> s/for to/for/\r\n\r\nDone.\r\n\r\n>> * Build extended statistics for partitioned tables. If you previously added\r\n>> extended statistics to a partitioned table, you can fix this by running\r\n>> [`ANALYZE`](https://www.postgresql.org/docs/current/sql-analyze.html).\r\n>> [`autovacuum`](https://www.postgresql.org/docs/current/routine-vacuuming.html#AUTOVACUUM)\r\n>> currently does not process these statistics, so you must periodically run\r\n>> `ANALYZE` on any partitioned tables that are using extended statistics.\r\n> \r\n> Instead of \"you can fix this\" , I suggest to say \"you should run ANALYZE on\r\n> those tables to collect updated statistics.\"\r\n\r\nI opted for most of this suggestion.\r\n\r\n> It should say \"not process these TABLES\" (not stats), and should not say \"that\r\n> are using extended statistics\" since partitioned tables should be manually\r\n> analyzed whether or not they have extended stats.\r\n\r\n> \r\n> It's good to say \"..you should periodically run ANALYZE on partitioned tables\r\n> to collect updated statistics.\" (even though this patch doesn't change that).\r\n\r\nI reread the release note and agree, I misinterpreted this. 
I updated the \r\nlanguage.\r\n\r\n> \r\n>> * Fix crash with [multiranges](https://www.postgresql.org/docs/current/rangetypes.html)\r\n>> when extracting a variable-length data types.\r\n> \r\n> remove \"a\" or s/types/types/\r\n\r\nFixed.\r\n\r\n>> * Disallow altering data type of a partitioned table's columns when the\r\n>> partitioned table's row type is used as a composite type elsewhere.\r\n> \r\n> the data types ?\r\n\r\nThis is correct, per 7ead9925f\r\n\r\nI've attached the updated copy. Thanks!\r\n\r\nJonathan", "msg_date": "Mon, 7 Feb 2022 20:43:43 -0500", "msg_from": "\"Jonathan S. Katz\" <jkatz@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: 2022-02-10 release announcement draft" } ]
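To make the ANALYZE discussion in this thread concrete, here is a hedged sketch (table and statistics-object names are invented for illustration) of why partitioned tables need manual analysis:

```sql
-- A partitioned table with an extended-statistics object on the parent:
CREATE TABLE measurement (ts timestamptz, sensor int, val numeric)
    PARTITION BY RANGE (ts);
CREATE STATISTICS measurement_dep (dependencies)
    ON sensor, val FROM measurement;

-- Autovacuum analyzes the leaf partitions but not the partitioned
-- parent itself, so parent-level statistics (extended or otherwise)
-- are only refreshed by running this manually or on a schedule:
ANALYZE measurement;
```

This matches the reviewed wording above: the periodic `ANALYZE` is needed on partitioned tables whether or not they carry extended statistics.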
[ { "msg_contents": "*Postgres version:* 11.4\n\n*Problem:*\n Query choosing Bad Index Path. Details are provided below:\n\n\n*Table :*\n\n\n\n\n\n\n*Doubt*\n 1. Why is this Query choosing *Index Scan Backward using table1_pkey\nIndex *though it's cost is high. It can rather choose\n BITMAP OR\n (Index on RECORDID) i.e; table1_idx6\n (Index on RELATEDID) i.e; table1_idx7\n\n Below is the selectivity details from *pg_stats* table\n - Recordid has 51969 distinct values. And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n - Relatedid has 82128 distinct values. And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n\nSince, selectivity is less, this should logically choose this Index, which\nwould have improve my query performance here.\nI cross-checked the same by removing PrimaryKey to this table and query now\nchooses these indexes and response is in 100ms. Please refer the plan below\n(after removing primary key):", "msg_date": "Mon, 7 Feb 2022 10:45:12 +0530", "msg_from": "Valli Annamalai <aishwaryaanns@gmail.com>", "msg_from_op": true, "msg_subject": "Query choosing Bad Index Path" }, { "msg_contents": "*Postgres version:* 11.4\n\n*Problem:*\n Query choosing Bad Index Path. Details are provided below:\n\n\n*Table :*\n\n\n\n\n\n\n*Doubt*\n 1. Why is this Query choosing *Index Scan Backward using table1_pkey\nIndex *though it's cost is high. It can rather choose\n BITMAP OR\n (Index on RECORDID) i.e; table1_idx6\n (Index on RELATEDID) i.e; table1_idx7\n\n Below is the selectivity details from *pg_stats* table\n - Recordid has 51969 distinct values. And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n - Relatedid has 82128 distinct values. 
And selectivity\n(most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n\nSince, selectivity is less, this should logically choose this Index, which\nwould have improve my query performance here.\nI cross-checked the same by removing PrimaryKey to this table and query now\nchooses these indexes and response is in 100ms. Please refer the plan below\n(after removing primary key):", "msg_date": "Mon, 7 Feb 2022 11:36:17 +0530", "msg_from": "Valli Annamalai <aishwaryaanns@gmail.com>", "msg_from_op": true, "msg_subject": "Query choosing Bad Index Path" }, { "msg_contents": "Hi\n\npo 7. 2. 2022 v 6:15 odesílatel Valli Annamalai <aishwaryaanns@gmail.com>\nnapsal:\n\n>\n> *Postgres version:* 11.4\n>\n> *Problem:*\n> Query choosing Bad Index Path. Details are provided below:\n>\n>\n> *Table :*\n>\n>\n>\n>\n>\n>\n>\nplease, don't use screenshots\n\n\n\n\n> *Doubt*\n> 1. Why is this Query choosing *Index Scan Backward using table1_pkey\n> Index *though it's cost is high. It can rather choose\n> BITMAP OR\n> (Index on RECORDID) i.e; table1_idx6\n> (Index on RELATEDID) i.e; table1_idx7\n>\n> Below is the selectivity details from *pg_stats* table\n> - Recordid has 51969 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n> - Relatedid has 82128 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n>\n> Since, selectivity is less, this should logically choose this Index, which\n> would have improve my query performance here.\n> I cross-checked the same by removing PrimaryKey to this table and query\n> now chooses these indexes and response is in 100ms. Please refer the plan\n> below (after removing primary key):\n>\n>\n>\n>\n>\n You can see very bad estimation 32499 x 0 rows\n\nNext source of problems can be LIMIT clause. Postgres expects so data are\nuniformly stored, and then LIMIT 10 quickly finds wanted rows. 
But it is\nnot true in your case.\n\nYou can try to use a multicolumn index, or you can transform your query\nfrom OR based to UNION ALL based\n\nSELECT * FROM tab WHERE p1 OR p2 => SELECT * FROM tab WHERE p1 UNION ALL\nSELECT * FROM tab WHERE p2\n\nRegards\n\nPavel
", "msg_date": "Mon, 7 Feb 2022 07:18:14 +0100", "msg_from": "Pavel Stehule <pavel.stehule@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query choosing Bad Index Path" }, { "msg_contents": "Hi,\nOn Mon, Feb 07, 2022 at 11:36:17AM +0530, Valli Annamalai wrote:\n> *Postgres version:* 11.4\n> \n> *Problem:*\n> Query choosing Bad Index Path. Details are provided below:\n> \n> \n> *Table :*\n> \n> \n> \n> \n> \n> \n> *Doubt*\n> 1. Why is this Query choosing *Index Scan Backward using table1_pkey\n> Index *though it's cost is high. It can rather choose\n> BITMAP OR\n> (Index on RECORDID) i.e; table1_idx6\n> (Index on RELATEDID) i.e; table1_idx7\n> \n> Below is the selectivity details from *pg_stats* table\n> - Recordid has 51969 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.00376667\n> - Relatedid has 82128 distinct values. And selectivity\n> (most_common_freqs) for *recordid = 15842006928391817* is 0.0050666\n> \n> Since, selectivity is less, this should logically choose this Index, which\n> would have improve my query performance here.\n> I cross-checked the same by removing PrimaryKey to this table and query now\n> chooses these indexes and response is in 100ms. Please refer the plan below\n> (after removing primary key):\n\nYou already sent the same email less than an hour ago on pgsql-performance\n([1]), which is the correct mailing list, please don't post on this mailing\nlist also.\n\nNote also that sending information as images can be problematic as some people\nhere (including me) won't be able to see them. I tried on a web-based MUA and\nthat's not really better though, as the images are hardly readable and\ndefinitely not grep-able. 
You will likely have more answers sending\ninformation in plain text.\n\n[1] https://www.postgresql.org/message-id/CADkhgiJ+gT_FDKZWgP8oZsy6iRbYMYkmRjsPhqhcT1A2KBgcHA@mail.gmail.com\n\n\n", "msg_date": "Mon, 7 Feb 2022 14:18:40 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Query choosing Bad Index Path" } ]
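Spelled out against the column names from this thread, the OR-to-UNION ALL rewrite suggested above would look roughly like this (the table name is the questioner's `table1`; the anti-duplicate predicate in the second branch is an assumption that `recordid` is non-null):

```sql
-- original shape: one filter ORing two columns, which the planner must
-- serve with a bitmap OR (or, as here, a backward pkey scan):
SELECT * FROM table1
WHERE recordid = 15842006928391817
   OR relatedid = 15842006928391817;

-- rewritten shape: each branch can use its own single-column index
-- (table1_idx6 on recordid, table1_idx7 on relatedid); the second
-- branch excludes rows already returned by the first:
SELECT * FROM table1
WHERE recordid = 15842006928391817
UNION ALL
SELECT * FROM table1
WHERE relatedid = 15842006928391817
  AND recordid <> 15842006928391817;
```

A trailing ORDER BY/LIMIT, if the real query has one, applies to the whole UNION, so each branch can still be satisfied by a small index scan.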
[ { "msg_contents": "Hi,\n\nWhile working on pg_replslotdata tool [1], it was observed that some\nof the replication slot structures/enums/macros such as\nReplicationSlotPersistentData, ReplicationSlotPersistency,\nReplicationSlotOnDisk, ReplicationSlotOnDiskXXXX etc. are currently in\nreplication/slot.h and replication/slot.c. This makes the replication\nslot on disk data structures unusable by the external tools/modules.\nHow about moving these structures to a new header file called\nslot_common.h as attached in the patch here?\n\nThoughts?\n\nPS: I'm planning to have the tool separately, as it was rejected to be in core.\n\n[1] https://www.postgresql.org/message-id/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com\n\nRegards,\nBharath Rupireddy.", "msg_date": "Mon, 7 Feb 2022 10:52:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Move replication slot structures/enums/macros to a new header file\n for better usability by external tools/modules" }, { "msg_contents": "On Mon, Feb 7, 2022 at 4:22 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> While working on pg_replslotdata tool [1], it was observed that some\n> of the replication slot structures/enums/macros such as\n> ReplicationSlotPersistentData, ReplicationSlotPersistency,\n> ReplicationSlotOnDisk, ReplicationSlotOnDiskXXXX etc. are currently in\n> replication/slot.h and replication/slot.c. 
This makes the replication\n> slot on disk data structures unusable by the external tools/modules.\n> How about moving these structures to a new header file called\n> slot_common.h as attached in the patch here?\n>\n> Thoughts?\n>\n> PS: I'm planning to have the tool separately, as it was rejected to be in core.\n>\n> [1] https://www.postgresql.org/message-id/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com\n>\n> Regards,\n> Bharath Rupireddy.\n\nRecently I was also looking to add some new enums but I found it\nwas difficult to find any good place to put them where they could be\nshared by the replication code and the pg_recvlogical tool.\n\nSo +1 to your suggestion to have a common header, but I wonder can it\nhave a more generic name (e.g. repl_common.h? ...) since the\nstuff I wanted to put there was not really \"slot\" related.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia.\n\n\n", "msg_date": "Wed, 9 Feb 2022 07:46:00 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Move replication slot structures/enums/macros to a new header\n file for better usability by external tools/modules" }, { "msg_contents": "On Wed, Feb 9, 2022 at 2:16 AM Peter Smith <smithpb2250@gmail.com> wrote:\n>\n> On Mon, Feb 7, 2022 at 4:22 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > While working on pg_replslotdata tool [1], it was observed that some\n> > of the replication slot structures/enums/macros such as\n> > ReplicationSlotPersistentData, ReplicationSlotPersistency,\n> > ReplicationSlotOnDisk, ReplicationSlotOnDiskXXXX etc. are currently in\n> > replication/slot.h and replication/slot.c. 
This makes the replication\n> > slot on disk data structures unusable by the external tools/modules.\n> > How about moving these structures to a new header file called\n> > slot_common.h as attached in the patch here?\n> >\n> > Thoughts?\n> >\n> > PS: I'm planning to have the tool separately, as it was rejected to be in core.\n> >\n> > [1] https://www.postgresql.org/message-id/CALj2ACW0rV5gWK8A3m6_X62qH%2BVfaq5hznC%3Di0R5Wojt5%2Byhyw%40mail.gmail.com\n>\n> Recently I was also looking to add some new enums but I found it\n> was difficult to find any good place to put them where they could be\n> shared by the replication code and the pg_recvlogical tool.\n>\n> So +1 to your suggestion to have a common header, but I wonder can it\n> have a more generic name (e.g. repl_common.h? ...) since the\n> stuff I wanted to put there was not really \"slot\" related.\n\nThanks. repl_common.h sounds cool and generic IMO too, so I changed\nit. Another important note here is to let this file have only\nreplication data structures and functions that are meant/supposed to\nbe usable across entire postgres modules - core, tools, contrib\nmodules, the internal data structures can be added elsewhere.\n\nAttaching v2 patch.\n\nI will add a CF entry a day or two later.\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 10 Feb 2022 09:51:29 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Move replication slot structures/enums/macros to a new header\n file for better usability by external tools/modules" }, { "msg_contents": "Hi,\n\nOn 2022-02-07 10:52:08 +0530, Bharath Rupireddy wrote:\n> While working on pg_replslotdata tool [1], it was observed that some\n> of the replication slot structures/enums/macros such as\n> ReplicationSlotPersistentData, ReplicationSlotPersistency,\n> ReplicationSlotOnDisk, ReplicationSlotOnDiskXXXX etc. are currently in\n> replication/slot.h and replication/slot.c. 
This makes the replication\n> slot on disk data structures unusable by the external tools/modules.\n\nFWIW, I still don't see a point in pg_replslotdata. And I don't think these\ndatastructures should ever be accessed outside the server environment.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Feb 2022 21:17:33 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Move replication slot structures/enums/macros to a new header\n file for better usability by external tools/modules" } ]
[ { "msg_contents": "Hi!\n\nA couple of years ago I was working on PostgreSQL\nfeatures and extensions. Some of them was in ECPG,\nmainly in the area of improving Informix compatibility.\n\nOne of the patchsets I invented was the cursor readahead\nsupport to improve performance in ECPG which used FETCH N.\nThis work was published at\nhttps://github.com/zboszor/ecpg-readahead/commits\n\nHowever, there was one use case for this patchset actually\ndecreased performance, namely if the cursor was used to\nmodify rows via UPDATE WHERE CURRENT OF. It was because\nthe cursor position on the server side was different from\nthe application so the ECPG readahead code had to use\nMOVE then UPDATE WHERE CURRENT OF.\n\nI brewed this idea for a while and yesterday I decided\nto do something about it and here's the (admittedly, limited)\nresult. I added a new PostgreSQL extension to the UPDATE syntax:\n\nUPDATE ... WHERE OFFSET n IN cursor;\n\nThis new syntax changes the feature disparity between\nFETCH N and UPDATE WHERE CURRENT OF that existed in\nPostgreSQL forever.\n\nI only implemented this for cursors that use SELECT FOR UPDATE.\nThis part was quite straightforward to implement.\n\nThe behaviour of this syntax is as follows:\n* the offset value may be 0 or negative, since it is a virtual\n index into the rows returned by the last FETCH statement\n and negative indexes are felt natural relative to the\n cursor position and the cursor direction\n* for offset value 0, the behaviour is identical to WHERE CURRENT OF\n* negative indexes allow UPDATEs on previous rows even if\n the cursor reached the end of the result set\n\nI need clues for how to extend this for cursors that aren't\nusing FOR UPDATE queries, if it's possible at all.\n\nPlease, review.\n\nBest regards,\nZoltán Böszörményi", "msg_date": "Mon, 7 Feb 2022 06:59:51 +0100", "msg_from": "=?UTF-8?B?QsO2c3rDtnJtw6lueWkgWm9sdMOhbg==?= <zboszor@pr.hu>", "msg_from_op": true, "msg_subject": "[PATCH] Add UPDATE WHERE OFFSET IN 
clause" }, { "msg_contents": "Hi!\n\nOn 02/07/22 00:59, Böszörményi Zoltán wrote:\n> UPDATE ... WHERE OFFSET n IN cursor;\n\nIf added to UPDATE, should this be added to DELETE also?\n\nRegards,\n-Chap\n\n\n", "msg_date": "Mon, 7 Feb 2022 19:33:16 -0500", "msg_from": "Chapman Flack <chap@anastigmatix.net>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add UPDATE WHERE OFFSET IN clause" }, { "msg_contents": "Chapman Flack <chap@anastigmatix.net> writes:\n> On 02/07/22 00:59, Böszörményi Zoltán wrote:\n>> UPDATE ... WHERE OFFSET n IN cursor;\n\n> If added to UPDATE, should this be added to DELETE also?\n\nFWIW, I think this is a really horrid hack. For one thing, it's not\nrobust against not-strictly-linear FETCH/MOVE of the cursor. It looks\nto me like \"OFFSET n\" really means \"the row that we read N reads ago\",\nnot \"the row N before the current cursor position\". I see that the\ndocumentation does explain it that way, but it fails to account for\noptimizations such as whether we implement moves by reading backwards\nor rewind-and-read-forwards. I don't think we want to expose that\nsort of implementation detail.\n\nI'm also pretty displeased with causing unbounded memory consumption for\nevery use of nodeLockRows, whether it has anything to do with a cursor or\nnot (never mind whether the cursor will ever be used for WHERE OFFSET IN).\nYeah, it's only a few bytes per row, but that will add up in queries that\nprocess lots of rows.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 07 Feb 2022 20:05:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add UPDATE WHERE OFFSET IN clause" }, { "msg_contents": "2022. 02. 08. 2:05 keltezéssel, Tom Lane írta:\n> Chapman Flack <chap@anastigmatix.net> writes:\n>> On 02/07/22 00:59, Böszörményi Zoltán wrote:\n>>> UPDATE ... 
WHERE OFFSET n IN cursor;\n> \n>> If added to UPDATE, should this be added to DELETE also?\n\nYes, it should be added, too.\n\n> FWIW, I think this is a really horrid hack.\n\nThanks for your kind words. :-D\n\n> For one thing, it's not\n> robust against not-strictly-linear FETCH/MOVE of the cursor. It looks\n> to me like \"OFFSET n\" really means \"the row that we read N reads ago\",\n> not \"the row N before the current cursor position\". I see that the\n> documentation does explain it that way, but it fails to account for\n> optimizations such as whether we implement moves by reading backwards\n> or rewind-and-read-forwards. I don't think we want to expose that\n> sort of implementation detail.\n> \n> I'm also pretty displeased with causing unbounded memory consumption for\n> every use of nodeLockRows, whether it has anything to do with a cursor or\n> not (never mind whether the cursor will ever be used for WHERE OFFSET IN).\n> Yeah, it's only a few bytes per row, but that will add up in queries that\n> process lots of rows.\n\nDoes PostgreSQL have SQL hints now? I.e. some kind of \"pragma\"\nparsed from SQL comments to indicate the subsequent usage pattern?\nSuch a hint would allow either storing only the single row information\nfor DELETE/UPDATE or keeping the whole list.\n\nDumping the list to a disk file will be added later, so memory\nusage will not be unbounded.\n\nI was just testing the waters for the idea.\n\nBest regards,\nZoltán Böszörményi\n\n\n> \n> \t\t\tregards, tom lane\n\n\n\n", "msg_date": "Tue, 15 Feb 2022 13:12:23 +0100", "msg_from": "=?UTF-8?B?QsO2c3rDtnJtw6lueWkgWm9sdMOhbg==?= <zboszor@pr.hu>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Add UPDATE WHERE OFFSET IN clause" } ]
[ { "msg_contents": "I noticed that in pgstatfuncs.c, the AF_UNIX macro is being used \nunprotected by HAVE_UNIX_SOCKETS, apparently since 2008. So I think the \nredirection through IS_AF_UNIX() is no longer necessary. (More \ngenerally, all supported platforms are now HAVE_UNIX_SOCKETS, but even \nif there were a new platform in the future, it seems plausible that it \nwould define the AF_UNIX symbol even without kernel support.) So maybe \nwe should remove the IS_AF_UNIX() macro and make the code a bit more \nconsistent?", "msg_date": "Mon, 7 Feb 2022 08:48:50 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Remove IS_AF_UNIX macro?" } ]
[ { "msg_contents": "When reverse-compiling a query, ruleutils.c has some complicated code to \nhandle the join output columns of a JOIN USING join. There used to be \nno way to qualify those columns, and so if there was a naming conflict \nanywhere in the query, those output columns had to be renamed to be \nunique throughout the query.\n\nSince PostgreSQL 14, we have a new feature that allows adding an alias \nto a JOIN USING clause. This provides a better solution to this \nproblem. This patch changes the logic in ruleutils.c so that if naming \nconflicts with JOIN USING output columns are found in the query, these \nJOIN USING aliases with generated names are attached everywhere and the \ncolumns are then qualified everywhere.\n\nThe test output changes show the effects nicely.\n\nObviously, the copy-and-paste code in set_rtable_names() could be \nrefactored a bit better, perhaps. I also just named the generated \naliases \"ju\" with numbers added, maybe there are other ideas for how to \ngenerate these names.", "msg_date": "Mon, 7 Feb 2022 09:06:54 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Use JOIN USING aliases in ruleutils.c" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> When reverse-compiling a query, ruleutils.c has some complicated code to \n> handle the join output columns of a JOIN USING join. There used to be \n> no way to qualify those columns, and so if there was a naming conflict \n> anywhere in the query, those output columns had to be renamed to be \n> unique throughout the query.\n\n> Since PostgreSQL 14, we have a new feature that allows adding an alias \n> to a JOIN USING clause. 
This provides a better solution to this \n> problem.\n\nI looked this over briefly, and came away fairly dissatisfied.\n\nMy big fear is that it will reduce portability of pg_dump output:\nviews that would have loaded successfully into pre-v14 servers\nno longer will, and your chances of porting them to other RDBMSes\nprobably go down too. Once v13 becomes EOL, that will be less of a\nconcern, but I think it's a valid objection for awhile yet.\n\nAlso, it doesn't seem like we're getting much in return for the\nportability hazard. AFAICS the patch actually makes a net addition\nof code to ruleutils.c, which perhaps means that you've not worked\nhard enough on removing no-longer-needed code. Maybe there's an\nargument that the new output is more readable, but these regression\ntest changes don't look like any huge step forward to me.\n\n> I also just named the generated \n> aliases \"ju\" with numbers added, maybe there are other ideas for how to \n> generate these names.\n\nFor my taste, names like \"join_N\" would be better.\n\nOn the whole, I'd recommend putting this idea on the back burner\nfor three or four years more.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 23 Mar 2022 11:14:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use JOIN USING aliases in ruleutils.c" }, { "msg_contents": "On 23.03.22 16:14, Tom Lane wrote:\n> My big fear is that it will reduce portability of pg_dump output:\n> views that would have loaded successfully into pre-v14 servers\n> no longer will, and your chances of porting them to other RDBMSes\n> probably go down too. Once v13 becomes EOL, that will be less of a\n> concern, but I think it's a valid objection for awhile yet.\n\nThat's a good point. 
I'll withdraw the patch for now.\n\n\n", "msg_date": "Fri, 25 Mar 2022 16:09:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: Use JOIN USING aliases in ruleutils.c" } ]
[ { "msg_contents": "Hello,\n\nI noticed that referential integrity checks aren't currently \nparallelized. Is it on purpose?\n\nFrom the documentation [1], the planner will not generate a parallel \nplan for a given query if any of the following are true:\n\n1) The system is running in single-user mode.\n2) max_parallel_workers_per_gather = 0.\n3) The query writes any data or locks any database rows.\n4) The query might be suspended during execution (cursor or PL/pgSQL loop).\n5) The query uses any function marked PARALLEL UNSAFE.\n6) The query is running inside of another query that is already parallel.\n\nFrom my understanding of the code, it seems to me that the fourth \ncondition is not always accurately detected. In particular, the query \ntriggered by a foreign key addition (a Left Excluding JOIN between the \ntwo tables) isn't parallelized, because the flag CURSOR_OPT_PARALLEL_OK \nhasn't been set at some point. I couldn't find any reason why not, but \nmaybe I missed something.\n\nAttached is a (naive) patch that aims to fix the case of a FK addition, \nbut the handling of the flag CURSOR_OPT_PARALLEL_OK, generally speaking, \nlooks rather hackish.\n\nBest regards,\nFrédéric\n\n[1] \nhttps://www.postgresql.org/docs/current/when-can-parallel-query-be-used.html", "msg_date": "Mon, 7 Feb 2022 11:26:07 +0100", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On 2/7/22 11:26, Frédéric Yhuel wrote:\n> Attached is a (naive) patch that aims to fix the case of a FK addition, \n> but the handling of the flag CURSOR_OPT_PARALLEL_OK, generally speaking, \n> looks rather hackish.\n\nThanks for the patch. 
You can add it to the current open commitfest \n(https://commitfest.postgresql.org/37/).\n\nBest regards,\nAndreas\n\n\n", "msg_date": "Fri, 11 Feb 2022 00:16:56 +0100", "msg_from": "Andreas Karlsson <andreas@proxel.se>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "\n\nOn 2/11/22 00:16, Andreas Karlsson wrote:\n> On 2/7/22 11:26, Frédéric Yhuel wrote:\n>> Attached is a (naive) patch that aims to fix the case of a FK \n>> addition, but the handling of the flag CURSOR_OPT_PARALLEL_OK, \n>> generally speaking, looks rather hackish.\n> \n> Thanks, for the patch. You can add it to the current open commitfest \n> (https://commitfest.postgresql.org/37/).\n> \n\nOK, I just did. Please let me know if I did something wrong.\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Mon, 14 Feb 2022 11:56:44 +0100", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On Mon, Feb 7, 2022 at 5:26 AM Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n> I noticed that referential integrity checks aren't currently\n> parallelized. Is it on purpose?\n\nIt's not 100% clear to me that it is safe. But on the other hand, it's\nalso not 100% clear to me that it is unsafe.\n\nGenerally, I only added CURSOR_OPT_PARALLEL_OK in places where I was\nconfident that nothing bad would happen, and this wasn't one of those\nplaces. It's something of a nested-query environment -- your criterion\n#6. 
How do we know that the surrounding query isn't already parallel?\nPerhaps because it must be DML, but I think it must be more important\nto support parallel DML than to support this.\n\nI'm not sure what the right thing to do here is, but I think it would\nbe good if your patch included a test case.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 09:33:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "Hello, sorry for the late reply.\n\nOn 2/14/22 15:33, Robert Haas wrote:\n> On Mon, Feb 7, 2022 at 5:26 AM Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n>> I noticed that referential integrity checks aren't currently\n>> parallelized. Is it on purpose?\n> \n> It's not 100% clear to me that it is safe. But on the other hand, it's\n> also not 100% clear to me that it is unsafe.\n> \n> Generally, I only added CURSOR_OPT_PARALLEL_OK in places where I was\n> confident that nothing bad would happen, and this wasn't one of those\n> places. It's something of a nested-query environment -- your criterion\n> #6. How do we know that the surrounding query isn't already parallel?\n> Perhaps because it must be DML,\n\nDid you mean DDL?\n\n> but I think it must be more important\n> to support parallel DML than to support this.\n\n\n> I'm not sure what the right thing to do here is, but I think it would\n> be good if your patch included a test case.\n> \n\nWould that be a regression test? (in src/test/regress ?)\n\nIf yes, what should I check? 
Is it good enough to load auto_explain and \ncheck that the query triggered by a foreign key addition is parallelized?\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Thu, 3 Mar 2022 15:12:46 +0100", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "I looked at your patch and it's a good idea to make foreign key validation\r\nuse parallel query on large relations.\r\n\r\nIt would be valuable to add logging to ensure that the ActiveSnapshot and TransactionSnapshot \r\nis the same for the leader and the workers. This logging could be tested in the TAP test.\r\n\r\nAlso, inside RI_Initial_Check you may want to set max_parallel_workers to \r\nmax_parallel_maintenance_workers.\r\n\r\nCurrently the work_mem is set to maintenance_work_mem. This will also require\r\na doc change to call out.\r\n\r\n/*\r\n * Temporarily increase work_mem so that the check query can be executed\r\n * more efficiently. It seems okay to do this because the query is simple\r\n * enough to not use a multiple of work_mem, and one typically would not\r\n * have many large foreign-key validations happening concurrently. So\r\n * this seems to meet the criteria for being considered a \"maintenance\"\r\n * operation, and accordingly we use maintenance_work_mem. However, we\r\n\r\n\r\n\r\n\r\n\r\n", "msg_date": "Sat, 19 Mar 2022 00:57:23 +0000", "msg_from": "\"Imseih (AWS), Sami\" <simseih@amazon.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "\n\nOn 3/19/22 01:57, Imseih (AWS), Sami wrote:\n> I looked at your patch and it's a good idea to make foreign key validation\n> use parallel query on large relations.\n> \n> It would be valuable to add logging to ensure that the ActiveSnapshot and TransactionSnapshot\n> is the same for the leader and the workers. 
This logging could be tested in the TAP test.\n> \n> Also, inside RI_Initial_Check you may want to set max_parallel_workers to\n> max_parallel_maintenance_workers.\n> \n> Currently the work_mem is set to maintenance_work_mem. This will also require\n> a doc change to call out.\n> \n> /*\n> * Temporarily increase work_mem so that the check query can be executed\n> * more efficiently. It seems okay to do this because the query is simple\n> * enough to not use a multiple of work_mem, and one typically would not\n> * have many large foreign-key validations happening concurrently. So\n> * this seems to meet the criteria for being considered a \"maintenance\"\n> * operation, and accordingly we use maintenance_work_mem. However, we\n> \n\nHello Sami,\n\nThank you for your review!\n\nI will try to do as you say, but it will take time, since my daily job \nas database consultant takes most of my time and energy.\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Thu, 14 Apr 2022 14:25:13 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "\n\nOn 4/14/22 14:25, Frédéric Yhuel wrote:\n> \n> \n> On 3/19/22 01:57, Imseih (AWS), Sami wrote:\n>> I looked at your patch and it's a good idea to make foreign key \n>> validation\n>> use parallel query on large relations.\n>>\n>> It would be valuable to add logging to ensure that the ActiveSnapshot \n>> and TransactionSnapshot\n>> is the same for the leader and the workers. This logging could be \n>> tested in the TAP test.\n>>\n>> Also, inside RI_Initial_Check you may want to set max_parallel_workers to\n>> max_parallel_maintenance_workers.\n>>\n>> Currently the work_mem is set to maintenance_work_mem. This will also \n>> require\n>> a doc change to call out.\n>>\n>> /*\n>>       * Temporarily increase work_mem so that the check query can be \n>> executed\n>>       * more efficiently.  
It seems okay to do this because the query \n>> is simple\n>>       * enough to not use a multiple of work_mem, and one typically \n>> would not\n>>       * have many large foreign-key validations happening \n>> concurrently.  So\n>>       * this seems to meet the criteria for being considered a \n>> \"maintenance\"\n>>       * operation, and accordingly we use maintenance_work_mem. \n>> However, we\n>>\n> \n> Hello Sami,\n> \n> Thank you for your review!\n> \n> I will try to do as you say, but it will take time, since my daily job \n> as database consultant takes most of my time and energy.\n> \n\nHi,\n\nAs suggested by Jacob, here is a quick message to say that I didn't find \nenough time to work further on this patch, but I didn't completely \nforget it either. I moved it to the next commitfest. Hopefully I will \nfind enough time and motivation in the coming months :-)\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Tue, 26 Jul 2022 13:58:17 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "2022年7月26日(火) 20:58 Frédéric Yhuel <frederic.yhuel@dalibo.com>:\n>\n>\n>\n> On 4/14/22 14:25, Frédéric Yhuel wrote:\n> >\n> >\n> > On 3/19/22 01:57, Imseih (AWS), Sami wrote:\n> >> I looked at your patch and it's a good idea to make foreign key\n> >> validation\n> >> use parallel query on large relations.\n> >>\n> >> It would be valuable to add logging to ensure that the ActiveSnapshot\n> >> and TransactionSnapshot\n> >> is the same for the leader and the workers. This logging could be\n> >> tested in the TAP test.\n> >>\n> >> Also, inside RI_Initial_Check you may want to set max_parallel_workers to\n> >> max_parallel_maintenance_workers.\n> >>\n> >> Currently the work_mem is set to maintenance_work_mem. 
This will also\n> >> require\n> >> a doc change to call out.\n> >>\n> >> /*\n> >> * Temporarily increase work_mem so that the check query can be\n> >> executed\n> >> * more efficiently. It seems okay to do this because the query\n> >> is simple\n> >> * enough to not use a multiple of work_mem, and one typically\n> >> would not\n> >> * have many large foreign-key validations happening\n> >> concurrently. So\n> >> * this seems to meet the criteria for being considered a\n> >> \"maintenance\"\n> >> * operation, and accordingly we use maintenance_work_mem.\n> >> However, we\n> >>\n> >\n> > Hello Sami,\n> >\n> > Thank you for your review!\n> >\n> > I will try to do as you say, but it will take time, since my daily job\n> > as database consultant takes most of my time and energy.\n> >\n>\n> Hi,\n>\n> As suggested by Jacob, here is a quick message to say that I didn't find\n> enough time to work further on this patch, but I didn't completely\n> forget it either. I moved it to the next commitfest. Hopefully I will\n> find enough time and motivation in the coming months :-)\n\nHi Frédéric\n\nThis patch has been carried forward for a couple more commitfests since\nyour message; do you think you'll be able to work on it further during this\nrelease cycle?\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Sun, 11 Dec 2022 14:29:15 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" 
}, { "msg_contents": "\n\nOn 12/11/22 06:29, Ian Lawrence Barwick wrote:\n> 2022年7月26日(火) 20:58 Frédéric Yhuel <frederic.yhuel@dalibo.com>:\n>>\n>>\n>>\n>> On 4/14/22 14:25, Frédéric Yhuel wrote:\n>>>\n>>>\n>>> On 3/19/22 01:57, Imseih (AWS), Sami wrote:\n>>>> I looked at your patch and it's a good idea to make foreign key\n>>>> validation\n>>>> use parallel query on large relations.\n>>>>\n>>>> It would be valuable to add logging to ensure that the ActiveSnapshot\n>>>> and TransactionSnapshot\n>>>> is the same for the leader and the workers. This logging could be\n>>>> tested in the TAP test.\n>>>>\n>>>> Also, inside RI_Initial_Check you may want to set max_parallel_workers to\n>>>> max_parallel_maintenance_workers.\n>>>>\n>>>> Currently the work_mem is set to maintenance_work_mem. This will also\n>>>> require\n>>>> a doc change to call out.\n>>>>\n>>>> /*\n>>>> * Temporarily increase work_mem so that the check query can be\n>>>> executed\n>>>> * more efficiently. It seems okay to do this because the query\n>>>> is simple\n>>>> * enough to not use a multiple of work_mem, and one typically\n>>>> would not\n>>>> * have many large foreign-key validations happening\n>>>> concurrently. So\n>>>> * this seems to meet the criteria for being considered a\n>>>> \"maintenance\"\n>>>> * operation, and accordingly we use maintenance_work_mem.\n>>>> However, we\n>>>>\n>>>\n>>> Hello Sami,\n>>>\n>>> Thank you for your review!\n>>>\n>>> I will try to do as you say, but it will take time, since my daily job\n>>> as database consultant takes most of my time and energy.\n>>>\n>>\n>> Hi,\n>>\n>> As suggested by Jacob, here is a quick message to say that I didn't find\n>> enough time to work further on this patch, but I didn't completely\n>> forget it either. I moved it to the next commitfest. 
Hopefully I will\n>> find enough time and motivation in the coming months :-)\n> \n> Hi Frédéric\n> \n> This patch has been carried forward for a couple more commitfests since\n> your message; do you think you'll be able to work on it further during this\n> release cycle?\n> \n\nHi Ian,\n\nI've planned to work on it full time on week 10 (6-10 March), if you \nagree to bear with me. The idea would be to bootstrap my brain on it, \nand then continue to work on it from time to time.\n\nBest regards,\nFrédéric\n\n\n", "msg_date": "Mon, 12 Dec 2022 17:35:41 +0100", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On Mon, 12 Dec 2022 at 22:06, Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n>\n>\n>\n> On 12/11/22 06:29, Ian Lawrence Barwick wrote:\n> > 2022年7月26日(火) 20:58 Frédéric Yhuel <frederic.yhuel@dalibo.com>:\n> >>\n> >>\n> >>\n> >> On 4/14/22 14:25, Frédéric Yhuel wrote:\n> >>>\n> >>>\n> >>> On 3/19/22 01:57, Imseih (AWS), Sami wrote:\n> >>>> I looked at your patch and it's a good idea to make foreign key\n> >>>> validation\n> >>>> use parallel query on large relations.\n> >>>>\n> >>>> It would be valuable to add logging to ensure that the ActiveSnapshot\n> >>>> and TransactionSnapshot\n> >>>> is the same for the leader and the workers. This logging could be\n> >>>> tested in the TAP test.\n> >>>>\n> >>>> Also, inside RI_Initial_Check you may want to set max_parallel_workers to\n> >>>> max_parallel_maintenance_workers.\n> >>>>\n> >>>> Currently the work_mem is set to maintenance_work_mem. This will also\n> >>>> require\n> >>>> a doc change to call out.\n> >>>>\n> >>>> /*\n> >>>> * Temporarily increase work_mem so that the check query can be\n> >>>> executed\n> >>>> * more efficiently. 
It seems okay to do this because the query\n> >>>> is simple\n> >>>> * enough to not use a multiple of work_mem, and one typically\n> >>>> would not\n> >>>> * have many large foreign-key validations happening\n> >>>> concurrently. So\n> >>>> * this seems to meet the criteria for being considered a\n> >>>> \"maintenance\"\n> >>>> * operation, and accordingly we use maintenance_work_mem.\n> >>>> However, we\n> >>>>\n> >>>\n> >>> Hello Sami,\n> >>>\n> >>> Thank you for your review!\n> >>>\n> >>> I will try to do as you say, but it will take time, since my daily job\n> >>> as database consultant takes most of my time and energy.\n> >>>\n> >>\n> >> Hi,\n> >>\n> >> As suggested by Jacob, here is a quick message to say that I didn't find\n> >> enough time to work further on this patch, but I didn't completely\n> >> forget it either. I moved it to the next commitfest. Hopefully I will\n> >> find enough time and motivation in the coming months :-)\n> >\n> > Hi Frédéric\n> >\n> > This patch has been carried forward for a couple more commitfests since\n> > your message; do you think you'll be able to work on it further during this\n> > release cycle?\n> >\n>\n> Hi Ian,\n>\n> I've planned to work on it full time on week 10 (6-10 March), if you\n> agree to bear with me. The idea would be to bootstrap my brain on it,\n> and then continue to work on it from time to time.\n\nI have moved this to Mar commitfest as the patch update is expected to\nhappen during March commitfest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Mon, 16 Jan 2023 20:00:18 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On Mon, 12 Dec 2022 at 11:37, Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n>\n> I've planned to work on it full time on week 10 (6-10 March), if you\n> agree to bear with me. 
The idea would be to bootstrap my brain on it,\n> and then continue to work on it from time to time.\n\nAny updates on this patch? Should we move it to next release at this\npoint? Even if you get time to work on it this commitfest do you think\nit's likely to be committable in the next few weeks?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 20 Mar 2023 10:58:27 -0400", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "\n\nOn 3/20/23 15:58, Gregory Stark (as CFM) wrote:\n> On Mon, 12 Dec 2022 at 11:37, Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n>>\n>> I've planned to work on it full time on week 10 (6-10 March), if you\n>> agree to bear with me. The idea would be to bootstrap my brain on it,\n>> and then continue to work on it from time to time.\n> \n> Any updates on this patch? \n\nI had the opportunity to discuss it with Melanie Plageman when she came \nto Paris last month and teach us (Dalibo's folk) about the patch review \nprocess.\n\nShe advised me to provide a good test case first, and to explain better \nhow it would be useful, which I'm going to do.\n\n> Should we move it to next release at this\n> point? Even if you get time to work on it this commitfest do you think\n> it's likely to be committable in the next few weeks?\n> \n\nIt is very unlikely. Maybe it's better to remove it from CF and put it \nback later if the test case I will provide does a better job at \nconvincing the Postgres folks that RI checks should be parallelized.\n\n\n", "msg_date": "Mon, 20 Mar 2023 16:48:59 +0100", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" 
}, { "msg_contents": "> On 20 Mar 2023, at 16:48, Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n> On 3/20/23 15:58, Gregory Stark (as CFM) wrote:\n\n>> Should we move it to next release at this\n>> point? Even if you get time to work on it this commitfest do you think\n>> it's likely to be committable in the next few weeks?\n> \n> It is very unlikely. Maybe it's better to remove it from CF and put it back later if the test case I will provide does a better job at convincing the Postgres folks that RI checks should be parallelized.\n\nAs there is no new patch submitted I will go ahead and do that, please feel\nfree to resubmit when there is renewed interest in working on this.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 4 Jul 2023 09:45:02 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On Tue, Jul 4, 2023 at 9:45 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n>\n> As there is no new patch submitted I will go ahead and do that, please feel\n> free to resubmit when there is renewed interest in working on this.\n>\n>\nRecently I restored a database from a directory format backup and having\nthis feature would have been quite useful. So, I would like to resume the\nwork on this patch, unless OP or someone else is already on it.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, Jul 4, 2023 at 9:45 AM Daniel Gustafsson <daniel@yesql.se> wrote:\nAs there is no new patch submitted I will go ahead and do that, please feel\nfree to resubmit when there is renewed interest in working on this.Recently I restored a database from a directory format backup and having this feature would have been quite useful. 
So, I would like to resume the work on this patch, unless OP or someone else is already on it.Regards,Juan José Santamaría Flecha", "msg_date": "Thu, 10 Aug 2023 17:06:01 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On Thu, Aug 10, 2023 at 5:06 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n> On Tue, Jul 4, 2023 at 9:45 AM Daniel Gustafsson <daniel@yesql.se> wrote:\n>\n>> As there is no new patch submitted I will go ahead and do that, please\n>> feel\n>> free to resubmit when there is renewed interest in working on this.\n>>\n>\n> Recently I restored a database from a directory format backup and having\n> this feature would have been quite useful. So, I would like to resume the\n> work on this patch, unless OP or someone else is already on it.\n>\n\nHearing no one advising me otherwise I'll pick up the work on the patch.\nFirst, I'll try to address all previous comments.\n\nOn Mon, Feb 14, 2022 at 3:34 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Feb 7, 2022 at 5:26 AM Frédéric Yhuel <frederic.yhuel@dalibo.com>\n> wrote:\n> > I noticed that referential integrity checks aren't currently\n> > parallelized. Is it on purpose?\n>\n> It's not 100% clear to me that it is safe. But on the other hand, it's\n> also not 100% clear to me that it is unsafe.\n>\n> Generally, I only added CURSOR_OPT_PARALLEL_OK in places where I was\n> confident that nothing bad would happen, and this wasn't one of those\n> places. It's something of a nested-query environment -- your criterion\n> #6. 
How do we know that the surrounding query isn't already parallel?\n> Perhaps because it must be DML, but I think it must be more important\n> to support parallel DML than to support this.\n>\n\nCurrently I don't see a scenario where the validation might run inside\nanother parallel query, but I could be missing something. Suggestions on\nsome edge cases to test are more than welcome.\n\nWe already have a case where parallel operations can occur when adding a\ntable constraint, and that's the index creation for a primary key. Take for\nexample:\n\npostgres=# SET max_parallel_maintenance_workers TO 4;\npostgres=# SET parallel_setup_cost TO 0;\npostgres=# SET parallel_tuple_cost TO 0;\npostgres=# SET parallel_leader_participation TO 0;\npostgres=# SET min_parallel_table_scan_size TO 0;\npostgres=# SET client_min_messages TO debug1;\npostgres=# CREATE TABLE parallel_pk_table (a int) WITH (autovacuum_enabled\n= off);\npostgres=# ALTER TABLE parallel_pk_table ADD PRIMARY KEY (a);\nDEBUG: ALTER TABLE / ADD PRIMARY KEY will create implicit index\n\"parallel_pk_table_pkey\" for table \"parallel_pk_table\"\nDEBUG: building index \"parallel_pk_table_pkey\" on table\n\"parallel_pk_table\" with request for 1 parallel workers\n...\nDEBUG: index \"parallel_pk_table_pkey\" can safely use deduplication\nDEBUG: verifying table \"parallel_pk_table\"\n\nThis should be better documented. Maybe \"parallel index build\" [1] could\nhave its own section, so it can be cited from other sections?\n\n\n> I'm not sure what the right thing to do here is, but I think it would\n> be good if your patch included a test case.\n>\n\nI have included a basic test for parallel ADD PRIMARY KEY and ADD FOREIGN\nKEY.\n\nOn Sat, Mar 19, 2022 at 1:57 AM Imseih (AWS), Sami <simseih@amazon.com>\nwrote:\n\n> It would be valuable to add logging to ensure that the ActiveSnapshot and\n> TransactionSnapshot\n> is the same for the leader and the workers. 
This logging could be tested\n> in the TAP test.\n>\n\nThat machinery is used in any parallel query, I'm not sure this patch\nshould be responsible for carrying that requirement.\n\nAlso, inside RI_Initial_Check you may want to set max_parallel_workers to\n> max_parallel_maintenance_workers.\n>\n> Currently the work_mem is set to maintenance_work_mem. This will also\n> require\n> a doc change to call out.\n>\n\nDone.\n\nPFA the patch.\n\n[1] https://www.postgresql.org/docs/current/sql-createindex.html\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Wed, 16 Aug 2023 11:18:25 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "\n\nOn 8/10/23 17:06, Juan José Santamaría Flecha wrote:\n> Recently I restored a database from a directory format backup and having \n> this feature would have been quite useful\n\nHi,\n\nThanks for resuming work on this patch. I forgot to mention this in my \noriginal email, but the motivation was also to speed up the restore \nprocess. Parallelizing the FK checks could make a huge difference in \ncertain cases. We should probably provide such a test case (with perf \nnumbers), and maybe this is it what Robert asked for.\n\n\n", "msg_date": "Thu, 17 Aug 2023 09:32:06 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On 8/17/23 09:32, Frédéric Yhuel wrote:\n> \n> \n> On 8/10/23 17:06, Juan José Santamaría Flecha wrote:\n>> Recently I restored a database from a directory format backup and \n>> having this feature would have been quite useful\n> \n> Hi,\n> \n> Thanks for resuming work on this patch. I forgot to mention this in my \n> original email, but the motivation was also to speed up the restore \n> process. 
Parallelizing the FK checks could make a huge difference in \n> certain cases. We should probably provide such a test case (with perf \n> numbers), and maybe this is it what Robert asked for.\n\nI have attached two scripts which demonstrate the following problems:\n\n1a. if the tables aren't analyzed nor vacuumed before the post-data \nstep, then they are index-only scanned, with a lot of heap fetches \n(depending on their size, the planner sometimes chooses a seq scan instead).\n\n1b. if the tables have been analyzed but not vacuumed before the \npost-data-step, then they are scanned sequentially. Usually better, but \nstill not so good without a parallel plan.\n\n2. if the visibility maps have been created, then the tables are \nindex-only scanned without heap fetches, but this can still be slower \nthan a parallel seq scan.\n\nSo it would be nice if pg_restore could vacuum analyze the tables before \nthe post-data step. I believe it would be faster in most cases.\n\nAnd it would be nice to allow a parallel plan for RI checks.\n\nBest regards,\nFrédéric", "msg_date": "Thu, 17 Aug 2023 14:00:59 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "\n\nOn 8/17/23 14:00, Frédéric Yhuel wrote:\n> \n> \n> On 8/17/23 09:32, Frédéric Yhuel wrote:\n>>\n>>\n>> On 8/10/23 17:06, Juan José Santamaría Flecha wrote:\n>>> Recently I restored a database from a directory format backup and \n>>> having this feature would have been quite useful\n>>\n>> Hi,\n>>\n>> Thanks for resuming work on this patch. I forgot to mention this in my \n>> original email, but the motivation was also to speed up the restore \n>> process. Parallelizing the FK checks could make a huge difference in \n>> certain cases. 
We should probably provide such a test case (with perf \n>> numbers), and maybe this is it what Robert asked for.\n> \n> I have attached two scripts which demonstrate the following problems:\n> \n\nLet me add the plans for more clarity:\n\n> 1a. if the tables aren't analyzed nor vacuumed before the post-data \n> step, then they are index-only scanned, with a lot of heap fetches \n> (depending on their size, the planner sometimes chooses a seq scan \n> instead).\n> \n\nhttps://explain.dalibo.com/plan/7491ga22c5293683#raw\n\n> 1b. if the tables have been analyzed but not vacuumed before the \n> post-data-step, then they are scanned sequentially. Usually better, but \n> still not so good without a parallel plan.\n> \n\nhttps://explain.dalibo.com/plan/e92c5161g880bdhf#raw\n\n> 2. if the visibility maps have been created, then the tables are \n> index-only scanned without heap fetches, but this can still be slower \n> than a parallel seq scan.\n> \n\nhttps://explain.dalibo.com/plan/de22bdb4ggc3dffg#raw\n\nAnd at last, with the patch applied : \nhttps://explain.dalibo.com/plan/54612a09ffh2565a#raw\n\n\n", "msg_date": "Thu, 17 Aug 2023 15:50:58 +0200", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" }, { "msg_contents": "On Thu, Aug 17, 2023 at 3:51 PM Frédéric Yhuel <frederic.yhuel@dalibo.com>\nwrote:\n\n> On 8/17/23 14:00, Frédéric Yhuel wrote:\n> > On 8/17/23 09:32, Frédéric Yhuel wrote:\n> >> On 8/10/23 17:06, Juan José Santamaría Flecha wrote:\n> >>> Recently I restored a database from a directory format backup and\n> >>> having this feature would have been quite useful\n> >>\n> >> Thanks for resuming work on this patch. I forgot to mention this in my\n> >> original email, but the motivation was also to speed up the restore\n> >> process. Parallelizing the FK checks could make a huge difference in\n> >> certain cases. 
We should probably provide such a test case (with perf\n> >> numbers), and maybe this is it what Robert asked for.\n> >\n> > I have attached two scripts which demonstrate the following problems:\n>\n\nThanks for the scripts, but I think Robert's concerns come from the safety,\nand not the performance, of the parallel operation.\n\nProving its vulnerability could be easy with a counter example, but\nassuring its safety is trickier. What test would suffice to do that?\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Thu, Aug 17, 2023 at 3:51 PM Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:On 8/17/23 14:00, Frédéric Yhuel wrote:> On 8/17/23 09:32, Frédéric Yhuel wrote:>> On 8/10/23 17:06, Juan José Santamaría Flecha wrote:\n>>> Recently I restored a database from a directory format backup and \n>>> having this feature would have been quite useful\n>>>> Thanks for resuming work on this patch. I forgot to mention this in my \n>> original email, but the motivation was also to speed up the restore \n>> process. Parallelizing the FK checks could make a huge difference in \n>> certain cases. We should probably provide such a test case (with perf \n>> numbers), and maybe this is it what Robert asked for.\n> \n> I have attached two scripts which demonstrate the following problems:Thanks for the scripts, but I think Robert's concerns come from the safety, and not the performance, of the parallel operation.Proving its vulnerability could be easy with a counter example, but assuring its safety is trickier. What test would suffice to do that?Regards,Juan José Santamaría Flecha", "msg_date": "Fri, 18 Aug 2023 12:58:51 +0200", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" 
}, { "msg_contents": "On Fri, 18 Aug 2023 at 16:29, Juan José Santamaría Flecha\n<juanjo.santamaria@gmail.com> wrote:\n>\n>\n> On Thu, Aug 17, 2023 at 3:51 PM Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n>>\n>> On 8/17/23 14:00, Frédéric Yhuel wrote:\n>> > On 8/17/23 09:32, Frédéric Yhuel wrote:\n>> >> On 8/10/23 17:06, Juan José Santamaría Flecha wrote:\n>> >>> Recently I restored a database from a directory format backup and\n>> >>> having this feature would have been quite useful\n>> >>\n>> >> Thanks for resuming work on this patch. I forgot to mention this in my\n>> >> original email, but the motivation was also to speed up the restore\n>> >> process. Parallelizing the FK checks could make a huge difference in\n>> >> certain cases. We should probably provide such a test case (with perf\n>> >> numbers), and maybe this is it what Robert asked for.\n>> >\n>> > I have attached two scripts which demonstrate the following problems:\n>\n>\n> Thanks for the scripts, but I think Robert's concerns come from the safety, and not the performance, of the parallel operation.\n>\n> Proving its vulnerability could be easy with a counter example, but assuring its safety is trickier. What test would suffice to do that?\n\nI'm seeing that there has been no activity in this thread for more\nthan 5 months, I'm planning to close this in the current commitfest\nunless someone is planning to take it forward. It can be opened again\nwhen there is more interest.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 21 Jan 2024 07:56:02 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" 
}, { "msg_contents": "On Sun, 21 Jan 2024 at 07:56, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Fri, 18 Aug 2023 at 16:29, Juan José Santamaría Flecha\n> <juanjo.santamaria@gmail.com> wrote:\n> >\n> >\n> > On Thu, Aug 17, 2023 at 3:51 PM Frédéric Yhuel <frederic.yhuel@dalibo.com> wrote:\n> >>\n> >> On 8/17/23 14:00, Frédéric Yhuel wrote:\n> >> > On 8/17/23 09:32, Frédéric Yhuel wrote:\n> >> >> On 8/10/23 17:06, Juan José Santamaría Flecha wrote:\n> >> >>> Recently I restored a database from a directory format backup and\n> >> >>> having this feature would have been quite useful\n> >> >>\n> >> >> Thanks for resuming work on this patch. I forgot to mention this in my\n> >> >> original email, but the motivation was also to speed up the restore\n> >> >> process. Parallelizing the FK checks could make a huge difference in\n> >> >> certain cases. We should probably provide such a test case (with perf\n> >> >> numbers), and maybe this is it what Robert asked for.\n> >> >\n> >> > I have attached two scripts which demonstrate the following problems:\n> >\n> >\n> > Thanks for the scripts, but I think Robert's concerns come from the safety, and not the performance, of the parallel operation.\n> >\n> > Proving its vulnerability could be easy with a counter example, but assuring its safety is trickier. What test would suffice to do that?\n>\n> I'm seeing that there has been no activity in this thread for more\n> than 5 months, I'm planning to close this in the current commitfest\n> unless someone is planning to take it forward. It can be opened again\n> when there is more interest.\n\nSince the author or no one else showed interest in taking it forward\nand the patch had no activity for more than 5 months, I have changed\nthe status to RWF. 
Feel free to add a new CF entry when someone is\nplanning to resume work more actively.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Feb 2024 23:19:41 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Allow parallel plan for referential integrity checks?" } ]
[ { "msg_contents": "Unless I'm misreading this code I think the nevents in\nWaitLatchOrSocket should really be 4 not 3. At least there are 4 calls\nto AddWaitEventToSet in it and I think it's possible to trigger all 4.\n\nI guess it's based on knowing that nobody would actually set\nWL_EXIT_ON_PM_DEATH and WL_POSTMASTER_DEATH on the same waitset?\n\n-- \ngreg\n\n\n", "msg_date": "Mon, 7 Feb 2022 17:00:32 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": true, "msg_subject": "WaitLatchOrSocket seems to not count to 4 right..." }, { "msg_contents": "\n\nOn 2022/02/08 7:00, Greg Stark wrote:\n> Unless I'm misreading this code I think the nevents in\n> WaitLatchOrSocket should really be 4 not 3. At least there are 4 calls\n> to AddWaitEventToSet in it and I think it's possible to trigger all 4.\n\nGood catch! I think you're right.\n\nAs the quick test, I confirmed that the assertion failure happened when I passed four possible events to WaitLatchOrSocket() in postgres_fdw.\n\nTRAP: FailedAssertion(\"set->nevents < set->nevents_space\", File: \"latch.c\", Line: 868, PID: 54424)\n0 postgres 0x0000000107efa49f ExceptionalCondition + 223\n1 postgres 0x0000000107cbca0c AddWaitEventToSet + 76\n2 postgres 0x0000000107cbd86e WaitLatchOrSocket + 430\n3 postgres_fdw.so 0x000000010848b1aa pgfdw_get_result + 218\n4 postgres_fdw.so 0x000000010848accb do_sql_command + 75\n5 postgres_fdw.so 0x000000010848c6b8 configure_remote_session + 40\n6 postgres_fdw.so 0x000000010848c32d connect_pg_server + 1629\n7 postgres_fdw.so 0x000000010848aa06 make_new_connection + 566\n8 postgres_fdw.so 0x0000000108489a06 GetConnection + 550\n9 postgres_fdw.so 0x0000000108497ba4 postgresBeginForeignScan + 260\n10 postgres 0x0000000107a7d79f ExecInitForeignScan + 943\n11 postgres 0x0000000107a5c8ab ExecInitNode + 683\n12 postgres 0x0000000107a5028a InitPlan + 1386\n13 postgres 0x0000000107a4fb66 standard_ExecutorStart + 806\n14 postgres 0x0000000107a4f833 ExecutorStart + 83\n15 postgres 
0x0000000107d0277f PortalStart + 735\n16 postgres 0x0000000107cfe150 exec_simple_query + 1168\n17 postgres 0x0000000107cfd39e PostgresMain + 2110\n18 postgres 0x0000000107c07e72 BackendRun + 50\n19 postgres 0x0000000107c07438 BackendStartup + 552\n20 postgres 0x0000000107c0621c ServerLoop + 716\n21 postgres 0x0000000107c039f9 PostmasterMain + 6441\n22 postgres 0x0000000107ae20d9 main + 809\n23 libdyld.dylib 0x00007fff2045cf3d start + 1\n24 ??? 0x0000000000000003 0x0 + 3\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 8 Feb 2022 09:48:04 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WaitLatchOrSocket seems to not count to 4 right..." }, { "msg_contents": "On Tue, Feb 8, 2022 at 1:48 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2022/02/08 7:00, Greg Stark wrote:\n> > Unless I'm misreading this code I think the nevents in\n> > WaitLatchOrSocket should really be 4 not 3. At least there are 4 calls\n> > to AddWaitEventToSet in it and I think it's possible to trigger all 4.\n>\n> Good catch! I think you're right.\n>\n> As the quick test, I confirmed that the assertion failure happened when I passed four possible events to WaitLatchOrSocket() in postgres_fdw.\n>\n> TRAP: FailedAssertion(\"set->nevents < set->nevents_space\", File: \"latch.c\", Line: 868, PID: 54424)\n\nYeah, the assertion shows there's no problem, but we should change it\nto 4 (and one day we should make it dynamic and change udata to hold\nan index instead of a pointer...) 
or, since it's a bit silly to add\nboth of those events, maybe we should do:\n\n- if ((wakeEvents & WL_POSTMASTER_DEATH) && IsUnderPostmaster)\n- AddWaitEventToSet(set, WL_POSTMASTER_DEATH, PGINVALID_SOCKET,\n- NULL, NULL);\n-\n if ((wakeEvents & WL_EXIT_ON_PM_DEATH) && IsUnderPostmaster)\n AddWaitEventToSet(set, WL_EXIT_ON_PM_DEATH, PGINVALID_SOCKET,\n NULL, NULL);\n+ else if ((wakeEvents & WL_POSTMASTER_DEATH) && IsUnderPostmaster)\n+ AddWaitEventToSet(set, WL_POSTMASTER_DEATH, PGINVALID_SOCKET,\n+\n\n\n", "msg_date": "Tue, 8 Feb 2022 13:51:41 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WaitLatchOrSocket seems to not count to 4 right..." }, { "msg_contents": "\n\nOn 2022/02/08 9:51, Thomas Munro wrote:\n> an index instead of a pointer...) or, since it's a bit silly to add\n> both of those events, maybe we should do:\n> \n> - if ((wakeEvents & WL_POSTMASTER_DEATH) && IsUnderPostmaster)\n> - AddWaitEventToSet(set, WL_POSTMASTER_DEATH, PGINVALID_SOCKET,\n> - NULL, NULL);\n> -\n> if ((wakeEvents & WL_EXIT_ON_PM_DEATH) && IsUnderPostmaster)\n> AddWaitEventToSet(set, WL_EXIT_ON_PM_DEATH, PGINVALID_SOCKET,\n> NULL, NULL);\n> + else if ((wakeEvents & WL_POSTMASTER_DEATH) && IsUnderPostmaster)\n> + AddWaitEventToSet(set, WL_POSTMASTER_DEATH, PGINVALID_SOCKET,\n> +\n\n+1\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 8 Feb 2022 13:00:06 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: WaitLatchOrSocket seems to not count to 4 right..." 
}, { "msg_contents": "On Tue, Feb 8, 2022 at 1:51 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> (and one day we should make it dynamic and change udata to hold\n> an index instead of a pointer...)\n\nHere's a patch like that.\n\nI'd originally sketched this out for another project, but I don't\nthink I need it for that anymore. After this exchange I couldn't\nresist fleshing it out for a commitfest, just on useability grounds.\nThoughts?", "msg_date": "Fri, 11 Feb 2022 10:47:45 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: WaitLatchOrSocket seems to not count to 4 right..." }, { "msg_contents": "Hi,\n\nOn 2022-02-11 10:47:45 +1300, Thomas Munro wrote:\n> I'd originally sketched this out for another project, but I don't\n> think I need it for that anymore. After this exchange I couldn't\n> resist fleshing it out for a commitfest, just on useability grounds.\n> Thoughts?\n\nThis currently doesn't apply: http://cfbot.cputube.org/patch_37_3533.log\nMarked as waiting-on-author for now. Are you aiming this for 15?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 18:09:12 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: WaitLatchOrSocket seems to not count to 4 right..." }, { "msg_contents": "This entry has been waiting on author input for a while (our current\nthreshold is roughly two weeks), so I've marked it Returned with\nFeedback.\n\nOnce you think the patchset is ready for review again, you (or any\ninterested party) can resurrect the patch entry by visiting\n\n https://commitfest.postgresql.org/38/3533/\n\nand changing the status to \"Needs Review\", and then changing the\nstatus again to \"Move to next CF\". 
(Don't forget the second step;\nhopefully we will have streamlined this in the near future!)\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 11:52:41 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: WaitLatchOrSocket seems to not count to 4 right..." } ]
[ { "msg_contents": "Hi,\n\nDuring the last commit fest, some new contributors mentioned me that they were\nhaving trouble to find patch to review.\n\nAnd indeed, the commit fest app doesn't provide any kind of information about\nthe patch difficulty (which is different from the patch size, so you can't even\ntry to estimate it that way), whether there's a need for acceptance on the\nfeature/approach more than a code review, if it's a POC..., and that's not\nwelcoming.\n\nI was thinking that we could add a tag system, probably with some limited\npredefined tags, to address that problem.\n\nDo you think it's worth adding, or do you have a better idea to help newer\ncontributors?\n\n\n", "msg_date": "Tue, 8 Feb 2022 10:10:50 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Add tag/category to the commitfest app" }, { "msg_contents": "> On 8 Feb 2022, at 03:10, Julien Rouhaud <rjuju123@gmail.com> wrote:\n\n> I was thinking that we could add a tag system, probably with some limited\n> predefined tags, to address that problem.\n> \n> Do you think it's worth adding, or do you have a better idea to help newer\n> contributors?\n\nWe already have the categories for patches, not sure that a second level would\nbe all that helpful. 
I still think that adding an abstract/description field\nas discussed in 3C53E8AA-B53F-43C2-B3C6-FA71E18A462C@yesql.se would be helpful\nbut I haven't had time to work on that.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 15:14:30 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Add tag/category to the commitfest app" }, { "msg_contents": "On Wed, Feb 09, 2022 at 03:14:30PM +0100, Daniel Gustafsson wrote:\n> > On 8 Feb 2022, at 03:10, Julien Rouhaud <rjuju123@gmail.com> wrote:\n> \n> > I was thinking that we could add a tag system, probably with some limited\n> > predefined tags, to address that problem.\n> > \n> > Do you think it's worth adding, or do you have a better idea to help newer\n> > contributors?\n> \n> We already have the categories for patches, not sure that a second level would\n> be all that helpful.\n\nYes, but those categories are quite orthogonal to the problem of finding a patch\nsuitable for a new contributor or attract attention of other groups of people.\n\nNote that those tags wouldn't be a sub category, just a parallel thing where a\npatch could have 0, 1 or n predefined tags, with the search bar allowing easy\naccess to the combination you want.\n\n> I still think that adding an abstract/description field\n> as discussed in 3C53E8AA-B53F-43C2-B3C6-FA71E18A462C@yesql.se would be helpful\n> but I haven't had time to work on that.\n\nAgreed, but it doesn't seem to help apart from being a good indication that\nthis is likely not a good choice for a new contributor.\n\n\n", "msg_date": "Wed, 9 Feb 2022 22:29:01 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Add tag/category to the commitfest app" } ]
[ { "msg_contents": "Hi all,\n(Added Bruce and Julien in CC)\n\nWhile testing installcheck with various server configurations to see\nhow the main regression test suites could break, I found that loading\npg_stat_statements into the backend is enough to break installcheck\nas compute_query_id = auto, the default, lets the decision to compute\nquery IDs to pg_stat_statements itself. In short, loading\npg_stat_statements breaks EXPLAIN outputs of any SQL-based regression\ntest.\n\nRunning installcheck on existing installations is a popular sanity\ncheck, as much as is enabling pg_stat_statements by default, so it\nseems to me that we'd better disable compute_query_id by default in\nthe databases created for the sake of the regression tests, enabling\nit only in places where it is relevant. We do that in explain.sql for\na test with compute_query_id, but pg_stat_statements does not do\nthat.\n\nI'd like to suggest a fix for that, by tweaking the tests of\npg_stat_statements to use compute_query_id = auto, so as we would\nstill stress the code paths where the module takes the decision to\ncompute query IDs, while the default regression databases would\ndisable it. 
Please note that this also fixes the case of any\nout-of-core modules that have EXPLAIN cases.\n\nThe attached is enough to pass installcheck-world, on an instance\nwhere pg_stat_statements is loaded.\n\nThoughts?\n--\nMichael", "msg_date": "Tue, 8 Feb 2022 12:38:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "Hi,\n\nOn Tue, Feb 08, 2022 at 12:38:46PM +0900, Michael Paquier wrote:\n> \n> While testing installcheck with various server configurations to see\n> how the main regression test suites could break, I found that loading\n> pg_stat_statements into the backend is enough to break installcheck\n> as compute_query_id = auto, the default, lets the decision to compute\n> query IDs to pg_stat_statements itself. In short, loading\n> pg_stat_statements breaks EXPLAIN outputs of any SQL-based regression\n> test.\n\nIndeed, that's unfortunate but that's also a known behavior.\n\n> Running installcheck on existing installations is a popular sanity\n> check, as much as is enabling pg_stat_statements by default, so it\n> seems to me that we'd better disable compute_query_id by default in\n> the databases created for the sake of the regression tests, enabling\n> it only in places where it is relevant. We do that in explain.sql for\n> a test with compute_query_id, but pg_stat_statements does not do\n> that.\n\nI agree that enabling pg_stat_statements is something common when you're\nactually going to use your instance, I'm not sure that it also applies to\nenvironment for running regression tests.\n\n> I'd like to suggest a fix for that, by tweaking the tests of\n> pg_stat_statements to use compute_query_id = auto, so as we would\n> still stress the code paths where the module takes the decision to\n> compute query IDs, while the default regression databases would\n> disable it. 
Please note that this also fixes the case of any\n> out-of-core modules that have EXPLAIN cases.\n\nThat's already been discussed in [1] and rejected, as it would also mean losing\nthe ability to have pg_stat_statements (or any similar extension) coverage\nusing the regression tests. I personally rely on regression tests for such\ncustom extensions quite a lot, so I'm still -1 on that.\n\n[1] https://www.postgresql.org/message-id/flat/1634283396.372373993%40f75.i.mail.ru\n\n\n", "msg_date": "Tue, 8 Feb 2022 11:48:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "On Tue, Feb 08, 2022 at 11:48:15AM +0800, Julien Rouhaud wrote:\n> That's already been discussed in [1] and rejected, as it would also mean losing\n> the ability to have pg_stat_statements (or any similar extension) coverage\n> using the regression tests. I personally rely on regression tests for such\n> custom extensions quite a lot, so I'm still -1 on that.\n\nWell, I can see that this is a second independent complain after a few\nmonths. 
If you wish to keep this capability, wouldn't it be better to\nadd a \"regress\" mode to compute_query_id, where we would mask\nautomatically this information in the output of EXPLAIN but still run\nthe computation?\n--\nMichael", "msg_date": "Tue, 8 Feb 2022 12:56:59 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "On Tue, Feb 08, 2022 at 12:56:59PM +0900, Michael Paquier wrote:\n> On Tue, Feb 08, 2022 at 11:48:15AM +0800, Julien Rouhaud wrote:\n> > That's already been discussed in [1] and rejected, as it would also mean losing\n> > the ability to have pg_stat_statements (or any similar extension) coverage\n> > using the regression tests. I personally rely on regression tests for such\n> > custom extensions quite a lot, so I'm still -1 on that.\n> \n> Well, I can see that this is a second independent complain after a few\n> months.\n\n> If you wish to keep this capability, wouldn't it be better to\n> add a \"regress\" mode to compute_query_id, where we would mask\n> automatically this information in the output of EXPLAIN but still run\n> the computation?\n\nDid you just realize you had some script broken because of that (or have an\nactual need) or did you only try to see what setting breaks the regression\ntests? If the latter, it seems that the complaint seems a bit artificial.\n\nNo one complained about it since, and I'm assuming that pgpro could easily fix\ntheir test workflow since they didn't try to do add such a new mode.\n\nI suggested something like that in the thread but Tom didn't seemed\ninterested. 
Note that it would be much cleaner to do now that we rely on an\ninternal query_id_enabled variable rather than changing compute_query_id value,\nbut I don't know if it's really worth the effort given the number of things\nthat already breaks the regression tests.\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:22:10 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "On Tue, Feb 08, 2022 at 12:56:59PM +0900, Michael Paquier wrote:\n> Well, I can see that this is a second independent complain after a few\n> months. If you wish to keep this capability, wouldn't it be better to\n> add a \"regress\" mode to compute_query_id, where we would mask\n> automatically this information in the output of EXPLAIN but still run\n> the computation?\n\nSo, I have been looking at this problem, and I don't see a problem in\ndoing something like the attached, where we add a \"regress\" mode to\ncompute_query_id that is a synonym of \"auto\". Or, in short, we have\nthe default of letting a module decide if a query ID can be computed\nor not, at the exception that we hide its result in EXPLAIN outputs.\n\nJulien, what do you think?\n\nFWIW, about your question of upthread, I have noticed the behavior\nwhile testing, but I know of some internal customers that enable\npg_stat_statements and like doing tests on the PostgreSQL instance\ndeployed this way, so that would break. 
They are not on 14 yet as far\nas I know.\n--\nMichael", "msg_date": "Fri, 18 Feb 2022 17:22:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 18, 2022 at 05:22:36PM +0900, Michael Paquier wrote:\n>\n> So, I have been looking at this problem, and I don't see a problem in\n> doing something like the attached, where we add a \"regress\" mode to\n> compute_query_id that is a synonym of \"auto\". Or, in short, we have\n> the default of letting a module decide if a query ID can be computed\n> or not, at the exception that we hide its result in EXPLAIN outputs.\n>\n> Julien, what do you think?\n\nI don't see any problem with that patch.\n>\n> FWIW, about your question of upthread, I have noticed the behavior\n> while testing, but I know of some internal customers that enable\n> pg_stat_statements and like doing tests on the PostgreSQL instance\n> deployed this way, so that would break. They are not on 14 yet as far\n> as I know.\n\nAre they really testing EXPLAIN output, or just doing application-level tests?\n\n\n", "msg_date": "Fri, 18 Feb 2022 17:38:56 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "On Fri, Feb 18, 2022 at 05:38:56PM +0800, Julien Rouhaud wrote:\n> On Fri, Feb 18, 2022 at 05:22:36PM +0900, Michael Paquier wrote:\n>> So, I have been looking at this problem, and I don't see a problem in\n>> doing something like the attached, where we add a \"regress\" mode to\n>> compute_query_id that is a synonym of \"auto\". 
Or, in short, we have\n>> the default of letting a module decide if a query ID can be computed\n>> or not, at the exception that we hide its result in EXPLAIN outputs.\n>>\n>> Julien, what do you think?\n> \n> I don't see any problem with that patch.\n\nThanks for looking at it. An advantage of this version is that it is\nbackpatchable as the option enum won't break any ABI compatibility, so\nthat's a win-win.\n\n>> FWIW, about your question of upthread, I have noticed the behavior\n>> while testing, but I know of some internal customers that enable\n>> pg_stat_statements and like doing tests on the PostgreSQL instance\n>> deployed this way, so that would break. They are not on 14 yet as far\n>> as I know.\n> \n> Are they really testing EXPLAIN output, or just doing application-level tests?\n\nWith the various internal customers, both. And installcheck implies\nEXPLAIN outputs.\n\nFirst, let's wait a couple of days and see if anyone has an extra\nopinion to offer on this topic.\n--\nMichael", "msg_date": "Sat, 19 Feb 2022 08:25:33 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "On Fri, Feb 18, 2022 at 05:38:56PM +0800, Julien Rouhaud wrote:\n> On Fri, Feb 18, 2022 at 05:22:36PM +0900, Michael Paquier wrote:\n>> So, I have been looking at this problem, and I don't see a problem in\n>> doing something like the attached, where we add a \"regress\" mode to\n>> compute_query_id that is a synonym of \"auto\". Or, in short, we have\n>> the default of letting a module decide if a query ID can be computed\n>> or not, at the exception that we hide its result in EXPLAIN outputs.\n>>\n>> Julien, what do you think?\n> \n> I don't see any problem with that patch.\n\nOkay, thanks. 
I have worked on that today and applied the patch down\nto 14, but without the change to enforce the parameter in the\ndatabases created by pg_regress. Perhaps we should do that in the\nlong-term, but it is also possible to pass down the option to\nPGOPTIONS via command line as compute_query_id is a SUSET or just set\nit in postgresql.conf, and being able to do so is enough to address\nall the cases I have seen and/or could think about.\n--\nMichael", "msg_date": "Tue, 22 Feb 2022 11:04:16 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" }, { "msg_contents": "On Tue, Feb 22, 2022 at 11:04:16AM +0900, Michael Paquier wrote:\n> On Fri, Feb 18, 2022 at 05:38:56PM +0800, Julien Rouhaud wrote:\n> > On Fri, Feb 18, 2022 at 05:22:36PM +0900, Michael Paquier wrote:\n> >> So, I have been looking at this problem, and I don't see a problem in\n> >> doing something like the attached, where we add a \"regress\" mode to\n> >> compute_query_id that is a synonym of \"auto\". Or, in short, we have\n> >> the default of letting a module decide if a query ID can be computed\n> >> or not, at the exception that we hide its result in EXPLAIN outputs.\n\n> Okay, thanks. 
I have worked on that today and applied the patch down\n\nWhile rebasing, I noticed that this patch does part of what I added in another\nthread.\n\nhttps://commitfest.postgresql.org/37/3409/\n| explain_regress, explain(MACHINE), and default to explain(BUFFERS)\n\n- if (es->verbose && plannedstmt->queryId != UINT64CONST(0))\n+ if (es->verbose && plannedstmt->queryId != UINT64CONST(0) && es->machine)\n {\n /*\n * Output the queryid as an int64 rather than a uint64 so we match\n\nApparently, I first wrote this two years ago today.\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 22 Feb 2022 20:38:28 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: shared_preload_libraries = 'pg_stat_statements' failing with\n installcheck (compute_query_id = auto)" } ]
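The behavior settled on in this thread — a `compute_query_id = regress` value that is a synonym of `auto` for deciding whether a query ID gets computed, while hiding the identifier in EXPLAIN output — can be sketched in C. The type and function names below are illustrative only, not PostgreSQL's actual code; the EXPLAIN guard loosely mirrors the `es->verbose && queryId != 0` check Justin quotes:

```c
/* Sketch of the "regress" mode semantics discussed above.  Hypothetical
 * names; not the actual PostgreSQL implementation. */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef enum
{
	CQI_OFF,
	CQI_ON,
	CQI_AUTO,
	CQI_REGRESS					/* behaves like CQI_AUTO, but hidden */
} ComputeQueryIdMode;

/* Would a query ID be computed?  With "auto" (and therefore "regress"),
 * a module such as pg_stat_statements decides by requesting it. */
bool
query_id_computed(ComputeQueryIdMode mode, bool module_requests_it)
{
	switch (mode)
	{
		case CQI_ON:
			return true;
		case CQI_OFF:
			return false;
		default:				/* auto and regress */
			return module_requests_it;
	}
}

/* Should EXPLAIN VERBOSE print "Query Identifier"?  Under "regress" the
 * result is suppressed so regression test output stays stable. */
bool
explain_shows_query_id(ComputeQueryIdMode mode, bool verbose, uint64_t queryId)
{
	return verbose && queryId != UINT64_C(0) && mode != CQI_REGRESS;
}
```

With these semantics, pg_stat_statements still gets a query ID under `regress` exactly as under `auto`, which is why the option can be backpatched without breaking the enum's existing values.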
[ { "msg_contents": "Hi! my name is Mario Mercado. I'm from California western pacific time. I\nwould like to participate in your projects. I'm a software developer from\nCalifornia.", "msg_date": "Mon, 7 Feb 2022 21:12:57 -0800", "msg_from": "luminate 22 <mario2mercadox2@gmail.com>", "msg_from_op": true, "msg_subject": "I would like to participate for postgresql projects" }, { "msg_contents": "> On 8 Feb 2022, at 06:12, luminate 22 <mario2mercadox2@gmail.com> wrote:\n> \n> Hi! my name is Mario Mercado. I'm from California western pacific time. I would like to participate in your projects. I'm a software developer from California.\n\nWelcome aboard, everyone is welcome. I would recommend starting with cloning\nthe repository and getting familiar with it by building and running tests etc:\n\n    https://www.postgresql.org/docs/devel/git.html \n    https://www.postgresql.org/docs/devel/installation.html\n\nFinding an interesting project to hack on is quite subjective, but reviewing\nwhat others have done and looking at what's in flight might bring some ideas to\nmind:\n\n    https://commitfest.postgresql.org/37/\n\nAnd, most important: have fun!\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 14:21:24 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: I would like to participate for postgresql projects" }, { "msg_contents": "On Wed, Feb 9, 2022 at 6:52 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> > On 8 Feb 2022, at 06:12, luminate 22 <mario2mercadox2@gmail.com> wrote:\n> >\n> > Hi! my name is Mario Mercado. I'm from California western pacific time.\n> I would like to participate in your projects. I'm a software developer from\n> California.\n>\n> Welcome aboard, everyone is welcome. 
I would recommend starting with\n> cloning\n> the repository and getting familiar with it by building and running tests\n> etc:\n>\n>     https://www.postgresql.org/docs/devel/git.html\n>     https://www.postgresql.org/docs/devel/installation.html\n>\n> Finding an interesting project to hack on is quite subjective, but\n> reviewing\n> what others have done and looking at what's in flight might bring some\n> ideas to\n> mind:\n>\n>     https://commitfest.postgresql.org/37/\n\nYou may also want to visit the developer FAQ.\nhttps://wiki.postgresql.org/wiki/Developer_FAQ\n\nHave fun!\n\n-- Ashesh Vashi\n\n>\n>\n> And, most important: have fun!\n>\n> --\n> Daniel Gustafsson          https://vmware.com/\n>\n>\n>\n>\n", "msg_date": "Wed, 9 Feb 2022 18:58:30 +0530", "msg_from": "Ashesh Vashi <ashesh.vashi@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: I would like to participate for postgresql projects" } ]
[ { "msg_contents": "Hello,\n\nI was wondering if pg_restore should call VACUUM ANALYZE for all tables, \nafter the \"COPY\" stage, and before the \"post-data\" stage.\n\nIndeed, without such a VACUUM, the visibility map isn't available. \nDepending on the size of the tables and on the configuration, a foreign \nkey constraint check would have to perform either:\n\n * a seq scan of the referencing table, even if an index exists for it;\n * a index-only scan, with 100% of heap fetches.\n\nAnd it's the same for the referenced table, of course, excepts the index \nalways exists.\n\nIn both cases, it could be very slow.\n\nWhat's more, the seq scan isn't parallelized [1].\n\nIt seems to me that in most cases, autovacuum doesn't have enough time \nto process the tables before the post-data stage, or only some of them.\n\nWhat do you think?\n\nIf the matter has already been discussed previously, can you point me to \nthe thread discussing it?\n\nBest regards,\nFrédéric\n\n[1] \nhttps://www.postgresql.org/message-id/flat/0d21e3b4-dcde-290c-875e-6ed5013e8e52%40dalibo.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 11:21:25 +0100", "msg_from": "=?UTF-8?Q?Fr=c3=a9d=c3=a9ric_Yhuel?= <frederic.yhuel@dalibo.com>", "msg_from_op": true, "msg_subject": "Should pg_restore vacuum the tables before the post-data stage?" } ]
[ { "msg_contents": "Here I'm starting a new thread to discuss a topic that's related to the\nTransparent Data Encryption (TDE), but could be useful even without that. The\nproblem has been addressed somehow in the Cybertec TDE fork, and I can post\nthe code here if it helps. However, after reading [1] (and the posts\nupthread), I've got another idea, so let's try to discuss it first.\n\nIt makes sense to me if we first implement the buffering (i.e. writing/reading\ncertain amount of data at a time) and make the related functions aware of\nencryption later: as long as we use a block cipher, we also need to read/write\n(suitably sized) chunks rather than individual bytes (or arbitrary amounts of\ndata). (In theory, someone might need encryption but reject buffering, but I'm\nnot sure if this is a realistic use case.)\n\nFor the buffering, I imagine a \"file stream\" object that user creates on the\ntop of a file descriptor, such as\n\n FileStream *FileStreamCreate(File file, int buffer_size)\n\n or\n\n FileStream *FileStreamCreateFD(int fd, int buffer_size)\n\nand uses functions like\n\n int FileStreamWrite(FileStream *stream, char *buffer, int amount)\n\n and\n\n int FileStreamRead(FileStream *stream, char *buffer, int amount)\n\nto write and read data respectively.\n\nBesides functions to close the streams explicitly (e.g. FileStreamClose() /\nFileStreamFDClose()), we'd need to ensure automatic closing where that happens\nto the file. For example, if OpenTemporaryFile() was used to obtain the file\ndescriptor, the user expects that the file will be closed and deleted on\ntransaction boundary, so the corresponding stream should be freed\nautomatically as well.\n\nTo avoid code duplication, buffile.c should use these streams internally as\nwell, as it also performs buffering. 
(Here we'd also need functions to change\nreading/writing position.)\n\nOnce we implement the encryption, we might need add an argument to the\nFileStreamCreate...() functions that helps to generate an unique IV, but the\n...Read() / ...Write() functions would stay intact. And possibly one more\nargument to specify the kind of cipher, in case we support more than one.\n\nI think that's enough to start the discussion. Thanks for feedback in advance.\n\n[1] https://www.postgresql.org/message-id/CA%2BTgmoYGjN_f%3DFCErX49bzjhNG%2BGoctY%2Ba%2BXhNRWCVvDY8U74w%40mail.gmail.com\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 08 Feb 2022 13:24:58 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Temporary file access API" }, { "msg_contents": "Greetings,\n\n* Antonin Houska (ah@cybertec.at) wrote:\n> Here I'm starting a new thread to discuss a topic that's related to the\n> Transparent Data Encryption (TDE), but could be useful even without that. The\n> problem has been addressed somehow in the Cybertec TDE fork, and I can post\n> the code here if it helps. However, after reading [1] (and the posts\n> upthread), I've got another idea, so let's try to discuss it first.\n\nYeah, I tend to agree that it makes sense to discuss the general idea of\ndoing buffered I/O for the temporary files that are being created today\nwith things like fwrite and have that be independent of encryption. As\nmentioned on that thread, doing our own buffering and having file\naccesses done the same way throughout the system is just generally a\ngood thing and focusing on that will certainly help with any TDE effort\nthat may or may not happen later.\n\n> It makes sense to me if we first implement the buffering (i.e. 
writing/reading\n> certain amount of data at a time) and make the related functions aware of\n> encryption later: as long as we use a block cipher, we also need to read/write\n> (suitably sized) chunks rather than individual bytes (or arbitrary amounts of\n> data). (In theory, someone might need encryption but reject buffering, but I'm\n> not sure if this is a realistic use case.)\n\nAgreed.\n\n> For the buffering, I imagine a \"file stream\" object that user creates on the\n> top of a file descriptor, such as\n> \n> FileStream *FileStreamCreate(File file, int buffer_size)\n> \n> or\n> \n> FileStream *FileStreamCreateFD(int fd, int buffer_size)\n> \n> and uses functions like\n> \n> int FileStreamWrite(FileStream *stream, char *buffer, int amount)\n> \n> and\n> \n> int FileStreamRead(FileStream *stream, char *buffer, int amount)\n> \n> to write and read data respectively.\n\nYeah, something along these lines was also what I had in mind, in\nparticular as it would mean fewer changes to places like reorderbuffer.c\n(or, at least, very clear changes).\n\n> Besides functions to close the streams explicitly (e.g. FileStreamClose() /\n> FileStreamFDClose()), we'd need to ensure automatic closing where that happens\n> to the file. For example, if OpenTemporaryFile() was used to obtain the file\n> descriptor, the user expects that the file will be closed and deleted on\n> transaction boundary, so the corresponding stream should be freed\n> automatically as well.\n\nSure. We do have some places that want to write using offsets and we'll\nwant to support that also, I'd think.\n\n> To avoid code duplication, buffile.c should use these streams internally as\n> well, as it also performs buffering. (Here we'd also need functions to change\n> reading/writing position.)\n\nYeah... 
though we should really go through and make sure that we\nunderstand the use-cases for each of the places that decided to use\ntheir own code rather than what already existed, to the extent that we\ncan figure that out, and make sure that we're solving those cases and\nnot writing a bunch of new code that won't end up getting used. We\nreally want to be sure that all writes are going through these code\npaths and make sure there aren't any reasons for them not to be used.\n\n> Once we implement the encryption, we might need add an argument to the\n> FileStreamCreate...() functions that helps to generate an unique IV, but the\n> ...Read() / ...Write() functions would stay intact. And possibly one more\n> argument to specify the kind of cipher, in case we support more than one.\n\nAs long as we're only needing to pass these into the Create function,\nthat seems reasonable to me. I wouldn't want to go through the effort\nof this and then still have to change every single place we read/write\nusing this system.\n\nThanks,\n\nStephen", "msg_date": "Tue, 1 Mar 2022 09:34:31 -0500", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Stephen Frost <sfrost@snowman.net> wrote:\n\n> * Antonin Houska (ah@cybertec.at) wrote:\n> > Here I'm starting a new thread to discuss a topic that's related to the\n> > Transparent Data Encryption (TDE), but could be useful even without that. The\n> > problem has been addressed somehow in the Cybertec TDE fork, and I can post\n> > the code here if it helps. However, after reading [1] (and the posts\n> > upthread), I've got another idea, so let's try to discuss it first.\n> \n> Yeah, I tend to agree that it makes sense to discuss the general idea of\n> doing buffered I/O for the temporary files that are being created today\n> with things like fwrite and have that be independent of encryption. 
As\n> mentioned on that thread, doing our own buffering and having file\n> accesses done the same way throughout the system is just generally a\n> good thing and focusing on that will certainly help with any TDE effort\n> that may or may not happen later.\n\nThanks for your comments, the initial version is attached here.\n\nThe buffer size is not configurable yet. Besides the new API itself, the patch\nseries shows how it can be used to store logical replication data changes as\nwell as statistics. Further comments are welcome, thanks in advance.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Tue, 08 Mar 2022 12:14:59 +0100", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Tue, Mar 8, 2022 at 6:12 AM Antonin Houska <ah@cybertec.at> wrote:\n> Thanks for your comments, the initial version is attached here.\n\nI've been meaning to look at this thread for some time but have not\nfound enough time to do that until just now. And now I have\nquestions...\n\n1. Supposing we accepted this, how widely do you think that we could\nadopt it? I see that this patch set adapts a few places to use it and\nthat's nice, but I have a feeling there's a lot more places that are\nmaking use of system calls directly, or through some other\nabstraction, than just this. I'm not sure that we actually want to use\nsomething like this everywhere, but what would be the rule for\ndeciding where to use it and where not to use\nit? If the plan for this facility is just to adapt these two\nparticular places to use it, that doesn't feel like enough to be\nworthwhile.\n\n2. What about frontend code? Most frontend code won't examine data\nfiles directly, but at least pg_controldata, pg_checksums, and\npg_resetwal are exceptions.\n\n3. How would we extend this to support encryption? 
Add an extra\nargument to BufFileStreamInit(V)FD passing down the encryption\ndetails?\n\nThere are some smaller things about the patch with which I'm not 100%\ncomfortable, but I'd like to start by understanding the big picture.\n\nThanks,\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 15:29:38 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Mar 8, 2022 at 6:12 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Thanks for your comments, the initial version is attached here.\n> \n> I've been meaning to look at this thread for some time but have not\n> found enough time to do that until just now. And now I have\n> questions...\n> \n> 1. Supposing we accepted this, how widely do you think that we could\n> adopt it? I see that this patch set adapts a few places to use it and\n> that's nice, but I have a feeling there's a lot more places that are\n> making use of system calls directly, or through some other\n> abstraction, than just this. I'm not sure that we actually want to use\n> something like this everywhere, but what would be the rule for\n> deciding where to use it and where not to use\n> it? If the plan for this facility is just to adapt these two\n> particular places to use it, that doesn't feel like enough to be\n> worthwhile.\n\nAdmittedly I viewed the problem from the perspective of the TDE, so I haven't\nspent much time looking for other opportunities. Now, with the stats collector\nusing shared memory, even one of the use cases implemented here no longer\nexists. I need to do more research.\n\nDo you think that the use of a system call is a problem itself (e.g. because\nthe code looks less simple if read/write is used somewhere and fread/fwrite\nelsewhere; of course of read/write is mandatory in special cases like WAL,\nheap pages, etc.) 
or is the problem that the system calls are used too\nfrequently? I suppose only the latter.\n\nAnyway, I'm not sure there are *many* places where system calls are used too\nfrequently. Instead, the coding uses to be such that the information is first\nassembled in memory and then written to file at once. So the value of the\n(buffered) stream is that it makes the code simpler (eliminates the need to\nprepare the data in memory). That's what I tried to do for reorderbuffer.c and\npgstat.c in my patch.\n\nRelated question is whether we should try to replace some uses of the libc\nstream (FILE *) at some places. You seem to suggest that in [1]. One example\nis snapmgr.c:ExportSnapshot(), if we also implement output formatting. Of\ncourse there are places where (FILE *) cannot be replaced because, besides\nregular file, the code needs to work with stdin/stdout in general. (Parsing of\nconfiguration files falls into this category, but that doesn't matter because\nbison-generated parser seems to implement buffering anyway.)\n\n> 2. What about frontend code? Most frontend code won't examine data\n> files directly, but at least pg_controldata, pg_checksums, and\n> pg_resetwal are exceptions.\n\nIf the frequency of using system calls is the problem, then I wouldn't change\nthese because ControlFileData structure needs to be initialized in memory\nanyway and then written at once. And pg_checksums reads whole blocks\nanyway. I'll take a closer look.\n\n> 3. How would we extend this to support encryption? 
Add an extra\n> argument to BufFileStreamInit(V)FD passing down the encryption\n> details?\n\nYes.\n\n> There are some smaller things about the patch with which I'm not 100%\n> comfortable, but I'd like to start by understanding the big picture.\n\nThanks!\n\n[1] https://www.postgresql.org/message-id/CA+TgmoYGjN_f=FCErX49bzjhNG+GoctY+a+XhNRWCVvDY8U74w@mail.gmail.com\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Fri, 08 Apr 2022 10:54:28 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Fri, Apr 8, 2022 at 4:54 AM Antonin Houska <ah@cybertec.at> wrote:\n> Do you think that the use of a system call is a problem itself (e.g. because\n> the code looks less simple if read/write is used somewhere and fread/fwrite\n> elsewhere; of course of read/write is mandatory in special cases like WAL,\n> heap pages, etc.) or is the problem that the system calls are used too\n> frequently? I suppose only the latter.\n\nI'm not really super-interested in reducing the number of system\ncalls. It's not a dumb thing in which to be interested and I know that\nfor example Thomas Munro is very interested in it and has done a bunch\nof work in that direction just to improve performance. But for me the\nattraction of this is mostly whether it gets us closer to TDE.\n\nAnd that's why I'm asking these questions about adopting it in\ndifferent places. I kind of thought that your previous patches needed\nto encrypt, I don't know, 10 or 20 different kinds of files. So I was\nsurprised to see this patch touching the handling of only 2 kinds of\nfiles. 
If we consolidate the handling of let's say 15 of 20 cases into\na single mechanism, we've really moved the needle in the right\ndirection -- but consolidating the handling of 2 of 20 cases, or\nwhatever the real numbers are, isn't very exciting.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 8 Apr 2022 12:01:22 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Apr 8, 2022 at 4:54 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Do you think that the use of a system call is a problem itself (e.g. because\n> > the code looks less simple if read/write is used somewhere and fread/fwrite\n> > elsewhere; of course of read/write is mandatory in special cases like WAL,\n> > heap pages, etc.) or is the problem that the system calls are used too\n> > frequently? I suppose only the latter.\n> \n> I'm not really super-interested in reducing the number of system\n> calls. It's not a dumb thing in which to be interested and I know that\n> for example Thomas Munro is very interested in it and has done a bunch\n> of work in that direction just to improve performance. But for me the\n> attraction of this is mostly whether it gets us closer to TDE.\n> \n> And that's why I'm asking these questions about adopting it in\n> different places. I kind of thought that your previous patches needed\n> to encrypt, I don't know, 10 or 20 different kinds of files. So I was\n> surprised to see this patch touching the handling of only 2 kinds of\n> files. 
If we consolidate the handling of let's say 15 of 20 cases into\n> a single mechanism, we've really moved the needle in the right\n> direction -- but consolidating the handling of 2 of 20 cases, or\n> whatever the real numbers are, isn't very exciting.\n\nThere aren't really that many kinds of files to encrypt:\n\nhttps://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n\n(And pg_stat/* files should be removed from the list.)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 10:06:04 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Mon, Apr 11, 2022 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:\n> There aren't really that many kinds of files to encrypt:\n>\n> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n>\n> (And pg_stat/* files should be removed from the list.)\n\nThis kind of gets into some theoretical questions. Like, do we think\nthat it's an information leak if people can look at how many\ntransactions are committing and aborting in pg_xact_status? In theory\nit could be, but I know it's been argued that that's too much of a\nside channel. I'm not sure I believe that, but it's arguable.\nSimilarly, the argument that global/pg_internal.init doesn't contain\nuser data relies on the theory that the only table data that will make\nits way into the file is for system catalogs. I guess that's not user\ndata *exactly* but ... 
are we sure that's how we want to roll here?\n\nI really don't know how you can argue that pg_dynshmem/mmap.NNNNNNN\ndoesn't contain user data - surely it can.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 11 Apr 2022 16:34:18 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Mon, Apr 11, 2022 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:\n> > There are't really that many kinds of files to encrypt:\n> >\n> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n> >\n> > (And pg_stat/* files should be removed from the list.)\n> \n> This kind of gets into some theoretical questions. Like, do we think\n> that it's an information leak if people can look at how many\n> transactions are committing and aborting in pg_xact_status? In theory\n> it could be, but I know it's been argued that that's too much of a\n> side channel. I'm not sure I believe that, but it's arguable.\n\nI was referring to the fact that the statistics are no longer stored in files:\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd\n\n> Similarly, the argument that global/pg_internal.init doesn't contain\n> user data relies on the theory that the only table data that will make\n> its way into the file is for system catalogs. I guess that's not user\n> data *exactly* but ... are we sure that's how we want to roll here?\n\nYes, this is worth attention.\n\n> I really don't know how you can argue that pg_dynshmem/mmap.NNNNNNN\n> doesn't contain user data - surely it can.\n\nGood point. Since postgres does not control writing into this file, it's a\nspecial case though. 
(Maybe TDE will have to reject to start if\ndynamic_shared_memory_type is set to mmap and the instance is encrypted.)\n\nThanks.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 12 Apr 2022 11:30:39 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Tue, Apr 12, 2022 at 5:30 AM Antonin Houska <ah@cybertec.at> wrote:\n> Robert Haas <robertmhaas@gmail.com> wrote:\n> > On Mon, Apr 11, 2022 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:\n> > > There are't really that many kinds of files to encrypt:\n> > >\n> > > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n> > >\n> > > (And pg_stat/* files should be removed from the list.)\n> >\n> > This kind of gets into some theoretical questions. Like, do we think\n> > that it's an information leak if people can look at how many\n> > transactions are committing and aborting in pg_xact_status? In theory\n> > it could be, but I know it's been argued that that's too much of a\n> > side channel. I'm not sure I believe that, but it's arguable.\n>\n> I was referring to the fact that the statistics are no longer stored in files:\n>\n> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd\n\nOh, yeah, I agree with that. I was saying that I'm not altogether a\nbeliever in the idea that we need not encrypt SLRU files.\n\nTBH, I think that no matter what we do there are going to be side\nchannel leaks, where some security researcher demonstrates that they\ncan infer something based on file length or file creation rate or\nsomething. It seems inevitable. But the more stuff we don't encrypt,\nthe easier those attacks are going to be. On the other hand,\nencrypting more stuff also makes the project harder. It's hard for me\nto judge what the right balance is here. 
Maybe it's OK to exclude that\nstuff for an initial version and just disclaim the heck out of it, but\nI don't think that should be our long term strategy.\n\n> > I really don't know how you can argue that pg_dynshmem/mmap.NNNNNNN\n> > doesn't contain user data - surely it can.\n>\n> Good point. Since postgres does not control writing into this file, it's a\n> special case though. (Maybe TDE will have to reject to start if\n> dynamic_shared_memory_type is set to mmap and the instance is encrypted.)\n\nInteresting point.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 12 Apr 2022 09:01:45 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Mon, 11 Apr 2022 at 10:05, Antonin Houska <ah@cybertec.at> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> > On Fri, Apr 8, 2022 at 4:54 AM Antonin Houska <ah@cybertec.at> wrote:\n> > > Do you think that the use of a system call is a problem itself (e.g. because\n> > > the code looks less simple if read/write is used somewhere and fread/fwrite\n> > > elsewhere; of course of read/write is mandatory in special cases like WAL,\n> > > heap pages, etc.) or is the problem that the system calls are used too\n> > > frequently? I suppose only the latter.\n> >\n> > I'm not really super-interested in reducing the number of system\n> > calls. It's not a dumb thing in which to be interested and I know that\n> > for example Thomas Munro is very interested in it and has done a bunch\n> > of work in that direction just to improve performance. But for me the\n> > attraction of this is mostly whether it gets us closer to TDE.\n> >\n> > And that's why I'm asking these questions about adopting it in\n> > different places. I kind of thought that your previous patches needed\n> > to encrypt, I don't know, 10 or 20 different kinds of files. 
So I was\n> > surprised to see this patch touching the handling of only 2 kinds of\n> > files. If we consolidate the handling of let's say 15 of 20 cases into\n> > a single mechanism, we've really moved the needle in the right\n> > direction -- but consolidating the handling of 2 of 20 cases, or\n> > whatever the real numbers are, isn't very exciting.\n>\n> There are't really that many kinds of files to encrypt:\n>\n> https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n>\n> (And pg_stat/* files should be removed from the list.)\n\nI was looking at that list of files that contain user data, and\nnoticed that all relation forks except the main fork were marked as\n'does not contain user data'. To me this seems not necessarily true:\nAMs do have access to forks for user data storage as well (without any\nreal issues or breaking the abstraction), and the init-fork is\nexpected to store user data (specifically in the form of unlogged\nsequences). Shouldn't those forks thus also be encrypted-by-default,\nor should we provide some other method to ensure that non-main forks\nwith user data are encrypted?\n\n\n- Matthias\n\n\n", "msg_date": "Tue, 12 Apr 2022 17:00:24 +0200", "msg_from": "Matthias van de Meent <boekewurm+postgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Matthias van de Meent <boekewurm+postgres@gmail.com> wrote:\n\n> On Mon, 11 Apr 2022 at 10:05, Antonin Houska <ah@cybertec.at> wrote:\n> >\n> > There are't really that many kinds of files to encrypt:\n> >\n> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n> >\n> > (And pg_stat/* files should be removed from the list.)\n> \n> I was looking at that list of files that contain user data, and\n> noticed that all relation forks except the main fork were marked as\n> 'does not contain user data'. 
To me this seems not necessarily true:\n> AMs do have access to forks for user data storage as well (without any\n> real issues or breaking the abstraction), and the init-fork is\n> expected to store user data (specifically in the form of unlogged\n> sequences). Shouldn't those forks thus also be encrypted-by-default,\n> or should we provide some other method to ensure that non-main forks\n> with user data are encrypted?\n\nThanks. I've updated the wiki page (also included Robert's comments).\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Wed, 13 Apr 2022 08:30:28 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Apr 12, 2022 at 5:30 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Mon, Apr 11, 2022 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:\n> > > > There are't really that many kinds of files to encrypt:\n> > > >\n> > > > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n> > > >\n> > > > (And pg_stat/* files should be removed from the list.)\n> > >\n> > > This kind of gets into some theoretical questions. Like, do we think\n> > > that it's an information leak if people can look at how many\n> > > transactions are committing and aborting in pg_xact_status? In theory\n> > > it could be, but I know it's been argued that that's too much of a\n> > > side channel. I'm not sure I believe that, but it's arguable.\n> >\n> > I was referring to the fact that the statistics are no longer stored in files:\n> >\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd\n> \n> Oh, yeah, I agree with that. 
I was saying that I'm not altogether a\n> believer in the idea that we need not encrypt SLRU files.\n> \n> TBH, I think that no matter what we do there are going to be side\n> channel leaks, where some security researcher demonstrates that they\n> can infer something based on file length or file creation rate or\n> something. It seems inevitable. But the more stuff we don't encrypt,\n> the easier those attacks are going to be. On the other hand,\n> encrypting more stuff also makes the project harder. It's hard for me\n> to judge what the right balance is here. Maybe it's OK to exclude that\n> stuff for an initial version and just disclaim the heck out of it, but\n> I don't think that should be our long term strategy.\n\nI agree that there are undoubtedly going to be side-channel attacks, some\nof which we may never be able to address, but we should be looking to\ntry and do as much as we can while also disclaiming that which we can't.\n\nI'd lay this out in steps along these lines:\n\n- Primary data is encrypted (and, ideally, optionally authenticated)\n\nThis is basically what the current state of things is with the\npatches that have been posted in the past and would be a fantastic first\nstep that would get us past a lot of the auditors and others who are\nunhappy that they can simply 'grep' a PG data directory for user data.\nThis also generally addresses off-line attacks, backups, etc. 
This also\nputs us on a similar level as other vendors who offer TDE, as I\nunderstand it.\n\n- Encryption / authentication of metadata (SLRUs, maybe more)\n\nThis would be a great next step and would move us in the direction of\nhaving a system that's resilient against online storage-level attacks.\nAs for how to get there, I would think we'd start with coming up with a\nway to at least have good checksums for SLRUs that are just generally\navailable and users are encouraged to enable, and then have that done in\na way that we could store an authentication tag instead in the future.\n\n> > > I really don't know how you can argue that pg_dynshmem/mmap.NNNNNNN\n> > > doesn't contain user data - surely it can.\n> >\n> > Good point. Since postgres does not control writing into this file, it's a\n> > special case though. (Maybe TDE will have to reject to start if\n> > dynamic_shared_memory_type is set to mmap and the instance is encrypted.)\n> \n> Interesting point.\n\nYeah, this is an interesting case and it really depends on what attack\nvector is being considered and how the system is configured. I don't\nthink it's too much of an issue to say that you shouldn't use TDE with\nmmap (I'm not huge on the idea of explicitly preventing someone from\ndoing it if they really want though..).\n\nThanks,\n\nStephen", "msg_date": "Wed, 13 Apr 2022 11:49:16 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Mon, Apr 11, 2022 at 04:34:18PM -0400, Robert Haas wrote:\n> On Mon, Apr 11, 2022 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:\n> > There are't really that many kinds of files to encrypt:\n> >\n> > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n> >\n> > (And pg_stat/* files should be removed from the list.)\n> \n> This kind of gets into some theoretical questions. 
Like, do we think\n> that it's an information leak if people can look at how many\n> transactions are committing and aborting in pg_xact_status? In theory\n> it could be, but I know it's been argued that that's too much of a\n> side channel. I'm not sure I believe that, but it's arguable.\n> Similarly, the argument that global/pg_internal.init doesn't contain\n> user data relies on the theory that the only table data that will make\n> its way into the file is for system catalogs. I guess that's not user\n> data *exactly* but ... are we sure that's how we want to roll here?\n\nI don't think we want to be encrypting pg_xact/, so they can get the\ntransaction commit rate from there.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. Mark Batterson\n\n\n\n", "msg_date": "Wed, 13 Apr 2022 18:25:31 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Wed, Apr 13, 2022 at 6:25 PM Bruce Momjian <bruce@momjian.us> wrote:\n> I don't think we want to be encrypting pg_xact/, so they can get the\n> transaction commit rate from there.\n\nI think it would be a good idea to eventually encrypt SLRU data,\nincluding pg_xact. Maybe not in the first version.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 13 Apr 2022 18:54:13 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Greetings,\n\nOn Wed, Apr 13, 2022 at 18:54 Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Apr 13, 2022 at 6:25 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I don't think we want to be encrypting pg_xact/, so they can get the\n> > transaction commit rate from there.\n>\n> I think it would be a good idea to eventually encrypt SLRU data,\n> including pg_xact. 
Maybe not in the first version.\n\n\nI agree with Robert, on both parts.\n\nI do think getting checksums for that data may be the first sensible step.\n\nThanks,\n\nStephen", "msg_date": "Wed, 13 Apr 2022 19:10:16 -0400", "msg_from": "Stephen Frost <sfrost@snowman.net>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Wed, Apr 13, 2022 at 06:54:13PM -0400, Robert Haas wrote:\n> On Wed, Apr 13, 2022 at 6:25 PM Bruce Momjian <bruce@momjian.us> wrote:\n> > I don't think we want to be encrypting pg_xact/, so they can get the\n> > transaction commit rate from there.\n> \n> I think it would be a good idea to eventually encrypt SLRU data,\n> including pg_xact. Maybe not in the first version.\n\nI assume that would be very hard or slow, and of limited value.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Wed, 13 Apr 2022 19:18:37 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Tue, Apr 12, 2022 at 5:30 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Robert Haas <robertmhaas@gmail.com> wrote:\n> > > On Mon, Apr 11, 2022 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:\n> > > > There are't really that many kinds of files to encrypt:\n> > > >\n> > > > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n> > > >\n> > > > (And pg_stat/* files should be removed from the list.)\n> > >\n> > > This kind of gets into some theoretical questions. Like, do we think\n> > > that it's an information leak if people can look at how many\n> > > transactions are committing and aborting in pg_xact_status? In theory\n> > > it could be, but I know it's been argued that that's too much of a\n> > > side channel. I'm not sure I believe that, but it's arguable.\n> >\n> > I was referring to the fact that the statistics are no longer stored in files:\n> >\n> > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd\n> \n> Oh, yeah, I agree with that.\n\nI see now that the statistics are still saved to a file on server shutdown. I've\nupdated the wiki page.\n\nAttached is a new version of the patch, to evaluate what the API use in the\nbackend could look like. I haven't touched places where the file is accessed\nin a non-trivial way, e.g. lseek() / fseek() or pg_pwrite() / pg_pread() is\ncalled.\n\nAnother use case might be copying one file to another via a buffer. 
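For illustration, the core of such a buffered copy might look like the sketch below (a hypothetical helper, not code from the attached patches; the real API would presumably raise ereport() at the caller's elevel instead of returning -1):

```c
#include <errno.h>
#include <unistd.h>

/*
 * Copy everything from srcfd to dstfd through a buffer of (at most)
 * bufsize bytes.  Returns the number of bytes copied, or -1 on error
 * with errno set.  Illustrative only.
 */
static ssize_t
copy_fd(int dstfd, int srcfd, size_t bufsize)
{
	char		buf[8192];
	ssize_t		total = 0;

	if (bufsize == 0 || bufsize > sizeof(buf))
		bufsize = sizeof(buf);

	for (;;)
	{
		ssize_t		nread = read(srcfd, buf, bufsize);
		ssize_t		off = 0;

		if (nread == 0)
			return total;		/* EOF */
		if (nread < 0)
		{
			if (errno == EINTR)
				continue;		/* retry interrupted read */
			return -1;
		}

		while (off < nread)
		{
			/* write() may be partial; keep going until the chunk is out */
			ssize_t		nwritten = write(dstfd, buf + off, nread - off);

			if (nwritten < 0)
			{
				if (errno == EINTR)
					continue;
				return -1;
			}
			off += nwritten;
		}
		total += nread;
	}
}
```

The patch would hide this loop behind a single entry point. 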
Something\nlike\n\n\tBufFileCopy(int dstfd, int srcfd, int bufsize)\n\nThe obvious call site would be in copydir.c:copy_file(), but I think there are\na few more in the server code.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Wed, 20 Apr 2022 17:00:26 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> > On Tue, Apr 12, 2022 at 5:30 AM Antonin Houska <ah@cybertec.at> wrote:\n> > > Robert Haas <robertmhaas@gmail.com> wrote:\n> > > > On Mon, Apr 11, 2022 at 4:05 AM Antonin Houska <ah@cybertec.at> wrote:\n> > > > > There are't really that many kinds of files to encrypt:\n> > > > >\n> > > > > https://wiki.postgresql.org/wiki/Transparent_Data_Encryption#List_of_the_files_that_contain_user_data\n> > > > >\n> > > > > (And pg_stat/* files should be removed from the list.)\n> > > >\n> > > > This kind of gets into some theoretical questions. Like, do we think\n> > > > that it's an information leak if people can look at how many\n> > > > transactions are committing and aborting in pg_xact_status? In theory\n> > > > it could be, but I know it's been argued that that's too much of a\n> > > > side channel. I'm not sure I believe that, but it's arguable.\n> > >\n> > > I was referring to the fact that the statistics are no longer stored in files:\n> > >\n> > > https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=5891c7a8ed8f2d3d577e7eea34dacff12d7b6bbd\n> > \n> > Oh, yeah, I agree with that.\n> \n> I see now that the statistics are yet saved to a file on server shutdown. I've\n> updated the wiki page.\n> \n> Attached is a new version of the patch, to evaluate what the API use in the\n> backend could look like. I haven't touched places where the file is accessed\n> in a non-trivial way, e.g. 
lseek() / fseek() or pg_pwrite() / pg_pread() is\n> called.\n\nRebased patch set is attached here, which applies to the current master.\n\n(A few more opportunities for the new API implemented here.)\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Mon, 04 Jul 2022 18:35:46 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On 04.07.22 18:35, Antonin Houska wrote:\n>> Attached is a new version of the patch, to evaluate what the API use in the\n>> backend could look like. I haven't touched places where the file is accessed\n>> in a non-trivial way, e.g. lseek() / fseek() or pg_pwrite() / pg_pread() is\n>> called.\n> Rebased patch set is attached here, which applies to the current master.\n> \n> (A few more opportunities for the new API implemented here.)\n\nI don't understand what this patch set is supposed to do. AFAICT, the \nthread originally forked off from a TDE discussion and, considering the \nthread subject, was possibly once an attempt to refactor temporary file \naccess to make integrating encryption easier? The discussion then \ntalked about things like saving on system calls and excising all \nOS-level file access API use, which I found confusing, and the thread \nthen just became a general TDE-related mini-discussion.\n\nThe patches at hand extend some internal file access APIs and then \nsprinkle them around various places in the backend code, but there is no \nexplanation why or how this is better. I don't see any real structural \nsavings one might expect from a refactoring patch. No information has \nbeen presented how this might help encryption.\n\nI also suspect that changing around the use of various file access APIs \nneeds to be carefully evaluated in each individual case. Various code \nhas subtle expectations about atomic write behavior, syncing, flushing, \nerror recovery, etc. 
I don't know if this has been considered here.\n\n\n", "msg_date": "Mon, 4 Jul 2022 20:37:30 +0200", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n\n> On 04.07.22 18:35, Antonin Houska wrote:\n> >> Attached is a new version of the patch, to evaluate what the API use in the\n> >> backend could look like. I haven't touched places where the file is accessed\n> >> in a non-trivial way, e.g. lseek() / fseek() or pg_pwrite() / pg_pread() is\n> >> called.\n> > Rebased patch set is attached here, which applies to the current master.\n> > (A few more opportunities for the new API implemented here.)\n> \n> I don't understand what this patch set is supposed to do. AFAICT, the thread\n> originally forked off from a TDE discussion and, considering the thread\n> subject, was possibly once an attempt to refactor temporary file access to\n> make integrating encryption easier? The discussion then talked about things\n> like saving on system calls and excising all OS-level file access API use,\n> which I found confusing, and the thread then just became a general TDE-related\n> mini-discussion.\n\nYes, it's an attempt to make the encryption less invasive, but there are a few\nother objectives, at least: 1) to make the read / write operations \"less\nlow-level\" (there are common things like error handling which are often just\ncopy & pasted from other places), 2) to have buffered I/O with configurable\nbuffer size (the current patch version still has fixed buffer size though)\n\nIt's true that the discussion ends up to be encryption-specific, however the\nscope of the patch is broader. 
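To make objectives 1) and 2) concrete, the shape of the idea is roughly the following (a simplified sketch with made-up names, not the code of the posted patches): callers do many small writes, the data reaches the kernel in buffer-sized chunks, and the write() loop and its error handling exist in exactly one place:

```c
#include <errno.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical buffered output stream, loosely what the patch aims at. */
typedef struct SimpleStream
{
	int			fd;
	char	   *buf;
	size_t		size;			/* buffer size, chosen by the caller */
	size_t		used;
} SimpleStream;

static SimpleStream *
stream_open(int fd, size_t size)
{
	SimpleStream *s = malloc(sizeof(SimpleStream));

	s->fd = fd;
	s->buf = malloc(size);		/* sketch: allocation errors not handled */
	s->size = size;
	s->used = 0;
	return s;
}

/* The single place containing the write() loop and its error handling. */
static int
stream_flush(SimpleStream *s)
{
	size_t		off = 0;

	while (off < s->used)
	{
		ssize_t		n = write(s->fd, s->buf + off, s->used - off);

		if (n < 0)
		{
			if (errno == EINTR)
				continue;
			return -1;			/* the real API would ereport() here */
		}
		off += n;
	}
	s->used = 0;
	return 0;
}

/* Callers do many small writes; the kernel sees few large ones. */
static int
stream_write(SimpleStream *s, const void *data, size_t len)
{
	const char *p = data;

	while (len > 0)
	{
		size_t		room = s->size - s->used;
		size_t		chunk = len < room ? len : room;

		memcpy(s->buf + s->used, p, chunk);
		s->used += chunk;
		p += chunk;
		len -= chunk;
		if (s->used == s->size && stream_flush(s) < 0)
			return -1;
	}
	return 0;
}
```

With something like this, the buffer size becomes a knob the caller controls, and an encryption step would only have to touch stream_flush(). 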
The first message of the thread references a\nrelated discussion\n\nhttps://www.postgresql.org/message-id/CA+TgmoYGjN_f=FCErX49bzjhNG+GoctY+a+XhNRWCVvDY8U74w@mail.gmail.com\n\nwhich is more important for this patch than the suggestions about encryption.\n\n> The patches at hand extend some internal file access APIs and then sprinkle\n> them around various places in the backend code, but there is no explanation\n> why or how this is better. I don't see any real structural savings one might\n> expect from a refactoring patch. No information has been presented how this\n> might help encryption.\n\nAt this stage I expected feedback from the developers who have already\ncontributed to the discussion, because I'm not sure myself if this version\nfits their ideas - that's why I didn't elaborate the documentation yet. I'll\ntry to summarize my understanding in the next version, but I'd appreciate if I\ngot some feedback for the current version first.\n\n> I also suspect that changing around the use of various file access APIs needs\n> to be carefully evaluated in each individual case. Various code has subtle\n> expectations about atomic write behavior, syncing, flushing, error recovery,\n> etc. I don't know if this has been considered here.\n\nI considered that, but could have messed up at some places. Right now I'm\naware of one problem: pgstat.c does not expect the file access API to raise\nERROR - this needs to be fixed.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Tue, 05 Jul 2022 15:24:25 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Antonin Houska <ah@cybertec.at> wrote:\n\n> Peter Eisentraut <peter.eisentraut@enterprisedb.com> wrote:\n> \n> > On 04.07.22 18:35, Antonin Houska wrote:\n> > >> Attached is a new version of the patch, to evaluate what the API use in the\n> > >> backend could look like. 
I haven't touched places where the file is accessed\n> > >> in a non-trivial way, e.g. lseek() / fseek() or pg_pwrite() / pg_pread() is\n> > >> called.\n> > > Rebased patch set is attached here, which applies to the current master.\n> > > (A few more opportunities for the new API implemented here.)\n> > \n> > I don't understand what this patch set is supposed to do. AFAICT, the thread\n> > originally forked off from a TDE discussion and, considering the thread\n> > subject, was possibly once an attempt to refactor temporary file access to\n> > make integrating encryption easier? The discussion then talked about things\n> > like saving on system calls and excising all OS-level file access API use,\n> > which I found confusing, and the thread then just became a general TDE-related\n> > mini-discussion.\n> \n> Yes, it's an attempt to make the encryption less invasive, but there are a few\n> other objectives, at least: 1) to make the read / write operations \"less\n> low-level\" (there are common things like error handling which are often just\n> copy & pasted from other places), 2) to have buffered I/O with configurable\n> buffer size (the current patch version still has fixed buffer size though)\n> \n> It's true that the discussion ends up to be encryption-specific, however the\n> scope of the patch is broader. The first meassage of the thread references a\n> related discussion\n> \n> https://www.postgresql.org/message-id/CA+TgmoYGjN_f=FCErX49bzjhNG+GoctY+a+XhNRWCVvDY8U74w@mail.gmail.com\n> \n> which is more important for this patch than the suggestions about encryption.\n> \n> > The patches at hand extend some internal file access APIs and then sprinkle\n> > them around various places in the backend code, but there is no explanation\n> > why or how this is better. I don't see any real structural savings one might\n> > expect from a refactoring patch. 
No information has been presented how this\n> > might help encryption.\n> \n> At this stage I expected feedback from the developers who have already\n> contributed to the discussion, because I'm not sure myself if this version\n> fits their ideas - that's why I didn't elaborate the documentation yet. I'll\n> try to summarize my understanding in the next version, but I'd appreciate if I\n> got some feedback for the current version first.\n> \n> > I also suspect that changing around the use of various file access APIs needs\n> > to be carefully evaluated in each individual case. Various code has subtle\n> > expectations about atomic write behavior, syncing, flushing, error recovery,\n> > etc. I don't know if this has been considered here.\n> \n> I considered that, but could have messed up at some places. Right now I'm\n> aware of one problem: pgstat.c does not expect the file access API to raise\n> ERROR - this needs to be fixed.\n\nAttached is a new version. It allows the user to set \"elevel\" (i.e. ERROR is\nnot necessarily thrown on I/O failure, if the user prefers to check the number\nof bytes read/written) and to specify the buffer size. Also, 0015 adds a\nfunction to copy data from one file descriptor to another.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com", "msg_date": "Fri, 29 Jul 2022 17:37:33 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Fri, Jul 29, 2022 at 11:36 AM Antonin Houska <ah@cybertec.at> wrote:\n> Attached is a new version. It allows the user to set \"elevel\" (i.e. ERROR is\n> not necessarily thrown on I/O failure, if the user prefers to check the number\n> of bytes read/written) and to specify the buffer size. 
Also, 0015 adds a\n> function to copy data from one file descriptor to another.\n\nI basically agree with Peter Eisentraut's earlier comment: \"I don't\nunderstand what this patch set is supposed to do.\" The intended\nadvantages, as documented in buffile.c, are:\n\n+ * 1. Less code is needed to access the file.\n+ *\n+ * 2.. Buffering reduces the number of I/O system calls.\n+ *\n+ * 3. The buffer size can be controlled by the user. (The larger the buffer,\n+ * the fewer I/O system calls are needed, but the more data needs to be\n+ * written to the buffer before the user recognizes that the file access\n+ * failed.)\n+ *\n+ * 4. It should make features like Transparent Data Encryption less invasive.\n\nIt does look to me like there is a modest reduction in the total\namount of code from these patches, so I do think they might be\nachieving (1).\n\nI'm not so sure about (2), though. A number of these places are cases\nwhere we only do a single write to the file - like in patches 0010,\n0011, 0012, and 0014, for example. And even in 0013, where we do\nmultiple I/Os, it makes no sense to introduce a second buffering\nlayer. We're reading from a file into a buffer several kilobytes in\nsize and then shipping the data out. The places where you want\nbuffering are places where we might be doing system calls for a few\nbytes of I/O at a time, not places where we're already doing\nrelatively large I/Os. An extra layer of buffering probably makes\nthings slower, not faster.\n\nI don't think that (3) is a separate advantage from (1) and (2), so I\ndon't have anything separate to say about it.\n\nI find it really unclear whether, or how, it achieves (4). In general,\nI don't think that the large number of raw calls to read() and write()\nspread across the backend is a particularly good thing. 
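To illustrate the point about consolidation (schematic only, nothing from the posted patches): if this class of I/O went through one choke point, a cross-cutting feature could be wired in at a single place rather than at every call site:

```c
#include <stddef.h>
#include <unistd.h>

/*
 * Hypothetical hook applied to every buffer about to be written through
 * this layer (e.g. encryption or tracing); NULL means pass-through.
 * Simplification: transforming in place modifies the caller's copy,
 * which a real design might need to avoid.
 */
typedef void (*io_transform_hook) (void *data, size_t len);

static io_transform_hook before_write_hook = NULL;

/* The single place that actually calls write() for this class of files. */
static ssize_t
central_write(int fd, void *data, size_t len)
{
	if (before_write_hook)
		before_write_hook(data, len);
	return write(fd, data, len);
}

/* Example hook: XOR "encryption" with a fixed byte, illustration only. */
static void
xor_hook(void *data, size_t len)
{
	unsigned char *p = data;
	size_t		i;

	for (i = 0; i < len; i++)
		p[i] ^= 0x5A;
}
```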
TDE is an\nexample of a feature that might affect every byte we read or write,\nbut you can also imagine instrumentation and tracing facilities that\nmight want to be able to do something every time we do an I/O without\nhaving to modify a zillion separate call sites. Such features can\ncertainly benefit from consolidation, but I suspect it isn't helpful\nto treat every I/O in exactly the same way. For instance, let's say we\nsettle on a design where everything in the cluster is encrypted but,\njust to make things simple for the sake of discussion, let's say there\nare no stored nonces anywhere. Can we just route every I/O in the\nbackend through a wrapper layer that does encryption and decryption?\n\nProbably not. I imagine postgresql.conf and pg_hba.conf aren't going\nto be encrypted, for example, because they're intended to be edited by\nusers. And Bruce has previously proposed designs where the WAL would\nbe encrypted with a different key than the data files. Another idea,\nwhich I kind of like, is to encrypt WAL on a record level rather than\na block level. Either way, you can't just encrypt every single byte\nthe same way. There are probably other relevant distinctions: some\ndata is temporary and need not survive a server restart or be shared\nwith other backends (except maybe in the case of parallel query) and\nsome data is permanent. Some data is already block-structured using\nblocks of size BLCKSZ; some data is block-structured using some other\nblock size; and some data is not block-structured. Some data is in\nstandard-format pages that are part of the buffer pool and some data\nisn't. I feel like some of those distinctions probably matter to TDE,\nand maybe to other things that we might want to do.\n\nFor instance, to go back to my earlier comments, if the data is\nalready block-structured, we do not need to insert a second layer of\nbuffering; if it isn't, maybe we should, not just for TDE but for\nother reasons as well. 
If the data is temporary, TDE might want to\nencrypt it using a temporary key which is generated on the fly and\nonly stored in server memory, but if it's data that needs to be\nreadable after a server restart, we're going to need to use a key that\nwe'll still have access to in the future. Or maybe that particular\nthing isn't relevant, but I just can't believe that every single I/O\nneeds exactly the same treatment, so there have got to be some\npatterns here that do matter. And I can't really tell whether this\npatch set has a theory about that and, if so, what it is.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 29 Jul 2022 12:52:50 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Fri, Jul 29, 2022 at 11:36 AM Antonin Houska <ah@cybertec.at> wrote:\n> > Attached is a new version. It allows the user to set \"elevel\" (i.e. ERROR is\n> > not necessarily thrown on I/O failure, if the user prefers to check the number\n> > of bytes read/written) and to specify the buffer size. Also, 0015 adds a\n> > function to copy data from one file descriptor to another.\n> \n> I basically agree with Peter Eisentraut's earlier comment: \"I don't\n> understand what this patch set is supposed to do.\" The intended\n> advantages, as documented in buffile.c, are:\n> \n> + * 1. Less code is needed to access the file.\n> + *\n> + * 2.. Buffering reduces the number of I/O system calls.\n> + *\n> + * 3. The buffer size can be controlled by the user. (The larger the buffer,\n> + * the fewer I/O system calls are needed, but the more data needs to be\n> + * written to the buffer before the user recognizes that the file access\n> + * failed.)\n> + *\n> + * 4. 
It should make features like Transparent Data Encryption less invasive.\n> \n> It does look to me like there is a modest reduction in the total\n> amount of code from these patches, so I do think they might be\n> achieving (1).\n> \n> I'm not so sure about (2), though. A number of these places are cases\n> where we only do a single write to the file - like in patches 0010,\n> 0011, 0012, and 0014, for example. And even in 0013, where we do\n> multiple I/Os, it makes no sense to introduce a second buffering\n> layer. We're reading from a file into a buffer several kilobytes in\n> size and then shipping the data out. The places where you want\n> buffering are places where we might be doing system calls for a few\n> bytes of I/O at a time, not places where we're already doing\n> relatively large I/Os. An extra layer of buffering probably makes\n> things slower, not faster.\n\nThe buffering seemed to me like a good opportunity to plug in the encryption,\nbecause the data needs to be encrypted in memory before it's written to\ndisk. Of course it'd be better to use the existing buffering for that than to\nadd another layer.\n\n> I don't think that (3) is a separate advantage from (1) and (2), so I\n> don't have anything separate to say about it.\n\nI thought that the uncontrollable buffer size is one of the things you\ncomplained about in\n\nhttps://www.postgresql.org/message-id/CA+TgmoYGjN_f=FCErX49bzjhNG+GoctY+a+XhNRWCVvDY8U74w@mail.gmail.com\n\n> I find it really unclear whether, or how, it achieves (4). In general,\n> I don't think that the large number of raw calls to read() and write()\n> spread across the backend is a particularly good thing. TDE is an\n> example of a feature that might affect every byte we read or write,\n> but you can also imagine instrumentation and tracing facilities that\n> might want to be able to do something every time we do an I/O without\n> having to modify a zillion separate call sites. 
Such features can\n> certainly benefit from consolidation, but I suspect it isn't helpful\n> to treat every I/O in exactly the same way. For instance, let's say we\n> settle on a design where everything in the cluster is encrypted but,\n> just to make things simple for the sake of discussion, let's say there\n> are no stored nonces anywhere. Can we just route every I/O in the\n> backend through a wrapper layer that does encryption and decryption?\n\n> Probably not. I imagine postgresql.conf and pg_hba.conf aren't going\n> to be encrypted, for example, because they're intended to be edited by\n> users. And Bruce has previously proposed designs where the WAL would\n> be encrypted with a different key than the data files. Another idea,\n> which I kind of like, is to encrypt WAL on a record level rather than\n> a block level. Either way, you can't just encrypt every single byte\n> the same way. There are probably other relevant distinctions: some\n> data is temporary and need not survive a server restart or be shared\n> with other backends (except maybe in the case of parallel query) and\n> some data is permanent. Some data is already block-structured using\n> blocks of size BLCKSZ; some data is block-structured using some other\n> block size; and some data is not block-structured. Some data is in\n> standard-format pages that are part of the buffer pool and some data\n> isn't. I feel like some of those distinctions probably matter to TDE,\n> and maybe to other things that we might want to do.\n> \n> For instance, to go back to my earlier comments, if the data is\n> already block-structured, we do not need to insert a second layer of\n> buffering; if it isn't, maybe we should, not just for TDE but for\n> other reasons as well. 
If the data is temporary, TDE might want to\n> encrypt it using a temporary key which is generated on the fly and\n> only stored in server memory, but if it's data that needs to be\n> readable after a server restart, we're going to need to use a key that\n> we'll still have access to in the future. Or maybe that particular\n> thing isn't relevant, but I just can't believe that every single I/O\n> needs exactly the same treatment, so there have got to be some\n> patterns here that do matter. And I can't really tell whether this\n> patch set has a theory about that and, if so, what it is.\n\nI didn't want to use this API for relations (file reads/writes for these only\naffect a couple of places in md.c), nor for WAL. The intention was to use the\nAPI for temporary files, and also for other kinds of \"auxiliary\" files where\nthe current coding is rather verbose but not too special.\n\nTo enable the encryption, we'd need to add an extra argument (structure) to\nthe appropriate function, which provides the necessary information on the\nencryption, such as cipher kind (block/stream), the encryption key or the\nnonce. I just don't understand whether TDE alone (4) justifies a patch like\nthis, so I tried to make more use of it.\n\nOn the other hand, if code simplification (1) should be the only benefit, the\npatch might be too controversial as well.\n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Mon, 08 Aug 2022 20:27:27 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Mon, Aug 8, 2022 at 08:27:27PM +0200, Antonin Houska wrote:\n> > For instance, to go back to my earlier comments, if the data is\n> > already block-structured, we do not need to insert a second layer of\n> > buffering; if it isn't, maybe we should, not just for TDE but for\n> > other reasons as well. 
If the data is temporary, TDE might want to\n> > encrypt it using a temporary key which is generated on the fly and\n> > only stored in server memory, but if it's data that needs to be\n> > readable after a server restart, we're going to need to use a key that\n> > we'll still have access to in the future. Or maybe that particular\n> > thing isn't relevant, but I just can't believe that every single I/O\n> > needs exactly the same treatment, so there have got to be some\n> > patterns here that do matter. And I can't really tell whether this\n> > patch set has a theory about that and, if so, what it is.\n> \n> I didn't want to use this API for relations (file reads/writes for these only\n> affects a couple of places in md.c), nor for WAL. The intention was to use the\n> API for temporary files, and also for other kinds of \"auxiliary\" files where\n> the current coding is rather verbose but not too special.\n> \n> To enable the encryption, we'd need to add an extra argument (structure) to\n> the appropriate function, which provides the necessary information on the\n> encryption, such as cipher kind (block/stream), the encryption key or the\n> nonce. I just don't understand whether the TDE alone (4) justifies patch like\n> this, so I tried to make more use of it.\n\nYes, I assumed a parameter would be added to each call site to indicate\nthe type of key to be used for encryption, or no encryption. If these\nare in performance-critical places, we can avoid the additional level of\ncalls by adding an 'if (tde)' check to only do the indirection when\nnecessary. We can also skip the indirection for files we know will not\nneed to be encrypted, e.g., postgresql.conf.\n\nI can also imagine needing to write the data encrypted without\nencrypting the in-memory version, so centralizing that code in one place\nwould be good.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EDB https://enterprisedb.com\n\n Indecision is a decision. Inaction is an action. 
Mark Batterson\n\n\n\n", "msg_date": "Mon, 8 Aug 2022 19:23:35 -0400", "msg_from": "Bruce Momjian <bruce@momjian.us>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Mon, Aug 8, 2022 at 2:26 PM Antonin Houska <ah@cybertec.at> wrote:\n> > I don't think that (3) is a separate advantage from (1) and (2), so I\n> > don't have anything separate to say about it.\n>\n> I thought that the uncontrollable buffer size is one of the things you\n> complaint about in\n>\n> https://www.postgresql.org/message-id/CA+TgmoYGjN_f=FCErX49bzjhNG+GoctY+a+XhNRWCVvDY8U74w@mail.gmail.com\n\nWell, I think that email is mostly complaining about there being no\nbuffering at all in a situation where it would be advantageous to do\nsome buffering. But that particular code I believe is gone now because\nof the shared-memory stats collector, and when looking through the\npatch set, I didn't see that any of the cases that it touched were\nsimilar to that one.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 9 Aug 2022 10:23:03 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "I’m a latecomer to the discussion, but as a word of introduction, I’m working with Stephen, and I have started looking over the temp file proposal with the idea of helping it move along.\n\nI’ll start by summarizing the temp file proposal and its goals.\n\n From a high level, the proposed code:\n\n * Creates an fread/fwrite replacement (BufFileStream) for buffering data to a single file.\n * Updates BufFile, which reads/writes a set of files, to use BufFileStream internally.\n * Does not impact the normal (page cached) database I/O.\n * Updates all the other places where fread/fwrite and read/write are used.\n * Creates and removes transient files.\nI see the following goals:\n\n * Unify all the “other” file accesses into a single, consistent API.\n * 
Integrate with VFDs.\n * Integrate transient files with transactions and tablespaces.\n * Create a consolidated framework where features like encryption and compression can be more easily added.\n * Maintain good streaming performance.\nIs this a fair description of the proposal?\n\nFor myself, I’d like to map out how features like compression and encryption would fit into the framework, more as a sanity check than anything else, and I’d like to look closer at some of the implementation details. But at the moment, I want to make sure I have the proper high-level view of the temp file proposal.\n\n\nFrom: Robert Haas <robertmhaas@gmail.com>\nDate: Wednesday, September 21, 2022 at 11:54 AM\nTo: Antonin Houska <ah@cybertec.at>\nCc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>, Peter Eisentraut <peter.eisentraut@enterprisedb.com>, Stephen Frost <sfrost@snowman.net>\nSubject: Re: Temporary file access API\nOn Mon, Aug 8, 2022 at 2:26 PM Antonin Houska <ah@cybertec.at> wrote:\n> > I don't think that (3) is a separate advantage from (1) and (2), so I\n> > don't have anything separate to say about it.\n>\n> I thought that the uncontrollable buffer size is one of the things you\n> complaint about in\n>\n> https://www.postgresql.org/message-id/CA+TgmoYGjN_f=FCErX49bzjhNG+GoctY+a+XhNRWCVvDY8U74w@mail.gmail.com\n\nWell, I think that email is mostly complaining about there being no\nbuffering at all in a situation where it would be advantageous to do\nsome buffering. 
But that particular code I believe is gone now because\nof the shared-memory stats collector, and when looking through the\npatch set, I didn't see that any of the cases that it touched were\nsimilar to that one.\n\n--\nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Thu, 22 Sep 2022 15:42:13 +0000", "msg_from": "John Morris <john@precision-gps.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "Hi,\n\nJohn Morris <john@precision-gps.com> wrote:\n\n> I’m a latecomer to the discussion, but as a word of introduction, I’m working with Stephen, and I have started looking over the temp file proposal with the idea of helping it move along.\n> \n> I’ll start by summarizing the temp file proposal and its goals.\n> \n> From a high level, the proposed code:\n> \n> * Creates an fread/fwrite replacement (BufFileStream) for buffering data to a single file.\n> \n> * Updates BufFile, which reads/writes a set of files, to use BufFileStream internally.\n> \n> * Does not impact the normal (page cached) database I/O.\n> \n> * Updates all the other places where fread/fwrite and read/write are used.\n\nNot really all, just those where the change seemed reasonable (i.e. where it\ndoes not make the code more complex)\n\n> * Creates and removes transient files.\n\nThe \"stream API\" is rather an additional layer on top of files that user needs\nto create / remove at lower level.\n\n> I see the following goals:\n> \n> * Unify all the “other” file accesses into a single, consistent API.\n> \n> * Integrate with VFDs.\n> \n> * Integrate transient files with transactions and tablespaces.\n\nIf you mean automatic closing/deletion of files on transaction end, this is\nalso the lower level thing that I didn't try to change.\n\n> * Create a consolidated framework where features like encryption and compression can be more easily added.\n> \n> * Maintain good streaming performance.\n> \n> Is this a fair description of the proposal? 
\n\nBasically that's it.\n\n> For myself, I’d like to map out how features like compression and encryption would fit into the framework, more as a sanity check than anything else, and I’d like to look closer at some of the implementation details. But at the moment, I want to make sure I have the\n> proper high-level view of the temp file proposal.\n\nI think the high level design (i.e. how the API should be used) still needs\ndiscussion. In particular, I don't know whether it should aim at the\nencryption adoption or not. If it does, then it makes sense to base it on\nbuffile.c, because encryption essentially takes place in memory. But if\nbuffering itself (w/o encryption) is not really useful at other places (see\nRobert's comments in [1]), then we can design something simpler, w/o touching\nbuffile.c (which, in turn, won't be usable for encryption, compression or so).\n\nSo I think that code simplification and easy adoption of in-memory data\nchanges (such as encryption or compression) are two rather distinct goals.\nadmit that I'm running out of ideas how to develop a framework that'd be\nuseful for both.\n\n[1]\nhttps://www.postgresql.org/message-id/CA%2BTgmoZWP8UtkNVLd75Qqoh9VGOVy_0xBK%2BSHZAdNvnfaikKsQ%40mail.gmail.com\n\n\n> From: Robert Haas <robertmhaas@gmail.com>\n> Date: Wednesday, September 21, 2022 at 11:54 AM\n> To: Antonin Houska <ah@cybertec.at>\n> Cc: PostgreSQL Hackers <pgsql-hackers@lists.postgresql.org>, Peter Eisentraut <peter.eisentraut@enterprisedb.com>, Stephen Frost <sfrost@snowman.net>\n> Subject: Re: Temporary file access API\n> \n> On Mon, Aug 8, 2022 at 2:26 PM Antonin Houska <ah@cybertec.at> wrote:\n> > > I don't think that (3) is a separate advantage from (1) and (2), so I\n> > > don't have anything separate to say about it.\n> >\n> > I thought that the uncontrollable buffer size is one of the things you\n> > complaint about in\n> >\n> > 
https://www.postgresql.org/message-id/CA+TgmoYGjN_f=FCErX49bzjhNG+GoctY+a+XhNRWCVvDY8U74w@mail.gmail.com\n> \n> Well, I think that email is mostly complaining about there being no\n> buffering at all in a situation where it would be advantageous to do\n> some buffering. But that particular code I believe is gone now because\n> of the shared-memory stats collector, and when looking through the\n> patch set, I didn't see that any of the cases that it touched were\n> similar to that one.\n> \n> -- \n> Robert Haas\n> EDB: http://www.enterprisedb.com\n> \n\n-- \nAntonin Houska\nWeb: https://www.cybertec-postgresql.com\n\n\n", "msg_date": "Fri, 23 Sep 2022 15:45:58 +0200", "msg_from": "Antonin Houska <ah@cybertec.at>", "msg_from_op": true, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "* So I think that code simplification and easy adoption of in-memory data\nchanges (such as encryption or compression) are two rather distinct goals.\nadmit that I'm running out of ideas how to develop a framework that'd be\nuseful for both.\n\nI’m also wondering about code simplification vs a more general encryption/compression framework. I started with the current proposal, and I’m also looking at David Steele’s approach to encryption/compression in pgbackrest. I’m beginning to think we should do a high-level design which includes encryption/compression, and then implement it as a “simplification” without actually doing the transformations.", "msg_date": "Tue, 27 Sep 2022 16:22:39 +0000", "msg_from": "John Morris <john.morris@crunchydata.com>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" }, { "msg_contents": "On Tue, Sep 27, 2022 at 04:22:39PM +0000, John Morris wrote:\n> I’m also wondering about code simplification vs a more general\n> encryption/compression framework. I started with the current\n> proposal, and I’m also looking at David Steele’s approach to\n> encryption/compression in pgbackrest. I’m beginning to think we\n> should do a high-level design which includes encryption/compression,\n> and then implement it as a “simplification” without actually doing\n> the transformations.\n\nIt sounds to me like the proposal of this thread is not the most\nadapted thing, so I have marked it as RwF as this is the status of 2~3\nweeks ago.. If you feel that I am wrong, feel free to scream after\nme.\n--\nMichael", "msg_date": "Wed, 12 Oct 2022 14:55:50 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Temporary file access API" } ]
[ { "msg_contents": "I had to do some analysis on the \"sanity\" tests in the regression test \nsuite (opr_sanity, type_sanity) recently, and I found some of the \nqueries very confusing. One main stumbling block is that for some \nprobably ancient reason many of the older queries are written with \ncorrelation names p1, p2, etc. independent of the name of the catalog. \nThis one is a good example:\n\nSELECT p1.oid, p1.oprname, p2.oid, p2.proname\nFROM pg_operator AS p1, pg_proc AS p2 <-- HUH?!?\nWHERE p1.oprcode = p2.oid AND\n p1.oprkind = 'l' AND\n (p2.pronargs != 1\n OR NOT binary_coercible(p2.prorettype, p1.oprresult)\n OR NOT binary_coercible(p1.oprright, p2.proargtypes[0])\n OR p1.oprleft != 0);\n\nI think this is better written as\n\nSELECT o1.oid, o1.oprname, p1.oid, p1.proname\nFROM pg_operator AS o1, pg_proc AS p1\nWHERE o1.oprcode = p1.oid AND\n o1.oprkind = 'l' AND\n (p1.pronargs != 1\n OR NOT binary_coercible(p1.prorettype, o1.oprresult)\n OR NOT binary_coercible(o1.oprright, p1.proargtypes[0])\n OR o1.oprleft != 0);\n\nAttached is a patch that cleans up all the queries in this manner.\n\n(As in the above case, I kept the digits like o1 and p1 even in cases \nwhere only one of each letter is used in a query. This is mainly to \nkeep the style consistent, but if people don't like that at all, it \ncould be changed.)", "msg_date": "Tue, 8 Feb 2022 14:52:09 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Improve correlation names in sanity tests" }, { "msg_contents": "Peter Eisentraut <peter.eisentraut@enterprisedb.com> writes:\n> I had to do some analysis on the \"sanity\" tests in the regression test \n> suite (opr_sanity, type_sanity) recently, and I found some of the \n> queries very confusing. One main stumbling block is that for some \n> probably ancient reason many of the older queries are written with \n> correlation names p1, p2, etc. independent of the name of the catalog. 
\n\nI think that was at least partially my fault, and no there isn't\nany very good reason for it. Your proposal seems fine.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 08 Feb 2022 14:27:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Improve correlation names in sanity tests" } ]
[ { "msg_contents": "Hi,\n\nCommit 0ba281cb4bf9f5f65529dfa4c8282abb734dd454 added a new syntax for\nthe BASE_BACKUP command, and commit\ncc333f32336f5146b75190f57ef587dff225f565 added a new COPY subprotocol\nfor taking base backups. In both cases, I preserved backward\ncompatibility. However, nothing in PostgreSQL itself cares about this,\nbecause if you try to run an older version of pg_basebackup against a\nv15 server, it's going to fail anyway:\n\npg_basebackup: error: incompatible server version 15devel\n\n From that point of view, there's no downside to removing from the\nserver the old syntax for BASE_BACKUP and the old protocol for taking\nbackups. We can't remove anything from pg_basebackup, because it is\nour practice to make new versions of pg_basebackup work with old\nversions of the server. But the reverse is not true: an older\npg_basebackup will categorically refuse to work with a newer server\nversion. Therefore keeping the code for this stuff around in the\nserver has no value ... unless there is out-of-core code that (a) uses\nthe BASE_BACKUP command and (b) wouldn't immediately adopt the new\nsyntax and protocol anyway. If there is, we might want to keep the\nbackward-compatibility code around in the server for a few releases.\nIf not, we should probably nuke that code to simplify things and\nreduce the maintenance burden.\n\nPatches for the nuking are attached. If nobody writes back, I'm going\nto assume that means nobody cares, and commit these some time before\nfeature freeze. If one or more people do write back, then my plan is\nto see what they have to say and proceed accordingly.\n\nThanks,\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Tue, 8 Feb 2022 11:26:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "is the base backup protocol used by out-of-core tools?" 
}, { "msg_contents": "On Tue, Feb 08, 2022 at 11:26:41AM -0500, Robert Haas wrote:\n> Patches for the nuking are attached. If nobody writes back, I'm going\n> to assume that means nobody cares, and commit these some time before\n> feature freeze. If one or more people do write back, then my plan is\n> to see what they have to say and proceed accordingly.\n\nI created this and added that for visibility and so it's not forgotten.\nISTM that could be done post-freeze, although I don't know if that's useful or\nimportant.\n\nhttps://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n\n-- \nJustin\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:03:15 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "On Tue, Feb 8, 2022 at 1:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> I created this and added that for visibility and so it's not forgotten.\n> ISTM that could be done post-freeze, although I don't know if that's useful or\n> important.\n>\n> https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n\nThanks. I feel like this is in the department of things that are\nunlikely to be an issue for anyone, but could be. But the set of\npeople who could care is basically just backup tool authors, so\nhopefully they'll notice this thread, or that list.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 13:39:12 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" 
}, { "msg_contents": "On 2/8/22 12:39, Robert Haas wrote:\n> On Tue, Feb 8, 2022 at 1:03 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>> I created this and added that for visibility and so it's not forgotten.\n>> ISTM that could be done post-freeze, although I don't know if that's useful or\n>> important.\n>>\n>> https://wiki.postgresql.org/wiki/PostgreSQL_15_Open_Items\n> \n> Thanks. I feel like this is in the department of things that are\n> unlikely to be an issue for anyone, but could be. But the set of\n> people who could care is basically just backup tool authors, so\n> hopefully they'll notice this thread, or that list.\n\nI'm aware of several tools that use pg_basebackup but they are using the \ncommand-line tool, not the server protocol directly.\n\nPersonally, I'm in favor of simplifying the code on the server side. If \nanyone is using the protocol directly (which I kind of doubt) I imagine \nthey would want to take advantage of all the new features that have been \nadded.\n\nRegards,\n-David\n\n\n", "msg_date": "Tue, 8 Feb 2022 12:52:32 -0600", "msg_from": "David Steele <david@pgmasters.net>", "msg_from_op": false, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "On Tue, Feb 8, 2022 at 1:52 PM David Steele <david@pgmasters.net> wrote:\n> I'm aware of several tools that use pg_basebackup but they are using the\n> command-line tool, not the server protocol directly.\n\nGood to know.\n\n> Personally, I'm in favor of simplifying the code on the server side. 
If\n> anyone is using the protocol directly (which I kind of doubt) I imagine\n> they would want to take advantage of all the new features that have been\n> added.\n\nI agree, and I'm glad you do too, but I thought it was worth asking.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 8 Feb 2022 13:59:16 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "On Tue, Feb 08, 2022 at 11:26:41AM -0500, Robert Haas wrote:\n> From that point of view, there's no downside to removing from the\n> server the old syntax for BASE_BACKUP and the old protocol for taking\n> backups. We can't remove anything from pg_basebackup, because it is\n> our practice to make new versions of pg_basebackup work with old\n> versions of the server. But the reverse is not true: an older\n> pg_basebackup will categorically refuse to work with a newer server\n> version. Therefore keeping the code for this stuff around in the\n> server has no value ... unless there is out-of-core code that (a) uses\n> the BASE_BACKUP command and (b) wouldn't immediately adopt the new\n> syntax and protocol anyway. If there is, we might want to keep the\n> backward-compatibility code around in the server for a few releases.\n> If not, we should probably nuke that code to simplify things and\n> reduce the maintenance burden.\n\nThis line of arguments looks sensible from here, so +1 for this\ncleanup in the backend as of 15~. 
I am not sure if we should worry\nabout out-of-core tools that use replication commands, either, and the\nnew grammar is easy to adapt to.\n\nFWIW, one backup tool maintained by NTT is pg_rman, which does not use\nthe replication protocol AFAIK:\nhttps://github.com/ossc-db/pg_rman\nPerhaps Horiguchi-san or Fujita-san have an opinion on that.\n--\nMichael", "msg_date": "Wed, 9 Feb 2022 11:21:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "On Wed, Feb 9, 2022 at 11:21 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Feb 08, 2022 at 11:26:41AM -0500, Robert Haas wrote:\n> > From that point of view, there's no downside to removing from the\n> > server the old syntax for BASE_BACKUP and the old protocol for taking\n> > backups. We can't remove anything from pg_basebackup, because it is\n> > our practice to make new versions of pg_basebackup work with old\n> > versions of the server. But the reverse is not true: an older\n> > pg_basebackup will categorically refuse to work with a newer server\n> > version. Therefore keeping the code for this stuff around in the\n> > server has no value ... unless there is out-of-core code that (a) uses\n> > the BASE_BACKUP command and (b) wouldn't immediately adopt the new\n> > syntax and protocol anyway. If there is, we might want to keep the\n> > backward-compatibility code around in the server for a few releases.\n> > If not, we should probably nuke that code to simplify things and\n> > reduce the maintenance burden.\n>\n> This line of arguments looks sensible from here, so +1 for this\n> cleanup in the backend as of 15~. 
I am not sure if we should worry\n> about out-of-core tools that use replication commands, either, and the\n> new grammar is easy to adapt to.\n\nFWIW the out-of-core utility I have some involvement in and which uses base\nbackup functionality, uses this by executing pg_basebackup, so no problem\nthere. But even if it used the replication protocol directly, it'd\njust be a case of\nadding another bit of Pg version-specific code handling; I certainly wouldn't\nexpect core to hold itself back for my convenience.\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 11:42:18 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "At Wed, 9 Feb 2022 11:21:41 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Feb 08, 2022 at 11:26:41AM -0500, Robert Haas wrote:\n> > From that point of view, there's no downside to removing from the\n> > server the old syntax for BASE_BACKUP and the old protocol for taking\n> > backups. We can't remove anything from pg_basebackup, because it is\n> > our practice to make new versions of pg_basebackup work with old\n> > versions of the server. But the reverse is not true: an older\n> > pg_basebackup will categorically refuse to work with a newer server\n> > version. Therefore keeping the code for this stuff around in the\n> > server has no value ... unless there is out-of-core code that (a) uses\n> > the BASE_BACKUP command and (b) wouldn't immediately adopt the new\n> > syntax and protocol anyway. If there is, we might want to keep the\n> > backward-compatibility code around in the server for a few releases.\n> > If not, we should probably nuke that code to simplify things and\n> > reduce the maintenance burden.\n> \n> This line of arguments looks sensible from here, so +1 for this\n> cleanup in the backend as of 15~. 
I am not sure if we should worry\n> about out-of-core tools that use replication commands, either, and the\n> new grammar is easy to adapt to.\n> \n> FWIW, one backup tool maintained by NTT is pg_rman, which does not use\n> the replication protocol AFAIK:\n> https://github.com/ossoc-db/pg_rman\n> Perhaps Horiguchi-san or Fujita-san have an opinion on that.\n\n# Oh, the excessive 'o' perplexed me:p\n\nThanks for pinging. AFAICS, as you see, pg_rman doesn't talk\nbasebackup protocol (nor even pg_basebackup command) as it supports\nincremental backup. So there's no issue about the removal of old\nsyntax on our side.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Wed, 09 Feb 2022 17:14:43 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "On Wed, Feb 9, 2022 at 3:14 AM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Thanks for pinging. AFAICS, as you see, pg_rman doesn't talk\n> basebackup protocol (nor even pg_basebackup command) as it supports\n> incremental backup. So there's no issue about the removal of old\n> syntax on our side.\n\nCool. Since the verdict here seems pretty unanimous and we've heard\nfrom a good number of backup tool authors, I'll give this another day\nor so and then commit, unless objections emerge before then.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 08:01:41 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "On Wednesday, 09.02.2022 at 08:01 -0500, Robert Haas wrote:\n> On Wed, Feb 9, 2022 at 3:14 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Thanks for pinging.  
AFAICS, as you see, pg_rman doesn't talk\n> > basebackup protocol (nor even pg_basebackup command) as it supports\n> > incremental backup.  So there's no issue about the removal of old\n> > syntax on our side.\n> \n> Cool. Since the verdict here seems pretty unanimous and we've heard\n> from a good number of backup tool authors, I'll give this another day\n> or so and then commit, unless objections emerge before then.\n> \n\nA while ago I worked on a tool which talks the streaming protocol\ndirectly. In the past there were already additions to the protocol\n(like MANIFEST), so to support those optional features and multiple\nmajor versions you already have to abstract the code to deal with that.\n\nHaving said that, I don't have any objections either to remove the old\nprotocol support.\n\nThanks\n\tBernd\n\n\n\n", "msg_date": "Wed, 09 Feb 2022 14:50:58 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" }, { "msg_contents": "On Wed, Feb 9, 2022 at 2:02 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Wed, Feb 9, 2022 at 3:14 AM Kyotaro Horiguchi\n> <horikyota.ntt@gmail.com> wrote:\n> > Thanks for pinging. AFAICS, as you see, pg_rman doesn't talk\n> > basebackup protocol (nor even pg_basebackup command) as it supports\n> > incremental backup. So there's no issue about the removal of old\n> > syntax on our side.\n>\n> Cool. Since the verdict here seems pretty unanimous and we've heard\n> from a good number of backup tool authors, I'll give this another day\n> or so and then commit, unless objections emerge before then.\n\nLate to the game, but here's another +1.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/\n Work: https://www.redpill-linpro.com/\n\n\n", "msg_date": "Wed, 9 Feb 2022 21:05:02 +0100", "msg_from": "Magnus Hagander <magnus@hagander.net>", "msg_from_op": false, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" 
}, { "msg_contents": "On Wed, Feb 9, 2022 at 3:05 PM Magnus Hagander <magnus@hagander.net> wrote:\n> Late to the game, but here's another +1.\n\nNot that late, thanks for chiming in.\n\nLooks pretty unanimous, so I have committed the patches.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:28:05 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "Re: is the base backup protocol used by out-of-core tools?" } ]
[ { "msg_contents": "Add TAP test to automate the equivalent of check_guc\n\nsrc/backend/utils/misc/check_guc is a script that cross-checks the\nconsistency of the GUCs with postgresql.conf.sample, making sure that\nits format is in line with what guc.c has. It has never been run\nautomatically, and has rotten over the years, creating a lot of false\npositives as per a report from Justin Pryzby.\n\nd10e41d has introduced a SQL function to publish the most relevant flags\nassociated to a GUC, with tests added in the main regression test suite\nto make sure that we avoid most of the inconsistencies in the GUC\nsettings, based on recent reports, but there was nothing able to\ncross-check postgresql.conf.sample with the contents of guc.c.\n\nThis commit adds a TAP test that covers the remaining gap. It emulates\nthe most relevant checks that check_guc does, so as any format mistakes\nare detected in postgresql.conf.sample at development stage, with the\nfollowing checks:\n- Check that parameters marked as NOT_IN_SAMPLE are not in the sample\nfile.\n- Check that there are no dead entries in postgresql.conf.sample for\nparameters not marked as NOT_IN_SAMPLE.\n- Check that no parameters are missing from the sample file if listed in\nguc.c without NOT_IN_SAMPLE.\n\nThe idea of building a list of the GUCs by parsing the sample file comes\nfrom Justin, and he wrote the regex used in the patch to find all the\nGUCs (this same formatting rule basically applies for the last 20~ years\nor so).
In order to test this patch, I have played with manual\nmodifications of postgresql.conf.sample and guc.c, making sure that we\ndetect problems with the GUC rules and the sample file format.\n\nThe test is located in src/test/modules/test_misc, which is the best\nlocation I could think about for such sanity checks.\n\nReviewed-by: Justin Pryzby\nDiscussion: https://postgr.es/m/Yf9YGSwPiMu0c7fP@paquier.xyz\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/b0a55f4d4ad58e4bc152f8c1696b4af46525e3f1\n\nModified Files\n--------------\nsrc/test/modules/test_misc/t/003_check_guc.pl | 108 ++++++++++++++++++++++++++\n1 file changed, 108 insertions(+)", "msg_date": "Wed, 09 Feb 2022 01:15:56 +0000", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Re: Michael Paquier\n> Add TAP test to automate the equivalent of check_guc\n> \n> src/backend/utils/misc/check_guc is a script that cross-checks the\n> consistency of the GUCs with postgresql.conf.sample, making sure that\n\nHi,\n\nthis test is failing at Debian package compile time:\n\n00:55:07 ok 18 - drop tablespace 2\n00:55:07 ok 19 - drop tablespace 3\n00:55:07 ok 20 - drop tablespace 4\n00:55:07 ok\n00:55:07 # Looks like your test exited with 2 before it could output anything.\n00:55:09 t/003_check_guc.pl ..............\n00:55:09 1..3\n00:55:09 Dubious, test returned 2 (wstat 512, 0x200)\n00:55:09 Failed 3/3 subtests\n00:55:09\n00:55:09 Test Summary Report\n00:55:09 -------------------\n00:55:09 t/003_check_guc.pl (Wstat: 512 Tests: 0 Failed: 0)\n00:55:09 Non-zero exit status: 2\n00:55:09 Parse errors: Bad plan.
You planned 3 tests but ran 0.\n00:55:09 Files=3, Tests=62, 6 wallclock secs ( 0.05 usr 0.01 sys + 3.31 cusr 1.35 csys = 4.72 CPU)\n00:55:09 Result: FAIL\n00:55:09 make[3]: *** [/<<PKGBUILDDIR>>/build/../src/makefiles/pgxs.mk:457: check] Error 1\n00:55:09 make[3]: Leaving directory '/<<PKGBUILDDIR>>/build/src/test/modules/test_misc'\n\n### Starting node \"main\"\n# Running: pg_ctl -w -D /srv/projects/postgresql/pg/master/build/src/test/modules/test_misc/tmp_check/t_003_check_guc_main_data/pgdata -l /srv/projects/postgresql/pg/master/build/src/test/modules/test_misc/tmp_check/log/003_check_guc_main.log -o --cluster-name=main start\nwaiting for server to start.... done\nserver started\n# Postmaster PID for node \"main\" is 1432398\nCould not open /usr/share/postgresql/15/postgresql.conf.sample: No such file or directory at t/003_check_guc.pl line 47.\n### Stopping node \"main\" using mode immediate\n\n\nSo it's trying to read from /usr/share/postgresql which doesn't exist\nyet at build time.\n\nThe relevant part of the test is this:\n\n# Find the location of postgresql.conf.sample, based on the information\n# provided by pg_config.\nmy $sample_file =\n $node->config_data('--sharedir') .
'/postgresql.conf.sample';\n\n\nIt never caused any problem in the 12+ years we have been doing this,\nbut Debian is patching pg_config not to be relocatable since we are\ninstalling into /usr/lib/postgresql/NN /usr/share/postgresql/NN, so\nnot a single prefix.\n\nhttps://salsa.debian.org/postgresql/postgresql/-/blob/15/debian/patches/50-per-version-dirs.patch\n\nChristoph\n\n\n", "msg_date": "Fri, 11 Feb 2022 10:48:11 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> this test is failing at Debian package compile time:\n> Could not open /usr/share/postgresql/15/postgresql.conf.sample: No such file or directory at t/003_check_guc.pl line 47.\n\n> So it's trying to read from /usr/share/postgresql which doesn't exist\n> yet at build time.\n\n> The relevant part of the test is this:\n\n> # Find the location of postgresql.conf.sample, based on the information\n> # provided by pg_config.\n> my $sample_file =\n> $node->config_data('--sharedir') .
'/postgresql.conf.sample';\n\nThis seems like a pretty bad idea even if it weren't failing outright.\nWe should be examining the version of the file that's in the source\ntree; the one in the installation tree might have version-skew\nproblems, if you've not yet done \"make install\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Feb 2022 09:59:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Fri, Feb 11, 2022 at 09:59:55AM -0500, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n> > this test is failing at Debian package compile time:\n> > Could not open /usr/share/postgresql/15/postgresql.conf.sample: No such file or directory at t/003_check_guc.pl line 47.\n> \n> > So it's trying to read from /usr/share/postgresql which doesn't exist\n> > yet at build time.\n> \n> > The relevant part of the test is this:\n> \n> > # Find the location of postgresql.conf.sample, based on the information\n> > # provided by pg_config.\n> > my $sample_file =\n> > $node->config_data('--sharedir') .
'/postgresql.conf.sample';\n> \n> This seems like a pretty bad idea even if it weren't failing outright.\n> We should be examining the version of the file that's in the source\n> tree; the one in the installation tree might have version-skew\n> problems, if you've not yet done \"make install\".\n\nMy original way used the source tree, but Michael thought it would be an issue\nfor \"installcheck\" where the config may not be available.\n\nhttps://www.postgresql.org/message-id/YfTg/WHNLVVygy8v%40paquier.xyz\n\nThis is what I had written:\n\n-- test that GUCS are in postgresql.conf\nSELECT lower(name) FROM tab_settings_flags WHERE NOT not_in_sample EXCEPT\nSELECT regexp_replace(ln, '^#?([_[:alpha:]]+) (= .*|[^ ]*$)', '\\1') AS guc\nFROM (SELECT regexp_split_to_table(pg_read_file('postgresql.conf'), '\\n') AS ln) conf\nWHERE ln ~ '^#?[[:alpha:]]'\nORDER BY 1;\n lower \n-----------------------------\n config_file\n plpgsql.check_asserts\n plpgsql.extra_errors\n plpgsql.extra_warnings\n plpgsql.print_strict_params\n plpgsql.variable_conflict\n(6 rows)\n\n-- test that lines in postgresql.conf that look like GUCs are GUCs\nSELECT regexp_replace(ln, '^#?([_[:alpha:]]+) (= .*|[^ ]*$)', '\\1') AS guc\nFROM (SELECT regexp_split_to_table(pg_read_file('postgresql.conf'), '\\n') AS ln) conf\nWHERE ln ~ '^#?[[:alpha:]]'\nEXCEPT SELECT lower(name) FROM tab_settings_flags WHERE NOT not_in_sample\nORDER BY 1;\n guc \n-------------------\n include\n include_dir\n include_if_exists\n(3 rows)\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 11 Feb 2022 09:35:49 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Fri, Feb 11, 2022 at 09:59:55AM -0500, Tom Lane wrote:\n>> This seems like a pretty bad idea even if it weren't failing outright.\n>> We should be examining the version of the file that's in the source\n>>
tree; the one in the installation tree might have version-skew\n>> problems, if you've not yet done \"make install\".\n\n> My original way used the source tree, but Michael thought it would be an issue\n> for \"installcheck\" where the config may not be available.\n\nYeah, you are at risk either way, but in practice nobody is going to be\nrunning these TAP tests without a source tree.\n\n> This is what I had written:\n> FROM (SELECT regexp_split_to_table(pg_read_file('postgresql.conf'), '\\n') AS ln) conf\n\nThat's not using the source tree either, but the copy in the\ncluster-under-test.
I'd fear it to be unstable in the buildfarm, where\n> animals can append whatever they please to the config file being used by\n> tests.\n\nThe important test is that \"new GUCs exist in sample.conf\". The converse test\nis less interesting, so I think the important half would be okay.\n\nI see the BF client appends to postgresql.conf.\nhttps://github.com/PGBuildFarm/client-code/blob/main/run_build.pl#L1374\n\nIt could use \"include\", which was added in v8.2. Or it could add a marker,\nlike pg_regress does:\nsrc/test/regress/pg_regress.c: fputs(\"\\n# Configuration added by pg_regress\\n\\n\", pg_conf);\n\nIf it were desirable to also check that \"things that look like GUCs in the conf\nare actually GUCs\", either of those would avoid false positives.\n\nOr, is it okay to use ABS_SRCDIR?\n\n+\\getenv abs_srcdir PG_ABS_SRCDIR\n+\\set filename :abs_srcdir '../../../../src/backend/utils/misc/postgresql.conf.sample'\n+-- test that GUCS are in postgresql.conf\n+SELECT lower(name) FROM tab_settings_flags WHERE NOT not_in_sample EXCEPT\n+SELECT regexp_replace(ln, '^#?([_[:alpha:]]+) (= .*|[^ ]*$)', '\\1') AS guc\n+FROM (SELECT regexp_split_to_table(pg_read_file(:'filename'), '\\n') AS ln) conf\n+WHERE ln ~ '^#?[[:alpha:]]'\n+ORDER BY 1;\n+\n+-- test that lines in postgresql.conf that look like GUCs are GUCs\n+SELECT regexp_replace(ln, '^#?([_[:alpha:]]+) (= .*|[^ ]*$)', '\\1') AS guc\n+FROM (SELECT regexp_split_to_table(pg_read_file(:'filename'), '\\n') AS ln) conf\n+WHERE ln ~ '^#?[[:alpha:]]'\n+EXCEPT SELECT lower(name) FROM tab_settings_flags WHERE NOT not_in_sample\n+ORDER BY 1;\n\n-- \nJustin\n\n\n", "msg_date": "Fri, 11 Feb 2022 11:29:29 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Or, is it okay to use ABS_SRCDIR?\n\nDon't see why not --- other tests certainly
do.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:34:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Fri, Feb 11, 2022 at 10:48:11AM +0100, Christoph Berg wrote:\n> It never caused any problem in the 12+ years we have been doing this,\n> but Debian is patching pg_config not to be relocatable since we are\n> installing into /usr/lib/postgresql/NN /usr/share/postgresql/NN, so\n> not a single prefix.\n> \n> https://salsa.debian.org/postgresql/postgresql/-/blob/15/debian/patches/50-per-version-dirs.patch\n\nWow. This is the way for Debian to bypass the fact that ./configure\nis itself patched, hence you cannot rely on things like --libdir,\n--bindir and the kind at build time? That's.. Err.. Fancy, I'd\nsay.\n--\nMichael", "msg_date": "Sat, 12 Feb 2022 09:46:49 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Fri, Feb 11, 2022 at 12:34:31PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> Or, is it okay to use ABS_SRCDIR?\n> \n> Don't see why not --- other tests certainly do.\n\nAnother solution would be to rely on TESTDIR, which is always set as\nof the Makefile invocations and vcregress.pl, but that's doomed for\nVPATH builds as this points to the build directory, not the source\ndirectory so we would not have access to the sample file. The root\nissue here is that we don't have access to top_srcdir when running the\nTAP tests, if we want to keep this stuff as a TAP test.\n\nlibpq's regress.pl tweaks things by passing down a SRCDIR to take care\nof the VPATH problem, but there is nothing in %ENV that points to the\nsource directory in the context of a TAP test by default, if we cannot\nrely on pg_config to find where the sample file is located.
That's\ndifferent than this thread, but should we make an effort and expand\nthe data available here for TAP tests?\n\nHmm. Relying on ABS_SRCDIR in the main test suite would be fine,\nthough. It also looks that there would be nothing preventing the\naddition of the three tests in the TAP test, as of three SQL queries\ninstead. These had better be written with one or more CTEs, for\nreadability, at least, with the inner query reading the contents of\nthe sample file to grab the list of all the GUCs.\n\nI guess that it would be better to revert the TAP test and rework all\nthat for the time being, then.\n--\nMichael", "msg_date": "Sat, 12 Feb 2022 09:49:47 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Sat, Feb 12, 2022 at 09:49:47AM +0900, Michael Paquier wrote:\n> I guess that it would be better to revert the TAP test and rework all\n> that for the time being, then.\n\nAnd this part is done.\n--\nMichael", "msg_date": "Sat, 12 Feb 2022 13:29:54 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Re: Michael Paquier\n> On Fri, Feb 11, 2022 at 10:48:11AM +0100, Christoph Berg wrote:\n> > It never caused any problem in the 12+ years we have been doing this,\n> > but Debian is patching pg_config not to be relocatable since we are\n> > installing into /usr/lib/postgresql/NN /usr/share/postgresql/NN, so\n> > not a single prefix.\n> > \n> > https://salsa.debian.org/postgresql/postgresql/-/blob/15/debian/patches/50-per-version-dirs.patch\n> \n> Wow. This is the way for Debian to bypass the fact that ./configure\n> is itself patched, hence you cannot rely on things like --libdir,\n> --bindir and the kind at build time? That's.. Err..
Fancy, I'd\n> say.\n\n--libdir and --bindir will point at the final install locations.\n\nI think the \"bug\" here is that vanilla PG doesn't support installing\nin FHS locations with /usr/lib and /usr/share split, hence the Debian\npatch.\n\nChristoph\n\n\n", "msg_date": "Sat, 12 Feb 2022 09:49:18 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I think the \"bug\" here is that vanilla PG doesn't support installing\n> in FHS locations with /usr/lib and /usr/share split, hence the Debian\n> patch.\n\nI'm confused by this statement. An out-of-the-box installation\nproduces trees under $PREFIX/lib and $PREFIX/share. What about\nthat is not FHS compliant? And couldn't you fix it using the\ninstallation fine-tuning switches that configure already provides?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Feb 2022 11:23:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Re: Tom Lane\n> Christoph Berg <myon@debian.org> writes:\n> > I think the \"bug\" here is that vanilla PG doesn't support installing\n> > in FHS locations with /usr/lib and /usr/share split, hence the Debian\n> > patch.\n> \n> I'm confused by this statement. An out-of-the-box installation\n> produces trees under $PREFIX/lib and $PREFIX/share. What about\n> that is not FHS compliant? And couldn't you fix it using the\n> installation fine-tuning switches that configure already provides?\n\nTo support multiple major versions in parallel, we need\n/usr/lib/postgresql/NN/lib and /usr/share/postgresql/NN, so it's not a\nsingle $PREFIX.
But you are correct, and ./configure supports that.\n\nI was confusing that with this: The problem that led to the pg_config\npatch years ago was that we have a /usr/bin/pg_config in\n(non-major-version-dependant) libpq-dev, and\n/usr/lib/postgresql/NN/bin/pg_config in the individual\npostgresql-server-dev-NN packages, and iirc the /usr/bin version\ndidn't particularly like the other binaries being in\n/usr/lib/postgresql/NN/bin/.\n\nI guess it's time to revisit that problem now and see if it can be\nsolved more pretty today on our side.\n\nSorry for the rumbling,\nChristoph\n\n\n", "msg_date": "Sat, 12 Feb 2022 17:31:55 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I was confusing that with this: The problem that led to the pg_config\n> patch years ago was that we have a /usr/bin/pg_config in\n> (non-major-version-dependant) libpq-dev, and\n> /usr/lib/postgresql/NN/bin/pg_config in the individual\n> postgresql-server-dev-NN packages, and iirc the /usr/bin version\n> didn't particularly like the other binaries being in\n> /usr/lib/postgresql/NN/bin/.\n\nAh. That seems a bit problematic anyway ...
if the version-dependent\n> pg_configs don't all return identical results, what's the\n> non-version-dependent one supposed to do?\n\nIt still returns a correct --includedir for client apps that need it.\n\n(Unfortunately many need --includedir-server since that's where the\ntype OIDs live.)\n\nChristoph\n\n\n", "msg_date": "Sat, 12 Feb 2022 18:40:06 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Hi,\n\nOn 2022-02-12 13:29:54 +0900, Michael Paquier wrote:\n> On Sat, Feb 12, 2022 at 09:49:47AM +0900, Michael Paquier wrote:\n> > I guess that it would be better to revert the TAP test and rework all\n> > that for the time being, then.\n> \n> And this part is done.\n\nYou wrote in the commit message that:\n\n\"However, this is also an issue as a TAP test only offers the build directory\nas of TESTDIR in the environment context, so this would fail with VPATH\nbuilds.\"\n\nTap tests currently are always executed in the source directory above t/, as\nfar as I can tell. So I don't think VPATH/non-VPATH or windows / non-windows\nwould break using a relative reference from the test dir.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 12:10:46 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Sat, Feb 12, 2022 at 05:31:55PM +0100, Christoph Berg wrote:\n> To support multiple major versions in parallel, we need\n> /usr/lib/postgresql/NN/lib and /usr/share/postgresql/NN, so it's not a\n> single $PREFIX. But you are correct, and ./configure supports that.\n\nYep, I don't quite see the problem, but I don't have the years of\nexperience in terms of packaging you have, either.
(NB: I use various\n--libdir and --bindir combinations for some of the PG packaging I\nmaintain for pg_upgrade, but that's not as complex as Debian, I\nguess).\n\n> I was confusing that with this: The problem that led to the pg_config\n> patch years ago was that we have a /usr/bin/pg_config in\n> (non-major-version-dependant) libpq-dev, and\n> /usr/lib/postgresql/NN/bin/pg_config in the individual\n> postgresql-server-dev-NN packages, and iirc the /usr/bin version\n> didn't particularly like the other binaries being in\n> /usr/lib/postgresql/NN/bin/.\n> \n> I guess it's time to revisit that problem now and see if it can be\n> solved more pretty today on our side.\n\nIf you can solve that, my take is that it could make things nicer in\nthe long-run for some of the TAP test facilities. Still, I'll try to\nlook at a solution for this thread that does not interact badly with\nwhat you are doing, and I am going to grab your patch when testing\nthings.\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 11:03:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Sat, Feb 12, 2022 at 12:10:46PM -0800, Andres Freund wrote:\n> Tap tests currently are always executed in the source directory above t/, as\n> far as I can tell. So I don't think VPATH/non-VPATH or windows / non-windows\n> would break using a relative reference from the test dir.\n\nThat would mean to rely on $ENV{PWD} for this job, as of something\nlike that:\n-# Find the location of postgresql.conf.sample, based on the information\n-# provided by pg_config.\n-my $sample_file =\n- $node->config_data('--sharedir') . '/postgresql.conf.sample';\n+my $rootdir = $ENV{PWD} . \"/../../../..\";\n+my $sample_file = \"$rootdir/src/backend/utils/misc/postgresql.conf.sample\";\n\nA run through the CI shows that this is working, and it also works\nwith Debian's patch for the config data.
I still tend to prefer the\nreadability of a TAP test over the main regression test suite for this\nfacility. Even if we could use the same trick with PG_ABS_SRCDIR, I'd\nrather not add this kind of dependency with a file in the source\ntree in src/test/regress/. Do others have any opinions on the matter?\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 14:11:37 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Mon, Feb 14, 2022 at 02:11:37PM +0900, Michael Paquier wrote:\n> A run through the CI shows that this is working, and it also works\n> with Debian's patch for the config data. I still tend to prefer the\n> readability of a TAP test over the main regression test suite for this\n> facility.\n\nHearing nothing, I have looked once again at this stuff and tested it\nwith what Debian was doing. And that looks to work fine for\neverything we care about, so applied with the use of a relative path\nto find postgresql.conf.sample in the source tree.\n--\nMichael", "msg_date": "Wed, 16 Feb 2022 11:23:17 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Re: Michael Paquier\n> Hearing nothing, I have looked once again at this stuff and tested it\n> with what Debian was doing. And that looks to work fine for\n> everything we care about, so applied with the use of a relative path\n> to find postgresql.conf.sample in the source tree.\n\nThe build works with the \"Add TAP test to automate the equivalent of\ncheck_guc, take two\" commit.
Thanks!\n\n(Tests are still failing for\nhttps://www.postgresql.org/message-id/YgjwrkEvNEqoz4Vm%40msg.df7cb.de\nthough)\n\nChristoph\n\n\n", "msg_date": "Wed, 16 Feb 2022 13:38:51 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Wed, Feb 16, 2022 at 01:38:51PM +0100, Christoph Berg wrote:\n> The build works with the \"Add TAP test to automate the equivalent of\n> check_guc, take two\" commit. Thanks!\n\nThanks for double-checking, Christoph!\n\n> (Tests are still failing for\n> https://www.postgresql.org/message-id/YgjwrkEvNEqoz4Vm%40msg.df7cb.de\n> though)\n\nThis has been unanswered for four weeks now. I'll see about jumping\ninto it, then..\n--\nMichael", "msg_date": "Thu, 17 Feb 2022 09:24:23 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Wed, Feb 16, 2022 at 7:24 PM Michael Paquier <michael@paquier.xyz> wrote:\n> On Wed, Feb 16, 2022 at 01:38:51PM +0100, Christoph Berg wrote:\n> > The build works with the \"Add TAP test to automate the equivalent of\n> > check_guc, take two\" commit. Thanks!\n>\n> Thanks for double-checking, Christoph!\n>\n> > (Tests are still failing for\n> > https://www.postgresql.org/message-id/YgjwrkEvNEqoz4Vm%40msg.df7cb.de\n> > though)\n>\n> This has been unanswered for four weeks now. I'll see about jumping\n> into it, then..\n\nSorry, I saw that and then forgot about it ... but isn't it 3 days,\nnot 4 weeks?
In any case, I'm happy to have you take care of it, but I\ncan also look at it tomorrow if you prefer.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 16 Feb 2022 19:39:26 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Wed, Feb 16, 2022 at 07:39:26PM -0500, Robert Haas wrote:\n> Sorry, I saw that and then forgot about it ... but isn't it 3 days,\n> not 4 weeks? In any case, I'm happy to have you take care of it, but I\n> can also look at it tomorrow if you prefer.\n\nUgh. I looked at the top of the thread and saw January the 17th :)\n\nSo that means that I need more caffeine. If you are planning to look\nat it, please feel free. Thanks!\n--\nMichael", "msg_date": "Thu, 17 Feb 2022 09:48:12 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "Re: Michael Paquier\n> > I was confusing that with this: The problem that led to the pg_config\n> > patch years ago was that we have a /usr/bin/pg_config in\n> > (non-major-version-dependant) libpq-dev, and\n> > /usr/lib/postgresql/NN/bin/pg_config in the individual\n> > postgresql-server-dev-NN packages, and iirc the /usr/bin version\n> > didn't particularly like the other binaries being in\n> > /usr/lib/postgresql/NN/bin/.\n> > \n> > I guess it's time to revisit that problem now and see if it can be\n> > solved more pretty today on our side.\n> \n> If you can solve that, my take is that it could make things nicer in\n> the long-run for some of the TAP test facilities.
Still, I'll try to\n> look at a solution for this thread that does not interact badly with\n> what you are doing, and I am going to grab your patch when testing\n> things.\n\ntl;dr: This is no longer an issue.\n\nSince build-time testing broke again about two weeks ago due to\nDebian's pg_config patch, I revisited the situation and found that the\npatch is in fact no longer necessary to support pg_config in /usr/bin:\n\nTo support cross-compilation against libpq, some years ago I had\nalready replaced /usr/bin/pg_config (in libpq-dev) with a perl script\nthat collects path info at build time and outputs them statically at\nrun time [1]. The \"real\" /usr/lib/postgresql/15/bin/pg_config (in\npostgresql-server-dev-15) is still the C version, now unpatched with\nfull relocatability support.\n\n[1] https://salsa.debian.org/postgresql/postgresql-common/-/blob/master/server/postgresql.mk#L189-194\n https://salsa.debian.org/postgresql/postgresql-common/-/blob/master/server/pg_config.pl\n\nChristoph\n\n\n", "msg_date": "Fri, 15 Apr 2022 16:49:28 +0200", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" }, { "msg_contents": "On Fri, Apr 15, 2022 at 04:49:28PM +0200, Christoph Berg wrote:\n> Since build-time testing broke again about two weeks ago due to\n> Debian's pg_config patch, I revisited the situation and found that the\n> patch is in fact no longer necessary to support pg_config in /usr/bin:\n> \n> To support cross-compilation against libpq, some years ago I had\n> already replaced /usr/bin/pg_config (in libpq-dev) with a perl script\n> that collects path info at build time and outputs them statically at\n> run time [1]. The \"real\" /usr/lib/postgresql/15/bin/pg_config (in\n> postgresql-server-dev-15) is still the C version, now unpatched with\n> full relocatability support.\n\nNice!
Thanks for letting us know, Christoph.\n--\nMichael", "msg_date": "Sat, 16 Apr 2022 10:02:51 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": true, "msg_subject": "Re: pgsql: Add TAP test to automate the equivalent of check_guc" } ]
[ { "msg_contents": "Hi,\n\nI just noticed that e2c52beecd (adding PeterE in Cc) added a resetPQExpBuffer()\nwhich seems unnecessary since the variable is untouched since the initial\ncreatePQExpBuffer().\n\nSimple patch attached.", "msg_date": "Wed, 9 Feb 2022 10:50:07 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Unnecessary call to resetPQExpBuffer in getIndexes" }, { "msg_contents": "On Wed, Feb 09, 2022 at 10:50:07AM +0800, Julien Rouhaud wrote:\n> I just noticed that e2c52beecd (adding PeterE in Cc) added a resetPQExpBuffer()\n> which seems unnecessary since the variable is untouched since the initial\n> createPQExpBuffer().\n> \n> Simple patch attached.\n\nLGTM\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 10:21:10 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Unnecessary call to resetPQExpBuffer in getIndexes" }, { "msg_contents": "On 09.02.22 19:21, Nathan Bossart wrote:\n> On Wed, Feb 09, 2022 at 10:50:07AM +0800, Julien Rouhaud wrote:\n>> I just noticed that e2c52beecd (adding PeterE in Cc) added a resetPQExpBuffer()\n>> which seems unnecessary since the variable is untouched since the initial\n>> createPQExpBuffer().\n>>\n>> Simple patch attached.\n> \n> LGTM\n\ncommitted\n\n\n", "msg_date": "Thu, 10 Feb 2022 12:25:36 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Unnecessary call to resetPQExpBuffer in getIndexes" }, { "msg_contents": "On Thu, Feb 10, 2022 at 12:25:36PM +0100, Peter Eisentraut wrote:\n> On 09.02.22 19:21, Nathan Bossart wrote:\n> > On Wed, Feb 09, 2022 at 10:50:07AM +0800, Julien Rouhaud wrote:\n> > > I just noticed that e2c52beecd (adding PeterE in Cc) added a resetPQExpBuffer()\n> > > which seems unnecessary since the variable is untouched since the initial\n> > > createPQExpBuffer().\n> > > \n>
> Simple patch attached.\n> > \n> > LGTM\n> \n> committed\n\nThanks!\n\n\n", "msg_date": "Thu, 10 Feb 2022 19:53:15 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Unnecessary call to resetPQExpBuffer in getIndexes" } ]
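The committed fix above removes a resetPQExpBuffer() issued on a buffer that was just created. A minimal sketch of why such a reset is a no-op — using a hypothetical `Buf` type, not libpq's actual PQExpBuffer implementation — is that creation already leaves the buffer in exactly the state a reset restores:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * Hypothetical growable string buffer, loosely modeled on PQExpBuffer.
 * create_buf() yields an empty buffer, so calling reset_buf() on it
 * immediately afterwards changes nothing.
 */
typedef struct
{
	char   *data;
	size_t	len;
	size_t	maxlen;
} Buf;

Buf *
create_buf(void)
{
	Buf	   *b = malloc(sizeof(Buf));

	b->maxlen = 256;
	b->data = malloc(b->maxlen);
	b->len = 0;
	b->data[0] = '\0';			/* created empty... */
	return b;
}

void
reset_buf(Buf *b)
{
	b->len = 0;
	b->data[0] = '\0';			/* ...and reset restores exactly this state */
}
```

Since the buffer in getIndexes() was untouched between creation and the reset, removing the reset cannot change behavior.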
[ { "msg_contents": "Hi,\n\nI think this change can improve this particular function by avoiding\ntouching value if not needed.\nTest if not isnull is cheaper than test TupleDescAttr is -1.\n\nbest regards,\n\nRanier Vilela", "msg_date": "Wed, 9 Feb 2022 07:56:35 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Improve function toast_delete_external\n (src/backend/access/table/toast_helper.c)" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 09, 2022 at 07:56:35AM -0300, Ranier Vilela wrote:\n> \n> I think this change can improve this particular function by avoiding\n> touching value if not needed.\n> Test if not isnull is cheaper than test TupleDescAttr is -1.\n\nIt looks sensible.\n\n\n", "msg_date": "Wed, 9 Feb 2022 21:52:30 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve function toast_delete_external\n (src/backend/access/table/toast_helper.c)" }, { "msg_contents": "Hi,\n\nI've applied your patch. I think it's reasonable.\nbut IMHO It looks more complicated to read because of many conditions\nin if statement.\nso what about just moving up if-statement?\n\nThanks.\nDong Wook Lee\n\n2022년 2월 9일 (수) 오후 7:56, Ranier Vilela <ranier.vf@gmail.com>님이 작성:\n>\n> Hi,\n>\n> I think this change can improve this particular function by avoiding touching value if not needed.\n> Test if not isnull is cheaper than test TupleDescAttr is -1.\n>\n> best regards,\n>\n> Ranier Vilela", "msg_date": "Wed, 9 Feb 2022 23:33:47 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve function toast_delete_external\n (src/backend/access/table/toast_helper.c)" }, { "msg_contents": "Oops, I realized that I made a mistake in my patch and now I have corrected it.\n\n2022년 2월 9일 (수) 오후 11:33, Dong Wook Lee <sh95119@gmail.com>님이 작성:\n>\n> Hi,\n>\n> I've applied your patch. 
I think it's reasonable.\n> but IMHO It looks more complicated to read because of many conditions\n> in if statement.\n> so what about just moving up if-statement?\n>\n> Thanks.\n> Dong Wook Lee\n>\n> 2022년 2월 9일 (수) 오후 7:56, Ranier Vilela <ranier.vf@gmail.com>님이 작성:\n> >\n> > Hi,\n> >\n> > I think this change can improve this particular function by avoiding touching value if not needed.\n> > Test if not isnull is cheaper than test TupleDescAttr is -1.\n> >\n> > best regards,\n> >\n> > Ranier Vilela", "msg_date": "Wed, 9 Feb 2022 23:55:21 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve function toast_delete_external\n (src/backend/access/table/toast_helper.c)" }, { "msg_contents": "Our style is that we group all declarations in a block to appear at the\ntop. We don't mix declarations and statements.\n\n-- \nÁlvaro Herrera 39°49'30\"S 73°17'W — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 9 Feb 2022 12:45:02 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve function toast_delete_external\n (src/backend/access/table/toast_helper.c)" }, { "msg_contents": "Yes, now I understand it.\nThank you for letting me know about that.\n\nThanks.\n\n2022년 2월 10일 (목) 00:39, Ranier Vilela <ranier.vf@gmail.com>님이 작성:\n\n>\n> Em qua., 9 de fev. de 2022 às 11:34, Dong Wook Lee <sh95119@gmail.com>\n> escreveu:\n>\n>> Hi,\n>>\n>> I've applied your patch. I think it's reasonable.\n>> but IMHO It looks more complicated to read because of many conditions\n>> in if statement.\n>> so what about just moving up if-statement?\n>>\n>\n> No.\n> Your version is worse than HEAD, please don't.\n>\n> 1. Do not declare variables after statement.\n> Although Postgres accepts C99, this is not acceptable, for a number of\n> problems, already discussed.\n>\n> 2. 
Still slow unnecessarily, because still test TupleDescAttr(tupleDesc,\n> i)->attlen,\n> which is much more onerous than test isnull.\n>\n> 3. We don't write Baby C code.\n> C has short break expressions, use-it!\n>\n> My version is the right thing, here.\n>\n> regards,\n> Ranier Vilela\n>", "msg_date": "Thu, 10 Feb 2022 08:10:16 +0900", "msg_from": "Dong Wook Lee <sh95119@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Improve function toast_delete_external\n (src/backend/access/table/toast_helper.c)" }, { "msg_contents": "Em qua., 9 de fev. de 2022 às 20:10, Dong Wook Lee <sh95119@gmail.com>\nescreveu:\n\n> Yes, now I understand it.\n> Thank you for letting me know about that.\n>\nYou are welcome.\n\nBest regards,\nRanier Vilela", "msg_date": "Wed, 9 Feb 2022 20:46:24 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Improve function toast_delete_external\n (src/backend/access/table/toast_helper.c)" } ]
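The ordering argument in the thread above — test the cheap per-value isnull flag before the costlier attribute-descriptor check — can be sketched as follows. `FakeAttr` and `count_external` are hypothetical stand-ins for illustration, not the actual toast_helper.c code:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for TupleDescAttr(tupleDesc, i)->attlen. */
typedef struct
{
	int			attlen;			/* -1 marks a varlena ("external-capable") column */
} FakeAttr;

/*
 * Count non-NULL varlena values.  The isnull[] flag is a plain array
 * read, so checking it first short-circuits the loop body and the more
 * expensive attribute lookup is skipped entirely for NULL values.
 */
int
count_external(const FakeAttr *attrs, const bool *isnull, int natts)
{
	int			n = 0;

	for (int i = 0; i < natts; i++)
	{
		if (isnull[i])			/* cheap check first: skip NULLs outright */
			continue;
		if (attrs[i].attlen == -1)	/* costlier check only when needed */
			n++;
	}
	return n;
}
```

With C's short-circuit evaluation, the same effect can be had by putting `!isnull[i]` first inside a single `if`; the point is only that the cheaper condition runs first.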
[ { "msg_contents": "Hi,\n\nCommit\nhttps://github.com/postgres/postgres/commit/1eb6d6527aae264b3e0b9c95aa70bb7a594ad1cf,\nmodified\ndata struct TwoPhaseFileHeader and added two new fields:\n\nXLogRecPtr origin_lsn; /* lsn of this record at origin node */\nTimestampTz origin_timestamp; /* time of prepare at origin node */\n\nI think thay forgot initialize these fields in the function StartPrepare\nbecause,\nwhen calling function save_state_data(&hdr, sizeof(TwoPhaseFileHeader));\n\nI have one report about possible uninitialized usage of the variables.\n\nBest regards,\n\nRanier Vilela", "msg_date": "Wed, 9 Feb 2022 08:15:45 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Possible uninitialized use of the variables\n (src/backend/access/transam/twophase.c)" }, { "msg_contents": "At Wed, 9 Feb 2022 08:15:45 -0300, Ranier Vilela <ranier.vf@gmail.com> wrote in \n> Hi,\n> \n> Commit\n> https://github.com/postgres/postgres/commit/1eb6d6527aae264b3e0b9c95aa70bb7a594ad1cf,\n> modified\n> data struct TwoPhaseFileHeader and added two new fields:\n> \n> XLogRecPtr origin_lsn; /* lsn of this record at origin node */\n> TimestampTz origin_timestamp; /* time of prepare at origin node */\n> \n> I think thay forgot initialize these fields in the function StartPrepare\n> because,\n> when calling function save_state_data(&hdr, sizeof(TwoPhaseFileHeader));\n> \n> I have one report about possible uninitialized usage of the variables.\n\nWho repoerted that to you?\n\nStartPrepare and EndPrepare are virtually a single function that\naccepts some additional operations in the middle. StartPrepare leaves\nthe \"records\" incomplete then EndPrepare completes it. It is not\ndesigned so that the fields are accessed from others before the\ncompletion. 
There seems to be no critical reasons for EndPrepare to\ndo the pointed operations, but looking that another replorigin-related\noperation is needed in the same function, it is sensible that the\nauthor intended to consolidate all replorigin related changes in\nEndPrepare.\n\nSo, my humble opinion here is, that change is not needed.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 10 Feb 2022 10:18:15 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible uninitialized use of the variables\n (src/backend/access/transam/twophase.c)" }, { "msg_contents": "On Thu, Feb 10, 2022 at 10:18:15AM +0900, Kyotaro Horiguchi wrote:\n> Who repoerted that to you?\n\nLet's bet on Coverity.\n\n> StartPrepare and EndPrepare are virtually a single function that\n> accepts some additional operations in the middle. StartPrepare leaves\n> the \"records\" incomplete then EndPrepare completes it. It is not\n> designed so that the fields are accessed from others before the\n> completion. There seems to be no critical reasons for EndPrepare to\n> do the pointed operations, but looking that another replorigin-related\n> operation is needed in the same function, it is sensible that the\n> author intended to consolidate all replorigin related changes in\n> EndPrepare.\n\nWell, that's the intention I recall from reading this code a couple of\nweeks ago. In the worst case, this could be considered as a bad\npractice as having clean data helps at least debugging if you look at\nthis data between the calls of StartPrepare() and EndPrepare(), and we\ndo things this way for all the other fields, like total_len.\n\nThe proposed change is incomplete anyway once you consider this\nargument. First, there is no need to set up those fields in\nEndPrepare() anymore if there is no origin data, no? 
It would be good\nto comment that these are filled in EndPrepare(), I guess, once you do\nthe initialization in StartPrepare(). I agree that those changes are\npurely cosmetic.\n--\nMichael", "msg_date": "Thu, 10 Feb 2022 11:38:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Possible uninitialized use of the variables\n (src/backend/access/transam/twophase.c)" }, { "msg_contents": "On Thu, Feb 10, 2022 at 11:38:44AM +0900, Michael Paquier wrote:\n> The proposed change is incomplete anyway once you consider this\n> argument. First, there is no need to set up those fields in\n> EndPrepare() anymore if there is no origin data, no? It would be good\n> to comment that these are filled in EndPrepare(), I guess, once you do\n> the initialization in StartPrepare().\n\nThat still looked better on a fresh look in terms of consistency, so\napplied this way.\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 11:07:19 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Possible uninitialized use of the variables\n (src/backend/access/transam/twophase.c)" }, { "msg_contents": "At Mon, 14 Feb 2022 11:07:19 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Feb 10, 2022 at 11:38:44AM +0900, Michael Paquier wrote:\n> > The proposed change is incomplete anyway once you consider this\n> > argument. First, there is no need to set up those fields in\n> > EndPrepare() anymore if there is no origin data, no? It would be good\n> > to comment that these are filled in EndPrepare(), I guess, once you do\n> > the initialization in StartPrepare().\n> \n> That still looked better on a fresh look in terms of consistency, so\n> applied this way.\n\nFWIW +1. That is what I had in my mind at the time of posting my\ncomment, but I choosed not to make a change. It is one way to keep\nconsistency. 
(And works to silence Coverity.)\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 14 Feb 2022 18:02:11 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Possible uninitialized use of the variables\n (src/backend/access/transam/twophase.c)" }, { "msg_contents": "Em dom., 13 de fev. de 2022 às 23:07, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Thu, Feb 10, 2022 at 11:38:44AM +0900, Michael Paquier wrote:\n> > The proposed change is incomplete anyway once you consider this\n> > argument. First, there is no need to set up those fields in\n> > EndPrepare() anymore if there is no origin data, no? It would be good\n> > to comment that these are filled in EndPrepare(), I guess, once you do\n> > the initialization in StartPrepare().\n>\n> That still looked better on a fresh look in terms of consistency, so\n> applied this way.\n>\nThanks.\n\nregards,\nRanier Vilela", "msg_date": "Mon, 14 Feb 2022 08:05:42 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Possible uninitialized use of the variables\n (src/backend/access/transam/twophase.c)" } ]
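The consistency fix settled on in the thread above amounts to: initialize every header field where the struct is first set up, and let the "end" step overwrite only the fields it owns, so nothing is ever written out undefined. A self-contained sketch of that pattern — `FileHeader`, `start_prepare`, and `end_prepare` are hypothetical stand-ins, not the actual twophase.c code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical on-disk header, loosely modeled on TwoPhaseFileHeader. */
typedef struct
{
	uint32_t	total_len;
	uint64_t	origin_lsn;			/* filled in by end_prepare() */
	int64_t		origin_timestamp;	/* filled in by end_prepare() */
} FileHeader;

/*
 * Zero-initialize all fields up front; any read between the start and
 * end steps then sees defined (invalid/zero) values, not garbage.
 */
void
start_prepare(FileHeader *hdr)
{
	memset(hdr, 0, sizeof(*hdr));
	hdr->total_len = sizeof(*hdr);
}

/* Only the origin fields are (re)written at the end step. */
void
end_prepare(FileHeader *hdr, uint64_t lsn, int64_t ts)
{
	hdr->origin_lsn = lsn;
	hdr->origin_timestamp = ts;
}
```

This keeps the "filled in later" fields clean for debugging between the two calls, which matches the argument made for applying the change even though it is cosmetic.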
[ { "msg_contents": "Hi.\n\nI've looked at patches, introducing CREATE INDEX CONCURRENTLY for \npartitioned tables - \nhttps://www.postgresql.org/message-id/flat/20210226182019.GU20769%40telsasoft.com#da169a0a518bf8121604437d9ab053b3 \n.\nThe thread didn't have any activity for a year.\n\nI've rebased patches and tried to fix issues I've seen. I've fixed \nreference after table_close() in the first patch (can be seen while \nbuilding with CPPFLAGS='-DRELCACHE_FORCE_RELEASE'). Also merged old \n0002-f-progress-reporting.patch and \n0003-WIP-Add-SKIPVALID-flag-for-more-integration.patch. It seems the \nfirst one didn't really fixed issue with progress report (as \nReindexRelationConcurrently() uses pgstat_progress_start_command(), \nwhich seems to mess up the effect of this command in DefineIndex()). \nAlso third patch completely removes attempts to report create index \nprogress correctly (reindex reports about individual commands, not the \nwhole CREATE INDEX).\n\nSo I've added 0003-Try-to-fix-create-index-progress-report.patch, which \ntries to fix the mess with create index progress report. It introduces \nnew flag REINDEXOPT_REPORT_PART to ReindexParams->options. Given this \nflag, ReindexRelationConcurrently() will not report about individual \noperations, but ReindexMultipleInternal() will report about reindexed \npartitions. 
To make the issue worse, some partitions can be handled in \nReindexPartitions() and ReindexMultipleInternal() should know how many \nto correctly update PROGRESS_CREATEIDX_PARTITIONS_DONE counter, so we \npass the number of handled partitions to it.\n\nI also have question if in src/backend/commands/indexcmds.c:1239\n1240 oldcontext = MemoryContextSwitchTo(ind_context);\n1239 childidxs = RelationGetIndexList(childrel);\n1241 attmap =\n1242 \nbuild_attrmap_by_name(RelationGetDescr(childrel),\n1243 parentDesc);\n1244 MemoryContextSwitchTo(oldcontext);\n\nshould live in ind_context, given that we iterate over this list of oids \nand immediately free it, but at least it shouldn't do much harm.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional", "msg_date": "Wed, 09 Feb 2022 15:18:12 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "Justin Pryzby <pryzby@telsasoft.com>" }, { "msg_contents": "Alexander Pyhalov писал 2022-02-09 15:18:\n> Hi.\n> \n> I've looked at patches, introducing CREATE INDEX CONCURRENTLY for\n> partitioned tables -\n> https://www.postgresql.org/message-id/flat/20210226182019.GU20769%40telsasoft.com#da169a0a518bf8121604437d9ab053b3\n> .\n> The thread didn't have any activity for a year.\n> \n> I've rebased patches and tried to fix issues I've seen. I've fixed\n> reference after table_close() in the first patch (can be seen while\n> building with CPPFLAGS='-DRELCACHE_FORCE_RELEASE'). Also merged old\n> 0002-f-progress-reporting.patch and\n> 0003-WIP-Add-SKIPVALID-flag-for-more-integration.patch. 
It seems the\n> first one didn't really fixed issue with progress report (as\n> ReindexRelationConcurrently() uses pgstat_progress_start_command(),\n> which seems to mess up the effect of this command in DefineIndex()).\n> Also third patch completely removes attempts to report create index\n> progress correctly (reindex reports about individual commands, not the\n> whole CREATE INDEX).\n> \n> So I've added 0003-Try-to-fix-create-index-progress-report.patch,\n> which tries to fix the mess with create index progress report. It\n> introduces new flag REINDEXOPT_REPORT_PART to ReindexParams->options.\n> Given this flag, ReindexRelationConcurrently() will not report about\n> individual operations, but ReindexMultipleInternal() will report about\n> reindexed partitions. To make the issue worse, some partitions can be\n> handled in ReindexPartitions() and ReindexMultipleInternal() should\n> know how many to correctly update PROGRESS_CREATEIDX_PARTITIONS_DONE\n> counter, so we pass the number of handled partitions to it.\n> \n> I also have question if in src/backend/commands/indexcmds.c:1239\n> 1240 oldcontext = MemoryContextSwitchTo(ind_context);\n> 1239 childidxs = RelationGetIndexList(childrel);\n> 1241 attmap =\n> 1242 \n> build_attrmap_by_name(RelationGetDescr(childrel),\n> 1243 parentDesc);\n> 1244 MemoryContextSwitchTo(oldcontext);\n> \n> should live in ind_context, given that we iterate over this list of\n> oids and immediately free it, but at least it shouldn't do much harm.\n\nSorry, messed the topic.\n-- \nBest regards,\nAlexander Pyhalov,\nPostgres Professional\n\n\n", "msg_date": "Wed, 09 Feb 2022 15:20:54 +0300", "msg_from": "Alexander Pyhalov <a.pyhalov@postgrespro.ru>", "msg_from_op": true, "msg_subject": "CREATE INDEX CONCURRENTLY on partitioned index" }, { "msg_contents": "Hi Alexander,\n\nOn Wed, Feb 09, 2022 at 03:18:12PM +0300, Alexander Pyhalov wrote:\n> I've looked at patches, introducing CREATE INDEX CONCURRENTLY for\n> partitioned tables - 
https://www.postgresql.org/message-id/flat/20210226182019.GU20769%40telsasoft.com#da169a0a518bf8121604437d9ab053b3\n> .\n> The thread didn't have any activity for a year.\n\nIf you plan to review a patch set, could you reply directly on its\nthread without changing the subject please? This way of replying\nbreaks the flow of a thread, making it harder to track.\n\nThanks!\n--\nMichael", "msg_date": "Thu, 10 Feb 2022 10:01:08 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Justin Pryzby <pryzby@telsasoft.com>" } ]
[ { "msg_contents": "Whether or not to explicitly plan the number of TAP tests per suite has been\ndiscussed a number of times on this list, often as a side-note in an unrelated\nthread which adds/modifies a test. The concensus has so far weighed towards\nnot doing manual bookkeeping of test plans but to let Test::More deal with it.\nSo far, no concrete patch to make that happen has been presented though.\n\nThe attached patch removes all Test::More planning and instead ensures that all\ntests conclude with a done_testing() call. While there, I also removed a few\nexit(0) calls from individual tests making them more consistent.\n\nThoughts?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 9 Feb 2022 15:01:36 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Replacing TAP test planning with done_testing()" }, { "msg_contents": "Hi,\n\nOn Wed, Feb 09, 2022 at 03:01:36PM +0100, Daniel Gustafsson wrote:\n> Whether or not to explicitly plan the number of TAP tests per suite has been\n> discussed a number of times on this list, often as a side-note in an unrelated\n> thread which adds/modifies a test. The concensus has so far weighed towards\n> not doing manual bookkeeping of test plans but to let Test::More deal with it.\n> So far, no concrete patch to make that happen has been presented though.\n> \n> The attached patch removes all Test::More planning and instead ensures that all\n> tests conclude with a done_testing() call. While there, I also removed a few\n> exit(0) calls from individual tests making them more consistent.\n> \n> Thoughts?\n\n+1, I don't think it adds much value compared to the extra work it requires.\nNot having the current total also doesn't seem much a problem, as it was said\non the other thread no tests is long enough to really care, at least on a\ndevelopment box.\n\nI still remember the last time I wanted to add some TAP tests to pg_dump, and\nit wasn't pleasant. 
In any case we should at least get rid of this one.\n\nThe patch itself looks good to me.\n\n\n", "msg_date": "Wed, 9 Feb 2022 22:16:11 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n\n> Whether or not to explicitly plan the number of TAP tests per suite has been\n> discussed a number of times on this list, often as a side-note in an unrelated\n> thread which adds/modifies a test. The concensus has so far weighed towards\n> not doing manual bookkeeping of test plans but to let Test::More deal with it.\n> So far, no concrete patch to make that happen has been presented though.\n>\n> The attached patch removes all Test::More planning and instead ensures that all\n> tests conclude with a done_testing() call. While there, I also removed a few\n> exit(0) calls from individual tests making them more consistent.\n>\n> Thoughts?\n\nLGTM, +1.\n\nI have grepped the patched code and can verify that there are no\noccurrences of `tests => N` (with or without quotes around `tests`) left\nin any test scripts, and `make check-world` passes.\n\nI'm especially pleased by the removal of the big chunks of test count\ncalculating code in the pg_dump tests.\n\n- ilmari\n\n\n", "msg_date": "Wed, 09 Feb 2022 14:34:08 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> The attached patch removes all Test::More planning and instead ensures that all\n>> tests conclude with a done_testing() call. 
While there, I also removed a few\n>> exit(0) calls from individual tests making them more consistent.\n\n> LGTM, +1.\n\nLGTM too.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Feb 2022 14:49:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "On Wed, Feb 09, 2022 at 02:49:47PM -0500, Tom Lane wrote:\n> =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n>> Daniel Gustafsson <daniel@yesql.se> writes:\n>>> The attached patch removes all Test::More planning and instead ensures that all\n>>> tests conclude with a done_testing() call. While there, I also removed a few\n>>> exit(0) calls from individual tests making them more consistent.\n> \n>> LGTM, +1.\n> \n> LGTM too.\n\nNot tested, but +1. Could it be possible to backpatch that even if\nthis could be qualified as only cosmetic? Each time a test is\nbackpatched we need to tweak the number of tests planned, and that may\nchange slightly depending on the branch dealt with.\n--\nMichael", "msg_date": "Thu, 10 Feb 2022 09:58:27 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "At Thu, 10 Feb 2022 09:58:27 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Wed, Feb 09, 2022 at 02:49:47PM -0500, Tom Lane wrote:\n> > =?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org> writes:\n> >> Daniel Gustafsson <daniel@yesql.se> writes:\n> >>> The attached patch removes all Test::More planning and instead ensures that all\n> >>> tests conclude with a done_testing() call. While there, I also removed a few\n> >>> exit(0) calls from individual tests making them more consistent.\n> > \n> >> LGTM, +1.\n> > \n> > LGTM too.\n\n+1. I was anoyed by the definitions especially when adding Non-windows\nonly tests.\n\n> Not tested, but +1. 
Could it be possible to backpatch that even if\n> this could be qualified as only cosmetic? Each time a test is\n> backpatched we need to tweak the number of tests planned, and that may\n> change slightly depending on the branch dealt with.\n\n+1. I think that makes backpatching easier.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:26:44 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "> On 10 Feb 2022, at 01:58, Michael Paquier <michael@paquier.xyz> wrote:\n>>> Daniel Gustafsson <daniel@yesql.se> writes:\n\n>>>> The attached patch removes all Test::More planning and instead ensures that all\n>>>> tests conclude with a done_testing() call.\n\nPushed to master now with a few more additional hunks fixing test changes that\nhappened between posting this and now.\n\n> Could it be possible to backpatch that even if\n> this could be qualified as only cosmetic? Each time a test is\n> backpatched we need to tweak the number of tests planned, and that may\n> change slightly depending on the branch dealt with.\n\nI opted out of backpatching for now, to solicit more comments on that. It's\nnot a bugfix, but it's also not affecting the compiled bits that we ship, so I\nthink there's a case to be made both for and against a backpatch. Looking at\nthe oldest branch we support, it seems we've done roughly 25 changes to the\ntest plans in REL_10_STABLE over the years, so it's neither insignificant nor\nan everyday activity. 
Personally I don't have strong opinions, what do others\nthink?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Fri, 11 Feb 2022 21:04:53 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "On 2022-02-11 21:04:53 +0100, Daniel Gustafsson wrote:\n> I opted out of backpatching for now, to solicit more comments on that. It's\n> not a bugfix, but it's also not affecting the compiled bits that we ship, so I\n> think there's a case to be made both for and against a backpatch. Looking at\n> the oldest branch we support, it seems we've done roughly 25 changes to the\n> test plans in REL_10_STABLE over the years, so it's neither insignificant nor\n> an everyday activity. Personally I don't have strong opinions, what do others\n> think?\n\n+1 on backpatching. Backpatching tests now is less likely to cause conflicts,\nbut more likely to fail during tests.\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:16:22 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-11 21:04:53 +0100, Daniel Gustafsson wrote:\n>> I opted out of backpatching for now, to solicit more comments on that. It's\n>> not a bugfix, but it's also not affecting the compiled bits that we ship, so I\n>> think there's a case to be made both for and against a backpatch. Looking at\n>> the oldest branch we support, it seems we've done roughly 25 changes to the\n>> test plans in REL_10_STABLE over the years, so it's neither insignificant nor\n>> an everyday activity. Personally I don't have strong opinions, what do others\n>> think?\n\n> +1 on backpatching. Backpatching tests now is less likely to cause conflicts,\n> but more likely to fail during tests.\n\nIf you've got the energy to do it, +1 for backpatching. 
I agree\nwith Michael's opinion that doing so will probably save us\nfuture annoyance.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:22:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "On 2022-Feb-11, Tom Lane wrote:\n\n> Andres Freund <andres@anarazel.de> writes:\n\n> > +1 on backpatching. Backpatching tests now is less likely to cause conflicts,\n> > but more likely to fail during tests.\n> \n> If you've got the energy to do it, +1 for backpatching. I agree\n> with Michael's opinion that doing so will probably save us\n> future annoyance.\n\n+1\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 11 Feb 2022 17:28:00 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" }, { "msg_contents": "Le sam. 12 févr. 2022 à 04:28, Alvaro Herrera <alvherre@alvh.no-ip.org> a\nécrit :\n\n> On 2022-Feb-11, Tom Lane wrote:\n>\n> > Andres Freund <andres@anarazel.de> writes:\n>\n> > > +1 on backpatching. Backpatching tests now is less likely to cause\n> conflicts,\n> > > but more likely to fail during tests.\n> >\n> > If you've got the energy to do it, +1 for backpatching. I agree\n> > with Michael's opinion that doing so will probably save us\n> > future annoyance.\n>\n> +1\n>\n\n+1\n\n>", "msg_date": "Sat, 12 Feb 2022 10:06:03 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Replacing TAP test planning with done_testing()" } ]
[ { "msg_contents": "Spotted a typo when reading the new archive modules documentation, will apply\nthe attached shortly to fix.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/", "msg_date": "Wed, 9 Feb 2022 15:04:34 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": true, "msg_subject": "Typo in archive modules docs" }, { "msg_contents": "On Wed, Feb 09, 2022 at 03:04:34PM +0100, Daniel Gustafsson wrote:\n> Spotted a typo when reading the new archive modules documentation, will apply\n> the attached shortly to fix.\n\nThanks!\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 09:55:54 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Typo in archive modules docs" } ]
[ { "msg_contents": "Respected Sir/Ma'am,\nI am Annada Dash, a computer science undergraduate, currently about to\nfinish the first semester of University (MKSSS Cummins College of\nEngineering for Women, Pune). I am new to open source contributions but I\nam well aware of python, C and R. I would love to contribute to your\norganization, but could you please tell me how to get started?\nHoping to hear from you soon.\nRegards\nAnnada", "msg_date": "Wed, 9 Feb 2022 22:43:42 +0530", "msg_from": "603_Annada Dash <annada.dash@cumminscollege.in>", "msg_from_op": true, "msg_subject": "How to get started with contribution" }, { "msg_contents": "> On 9 Feb 2022, at 18:13, 603_Annada Dash <annada.dash@cumminscollege.in> wrote:\n\n> Respected Sir/Ma'am,\n> I am Annada Dash, a computer science undergraduate, currently about to finish the first semester of University (MKSSS Cummins College of Engineering for Women, Pune). I am new to open source contributions but I am well aware of python, C and R. 
I would love to contribute to your organization, but could you please tell me how to get started?\n> Hoping to hear from you soon.\n\nSee this recent thread from this list with some information on this:\n\nhttps://www.postgresql.org/message-id/flat/CALyR1r4m%2Bhd17ubDQXzBFghXEL_0UQJtEaABb2D-hcrSNj7Y_w%40mail.gmail.com\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Wed, 9 Feb 2022 19:37:13 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: How to get started with contribution" } ]
[ { "msg_contents": ">No, but I was distracted by other things leaving this on the TODO list.\nIt's\n\n>been pushed now.\n\nHi,\n\nIMO I think that still have troubles here.\n\nReadStr can return NULL, so the fix can crash.\n\nregards,\n\nRanier Vilela", "msg_date": "Wed, 9 Feb 2022 14:48:35 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "On Wed, Feb 09, 2022 at 02:48:35PM -0300, Ranier Vilela wrote:\n> IMO I think that still have troubles here.\n> \n> ReadStr can return NULL, so the fix can crash.\n\n- sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n- free(tmp);\n+ if (tmp)\n+ {\n+ sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n+ free(tmp);\n+ }\n+ else\n+ te->catalogId.tableoid = InvalidOid;\n\nThis patch makes things worse, doesn't it? Doesn't this localized\nchange mean that we expose ourselves more into *ignoring* TOC entries\nif we mess up with this code in the future? That sounds particularly\nsensible if you have a couple of bytes corrupted in a dump.\n--\nMichael", "msg_date": "Thu, 10 Feb 2022 11:16:44 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "Em qua., 9 de fev. 
de 2022 às 23:16, Michael Paquier <michael@paquier.xyz>\nescreveu:\n\n> On Wed, Feb 09, 2022 at 02:48:35PM -0300, Ranier Vilela wrote:\n> > IMO I think that still have troubles here.\n> >\n> > ReadStr can return NULL, so the fix can crash.\n>\n> - sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n> - free(tmp);\n> + if (tmp)\n> + {\n> + sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n> + free(tmp);\n> + }\n> + else\n> + te->catalogId.tableoid = InvalidOid;\n>\n> This patch makes things worse, doesn't it?\n\nNo.\n\n\n> Doesn't this localized\n> change mean that we expose ourselves more into *ignoring* TOC entries\n> if we mess up with this code in the future?\n\nInvalidOid already used for \"default\".\nIf ReadStr fails and returns NULL, sscanf will crash.\n\nMaybe in this case, better report to the user?\npg_log_warning?\n\nregards,\nRanier Vilela\n\nEm qua., 9 de fev. de 2022 às 23:16, Michael Paquier <michael@paquier.xyz> escreveu:On Wed, Feb 09, 2022 at 02:48:35PM -0300, Ranier Vilela wrote:\n> IMO I think that still have troubles here.\n> \n> ReadStr can return NULL, so the fix can crash.\n\n-           sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n-           free(tmp);\n+           if (tmp)\n+           {\n+               sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n+               free(tmp);\n+           }\n+           else\n+               te->catalogId.tableoid = InvalidOid;\n\nThis patch makes things worse, doesn't it? No.  Doesn't this localized\nchange mean that we expose ourselves more into *ignoring* TOC entries\nif we mess up with this code in the future?InvalidOid already used for \"default\".If ReadStr fails and returns NULL, sscanf will crash.Maybe in this case, better report to the user?pg_log_warning?regards,Ranier Vilela", "msg_date": "Thu, 10 Feb 2022 08:14:13 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "Em qui., 10 de fev. 
de 2022 às 08:14, Ranier Vilela <ranier.vf@gmail.com>\nescreveu:\n\n> Em qua., 9 de fev. de 2022 às 23:16, Michael Paquier <michael@paquier.xyz>\n> escreveu:\n>\n>> On Wed, Feb 09, 2022 at 02:48:35PM -0300, Ranier Vilela wrote:\n>> > IMO I think that still have troubles here.\n>> >\n>> > ReadStr can return NULL, so the fix can crash.\n>>\n>> - sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n>> - free(tmp);\n>> + if (tmp)\n>> + {\n>> + sscanf(tmp, \"%u\", &te->catalogId.tableoid);\n>> + free(tmp);\n>> + }\n>> + else\n>> + te->catalogId.tableoid = InvalidOid;\n>>\n>> This patch makes things worse, doesn't it?\n>\n> No.\n>\n>\n>> Doesn't this localized\n>> change mean that we expose ourselves more into *ignoring* TOC entries\n>> if we mess up with this code in the future?\n>\n> InvalidOid already used for \"default\".\n> If ReadStr fails and returns NULL, sscanf will crash.\n>\n> Maybe in this case, better report to the user?\n> pg_log_warning?\n>\nMaybe in this case, the right thing is abort?\n\nSee v2, please.\n\nregards,\nRanier Vilela", "msg_date": "Thu, 10 Feb 2022 10:49:43 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "> On 10 Feb 2022, at 12:14, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Em qua., 9 de fev. de 2022 às 23:16, Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> escreveu:\n\n> This patch makes things worse, doesn't it? \n> No.\n> \n> Doesn't this localized\n> change mean that we expose ourselves more into *ignoring* TOC entries\n> if we mess up with this code in the future?\n> InvalidOid already used for \"default\".\n\nThere is no default case here, setting the tableoid to InvalidOid is done when\nthe archive doesn't support this particular feature. 
If we can't read the\ntableoid here, it's a corrupt TOC and we should abort.\n\n> If ReadStr fails and returns NULL, sscanf will crash.\n\nYes, which is better than silently propage the error.\n\n> Maybe in this case, better report to the user?\n> pg_log_warning?\n\nThat would demote what is today a crash to a warning on a corrupt TOC entry,\nwhich I think is the wrong way to go. Question is, can this fail in a\nnon-synthetic case on output which was successfully generated by pg_dump? I'm\nnot saying we should ignore errors, but I have a feeling that any input fed\nthat triggers this will be broken enough to cause fireworks elsewhere too, and\nthis being a chase towards low returns apart from complicating the code.\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 14:57:24 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: Plug minor memleak in pg_dump" }, { "msg_contents": "Em qui., 10 de fev. de 2022 às 10:57, Daniel Gustafsson <daniel@yesql.se>\nescreveu:\n\n> > On 10 Feb 2022, at 12:14, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> > Em qua., 9 de fev. de 2022 às 23:16, Michael Paquier <\n> michael@paquier.xyz <mailto:michael@paquier.xyz>> escreveu:\n>\n> > This patch makes things worse, doesn't it?\n> > No.\n> >\n> > Doesn't this localized\n> > change mean that we expose ourselves more into *ignoring* TOC entries\n> > if we mess up with this code in the future?\n> > InvalidOid already used for \"default\".\n>\n> There is no default case here, setting the tableoid to InvalidOid is done\n> when\n> the archive doesn't support this particular feature. 
If we can't read the\n> tableoid here, it's a corrupt TOC and we should abort.\n>\nWell, the v2 aborts.\n\n\n>\n> > If ReadStr fails and returns NULL, sscanf will crash.\n>\n> Yes, which is better than silently propage the error.\n>\nOk, silently propagating the error is bad, but crashing is a signal of\npoor tool.\n\n\n> > Maybe in this case, better report to the user?\n> > pg_log_warning?\n>\n> That would demote what is today a crash to a warning on a corrupt TOC\n> entry,\n> which I think is the wrong way to go. Question is, can this fail in a\n> non-synthetic case on output which was successfully generated by pg_dump?\n> I'm\n> not saying we should ignore errors, but I have a feeling that any input fed\n> that triggers this will be broken enough to cause fireworks elsewhere too,\n> and\n> this being a chase towards low returns apart from complicating the code.\n>\nFor me the code stays more simple and maintainable.\n\nregards,\nRanier Vilela\n\nEm qui., 10 de fev. de 2022 às 10:57, Daniel Gustafsson <daniel@yesql.se> escreveu:> On 10 Feb 2022, at 12:14, Ranier Vilela <ranier.vf@gmail.com> wrote:\n> Em qua., 9 de fev. de 2022 às 23:16, Michael Paquier <michael@paquier.xyz <mailto:michael@paquier.xyz>> escreveu:\n\n> This patch makes things worse, doesn't it? \n> No.\n>  \n> Doesn't this localized\n> change mean that we expose ourselves more into *ignoring* TOC entries\n> if we mess up with this code in the future?\n> InvalidOid already used for \"default\".\n\nThere is no default case here, setting the tableoid to InvalidOid is done when\nthe archive doesn't support this particular feature.  If we can't read the\ntableoid here, it's a corrupt TOC and we should abort.Well, the v2 aborts. 
\n\n> If ReadStr fails and returns NULL, sscanf will crash.\n\nYes, which is better than silently propage the error.Ok, silently propagating the error is bad,  but crashing is a signal of poor tool.\n\n> Maybe in this case, better report to the user?\n> pg_log_warning?\n\nThat would demote what is today a crash to a warning on a corrupt TOC entry,\nwhich I think is the wrong way to go.  Question is, can this fail in a\nnon-synthetic case on output which was successfully generated by pg_dump?  I'm\nnot saying we should ignore errors, but I have a feeling that any input fed\nthat triggers this will be broken enough to cause fireworks elsewhere too, and\nthis being a chase towards low returns apart from complicating the code.For me the code stays more simple and maintainable.regards,Ranier Vilela", "msg_date": "Thu, 10 Feb 2022 11:36:22 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Plug minor memleak in pg_dump" } ]
[ { "msg_contents": "Hi,\n\nI was working on rebasing the AIO branch. Tests started to fail after, but it\nturns out that the problem exists independent of AIO.\n\nThe problem starts with\n\ncommit aa01051418f10afbdfa781b8dc109615ca785ff9\nAuthor: Robert Haas <rhaas@postgresql.org>\nDate: 2022-01-24 14:23:15 -0500\n\n pg_upgrade: Preserve database OIDs.\n\n Commit 9a974cbcba005256a19991203583a94b4f9a21a9 arranged to preserve\n relfilenodes and tablespace OIDs. For similar reasons, also arrange\n to preserve database OIDs.\n\nafter this commit the database contents for databases that exist before\npg_upgrade runs, and the databases pg_upgrade (re-)creates can have the same\noid. Consequently the RelFileNode of files in the \"pre-existing\" database and\nthe re-created database will be the same.\n\nThat's a problem: There is no reliable invalidation mechanism forcing all\nprocesses to clear SMgrRelationHash when a database is dropped /\ncreated. Which means that backends can end up using file descriptors for the\nold, dropped, database, when accessing contents in the new database. This is\nmost likely to happen to bgwriter, but could be any backend (backends in other\ndatabases can end up opening files in another database to write out dirty\nvictim buffers during buffer replacement).\n\nThat's of course dangerous, because we can end up reading the wrong data (by\nreading from the old file), or loosing writes (by writing to the old file).\n\n\nNow, this isn't a problem that's entirely new - but previously it requried\nrelfilenodes to be recycled due to wraparound or such. The timeframes for that\nare long enough that it's very likely that *something* triggers smgr\ninvalidation processing. E.g. has\n\n\t\tif (FirstCallSinceLastCheckpoint())\n\t\t{\n\t\t\t/*\n\t\t\t * After any checkpoint, close all smgr files. This is so we\n\t\t\t * won't hang onto smgr references to deleted files indefinitely.\n\t\t\t */\n\t\t\tsmgrcloseall();\n\t\t}\n\nin its mainloop. 
Previously for the problem to occur there would have to be\n\"oid wraparound relfilenode recycling\" within a single checkpoint window -\nquite unlikely. But now a bgwriter cycle just needs to span a drop/create\ndatabase, easier to hit.\n\n\nI think the most realistic way to address this is to use the\nmechanism from\nhttps://postgr.es/m/CA%2BhUKGLdemy2gBm80kz20GTe6hNVwoErE8KwcJk6-U56oStjtg%40mail.gmail.com\n\nIf we add such barriers to dropdb(), XLOG_DBASE_DROP redo, DropTableSpace(),\nXLOG_TBLSPC_DROP redo, I think all the \"new\" ways to hit this are foreclosed.\n\nWe could perhaps avoid all relfilenode recycling issues on the fd level by\nadding a barrier whenever relfilenode allocation wraps around. But I'm not\nsure how easy that is to detect.\n\n\nI think we might actually be able to remove the checkpoint from dropdb() by\nusing barriers?\n\n\nI think we need to add some debugging code to detect \"fd to deleted file of\nsame name\" style problems. Especially during regression tests they're quite\nhard to notice, because most of the contents read/written are identical across\nrecycling.\n\nOn linux we can do so by a) checking if readlink(/proc/self/fd/$fd) points to\na filename ending in \" (deleted)\", b) doing fstat(fd) and checking if st_nlink\n== 0.\n\nb) might also work on a few other operating systems, but I have not verified\nthat. a) has the advantage of providing a filename, which makes it a lot\neasier to understand the problem. It'd probably make sense to use b) on all\nthe operating systems it works on, and then a) on linux for a more useful\nerror message.\n\nThe checks for this would need to be in FileRead/FileWrite/... and direct\ncalls to pread etc. I think that's an acceptable cost. I have a patch for\nthis, but it's currently based on the AIO tree.
If others agree it'd be useful\nto have, I'll adapt it to work on master.\n\nComments?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Feb 2022 14:00:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "wrong fds used for refilenodes after pg_upgrade relfilenode changes\n Reply-To:" }, { "msg_contents": "On Wed, Feb 09, 2022 at 02:00:04PM -0800, Andres Freund wrote:\n> On linux we can do so by a) checking if readlink(/proc/self/fd/$fd) points to\n> a filename ending in \" (deleted)\", b) doing fstat(fd) and checking if st_nlink\n> == 0.\n\nYou could also stat() the file in proc/self/fd/N and compare st_ino. It\n\"looks\" like a symlink (in which case that wouldn't work) but it's actually a\nVery Special File. You can even recover deleted, still-opened files that way..\n\nPS. I didn't know pg_upgrade knew about Reply-To ;)\n\n\n", "msg_date": "Wed, 9 Feb 2022 16:42:30 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Hi,\n\nOn 2022-02-09 16:42:30 -0600, Justin Pryzby wrote:\n> On Wed, Feb 09, 2022 at 02:00:04PM -0800, Andres Freund wrote:\n> > On linux we can do so by a) checking if readlink(/proc/self/fd/$fd) points to\n> > a filename ending in \" (deleted)\", b) doing fstat(fd) and checking if st_nlink\n> > == 0.\n> \n> You could also stat() the file in proc/self/fd/N and compare st_ino. It\n> \"looks\" like a symlink (in which case that wouldn't work) but it's actually a\n> Very Special File. You can even recover deleted, still-opened files that way..\n\nYea, the readlink() thing above relies on /proc/self/fd/$fd being a\n\"Very Special File\".\n\nIn most places we'd not have convenient access to an inode / filename to\ncompare it to.
I don't think there's any additional information we could gain\nanyway, compared to looking at st_nlink == 0 and then doing a readlink() to\nget the filename?\n\n\n> PS. I didn't know pg_upgrade knew about Reply-To ;)\n\nUgh, formatting fail...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Feb 2022 15:11:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes" }, { "msg_contents": "On Wed, Feb 9, 2022 at 5:00 PM Andres Freund <andres@anarazel.de> wrote:\n> The problem starts with\n>\n> commit aa01051418f10afbdfa781b8dc109615ca785ff9\n> Author: Robert Haas <rhaas@postgresql.org>\n> Date: 2022-01-24 14:23:15 -0500\n>\n> pg_upgrade: Preserve database OIDs.\n\nWell, that's sad.\n\n> I think the most realistic way to address this is to use the\n> mechanism from\n> https://postgr.es/m/CA%2BhUKGLdemy2gBm80kz20GTe6hNVwoErE8KwcJk6-U56oStjtg%40mail.gmail.com\n\nI agree. While I feel sort of bad about missing this issue in review,\nI also feel like it's pretty surprising that there isn't something\nplugging this hole already. It feels unexpected that our FD management\nlayer might hand you an FD that references the previous file that had\nthe same name instead of the one that exists right now, and I don't\nthink it's too much of a stretch of the imagination to suppose that\nthis won't be the last problem we have if we don't fix it. I also\nagree that the mechanism proposed in that thread is the best way to\nfix the problem. The sinval mechanism won't work, since there's\nnothing to ensure that the invalidation messages are processed in a\nsufficiently timely fashion, and there's no obvious way to get around\nthat problem.
But the ProcSignalBarrier mechanism is not intertwined\nwith heavyweight locking in the way that sinval is, so it should be a\ngood fit for this problem.\n\nThe main question in my mind is who is going to actually make that\nhappen. It was your idea (I think), Thomas coded it, and my commit\nmade it a live problem. So who's going to get something committed\nhere?\n\n> If we add such barriers to dropdb(), XLOG_DBASE_DROP redo, DropTableSpace(),\n> XLOG_TBLSPC_DROP redo, I think all the \"new\" ways to hit this are foreclosed.\n\nWhat about movedb()?\n\n> We could perhaps avoid all relfilenode recycling issues on the fd level by\n> adding a barrier whenever relfilenode allocation wraps around. But I'm not\n> sure how easy that is to detect.\n\nI definitely think it would be nice to cover this issue not only for\ndatabase OIDs and tablespace OIDs but also for relfilenode OIDs.\nOtherwise we're right on the cusp of having the same bug in that case.\npg_upgrade doesn't drop and recreate tables the way it does databases,\nbut if somebody made it do that in the future, would they think about\nthis? I bet not.\n\nDilip's been working on your idea of making relfilenode into a 56-bit\nquantity. That would make the problem of detecting wraparound go away.\nIn the meantime, I suppose we could think of trying to do something\nhere:\n\n /* wraparound, or first post-initdb assignment, in normal mode */\n ShmemVariableCache->nextOid = FirstNormalObjectId;\n ShmemVariableCache->oidCount = 0;\n\nThat's a bit sketch, because this is the OID counter, which isn't only\nused for relfilenodes. I'm not sure whether there would be problems\nwaiting for a barrier here - I think we might be holding some lwlocks\nin some code paths.\n\n> I think we might actually be able to remove the checkpoint from dropdb() by\n> using barriers?\n\nThat looks like it might work.
We'd need to be sure that the\ncheckpointer would both close any open FDs and also absorb fsync\nrequests before acknowledging the barrier.\n\n> I think we need to add some debugging code to detect \"fd to deleted file of\n> same name\" style problems. Especially during regression tests they're quite\n> hard to notice, because most of the contents read/written are identical across\n> recycling.\n\nMaybe. If we just plug the holes here, I am not sure we need permanent\ninstrumentation. But I'm also not violently opposed to it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 13:49:50 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Fri, Feb 11, 2022 at 7:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> The main question in my mind is who is going to actually make that\n> happen. It was your idea (I think), Thomas coded it, and my commit\n> made it a live problem. 
So who's going to get something committed\n> here?\n\nI was about to commit that, because the original Windows problem it\nsolved is showing up occasionally in CI failures (that is, it already\nsolves a live problem, albeit a different and non-data-corrupting\none):\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGJp-m8uAD_wS7%2BdkTgif013SNBSoJujWxvRUzZ1nkoUyA%40mail.gmail.com\n\nIt seems like I should go ahead and do that today, and we can study\nfurther uses for PROCSIGNAL_BARRIER_SMGRRELEASE in follow-on work?\n\n\n", "msg_date": "Fri, 11 Feb 2022 09:10:38 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Thu, Feb 10, 2022 at 3:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Feb 11, 2022 at 7:50 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > The main question in my mind is who is going to actually make that\n> > happen. It was your idea (I think), Thomas coded it, and my commit\n> > made it a live problem. So who's going to get something committed\n> > here?\n>\n> I was about to commit that, because the original Windows problem it\n> solved is showing up occasionally in CI failures (that is, it already\n> solves a live problem, albeit a different and non-data-corrupting\n> one):\n>\n> https://www.postgresql.org/message-id/CA%2BhUKGJp-m8uAD_wS7%2BdkTgif013SNBSoJujWxvRUzZ1nkoUyA%40mail.gmail.com\n>\n> It seems like I should go ahead and do that today, and we can study\n> further uses for PROCSIGNAL_BARRIER_SMGRRELEASE in follow-on work?\n\nA bird in the hand is worth two in the bush. 
Unless it's a vicious,\nhand-biting bird.\n\nIn other words, if you think what you've got is better than what we\nhave now and won't break anything worse than it is today, +1 for\ncommitting, and more improvements can come when we get enough round\ntuits.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:23:57 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 13:49:50 -0500, Robert Haas wrote:\n> I agree. While I feel sort of bad about missing this issue in review,\n> I also feel like it's pretty surprising that there isn't something\n> plugging this hole already. It feels unexpected that our FD management\n> layer might hand you an FD that references the previous file that had\n> the same name instead of the one that exists right now, and I don't\n> think it's too much of a stretch of the imagination to suppose that\n> this won't be the last problem we have if we don't fix it.\n\nArguably it's not really the FD layer that's the issue - it's on the smgr\nlevel. But that's semantics...\n\n\n> The main question in my mind is who is going to actually make that\n> happen. It was your idea (I think), Thomas coded it, and my commit\n> made it a live problem. So who's going to get something committed\n> here?\n\nI think it'd make sense for Thomas to commit the current patch for the\ntablespace issues first. And then we can go from there?\n\n\n> > If we add such barriers to dropdb(), XLOG_DBASE_DROP redo, DropTableSpace(),\n> > XLOG_TBLSPC_DROP redo, I think all the \"new\" ways to hit this are foreclosed.\n> \n> What about movedb()?\n\nMy first reaction was that it should be fine, because RelFileNode.spcNode\nwould differ. But of course that doesn't protect against moving a database\nback and forth. 
So yes, you're right, it's needed there too.\n\n\nAre there other commands that could be problematic? ALTER TABLE SET TABLESPACE\nshould be fine, that allocates a new relfilenode.\n\n\n> I definitely think it would be nice to cover this issue not only for\n> database OIDs and tablespace OIDs but also for relfilenode OIDs.\n> Otherwise we're right on the cusp of having the same bug in that case.\n> pg_upgrade doesn't drop and recreate tables the way it does databases,\n> but if somebody made it do that in the future, would they think about\n> this? I bet not.\n\nAnd even just plugging the wraparound aspect would be great. It's not actually\n*that* hard to wrap oids around, because of toast.\n\n\n> Dilip's been working on your idea of making relfilenode into a 56-bit\n> quantity. That would make the problem of detecting wraparound go away.\n> In the meantime, I suppose we could do think of trying to do something\n> here:\n> \n> /* wraparound, or first post-initdb\n> assignment, in normal mode */\n> ShmemVariableCache->nextOid = FirstNormalObjectId;\n> ShmemVariableCache->oidCount = 0;\n> \n> That's a bit sketch, because this is the OID counter, which isn't only\n> used for relfilenodes.\n\nI can see a few ways to deal with that:\n\n1) Split nextOid into two, nextRelFileNode and nextOid. Have the wraparound\n behaviour only in the nextRelFileNode case. That'd be a nice improvement,\n but not entirely trivial, because we currently use GetNewRelFileNode() for\n oid assignment as well (c.f. heap_create_with_catalog()).\n\n2) Keep track of the last assigned relfilenode in ShmemVariableCache, and wait\n for a barrier in GetNewRelFileNode() if the assigned oid is wrapped\n around. While there would be a chance of \"missing\" a ->nextOid wraparound,\n e.g. 
because of >2**32 toast oids being allocated, I think that could only\n happen if there were no \"wrapped around relfilenodes\" allocated, so that\n should be ok.\n\n3) Any form of oid wraparound might cause problems, not just relfilenode\n issues. So having a place to emit barriers on oid wraparound might be a\n good idea.\n\n\nOne thing I wonder is how to make sure the path would be tested. Perhaps we\ncould emit the anti-wraparound barriers more frequently when assertions are\nenabled?\n\n\n> I'm not sure whether there would be problems waiting for a barrier here - I\n> think we might be holding some lwlocks in some code paths.\n\nIt seems like a bad idea to call GetNewObjectId() with lwlocks held. From what\nI can see the two callers of GetNewObjectId() both have loops that can go on\nfor a long time after a wraparound.\n\n\n\n> > I think we need to add some debugging code to detect \"fd to deleted file of\n> > same name\" style problems. Especially during regression tests they're quite\n> > hard to notice, because most of the contents read/written are identical across\n> > recycling.\n> \n> Maybe. If we just plug the holes here, I am not sure we need permanent\n> instrumentation. But I'm also not violently opposed to it.\n\nThe question is how we can find all the places we need to add barriers emit &\nwait to. I e.g. 
didn't initially realize that there's wraparound danger\nin WAL replay as well.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 14:14:08 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Hi,\n\nOn 2022-02-11 09:10:38 +1300, Thomas Munro wrote:\n> I was about to commit that, because the original Windows problem it\n> solved is showing up occasionally in CI failures (that is, it already\n> solves a live problem, albeit a different and non-data-corrupting\n> one):\n\n+1\n\n> It seems like I should go ahead and do that today, and we can study\n> further uses for PROCSIGNAL_BARRIER_SMGRRELEASE in follow-on work?\n\nYes.\n\nI wonder whether we really should make the barriers be conditional on\ndefined(WIN32) || defined(USE_ASSERT_CHECKING) as done in that patch, even\nleaving wraparound dangers aside. On !windows we still have the issues of the\nspace for the dropped / moved files not being released if there are processes\nhaving them open. That can be a lot of space if there's long-lived connections\nin a cluster that doesn't fit into s_b (because processes will have random fds\nopen for writing back dirty buffers).
And we don't truncate the files before\nunlinking when done as part of a DROP DATABASE...\n\nBut that's something we can fine-tune later as well...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 14:26:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 14:26:59 -0800, Andres Freund wrote:\n> On 2022-02-11 09:10:38 +1300, Thomas Munro wrote:\n> > It seems like I should go ahead and do that today, and we can study\n> > further uses for PROCSIGNAL_BARRIER_SMGRRELEASE in follow-on work?\n> \n> Yes.\n\nI wrote a test to show the problem. While looking at the code I unfortunately\nfound that CREATE DATABASE ... OID isn't the only problem. Two consecutive\nALTER DATABASE ... SET TABLESPACE also can cause corruption.\n\nThe test doesn't work on < 15 (due to PostgresNode renaming and use of\nallow_in_place_tablespaces). But manually it's unfortunately quite\nreproducible :(\n\nStart a server with\n\nshared_buffers = 1MB\nautovacuum=off\nbgwriter_lru_maxpages=1\nbgwriter_delay=10000\n\nthese are just to give more time / to require smaller tables.\n\n\nStart two psql sessions\n\ns1: \\c postgres\ns1: CREATE DATABASE conflict_db;\ns1: CREATE TABLESPACE test_tablespace LOCATION '/tmp/blarg';\ns1: CREATE EXTENSION pg_prewarm;\n\ns2: \\c conflict_db\ns2: CREATE TABLE large(id serial primary key, dataa text, datab text);\ns2: INSERT INTO large(dataa, datab) SELECT g.i::text, 0 FROM generate_series(1, 4000) g(i);\ns2: UPDATE large SET datab = 1;\n\n-- prevent AtEOXact_SMgr\ns1: BEGIN;\n\n-- Fill shared_buffers with other contents. 
This needs to write out the prior dirty contents\n-- leading to opening smgr rels / file descriptors\ns1: SELECT SUM(pg_prewarm(oid)) warmed_buffers FROM pg_class WHERE pg_relation_filenode(oid) != 0;\n\ns2: \\c postgres\n-- unlinks the files\ns2: ALTER DATABASE conflict_db SET TABLESPACE test_tablespace;\n-- now new files with the same relfilenode exist\ns2: ALTER DATABASE conflict_db SET TABLESPACE pg_default;\ns2: \\c conflict_db\n-- dirty buffers again\ns2: UPDATE large SET datab = 2;\n\n-- this again writes out the dirty buffers. But because nothing forced the smgr handles to be closed,\n-- fds pointing to the *old* file contents are used. I.e. we'll write to the wrong file\ns1: SELECT SUM(pg_prewarm(oid)) warmed_buffers FROM pg_class WHERE pg_relation_filenode(oid) != 0;\n\n-- verify that everything is [not] ok\ns2: SELECT datab, count(*) FROM large GROUP BY 1 ORDER BY 1 LIMIT 10;\n\n┌───────┬───────┐\n│ datab │ count │\n├───────┼───────┤\n│ 1 │ 2117 │\n│ 2 │ 67 │\n└───────┴───────┘\n(2 rows)\n\n\noops.\n\n\nI don't yet have a good idea how to tackle this in the backbranches.\n\n\n\nI've started to work on a few debugging aids to find problem like\nthese. Attached are two WIP patches:\n\n1) use fstat(...).st_nlink == 0 to detect dangerous actions on fds with\n deleted files. That seems to work (almost) everywhere. Optionally, on\n linux, use readlink(/proc/$pid/fd/$fd) to get the filename.\n2) On linux, iterate over PGPROC to get a list of pids. Then iterate over\n /proc/$pid/fd/ to check for deleted files. 
This only can be done after\n we've just made sure there's no fds open to deleted files.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Feb 2022 01:11:21 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Hi,\n\nOn 2022-02-22 01:11:21 -0800, Andres Freund wrote:\n> I've started to work on a few debugging aids to find problem like\n> these. Attached are two WIP patches:\n\nForgot to attach. Also importantly includes a tap test for several of these\nissues\n\nGreetings,\n\nAndres Freund", "msg_date": "Tue, 22 Feb 2022 01:40:01 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Tue, Feb 22, 2022 at 4:40 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-02-22 01:11:21 -0800, Andres Freund wrote:\n> > I've started to work on a few debugging aids to find problem like\n> > these. Attached are two WIP patches:\n>\n> Forgot to attach. Also importantly includes a tap test for several of these\n> issues\n\nHi,\n\nJust a few very preliminary comments:\n\n- I am having some trouble understanding clearly what 0001 is doing.\nI'll try to study it further.\n\n- 0002 seems like it's generally a good idea. I haven't yet dug into\nwhy the call sites for AssertFileNotDeleted() are where they are, or\nwhether that's a complete set of places to check.\n\n- In general, 0003 makes a lot of sense to me.\n\n+ /*\n+ * Finally tell the kernel to write the data to\nstorage. Don't smgr if\n+ * previously closed, otherwise we could end up evading fd-reuse\n+ * protection.\n+ */\n\n- I think the above hunk is missing a word, because it uses smgr as a\nverb. 
I also think that it's easy to imagine this explanation being\ninsufficient for some future hacker to understand the issue.\n\n- While 0004 seems useful for testing, it's an awfully big hammer. I'm\nnot sure we should be thinking about committing something like that,\nor at least not as a default part of the build. But ... maybe?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Mar 2022 14:52:01 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Hi,\n\nOn 2022-03-02 14:52:01 -0500, Robert Haas wrote:\n> - I am having some trouble understanding clearly what 0001 is doing.\n> I'll try to study it further.\n\nIt tests for the various scenarios I could think of that could lead to FD\nreuse, to state the obvious ;). Anything particularly unclear?\n\n\n> - In general, 0003 makes a lot of sense to me.\n> \n> + /*\n> + * Finally tell the kernel to write the data to\n> storage. Don't smgr if\n> + * previously closed, otherwise we could end up evading fd-reuse\n> + * protection.\n> + */\n> \n> - I think the above hunk is missing a word, because it uses smgr as a\n> verb. I also think that it's easy to imagine this explanation being\n> insufficient for some future hacker to understand the issue.\n\nYea, it's definitely not sufficient or grammatically correct. I think I wanted\nto send something out, but was very tired by that point..\n\n\n> - While 0004 seems useful for testing, it's an awfully big hammer. I'm\n> not sure we should be thinking about committing something like that,\n> or at least not as a default part of the build. But ... maybe?\n\nAgreed. I think it's racy anyway, because further files could get deleted\n(e.g. segments > 0) after the barrier has been processed.\n\n\nWhat I am stuck on is what we can do for the released branches. 
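To restate the underlying hazard (and the st_nlink trick behind the AssertFileNotDeleted() debugging aid) outside Postgres, here is a toy POSIX sketch; the function names are invented and nothing here comes from the actual patches:

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <unistd.h>

/* Sketch of an AssertFileNotDeleted()-style check: once the last
 * directory entry is gone, fstat() reports st_nlink == 0, so any write
 * through this fd can only reach the defunct file, never its successor. */
static int
fd_points_at_deleted_file(int fd)
{
	struct stat st;

	if (fstat(fd, &st) != 0)
		return -1;				/* let the caller decide about errors */
	return st.st_nlink == 0;
}

/* The hazard in miniature: unlink + recreate under the same name leaves
 * an old fd aimed at the dead file, much like a reused relfilenode. */
static int
stale_fd_after_recreate(const char *path)
{
	int			old_fd = open(path, O_CREAT | O_RDWR, 0600);

	unlink(path);				/* like ALTER DATABASE ... SET TABLESPACE */
	close(open(path, O_CREAT | O_RDWR, 0600));	/* new file, same name */
	unlink(path);				/* tidy up the recreated file */
	return old_fd;				/* still open, still the *old* file */
}
```

Writes issued through the returned fd would silently land in the unlinked file, which is exactly the corruption the pg_prewarm recipe above demonstrates with relation files.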
Data\ncorruption after two consecutive ALTER DATABASE SET TABLESPACEs seems like\nsomething we need to address.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 2 Mar 2022 12:00:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Wed, Mar 2, 2022 at 3:00 PM Andres Freund <andres@anarazel.de> wrote:\n> What I am stuck on is what we can do for the released branches. Data\n> corruption after two consecutive ALTER DATABASE SET TABLESPACEs seems like\n> something we need to address.\n\nI think we should consider back-porting the ProcSignalBarrier stuff\neventually. I realize that it's not really battle-tested yet and I am\nnot saying we should do it right now, but I think that if we get these\nchanges into v15, back-porting it in let's say May of next year could\nbe a reasonable thing to do. Sure, there is some risk there, but on\nthe other hand, coming up with completely different fixes for the\nback-branches is not risk-free either, nor is it clear that there is\nany alternative fix that is nearly as good. In the long run, I am\nfairly convinced that ProcSignalBarrier is the way forward not only\nfor this purpose but for other things as well, and everybody's got to\nget on the train or be left behind.\n\nAlso, I am aware of multiple instances where the project waited a\nlong, long time to fix bugs because we didn't have a back-patchable\nfix. I disagree with that on principle. A master-only fix now is\nbetter than a back-patchable fix two or three years from now. 
Of\ncourse a back-patchable fix now is better still, but we have to pick\nfrom the options we have, not the ones we'd like to have.\n\n<digressing a bit>\n\nIt seems to me that if we were going to try to construct an\nalternative fix for the back-branches, it would have to be something\nthat didn't involve a new invalidation mechanism -- because the\nProcSignalBarrier stuff is an invalidation mechanism in effect, and I\nfeel that it can't be better to invent two new invalidation mechanisms\nrather than one. And the only idea I have is trying to detect a\ndangerous sequence of operations and just outright block it. We have\nsome cases sort of like that already - e.g. you can't prepare a\ntransaction if it's done certain things. But, the existing precedents\nthat occur to me are, I think, all cases where all of the related\nactions are being performed in the same backend. It doesn't sound\ncrazy to me to have some rule like \"you can't ALTER TABLESPACE on the\nsame tablespace in the same backend twice in a row without an\nintervening checkpoint\", or whatever, and install the book-keeping to\nenforce that. But I don't think anything like that can work, both\nbecause the two ALTER TABLESPACE commands could be performed in\ndifferent sessions, and also because an intervening checkpoint is no\nguarantee of safety anyway, IIUC. 
So I'm just not really seeing a\nreasonable strategy that isn't basically the barrier stuff.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 2 Mar 2022 16:27:47 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Wed, Mar 2, 2022 at 3:00 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-03-02 14:52:01 -0500, Robert Haas wrote:\n> > - I am having some trouble understanding clearly what 0001 is doing.\n> > I'll try to study it further.\n>\n> It tests for the various scenarios I could think of that could lead to FD\n> reuse, to state the obvious ;). Anything particularly unclear.\n\nAs I understand it, the basic idea of this test is:\n\n1. create a database called 'conflict_db' containing a table called 'large'\n2. also create a table called 'replace_sb' in the postgres db.\n3. have a long running transaction open in 'postgres' database that\nrepeatedly accesses the 'replace_sb' table to evict the 'conflict_db'\ntable, sometimes causing write-outs\n4. try to get it to write out the data to a stale file descriptor by\nfiddling with various things, and then detect the corruption that\nresults\n5. use both a primary and a standby not because they are intrinsically\nnecessary but because you want to test that both cases work\n\nAs to (1) and (2), I note that 'large' is not a hugely informative\ntable name, especially when it's the smaller of the two test tables.\nAnd also, the names are all over the place. The table names don't have\nmuch to do with each other (large, replace_sb) and they're completely\nunrelated to the database names (postgres, conflict_db). And then you\nhave Perl functions whose names also don't make it obvious what we're\ntalking about. 
I can't remember that verify() is the one that accesses\nconflict_db.large while cause_eviction() is the one that accesses\npostgres.replace_sb for more than like 15 seconds. It'd make it a lot\neasier if the table names, database names, and Perl function names had\nsome common elements.\n\nAs to (5), the identifiers are just primary and standby, without\nmentioning the database name, which adds to the confusion, and there\nare no comments explaining why we have two nodes.\n\nAlso, some of the comments seem like they might be in the wrong place:\n\n+# Create template database with a table that we'll update, to trigger dirty\n+# rows. Using a template database + preexisting rows makes it a bit easier to\n+# reproduce, because there's no cache invalidations generated.\n\nRight after this comment, we create a table and then a template\ndatabase, so I think we're not avoiding any invalidations\nat this point. We're avoiding them at later points where the comments\naren't talking about this.\n\nI think it would be helpful to find a way to divide the test case up\ninto sections that are separated from each other visually, and explain\nthe purpose of each test a little more in a comment. For example I'm\nstruggling to understand the exact purpose of this bit:\n\n+$node_primary->safe_psql('conflict_db', \"VACUUM FULL large;\");\n+$node_primary->safe_psql('conflict_db', \"UPDATE large SET datab = 3;\");\n+\n+verify($node_primary, $node_standby, 3,\n+ \"restored contents as expected\");\n\nI'm all for having test coverage of VACUUM FULL, but it isn't clear to\nme what this does that is related to FD reuse.\n\nSimilarly for the later ones. 
I generally grasp that you are trying to\nmake sure that things are OK with respect to FD reuse in various\nscenarios, but I couldn't explain to you what we'd be losing in terms\nof test coverage if we removed this line:\n\n+$node_primary->safe_psql('conflict_db', \"UPDATE large SET datab = 7;\");\n\nI am not sure how much all of this potential cleanup matters because\nI'm pretty skeptical that this would be stable in the buildfarm. It\nrelies on a lot of assumptions about timing and row sizes and details\nof when invalidations are sent that feel like they might not be\nentirely predictable at the level you would need to have this\nconsistently pass everywhere. Perhaps I am too pessimistic?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Mar 2022 13:11:17 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Hi,\n\nOn 2022-03-03 13:11:17 -0500, Robert Haas wrote:\n> On Wed, Mar 2, 2022 at 3:00 PM Andres Freund <andres@anarazel.de> wrote:\n> > On 2022-03-02 14:52:01 -0500, Robert Haas wrote:\n> > > - I am having some trouble understanding clearly what 0001 is doing.\n> > > I'll try to study it further.\n> >\n> > It tests for the various scenarios I could think of that could lead to FD\n> > reuse, to state the obvious ;). Anything particularly unclear.\n> \n> As I understand it, the basic idea of this test is:\n> \n> 1. create a database called 'conflict_db' containing a table called 'large'\n> 2. also create a table called 'replace_sb' in the postgres db.\n> 3. have a long running transaction open in 'postgres' database that\n> repeatedly accesses the 'replace_sb' table to evict the 'conflict_db'\n> table, sometimes causing write-outs\n> 4. 
try to get it to write out the data to a stale file descriptor by\n> fiddling with various things, and then detect the corruption that\n> results\n> 5. use both a primary and a standby not because they are intrinsically\n> necessary but because you want to test that both cases work\n\nRight.\n\n\nI wasn't proposing to commit the test as-is, to be clear. It was meant as a\ndemonstration of the types of problems I can see, and what's needed to fix\nthem...\n\n\n> I can't remember that verify() is the one that accesses conflict_db.large\n> while cause_eviction() is the one that accesses postgres.replace_sb for more\n> than like 15 seconds.\n\nFor more than 15 seconds? The whole test runs in a few seconds for me...\n\n\n> As to (5), the identifiers are just primary and standby, without\n> mentioning the database name, which adds to the confusion, and there\n> are no comments explaining why we have two nodes.\n\nI don't follow this one - physical rep doesn't do anything below the database\nlevel?\n\n\n\n> I think it would be helpful to find a way to divide the test case up\n> into sections that are separated from each other visually, and explain\n> the purpose of each test a little more in a comment. For example I'm\n> struggling to understand the exact purpose of this bit:\n> \n> +$node_primary->safe_psql('conflict_db', \"VACUUM FULL large;\");\n> +$node_primary->safe_psql('conflict_db', \"UPDATE large SET datab = 3;\");\n> +\n> +verify($node_primary, $node_standby, 3,\n> + \"restored contents as expected\");\n> \n> I'm all for having test coverage of VACUUM FULL, but it isn't clear to\n> me what this does that is related to FD reuse.\n\nIt's just trying to clean up prior damage, so the test continues to later\nones. Definitely not needed once there's a fix. But it's useful while trying\nto make the test actually detect various corruptions.\n\n\n> Similarly for the later ones. 
I generally grasp that you are trying to\n> make sure that things are OK with respect to FD reuse in various\n> scenarios, but I couldn't explain to you what we'd be losing in terms\n> of test coverage if we removed this line:\n> \n> +$node_primary->safe_psql('conflict_db', \"UPDATE large SET datab = 7;\");\n> \n> I am not sure how much all of this potential cleanup matters because\n> I'm pretty skeptical that this would be stable in the buildfarm.\n\nWhich \"potential cleanup\" are you referring to?\n\n\n> It relies on a lot of assumptions about timing and row sizes and details of\n> when invalidations are sent that feel like they might not be entirely\n> predictable at the level you would need to have this consistently pass\n> everywhere. Perhaps I am too pessimistic?\n\nI don't think the test passing should be dependent on row size, invalidations\netc to a significant degree (unless the tables are too small to reach s_b\netc)? The exact symptoms of failures are highly unstable, but obviously we'd\nfix them in-tree before committing a test. But maybe I'm missing a dependency?\n\nFWIW, the test did pass on freebsd, linux, macos and windows with the\nfix. After a few iterations of improving the fix ;)\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 3 Mar 2022 10:28:29 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Thu, Mar 3, 2022 at 1:28 PM Andres Freund <andres@anarazel.de> wrote:\n> > I can't remember that verify() is the one that accesses conflict_db.large\n> > while cause_eviction() is the one that accesses postgres.replace_sb for more\n> > than like 15 seconds.\n>\n> For more than 15 seconds? The whole test runs in a few seconds for me...\n\nI'm not talking about running the test. I'm talking about reading it\nand trying to keep straight what is happening. 
The details of which\nfunction is accessing which database keep going out of my head.\nQuickly. Maybe because I'm dumb, but I think some better naming could\nhelp, just in case other people are dumb, too.\n\n> > As to (5), the identifiers are just primary and standby, without\n> > mentioning the database name, which adds to the confusion, and there\n> > are no comments explaining why we have two nodes.\n>\n> I don't follow this one - physical rep doesn't do anything below the database\n> level?\n\nRight but ... those handles are connected to a particular DB on that\nnode, not just the node in general.\n\n> > I think it would be helpful to find a way to divide the test case up\n> > into sections that are separated from each other visually, and explain\n> > the purpose of each test a little more in a comment. For example I'm\n> > struggling to understand the exact purpose of this bit:\n> >\n> > +$node_primary->safe_psql('conflict_db', \"VACUUM FULL large;\");\n> > +$node_primary->safe_psql('conflict_db', \"UPDATE large SET datab = 3;\");\n> > +\n> > +verify($node_primary, $node_standby, 3,\n> > + \"restored contents as expected\");\n> >\n> > I'm all for having test coverage of VACUUM FULL, but it isn't clear to\n> > me what this does that is related to FD reuse.\n>\n> It's just trying to clean up prior damage, so the test continues to later\n> ones. Definitely not needed once there's a fix. But it's useful while trying\n> to make the test actually detect various corruptions.\n\nAh, OK, I definitely did not understand that before.\n\n> > Similarly for the later ones. 
I generally grasp that you are trying to\n> > make sure that things are OK with respect to FD reuse in various\n> > scenarios, but I couldn't explain to you what we'd be losing in terms\n> > of test coverage if we removed this line:\n> >\n> > +$node_primary->safe_psql('conflict_db', \"UPDATE large SET datab = 7;\");\n> >\n> > I am not sure how much all of this potential cleanup matters because\n> > I'm pretty skeptical that this would be stable in the buildfarm.\n>\n> Which \"potential cleanup\" are you referring to?\n\nI mean, whatever improvements you might consider making based on my comments.\n\n> > It relies on a lot of assumptions about timing and row sizes and details of\n> > when invalidations are sent that feel like they might not be entirely\n> > predictable at the level you would need to have this consistently pass\n> > everywhere. Perhaps I am too pessimistic?\n>\n> I don't think the test passing should be dependent on row size, invalidations\n> etc to a significant degree (unless the tables are too small to reach s_b\n> etc)? The exact symptoms of failures are highly unstable, but obviously we'd\n> fix them in-tree before committing a test. But maybe I'm missing a dependency?\n>\n> FWIW, the test did pass on freebsd, linux, macos and windows with the\n> fix. After a few iterations of improving the fix ;)\n\nHmm. Well, I admit that's already more than I would have expected....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 3 Mar 2022 13:47:20 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "Some thoughts:\n\nThe v1-0003 patch introduced smgropen_cond() to avoid the problem of\nIssuePendingWritebacks(), which does desynchronised smgropen() calls\nand could open files after the barrier but just before they are\nunlinked. Makes sense, but...\n\n1. 
For that to actually work, we'd better call smgrcloseall() when\nPROCSIGNAL_BARRIER_SMGRRELEASE is handled; currently it only calls\ncloseAllVds(). Here's a patch for that. But...\n\n2. The reason I used closeAllVds() in 4eb21763 is because I didn't\nwant random CHECK_FOR_INTERRUPTS() calls to break code like this, from\nRelationCopyStorage():\n\n for (blkno = 0; blkno < nblocks; blkno++)\n {\n /* If we got a cancel signal during the copy of the data, quit */\n CHECK_FOR_INTERRUPTS();\n\n smgrread(src, forkNum, blkno, buf.data);\n\nPerhaps we could change RelationCopyStorage() to take Relation, and\nuse CreateFakeRelCacheEntry() in various places to satisfy that, and\nalso extend that mechanism to work with temporary tables. Here's an\ninitial sketch of that idea, to see what problems it crashes into...\nFake Relations are rather unpleasant though; I wonder if there is an\nSMgrRelationHandle concept that could make this all nicer. There is\npossibly also some cleanup missing to avoid an SMgrRelation that\npoints to a defunct fake Relation on non-local exit (?).", "msg_date": "Fri, 1 Apr 2022 19:03:40 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Fri, Apr 1, 2022 at 2:04 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> The v1-0003 patch introduced smgropen_cond() to avoid the problem of\n> IssuePendingWritebacks(), which does desynchronised smgropen() calls\n> and could open files after the barrier but just before they are\n> unlinked. Makes sense, but...\n>\n> 1. For that to actually work, we'd better call smgrcloseall() when\n> PROCSIGNAL_BARRIER_SMGRRELEASE is handled; currently it only calls\n> closeAllVds(). Here's a patch for that. But...\n\nWhat if we instead changed our approach to fixing\nIssuePendingWritebacks()? Design sketch:\n\n1. We add a new flag, bool smgr_zonk, to SmgrRelationData. 
Initially it's false.\n2. Receipt of PROCSIGNAL_BARRIER_SMGRRELEASE sets smgr_zonk to true.\n3. If you want, you can set it back to false any time you access the\nsmgr while holding a relation lock.\n4. IssuePendingWritebacks() uses smgropen_cond(), as today, but that\nfunction now refuses either to create a new smgr or to return one\nwhere smgr_zonk is true.\n\nI think this would be sufficient to guarantee that we never try to\nwrite back to a relfilenode if we haven't had a lock on it since the\nlast PROCSIGNAL_BARRIER_SMGRRELEASE, which I think is the invariant we\nneed here?\n\nMy thinking here is that it's a lot easier to get away with marking\nthe content of a data structure invalid than it is to get away with\ninvalidating pointers to that data structure. If we can tinker with\nthe design so that the SMgrRelationData doesn't actually go away but\njust gets a little bit more thoroughly invalidated, we could avoid\nhaving to audit the entire code base for code that keeps smgr handles\naround longer than would be possible with the design you demonstrate\nhere.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 1 Apr 2022 09:51:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Sat, Apr 2, 2022 at 2:52 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Apr 1, 2022 at 2:04 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > The v1-0003 patch introduced smgropen_cond() to avoid the problem of\n> > IssuePendingWritebacks(), which does desynchronised smgropen() calls\n> > and could open files after the barrier but just before they are\n> > unlinked. Makes sense, but...\n> >\n> > 1. For that to actually work, we'd better call smgrcloseall() when\n> > PROCSIGNAL_BARRIER_SMGRRELEASE is handled; currently it only calls\n> > closeAllVds(). Here's a patch for that. 
But...\n>\n> What if we instead changed our approach to fixing\n> IssuePendingWritebacks()? Design sketch:\n>\n> 1. We add a new flag, bool smgr_zonk, to SmgrRelationData. Initially it's false.\n> 2. Receipt of PROCSIGNAL_BARRIER_SMGRRELEASE sets smgr_zonk to true.\n> 3. If you want, you can set it back to false any time you access the\n> smgr while holding a relation lock.\n> 4. IssuePendingWritebacks() uses smgropen_cond(), as today, but that\n> function now refuses either to create a new smgr or to return one\n> where smgr_zonk is true.\n>\n> I think this would be sufficient to guarantee that we never try to\n> write back to a relfilenode if we haven't had a lock on it since the\n> last PROCSIGNAL_BARRIER_SMGRRELEASE, which I think is the invariant we\n> need here?\n>\n> My thinking here is that it's a lot easier to get away with marking\n> the content of a data structure invalid than it is to get away with\n> invalidating pointers to that data structure. If we can tinker with\n> the design so that the SMgrRelationData doesn't actually go away but\n> just gets a little bit more thoroughly invalidated, we could avoid\n> having to audit the entire code base for code that keeps smgr handles\n> around longer than would be possible with the design you demonstrate\n> here.\n\nAnother idea would be to call a new function DropPendingWritebacks(),\nand also tell all the SMgrRelation objects to close all their internal\nstate (ie the fds + per-segment objects) but not free the main\nSMgrRelationData object, and for good measure also invalidate the\nsmall amount of cached state (which hasn't been mentioned in this\nthread, but it kinda bothers me that that state currently survives, so\nit was one unspoken reason I liked the smgrcloseall() idea). 
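For concreteness, the four steps quoted above reduce to something like this toy model; every name here is invented, and it is a sketch of the idea rather than anything from a patch:

```c
#include <assert.h>
#include <stddef.h>

/* Toy model of the "zonk" scheme: a handle survives the barrier but is
 * marked suspect until someone touches it while holding a relation lock. */
typedef struct SmgrSketch
{
	unsigned	relfilenode;
	int			smgr_zonk;		/* true: handle may predate an unlink */
} SmgrSketch;

/* Step 2: PROCSIGNAL_BARRIER_SMGRRELEASE marks every handle suspect. */
static void
barrier_smgrrelease(SmgrSketch *reln)
{
	reln->smgr_zonk = 1;
}

/* Step 3: any access made while holding a relation lock revalidates. */
static void
accessed_with_lock(SmgrSketch *reln)
{
	reln->smgr_zonk = 0;
}

/* Step 4: the conditional open refuses to create a new handle or to
 * return a suspect one, so stale pending writebacks are just skipped. */
static SmgrSketch *
smgropen_cond_sketch(SmgrSketch *existing)
{
	if (existing == NULL || existing->smgr_zonk)
		return NULL;
	return existing;
}
```

The appeal of this variant is that the object itself never moves or disappears, so dangling pointers can't arise; only its contents are invalidated.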
Since\nDropPendingWritebacks() is in a rather different module, perhaps if we\nwere to do that we'd want to rename PROCSIGNAL_BARRIER_SMGRRELEASE to\nsomething else, because it's not really a SMGR-only thing anymore.\n\n\n", "msg_date": "Sat, 2 Apr 2022 10:03:05 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Sat, Apr 2, 2022 at 10:03 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Another idea would be to call a new function DropPendingWritebacks(),\n> and also tell all the SMgrRelation objects to close all their internal\n> state (ie the fds + per-segment objects) but not free the main\n> SMgrRelationData object, and for good measure also invalidate the\n> small amount of cached state (which hasn't been mentioned in this\n> thread, but it kinda bothers me that that state currently survives, so\n> it was one unspoken reason I liked the smgrcloseall() idea). 
Since\n> DropPendingWritebacks() is in a rather different module, perhaps if we\n> were to do that we'd want to rename PROCSIGNAL_BARRIER_SMGRRELEASE to\n> something else, because it's not really a SMGR-only thing anymore.\n\nHere's a sketch of that idea.", "msg_date": "Sat, 2 Apr 2022 18:04:09 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Fri, Apr 1, 2022 at 5:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Another idea would be to call a new function DropPendingWritebacks(),\n> and also tell all the SMgrRelation objects to close all their internal\n> state (ie the fds + per-segment objects) but not free the main\n> SMgrRelationData object, and for good measure also invalidate the\n> small amount of cached state (which hasn't been mentioned in this\n> thread, but it kinda bothers me that that state currently survives, so\n> it was one unspoken reason I liked the smgrcloseall() idea). Since\n> DropPendingWritebacks() is in a rather different module, perhaps if we\n> were to do that we'd want to rename PROCSIGNAL_BARRIER_SMGRRELEASE to\n> something else, because it's not really a SMGR-only thing anymore.\n\nI'm not sure that it really matters, but with the idea that I proposed\nit's possible to \"save\" a pending writeback, if we notice that we're\naccessing the relation with a proper lock after the barrier arrives\nand before we actually try to do the writeback. With this approach we\nthrow them out immediately, so they're just gone. 
I don't mind that\nbecause I don't think this will happen often enough to matter, and I\ndon't think it will be very expensive when it does, but it's something\nto think about.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 4 Apr 2022 10:18:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Tue, Apr 5, 2022 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Apr 1, 2022 at 5:03 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > Another idea would be to call a new function DropPendingWritebacks(),\n> > and also tell all the SMgrRelation objects to close all their internal\n> > state (ie the fds + per-segment objects) but not free the main\n> > SMgrRelationData object, and for good measure also invalidate the\n> > small amount of cached state (which hasn't been mentioned in this\n> > thread, but it kinda bothers me that that state currently survives, so\n> > it was one unspoken reason I liked the smgrcloseall() idea). Since\n> > DropPendingWritebacks() is in a rather different module, perhaps if we\n> > were to do that we'd want to rename PROCSIGNAL_BARRIER_SMGRRELEASE to\n> > something else, because it's not really a SMGR-only thing anymore.\n>\n> I'm not sure that it really matters, but with the idea that I proposed\n> it's possible to \"save\" a pending writeback, if we notice that we're\n> accessing the relation with a proper lock after the barrier arrives\n> and before we actually try to do the writeback. With this approach we\n> throw them out immediately, so they're just gone. 
I don't mind that\n> because I don't think this will happen often enough to matter, and I\n> don't think it will be very expensive when it does, but it's something\n> to think about.\n\nThe checkpointer never takes heavyweight locks, so the opportunity\nyou're describing can't arise.\n\nThis stuff happens entirely within the scope of a call to\nBufferSync(), which is called by CheckPointBuffer(). BufferSync()\ndoes WritebackContextInit() to set up the object that accumulates the\nwriteback requests, and then runs for a while performing a checkpoint\n(usually spread out over time), and then at the end does\nIssuePendingWritebacks(). A CFI() can be processed any time up until\nthe IssuePendingWritebacks(), but not during IssuePendingWritebacks().\nThat's the size of the gap we need to cover.\n\nI still like the patch I posted most recently. Note that it's not\nquite as I described above: there is no DropPendingWritebacks(),\nbecause that turned out to be impossible to implement in a way that\nthe barrier handler could call -- it needs access to the writeback\ncontext. 
Instead I had to expose a counter so that\nIssuePendingWritebacks() could detect whether any smgrreleaseall()\ncalls had happened while the writeback context was being filled up\nwith writeback requests, by comparing with the level recorded therein.\n\n\n", "msg_date": "Tue, 5 Apr 2022 10:24:27 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Tue, Apr 5, 2022 at 10:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Tue, Apr 5, 2022 at 2:18 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> > I'm not sure that it really matters, but with the idea that I proposed\n> > it's possible to \"save\" a pending writeback, if we notice that we're\n> > accessing the relation with a proper lock after the barrier arrives\n> > and before we actually try to do the writeback. With this approach we\n> > throw them out immediately, so they're just gone. I don't mind that\n> > because I don't think this will happen often enough to matter, and I\n> > don't think it will be very expensive when it does, but it's something\n> > to think about.\n>\n> The checkpointer never takes heavyweight locks, so the opportunity\n> you're describing can't arise.\n\n<thinks harder> Hmm, oh, you probably meant the buffer interlocking\nin SyncOneBuffer(). It's true that my most recent patch throws away\nmore requests than it could, by doing the level check at the end of\nthe loop over all buffers instead of adding some kind of\nDropPendingWritebacks() in the barrier handler. 
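The counter comparison described above can be shown as a toy model; the names are invented, and the real code works at the level of md.c segments and a WritebackContext rather than this simplified pair of structs:

```c
#include <assert.h>

/* Toy model: the barrier bumps a global release counter, and pending
 * writeback requests recorded under an older counter value are dropped
 * rather than issued, since the fds they were queued against may be
 * stale by the time the checkpoint finishes. */
static unsigned release_counter = 0;

typedef struct WritebackSketch
{
	unsigned	recorded_counter;	/* counter value when filling started */
	int			nrequests;			/* queued writeback requests */
} WritebackSketch;

static void
writeback_init(WritebackSketch *cxt)
{
	cxt->recorded_counter = release_counter;
	cxt->nrequests = 0;
}

static void
barrier_release_all(void)
{
	release_counter++;			/* an smgrreleaseall() happened */
}

/* Returns the number of requests actually issued. */
static int
issue_pending(WritebackSketch *cxt)
{
	if (cxt->recorded_counter != release_counter)
	{
		cxt->nrequests = 0;		/* stale: drop instead of touching fds */
		return 0;
	}
	return cxt->nrequests;
}
```

Dropping a batch of requests this way is harmless, since writeback hints are purely advisory; the cost is only some lost I/O scheduling opportunity.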
I guess I could find\na way to improve that, basically checking the level more often instead\nof at the end, but I don't know if it's worth it; we're still throwing\nout an arbitrary percentage of writeback requests.\n\n\n", "msg_date": "Tue, 5 Apr 2022 14:19:45 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Mon, Apr 4, 2022 at 10:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > The checkpointer never takes heavyweight locks, so the opportunity\n> > you're describing can't arise.\n>\n> <thinks harder> Hmm, oh, you probably meant the buffer interlocking\n> in SyncOneBuffer(). It's true that my most recent patch throws away\n> more requests than it could, by doing the level check at the end of\n> the loop over all buffers instead of adding some kind of\n> DropPendingWritebacks() in the barrier handler. I guess I could find\n> a way to improve that, basically checking the level more often instead\n> of at the end, but I don't know if it's worth it; we're still throwing\n> out an arbitrary percentage of writeback requests.\n\nDoesn't every backend have its own set of pending writebacks?\nBufferAlloc() calls\nScheduleBufferTagForWriteback(&BackendWritebackContext, ...)?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 5 Apr 2022 13:07:12 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Wed, Apr 6, 2022 at 5:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Mon, Apr 4, 2022 at 10:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > The checkpointer never takes heavyweight locks, so the opportunity\n> > > you're describing can't arise.\n> >\n> > <thinks harder> Hmm, oh, you probably meant the buffer 
interlocking\n> > in SyncOneBuffer(). It's true that my most recent patch throws away\n> > more requests than it could, by doing the level check at the end of\n> > the loop over all buffers instead of adding some kind of\n> > DropPendingWritebacks() in the barrier handler. I guess I could find\n> > a way to improve that, basically checking the level more often instead\n> > of at the end, but I don't know if it's worth it; we're still throwing\n> > out an arbitrary percentage of writeback requests.\n>\n> Doesn't every backend have its own set of pending writebacks?\n> BufferAlloc() calls\n> ScheduleBufferTagForWriteback(&BackendWritebackContext, ...)?\n\nYeah. Alright, for my exaggeration above, s/can't arise/probably\nwon't often arise/, since when regular backends do this it's for\nrandom dirty buffers, not necessarily stuff they hold a relation lock\non.\n\nI think you are right that if we find ourselves calling smgrwrite() on\nanother buffer for that relfilenode a bit later, then it's safe to\n\"unzonk\" the relation. In the worst case, IssuePendingWritebacks()\nmight finish up calling sync_file_range() on a file that we originally\nwrote to for generation 1 of that relfilenode, even though we now have\na file descriptor for generation 2 of that relfilenode that was\nreopened by a later smgrwrite(). That would be acceptable, because a\nbogus sync_file_range() call would be harmless. The key point here is\nthat we absolutely must avoid re-opening files without careful\ninterlocking, because that could lead to later *writes* for generation\n2 going to the defunct generation 1 file that we opened just as it was\nbeing unlinked.\n\n(For completeness: we know of another way for smgrwrite() to write to\nthe wrong generation of a recycled relfilenode, but that's hopefully\nvery unlikely and will hopefully be addressed by adding more bits and\nkilling off OID-wraparound[1]. 
In this thread we're concerned only\nwith these weird "explicit OID recycling" cases we're trying to fix\nwith PROCSIGNAL_BARRIER_SMGRRELEASE sledgehammer-based cache\ninvalidation.)\n\nMy new attempt, attached, is closer to what Andres proposed, except at\nthe level of md.c segment objects instead of level of SMgrRelation\nobjects. This avoids the dangling SMgrRelation pointer problem\ndiscussed earlier, and is also a little like your "zonk" idea, except\ninstead of a zonk flag, the SMgrRelation object is reset to a state\nwhere it knows nothing at all except its relfilenode. Any access\nthrough smgrXXX() functions *except* smgrwriteback() will rebuild the\nstate, just as you would clear the hypothetical zonk flag.\n\nSo, to summarise the new patch that I'm attaching to this email as 0001:\n\n1. When PROCSIGNAL_BARRIER_SMGRRELEASE is handled, we call\nsmgrreleaseall(), which calls smgrrelease() on all SMgrRelation\nobjects, telling them to close all their files.\n\n2. All SMgrRelation objects are still valid, and any pointers to them\nthat were acquired before a CFI() and then used in an smgrXXX() call\nafterwards will still work (the example I know of being\nRelationCopyStorage()[2]).\n\n3. mdwriteback() internally uses a new "behavior" flag\nEXTENSION_DONT_OPEN when getting its hands on the internal segment\nobject, which says that we should just give up immediately if we don't\nalready have the file open (concretely: if\nPROCSIGNAL_BARRIER_SMGRRELEASE came along between our recent\nsmgrwrite() call and the writeback machinery's smgrwriteback() call,\nit'll do nothing at all).\n\nSomeone might say that it's weird that smgrwriteback() has that\nbehaviour internally, and we might eventually want to make it more\nexplicit by adding a "behavior" argument to the function itself, so\nthat it's the caller that controls it. 
It didn't seem worth it for\nnow though; the whole thing is a hint to the operating system anyway.\n\nHowever it seems that I have something wrong, because CI is failing on\nWindows; I ran out of time for looking into that today, but wanted to\npost what I have so far since I know we have an open item or two to\nclose here ASAP...\n\nPatches 0002-0004 are Andres's, with minor tweaks:\n\nv3-0002-Fix-old-fd-issues-using-global-barriers-everywher.patch:\n\nIt seems useless to even keep the macro USE_BARRIER_SMGRRELEASE if\nwe're going to define it always, so I removed it. I guess it's useful\nto be able to disable that logic easily to see that the assertion in\nthe other patch fails, but you can do that by commenting out a line in\nProcessBarrierSmgrRelease().\n\nI'm still slightly confused about whether we need an *extra* global\nbarrier in DROP TABLESPACE, not just if destroy_tablespace_directory()\nfailed.\n\nv3-0003-WIP-test-for-file-reuse-dangers-around-database-a.patch\n\nWas missing:\n\n-EXTRA_INSTALL=contrib/test_decoding\n+EXTRA_INSTALL=contrib/test_decoding contrib/pg_prewarm\n\nv3-0004-WIP-AssertFileNotDeleted-fd.patch\n\n+ * XXX: Figure out which operating systems this works on.\n\nDo we want this to be wrapped in some kind of macros that vanishes\ninto thin air unless you enable it with USE_UNLINKED_FILE_CHECKING or\nsomething? Then we wouldn't care so much if it doesn't work on some\nunusual system. 
Bikeshed mode: I would prefer \"not unlinked\", since\nthat has a more specific meaning than \"not deleted\".\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BTgmobM5FN5x0u3tSpoNvk_TZPFCdbcHxsXCoY1ytn1dXROvg%40mail.gmail.com#1070c79256f2330ec52f063cdbe2add0\n[2] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BBfaZ2puQDYV6h2oSWO2QW21_JOXZpha65gWRcmGNCZA%40mail.gmail.com#5deeb3d8adae4daf0da1f09e509eef56", "msg_date": "Fri, 22 Apr 2022 19:37:30 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Fri, Apr 22, 2022 at 3:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> So, to summarise the new patch that I'm attaching to this email as 0001:\n\nThis all makes sense to me, and I didn't see anything obviously wrong\nlooking through the patch, either.\n\n> However it seems that I have something wrong, because CI is failing on\n> Windows; I ran out of time for looking into that today, but wanted to\n> post what I have so far since I know we have an open item or two to\n> close here ASAP...\n\n:-(\n\n> Patches 0002-0004 are Andres's, with minor tweaks:\n>\n> v3-0002-Fix-old-fd-issues-using-global-barriers-everywher.patch:\n>\n> I'm still slightly confused about whether we need an *extra* global\n> barrier in DROP TABLESPACE, not just if destroy_tablespace_directory()\n> failed.\n\nAndres wrote this code, but it looks correct to me. Currently, the\nreason why we use USE_BARRIER_SMGRRELEASE is that we want to make sure\nthat we don't fail to remove a tablespace because some other backend\nmight have files open. However, it might also be that no other backend\nhas files open, and in that case we don't need to do anything, so the\ncurrent placement of the call is correct. 
With this patch, though, we\nwant to make sure that no FD that is open before we start dropping\nfiles remains open after we drop files - and that means we need to\nforce files to be closed before we even try to remove files the first\ntime. It seems to me that Andres's patch (your 0002) doesn't add a\nsecond call - it moves the existing one earlier. And that seems right:\nno new files should be getting opened once we force them closed the\nfirst time, I hope anyway.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 3 May 2022 14:36:11 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Wed, May 4, 2022 at 6:36 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Fri, Apr 22, 2022 at 3:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > So, to summarise the new patch that I'm attaching to this email as 0001:\n>\n> This all makes sense to me, and I didn't see anything obviously wrong\n> looking through the patch, either.\n\nThanks.\n\n> > However it seems that I have something wrong, because CI is failing on\n> > Windows; I ran out of time for looking into that today, but wanted to\n> > post what I have so far since I know we have an open item or two to\n> > close here ASAP...\n>\n> :-(\n\nIt passes sometimes and fails sometimes. 
Here's the weird failure I\nneed to debug:\n\nhttps://api.cirrus-ci.com/v1/artifact/task/6033765456674816/log/src/test/recovery/tmp_check/log/regress_log_032_relfilenode_reuse\n\nRight at the end, it says:\n\nWarning: unable to close filehandle GEN26 properly: Bad file\ndescriptor during global destruction.\nWarning: unable to close filehandle GEN21 properly: Bad file\ndescriptor during global destruction.\nWarning: unable to close filehandle GEN6 properly: Bad file descriptor\nduring global destruction.\n\nI don't know what it means (Windows' fd->handle mapping table got\ncorrupted?) or even which program is printing it (you'd think maybe\nperl? but how could that be affected by anything I did in\npostgres.exe, but if it's not perl why is it always at the end like\nthat?). Hrmph.\n\n\n", "msg_date": "Wed, 4 May 2022 07:44:11 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Wed, May 4, 2022 at 7:44 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> It passes sometimes and fails sometimes. Here's the weird failure I\n> need to debug:\n>\n> https://api.cirrus-ci.com/v1/artifact/task/6033765456674816/log/src/test/recovery/tmp_check/log/regress_log_032_relfilenode_reuse\n>\n> Right at the end, it says:\n>\n> Warning: unable to close filehandle GEN26 properly: Bad file\n> descriptor during global destruction.\n> Warning: unable to close filehandle GEN21 properly: Bad file\n> descriptor during global destruction.\n> Warning: unable to close filehandle GEN6 properly: Bad file descriptor\n> during global destruction.\n>\n> I don't know what it means (Windows' fd->handle mapping table got\n> corrupted?) or even which program is printing it (you'd think maybe\n> perl? but how could that be affected by anything I did in\n> postgres.exe, but if it's not perl why is it always at the end like\n> that?). 
Hrmph.\n\nGot some off-list clues: that's just distracting Perl cleanup noise\nafter something else went wrong (thanks Robert), and now I'm testing a\ntheory from Andres that we're missing a barrier on the redo side when\nreplaying XLOG_DBASE_CREATE_FILE_COPY. More soon.\n\n\n", "msg_date": "Wed, 4 May 2022 08:53:32 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Wed, May 4, 2022 at 8:53 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Got some off-list clues: that's just distracting Perl cleanup noise\n> after something else went wrong (thanks Robert), and now I'm testing a\n> theory from Andres that we're missing a barrier on the redo side when\n> replaying XLOG_DBASE_CREATE_FILE_COPY. More soon.\n\nYeah, looks like that was the explanation. Presumably in older\nreleases, recovery can fail with EACCES here, and since commit\ne2f0f8ed we get ENOENT, because someone's got an unlinked file open,\nand ReadDir() can still see it. (I've wondered before if ReadDir()\nshould also hide zombie Windows directory entries, but that's kinda\nindependent and would only get us one step further, a later rmdir()\nwould still fail.) Adding the barrier fixes the problem. Assuming no\nobjections or CI failures show up, I'll consider pushing the first two\npatches tomorrow.", "msg_date": "Wed, 4 May 2022 14:23:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Wed, May 4, 2022 at 2:23 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Assuming no\n> objections or CI failures show up, I'll consider pushing the first two\n> patches tomorrow.\n\nDone. 
Time to watch the build farm.\n\nIt's possible that these changes will produce some blowback, now that\nwe're using PROCSIGNAL_BARRIER_SMGRRELEASE on common platforms.\nObviously the earlier work started down this path already, but that\nwas Windows-only, so it probably didn't get much attention from our\nmostly Unix crowd.\n\nFor example, if there are bugs in the procsignal barrier system, or if\nthere are places that don't process interrupts at all or promptly, we\nmight hear complaints about that. That bug surface includes, for\nexample, background workers created by extensions. An example on my\nmind is a place where we hold interrupts while cleaning up temporary\nfiles (= a loop of arbitrary size with filesystem ops that might hang\non slow storage), so a concurrent DROP TABLESPACE will have to wait,\nwhich is kinda strange because it would in fact be perfectly safe to\nprocess this particular \"interrupt\". In that case we really just\ndon't want the kinds of interrupts that perform non-local exits and\nprevent our cleanup work from running to completion, but we don't have\na way to say that. I think we'll probably also want to invent a way\nto report which backend is holding up progress, since without that\nit's practically impossible for an end user to understand why their\ncommand is hanging.\n\n\n", "msg_date": "Sat, 7 May 2022 16:52:31 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Sat, May 7, 2022 at 4:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Done. Time to watch the build farm.\n\nSo far \"grison\" failed. I think it's probably just that the test\nforgot to wait for replay of CREATE EXTENSION before using pg_prewarm\non the standby, hence \"ERROR: function pg_prewarm(oid) does not exist\nat character 12\". 
I'll wait for more animals to report before I try\nto fix that tomorrow.\n\n\n", "msg_date": "Sat, 7 May 2022 21:37:44 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Sat, May 7, 2022 at 4:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I think we'll probably also want to invent a way\n> to report which backend is holding up progress, since without that\n> it's practically impossible for an end user to understand why their\n> command is hanging.\n\nSimple idea: how about logging the PID of processes that block\nprogress for too long? In the attached, I arbitrarily picked 5\nseconds as the wait time between LOG messages. Also, DEBUG1 messages\nlet you see the processing speed on eg build farm animals. Thoughts?\n\nTo test this, kill -STOP a random backend, and then try an ALTER\nDATABASE SET TABLESPACE in another backend. Example output:\n\nDEBUG: waiting for all backends to process ProcSignalBarrier generation 1\nLOG: still waiting for pid 1651417 to accept ProcSignalBarrier\nSTATEMENT: alter database mydb set tablespace ts1;\nLOG: still waiting for pid 1651417 to accept ProcSignalBarrier\nSTATEMENT: alter database mydb set tablespace ts1;\nLOG: still waiting for pid 1651417 to accept ProcSignalBarrier\nSTATEMENT: alter database mydb set tablespace ts1;\nLOG: still waiting for pid 1651417 to accept ProcSignalBarrier\nSTATEMENT: alter database mydb set tablespace ts1;\n\n... then kill -CONT:\n\nDEBUG: finished waiting for all backends to process ProcSignalBarrier\ngeneration 1\n\nAnother thought is that it might be nice to be able to test with a\ndummy PSB that doesn't actually do anything. You could use it to see\nhow fast your system processes it, while doing various other things,\nand to find/debug problems in other code that fails to handle\ninterrupts correctly. Here's an attempt at that. 
I guess it could go\ninto a src/test/modules/something instead of core, but on the other\nhand the PSB itself has to be in core anyway, so maybe not. Thoughts?\n No documentation yet, just seeing if people think this is worth\nhaving... better names/ideas welcome.\n\nTo test this, just SELECT pg_test_procsignal_barrier().", "msg_date": "Mon, 9 May 2022 11:30:06 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Sun, May 8, 2022 at 7:30 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Simple idea: how about logging the PID of processes that block\n> progress for too long? In the attached, I arbitrarily picked 5\n> seconds as the wait time between LOG messages. Also, DEBUG1 messages\n> let you see the processing speed on eg build farm animals. Thoughts?\n>\n> To test this, kill -STOP a random backend, and then try an ALTER\n> DATABASE SET TABLESPACE in another backend. Example output:\n>\n> DEBUG: waiting for all backends to process ProcSignalBarrier generation 1\n> LOG: still waiting for pid 1651417 to accept ProcSignalBarrier\n> STATEMENT: alter database mydb set tablespace ts1;\n> LOG: still waiting for pid 1651417 to accept ProcSignalBarrier\n> STATEMENT: alter database mydb set tablespace ts1;\n> LOG: still waiting for pid 1651417 to accept ProcSignalBarrier\n> STATEMENT: alter database mydb set tablespace ts1;\n> LOG: still waiting for pid 1651417 to accept ProcSignalBarrier\n> STATEMENT: alter database mydb set tablespace ts1;\n\nThis is a very good idea.\n\n> Another thought is that it might be nice to be able to test with a\n> dummy PSB that doesn't actually do anything. You could use it to see\n> how fast your system processes it, while doing various other things,\n> and to find/debug problems in other code that fails to handle\n> interrupts correctly. Here's an attempt at that. 
I guess it could go\n> into a src/test/modules/something instead of core, but on the other\n> hand the PSB itself has to be in core anyway, so maybe not. Thoughts?\n> No documentation yet, just seeing if people think this is worth\n> having... better names/ideas welcome.\n\nI did this at one point, but I wasn't convinced it was going to find\nenough bugs to be worth committing. It's OK if you're convinced of\nthings that didn't convince me, though.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 9 May 2022 09:06:59 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Tue, May 10, 2022 at 1:07 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> On Sun, May 8, 2022 at 7:30 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > LOG: still waiting for pid 1651417 to accept ProcSignalBarrier\n> > STATEMENT: alter database mydb set tablespace ts1;\n\n> This is a very good idea.\n\nOK, I pushed this, after making the ereport call look a bit more like\nothers that talk about backend PIDs.\n\n> > Another thought is that it might be nice to be able to test with a\n> > dummy PSB that doesn't actually do anything. You could use it to see\n> > how fast your system processes it, while doing various other things,\n> > and to find/debug problems in other code that fails to handle\n> > interrupts correctly. Here's an attempt at that. I guess it could go\n> > into a src/test/modules/something instead of core, but on the other\n> > hand the PSB itself has to be in core anyway, so maybe not. Thoughts?\n> > No documentation yet, just seeing if people think this is worth\n> > having... better names/ideas welcome.\n>\n> I did this at one point, but I wasn't convinced it was going to find\n> enough bugs to be worth committing. 
It's OK if you're convinced of\n> things that didn't convince me, though.\n\nI'll leave this here for now.\n\n\n", "msg_date": "Wed, 11 May 2022 18:04:07 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Sat, May 7, 2022 at 9:37 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> So far \"grison\" failed. I think it's probably just that the test\n> forgot to wait for replay of CREATE EXTENSION before using pg_prewarm\n> on the standby, hence \"ERROR: function pg_prewarm(oid) does not exist\n> at character 12\". I'll wait for more animals to report before I try\n> to fix that tomorrow.\n\nThat one was addressed by commit a22652e. Unfortunately two new kinds\nof failure showed up:\n\nChipmunk, another little early model Raspberry Pi:\n\nerror running SQL: 'psql:<stdin>:1: ERROR: source database\n\"conflict_db_template\" is being accessed by other users\nDETAIL: There is 1 other session using the database.'\nwhile running 'psql -XAtq -d port=57394 host=/tmp/luyJopPv9L\ndbname='postgres' -f - -v ON_ERROR_STOP=1' with sql 'CREATE DATABASE\nconflict_db TEMPLATE conflict_db_template OID = 50001;' at\n/home/pgbfarm/buildroot/HEAD/pgsql.build/src/test/recovery/../../../src/test/perl/PostgreSQL/Test/Cluster.pm\nline 1836.\n\nI think that might imply that when you do two\n$node_primary->safe_psql() calls in a row, the backend running the\nsecond one might still see the ghost of the first backend in the\ndatabase, even though the first psql process has exited. Hmmm.\n\nSkink, the valgrind animal, also failed. After first 8 tests, it times out:\n\n[07:18:26.237](14.827s) ok 8 - standby: post move contents as expected\n[07:18:42.877](16.641s) Bail out! 
aborting wait: program timed out\nstream contents: >><<\npattern searched for: (?^m:warmed_buffers)\n\nThat'd be in the perl routine cause_eviction(), while waiting for a\npg_prewarm query to return, but I'm not yet sure what's going on (I\nhave to suspect it's in the perl scripting rather than anything in the\nserver, given the location of the failure). Will try to repro\nlocally.\n\n\n", "msg_date": "Thu, 12 May 2022 15:13:12 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes" }, { "msg_contents": "On Thu, May 12, 2022 at 3:13 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Chipmunk, another little early model Raspberry Pi:\n>\n> error running SQL: 'psql:<stdin>:1: ERROR: source database\n> "conflict_db_template" is being accessed by other users\n> DETAIL: There is 1 other session using the database.'\n\nOh, for this one I think it may just be that the autovacuum worker\nwith PID 23757 took longer to exit than the 5 seconds\nCountOtherDBBackends() is prepared to wait, after sending it SIGTERM.\n\n\n", "msg_date": "Thu, 12 May 2022 16:57:57 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes" }, { "msg_contents": "On Thu, May 12, 2022 at 4:57 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, May 12, 2022 at 3:13 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > error running SQL: 'psql:<stdin>:1: ERROR: source database\n> > "conflict_db_template" is being accessed by other users\n> > DETAIL: There is 1 other session using the database.'\n>\n> Oh, for this one I think it may just be that the autovacuum worker\n> with PID 23757 took longer to exit than the 5 seconds\n> CountOtherDBBackends() is prepared to wait, after sending it SIGTERM.\n\nIn this test, autovacuum_naptime is set to 1s 
(per Andres, AV was\nimplicated when he first saw the problem with pg_upgrade, hence desire\nto crank it up). That's not necessary: commenting out the active line\nin ProcessBarrierSmgrRelease() shows that the tests reliably reproduce\ndata corruption without it. Let's just take that out.\n\nAs for skink failing, the timeout was hard coded 300s for the whole\ntest, but apparently that wasn't enough under valgrind. Let's use the\nstandard PostgreSQL::Test::Utils::timeout_default (180s usually), but\nreset it for each query we send.\n\nSee attached.", "msg_date": "Fri, 13 May 2022 14:19:58 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Thu, May 12, 2022 at 10:20 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> As for skink failing, the timeout was hard coded 300s for the whole\n> test, but apparently that wasn't enough under valgrind. Let's use the\n> standard PostgreSQL::Test::Utils::timeout_default (180s usually), but\n> reset it for each query we send.\n\n@@ -202,6 +198,9 @@ sub send_query_and_wait\n my ($psql, $query, $untl) = @_;\n my $ret;\n\n+ $psql_timeout->reset();\n+ $psql_timeout->start();\n+\n # send query\n $$psql{stdin} .= $query;\n $$psql{stdin} .= \"\\n\";\n\nThis seems fine, but I think you should add a non-trivial comment about it.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 13 May 2022 11:33:08 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes Reply-To:" }, { "msg_contents": "On Sat, May 14, 2022 at 3:33 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> This seems fine, but I think you should add a non-trivial comment about it.\n\nThanks for looking. Done, and pushed. 
Let's see if 180s per query is enough...\n\n\n", "msg_date": "Sat, 14 May 2022 12:07:05 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: wrong fds used for refilenodes after pg_upgrade relfilenode\n changes" } ]
[ { "msg_contents": "Hi,\n\nI noticed that during ParallelWorkerMain()->RestoreGUCState() we perform\ncatalog accesses with GUCs being set to wrong values, specifically values that\naren't what's in postgresql.conf, and that haven't been set in the leader.\n\nThe reason for this is that\n\n1) RestoreGUCState() is called in a transaction, which signals to several GUC\nhooks that they can do catalog access\n\n2) RestoreGUCState() resets all non-skipped GUC values to their \"boot_val\"\nfirst and then in another pass sets all GUCs that were non-default in the\nleader.\n\n\nThis e.g. can lead to parallel worker startup using the wrong fsync or\nwal_sync_method value when catalog accesses trigger WAL writes. Luckily\nPGC_POSTMASTER GUCs are skipped, otherwise this'd be kinda bad. But it still\ndoesn't seem good.\n\nDo we really need to reset GUCs to their boot value? Particularly for\nPGC_SIGHUP variables I don't think we can do that if we want to do\nRestoreGUCState() in a transaction. It'd obviously involve a bit more code,\nbut it seems entirely possible to just get rid of the reset phase?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Feb 2022 15:14:26 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "catalog access with reset GUCs during parallel worker startup" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Do we really need to reset GUCs to their boot value? Particularly for\n> PGC_SIGHUP variables I don't think we can do that if we want to do\n> RestoreGUCState() in a transaction. It'd obviously involve a bit more code,\n> but it seems entirely possible to just get rid of the reset phase?\n\nIn an EXEC_BACKEND build, they're going to start out with those\nvalues anyway. 
If you're not happy about the consequences,\n\"skipping the reset\" is not the way to improve matters.\n\nCan we arrange to absorb the leader's values before starting the\nworker's transaction, instead of inside it?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Feb 2022 18:56:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: catalog access with reset GUCs during parallel worker startup" }, { "msg_contents": "Hi,\n\nOn 2022-02-09 18:56:41 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Do we really need to reset GUCs to their boot value? Particularly for\n> > PGC_SIGHUP variables I don't think we can do that if we want to do\n> > RestoreGUCState() in a transaction. It'd obviously involve a bit more code,\n> > but it seems entirely possible to just get rid of the reset phase?\n> \n> In an EXEC_BACKEND build, they're going to start out with those\n> values anyway. If you're not happy about the consequences,\n> \"skipping the reset\" is not the way to improve matters.\n\nPostmaster's GUC state will already be loaded via read_nondefault_variables(),\nmuch earlier in startup. 
Well before bgworkers get control and before\ntransaction environment is up.\n\nSetting a watchpoint on enableFsync, in a parallel worker where postmaster\nruns with enableFsync=off, shows the following:\n\nOld value = true\nNew value = false\nset_config_option (name=0x7f42bd9ec110 \"fsync\", value=0x7f42bd9ec000 \"false\", context=PGC_POSTMASTER, source=PGC_S_ARGV, action=GUC_ACTION_SET, \n changeVal=true, elevel=21, is_reload=true) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:7760\n7760\t\t\t\t\t\tset_extra_field(&conf->gen, &conf->gen.extra,\n(rr) bt\n#0 set_config_option (name=0x7f42bd9ec110 \"fsync\", value=0x7f42bd9ec000 \"false\", context=PGC_POSTMASTER, source=PGC_S_ARGV, action=GUC_ACTION_SET, \n changeVal=true, elevel=21, is_reload=true) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:7760\n#1 0x00007f42bcd5f178 in read_nondefault_variables () at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:10635\n#2 0x00007f42bca9ccd1 in SubPostmasterMain (argc=3, argv=0x7f42bd9c93c0) at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:5086\n#3 0x00007f42bc98bcef in main (argc=3, argv=0x7f42bd9c93c0) at /home/andres/src/postgresql/src/backend/main/main.c:194\n\nOld value = false\nNew value = true\nInitializeOneGUCOption (gconf=0x7f42bd0a32c0 <ConfigureNamesBool+4320>) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:5900\n5900\t\t\t\t\tconf->gen.extra = conf->reset_extra = extra;\n(rr) bt\n#0 InitializeOneGUCOption (gconf=0x7f42bd0a32c0 <ConfigureNamesBool+4320>) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:5900\n#1 0x00007f42bcd5ff95 in RestoreGUCState (gucstate=0x7f42bc4f1d00) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:11135\n#2 0x00007f42bc720873 in ParallelWorkerMain (main_arg=545066472) at /home/andres/src/postgresql/src/backend/access/transam/parallel.c:1408\n#3 0x00007f42bca884e5 in StartBackgroundWorker () at 
/home/andres/src/postgresql/src/backend/postmaster/bgworker.c:858\n#4 0x00007f42bca9cfd1 in SubPostmasterMain (argc=3, argv=0x7f42bd9c93c0) at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:5230\n#5 0x00007f42bc98bcef in main (argc=3, argv=0x7f42bd9c93c0) at /home/andres/src/postgresql/src/backend/main/main.c:194\n\n\nOld value = true\nNew value = false\nset_config_option (name=0x7f42bc4f1e04 \"fsync\", value=0x7f42bc4f1e0a \"false\", context=PGC_POSTMASTER, source=PGC_S_ARGV, action=GUC_ACTION_SET, \n changeVal=true, elevel=21, is_reload=true) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:7760\n7760\t\t\t\t\t\tset_extra_field(&conf->gen, &conf->gen.extra,\n(rr) bt\n#0 set_config_option (name=0x7f42bc4f1e04 \"fsync\", value=0x7f42bc4f1e0a \"false\", context=PGC_POSTMASTER, source=PGC_S_ARGV, action=GUC_ACTION_SET, \n changeVal=true, elevel=21, is_reload=true) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:7760\n#1 0x00007f42bcd600fc in RestoreGUCState (gucstate=0x7f42bc4f1d00) at /home/andres/src/postgresql/src/backend/utils/misc/guc.c:11172\n#2 0x00007f42bc720873 in ParallelWorkerMain (main_arg=545066472) at /home/andres/src/postgresql/src/backend/access/transam/parallel.c:1408\n#3 0x00007f42bca884e5 in StartBackgroundWorker () at /home/andres/src/postgresql/src/backend/postmaster/bgworker.c:858\n#4 0x00007f42bca9cfd1 in SubPostmasterMain (argc=3, argv=0x7f42bd9c93c0) at /home/andres/src/postgresql/src/backend/postmaster/postmaster.c:5230\n#5 0x00007f42bc98bcef in main (argc=3, argv=0x7f42bd9c93c0) at /home/andres/src/postgresql/src/backend/main/main.c:194\n\n\n\n\n> Can we arrange to absorb the leader's values before starting the\n> worker's transaction, instead of inside it?\n\nI assume Robert did it this way for a reason? 
It'd not be surprising if there\nwere a bunch of extensions assuming it's safe to do catalog accesses if\nIsUnderPostmaster or such :(\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Feb 2022 16:33:36 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: catalog access with reset GUCs during parallel worker startup" }, { "msg_contents": "On Wed, Feb 9, 2022 at 7:33 PM Andres Freund <andres@anarazel.de> wrote:\n> > Can we arrange to absorb the leader's values before starting the\n> > worker's transaction, instead of inside it?\n>\n> I assume Robert did it this way for a reason? It'd not be surprising if there\n> were a bunch of extensions assuming it's safe to do catalog accesses if\n> IsUnderPostmaster or such :(\n\nI remember trying that, and if I remember correctly, it broke with\ncore GUCs, without any need for extensions in the picture. I don't\nremember the details too well, unfortunately, but I think it had to do\nwith the behavior of some of the check and/or assign hooks.\n\nIt's probably time to try it again, because (a) maybe things are\ndifferent now, or (b) maybe I did it wrong, and in any event (c) I\ncan't really remember why it didn't work and we probably want to know\nthat.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 9 Feb 2022 21:47:09 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: catalog access with reset GUCs during parallel worker startup" }, { "msg_contents": "Hi,\n\nI think we need to do something about this, because afaics this has data\ncorruption risks, albeit with a narrow window.\n\nE.g. consider what happens if the value of wal_sync_method is reset to the\nboot value and one of the catalog accesses in GUC check hooks causes WAL\nwrites (e.g. due to pruning, buffer replacement). 
One backend doing\nfdatasync() and the rest O_DSYNC or vice versa won't lead to reliable\ndurability.\n\n\nOn 2022-02-09 21:47:09 -0500, Robert Haas wrote:\n> I remember trying that, and if I remember correctly, it broke with\n> core GUCs, without any need for extensions in the picture. I don't\n> remember the details too well, unfortunately, but I think it had to do\n> with the behavior of some of the check and/or assign hooks.\n>\n> It's probably time to try it again, because (a) maybe things are\n> different now, or (b) maybe I did it wrong, and in any event (c) I\n> can't really remember why it didn't work and we probably want to know\n> that.\n\nThe tests fail:\n+ERROR: invalid value for parameter \"session_authorization\": \"andres\"\n+CONTEXT: while setting parameter \"session_authorization\" to \"andres\"\n+parallel worker\n+while scanning relation \"public.pvactst\"\n\nbut that's easily worked around. In fact, there's already code to do so, it\njust is lacking awareness of session_authorization:\n\nstatic bool\ncan_skip_gucvar(struct config_generic *gconf)\n...\n\t * Role must be handled specially because its current value can be an\n\t * invalid value (for instance, if someone dropped the role since we set\n\t * it). 
So if we tried to serialize it normally, we might get a failure.\n\t * We skip it here, and use another mechanism to ensure the worker has the\n\t * right value.\n\nDon't those reasons apply just as well to session_authorization as they do to\nrole?\n\nIf I skip session_authorization in can_skip_gucvar() and move\nRestoreGUCState() out of the transaction, all tests pass.\n\nUnsurprisingly we get bogus results for parallel accesses to\nsession_authorization:\n\npostgres[3312622][1]=> SELECT current_setting('session_authorization');\n┌─────────────────┐\n│ current_setting │\n├─────────────────┤\n│ frak │\n└─────────────────┘\n(1 row)\n\npostgres[3312622][1]=> SET force_parallel_mode = on;\nSET\npostgres[3312622][1]=> SELECT current_setting('session_authorization');\n┌─────────────────┐\n│ current_setting │\n├─────────────────┤\n│ andres │\n└─────────────────┘\n(1 row)\n\nbut that could be remedied just already done for role.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 22 Feb 2022 18:51:19 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: catalog access with reset GUCs during parallel worker startup" }, { "msg_contents": "On Wed, 23 Feb 2022 at 15:51, Andres Freund <andres@anarazel.de> wrote:\n> The tests fail:\n> +ERROR: invalid value for parameter \"session_authorization\": \"andres\"\n> +CONTEXT: while setting parameter \"session_authorization\" to \"andres\"\n> +parallel worker\n> +while scanning relation \"public.pvactst\"\n>\n> but that's easily worked around. In fact, there's already code to do so, it\n> just is lacking awareness of session_authorization:\n\nI was experimenting with the attached patch which just has\ncheck_session_authorization() return true when not in transaction and\nwhen InitializingParallelWorker is true. It looks like there are\nalready some other check functions looking at the value of\nInitializingParallelWorker (e.g. 
check_transaction_read_only(),\ncheck_role())\n\nWith that done, I do not see any other errors when calling\nRestoreGUCState() while not in transaction. The parallel worker seems\nto pick up the session_authorization GUC correctly using the same\ndebug_parallel_query test that you tried out.\n\nOn looking at the other check functions, I noted down the following as\nones that may behave differently when called while not in transaction:\n\ncheck_transaction_read_only --- checks InitializingParallelWorker\ncheck_transaction_deferrable -- checks\nIsSubTransaction/FirstSnapshotSet, returns true if not\ncheck_client_encoding -- Checks IsTransactionState(). Behavioural change\ncheck_default_table_access_method -- accesses catalog if in xact.\nBehavioural change\ncheck_default_tablespace -- accesses catalog if in xact. Behavioural change\ncheck_temp_tablespaces -- accesses catalog if in xact. Behavioural change\ncheck_role -- Returns false if not in xact. Handled specially in\nRestoreGUCState()\ncheck_session_authorization -- Modify by patch to return true when not\nin xact and parallel worker starting\ncheck_timezone -- calls interval_in. 
Can't see any catalog lookup there.\ncheck_default_text_search_config -- Checks in IsTransactionState()\ncheck_transaction_isolation -- Check in IsTransactionState()\n\nIt seems that RestoreLibraryState() was called while in transaction so\nthat extension GUCs could be initialized before restoring the GUCs.\nSo there could be repercussions for extensions if we did this.\n\nDoes anyone see any reasons why we can't fix this with the attached?\n\nDavid", "msg_date": "Thu, 7 Dec 2023 19:29:51 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: catalog access with reset GUCs during parallel worker startup" }, { "msg_contents": "On Thu, 10 Feb 2022 at 13:33, Andres Freund <andres@anarazel.de> wrote:\n> Postmaster's GUC state will already be loaded via read_nondefault_variables(),\n> much earlier in startup. Well before bgworkers get control and before\n> transaction environment is up.\n>\n> Setting a watchpoint on enableFsync, in a parallel worker where postmaster\n> runs with enableFsync=off, shows the following:\n\nThe following comment in RestoreGUCState() starting with \"since the\nleader's\" seems to indicate this step is required.\n\n> * but already have their default values. Thus, this ends up being the\n> * same test that SerializeGUCState uses, even though the sets of\n> * variables involved may well be different since the leader's set of\n> * variables-not-at-default-values can differ from the set that are\n> * not-default in this freshly started worker.\n\nI'm just not quite clear on which cases this could be. Looking at\nInitializeOneGUCOption(), the newval comes from conf->boot_val, which,\nfor built-in GUCs just comes from the ConfigureNames* table in\nguc_tables.c. 
Perhaps there could be variances in values that are\npassed during the call to DefineCustom*Variable between the leader and\nthe worker in some extension's code.\n\nDavid\n\n\n", "msg_date": "Thu, 7 Dec 2023 19:33:13 +1300", "msg_from": "David Rowley <dgrowleyml@gmail.com>", "msg_from_op": false, "msg_subject": "Re: catalog access with reset GUCs during parallel worker startup" } ]
[ { "msg_contents": "Hi hackers.\n\nThere appears to be some unreachable code in the relcache function\nGetRelationPublicationActions.\n\nWhen the 'relation->rd_pubactions' is not NULL then the function\nunconditionally returns early (near the top).\n\nTherefore, the following code (near the bottom) seems to have no\npurpose because IIUC the 'rd_pubactions' can never be not NULL here.\n\nif (relation->rd_pubactions)\n{\npfree(relation->rd_pubactions);\nrelation->rd_pubactions = NULL;\n}\n\nThe latest Code Coverage report [1] also shows that this code is never executed.\n\n 5601 3556 : if (relation->rd_pubactions)\n 5602 : {\n 5603 0 : pfree(relation->rd_pubactions);\n 5604 0 : relation->rd_pubactions = NULL;\n 5605 : }\n\n~~\n\nPSA a patch to remove this unreachable code.\n\n------\n[1] https://coverage.postgresql.org/src/backend/utils/cache/relcache.c.gcov.html\n\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 10 Feb 2022 11:23:26 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "GetRelationPublicationActions. - Remove unreachable code" }, { "msg_contents": "Peter Smith <smithpb2250@gmail.com> writes:\n> There appears to be some unreachable code in the relcache function\n> GetRelationPublicationActions.\n> When the 'relation->rd_pubactions' is not NULL then the function\n> unconditionally returns early (near the top).\n> Therefore, the following code (near the bottom) seems to have no\n> purpose because IIUC the 'rd_pubactions' can never be not NULL here.\n\nI'm not sure it's as unreachable as all that. What you're not\naccounting for is the possibility of recursive cache loading,\nie something down inside the catalog fetches we have to do here\ncould already have made the field valid.\n\nAdmittedly, that's probably not very likely given expected usage\npatterns (to wit, that system catalogs shouldn't really have\npublication actions). 
But there are other places in relcache.c where\na coding pattern similar to this is demonstrably necessary to avoid\nmemory leakage. I'd rather leave the code as-is, maybe add a comment\nalong the lines of \"rd_pubactions is probably still NULL, but just in\ncase something already made it valid, avoid leaking memory\".\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Wed, 09 Feb 2022 19:34:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GetRelationPublicationActions. - Remove unreachable code" }, { "msg_contents": "On Thu, Feb 10, 2022 at 11:34 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Peter Smith <smithpb2250@gmail.com> writes:\n> > There appears to be some unreachable code in the relcache function\n> > GetRelationPublicationActions.\n> > When the 'relation->rd_pubactions' is not NULL then the function\n> > unconditionally returns early (near the top).\n> > Therefore, the following code (near the bottom) seems to have no\n> > purpose because IIUC the 'rd_pubactions' can never be not NULL here.\n>\n> I'm not sure it's as unreachable as all that. What you're not\n> accounting for is the possibility of recursive cache loading,\n> ie something down inside the catalog fetches we have to do here\n> could already have made the field valid.\n>\n> Admittedly, that's probably not very likely given expected usage\n> patterns (to wit, that system catalogs shouldn't really have\n> publication actions). But there are other places in relcache.c where\n> a coding pattern similar to this is demonstrably necessary to avoid\n> memory leakage. I'd rather leave the code as-is, maybe add a comment\n> along the lines of \"rd_pubactions is probably still NULL, but just in\n> case something already made it valid, avoid leaking memory\".\n\nOK. 
Thanks for your explanation and advice.\n\nPSA another patch to just add a comment as suggested.\n\n------\nKind Regards,\nPeter Smith.\nFujitsu Australia", "msg_date": "Thu, 10 Feb 2022 12:37:30 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": true, "msg_subject": "Re: GetRelationPublicationActions. - Remove unreachable code" } ]
[ { "msg_contents": "Hi,\n\nI just noticed somewhat new code in LockBufferForCleanup(), added in\n\ncommit 39b03690b529935a3c33024ee68f08e2d347cf4f\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: 2021-01-13 22:59:17 +0900\n \n Log long wait time on recovery conflict when it's resolved.\n\ncommit 64638ccba3a659d8b8a3a4bc5b47307685a64a8a\nAuthor: Fujii Masao <fujii@postgresql.org>\nDate: 2020-03-30 17:35:03 +0900\n \n Report waiting via PS while recovery is waiting for buffer pin in hot standby.\n\n\nAfter those commit LockBufferForCleanup() contains code doing memory\nallocations, elogs. That doesn't strike me as a good idea:\n\nPreviously the code looked somewhat safe to use in critical section like\nblocks (although whether it'd be good idea to use in one is a different\nquestion), but not after. Even if not used in a critical section, adding new\nfailure conditions to low-level code that's holding LWLocks etc. doesn't seem\nlike a good idea.\n\nSecondly, the way it's done seems like a significant laying violation. Before\nthe HS related code in LockBufferForCleanup() was encapsulated in a few calls\nto routines dedicated to dealing with that. Now it's all over\nLockBufferForCleanup().\n\nIt also just increases the overhead of LockBuffer(). Adding palloc(), copying\nof process title, GetCurrentTimestamp() to a low level routine like this isn't\nfree - even if it's mostly in the contended paths.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Wed, 9 Feb 2022 18:22:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Logging in LockBufferForCleanup()" }, { "msg_contents": "On Wed, Feb 09, 2022 at 06:22:05PM -0800, Andres Freund wrote:\n> Previously the code looked somewhat safe to use in critical section like\n> blocks (although whether it'd be good idea to use in one is a different\n> question), but not after. 
Even if not used in a critical section, adding new\n> failure conditions to low-level code that's holding LWLocks etc. doesn't seem\n> like a good idea.\n\nThis is an interesting point. Would the addition of one or more\ncritical sections in this area impact its performance in any way?\n\n> It also just increases the overhead of LockBuffer(). Adding palloc(), copying\n> of process title, GetCurrentTimestamp() to a low level routine like this isn't\n> free - even if it's mostly in the contended paths.\n\nGood point.\n--\nMichael", "msg_date": "Thu, 10 Feb 2022 11:43:32 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Logging in LockBufferForCleanup()" }, { "msg_contents": "On Thu, Feb 10, 2022 at 11:22 AM Andres Freund <andres@anarazel.de> wrote:\n>\n> Hi,\n>\n> I just noticed somewhat new code in LockBufferForCleanup(), added in\n>\n> commit 39b03690b529935a3c33024ee68f08e2d347cf4f\n> Author: Fujii Masao <fujii@postgresql.org>\n> Date: 2021-01-13 22:59:17 +0900\n>\n> Log long wait time on recovery conflict when it's resolved.\n>\n> commit 64638ccba3a659d8b8a3a4bc5b47307685a64a8a\n> Author: Fujii Masao <fujii@postgresql.org>\n> Date: 2020-03-30 17:35:03 +0900\n>\n> Report waiting via PS while recovery is waiting for buffer pin in hot standby.\n>\n>\n> After those commit LockBufferForCleanup() contains code doing memory\n> allocations, elogs. That doesn't strike me as a good idea:\n>\n> Previously the code looked somewhat safe to use in critical section like\n> blocks (although whether it'd be good idea to use in one is a different\n> question), but not after. Even if not used in a critical section, adding new\n> failure conditions to low-level code that's holding LWLocks etc. doesn't seem\n> like a good idea.\n>\n> Secondly, the way it's done seems like a significant laying violation. 
Before\n> the HS related code in LockBufferForCleanup() was encapsulated in a few calls\n> to routines dedicated to dealing with that. Now it's all over\n> LockBufferForCleanup().\n>\n> It also just increases the overhead of LockBuffer(). Adding palloc(), copying\n> of process title, GetCurrentTimestamp() to a low level routine like this isn't\n> free - even if it's mostly in the contended paths.\n\nWhile I understand that these are theoretically valid concerns, it’s\nunclear to me how much the current code leads to bad effects in\npractice. There are other low-level codes/paths that call palloc()\nwhile holding LWLocks, and I’m not sure that the overheads result in\nvisible negative performance impact particularly because the calling\nto LockBufferForCleanup() is likely to accompany wait in the first\nplace. BTW I think calling to LockBufferForCleanup() in a critical\nsection is a bad idea for sure since it becomes uninterruptible.\n\nRegards,\n\n--\nMasahiko Sawada\nEDB: https://www.enterprisedb.com/\n\n\n", "msg_date": "Thu, 10 Feb 2022 20:28:57 +0900", "msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Logging in LockBufferForCleanup()" }, { "msg_contents": "\n\nOn 2022/02/10 20:28, Masahiko Sawada wrote:\n> While I understand that these are theoretically valid concerns, it’s\n> unclear to me how much the current code leads to bad effects in\n> practice. There are other low-level codes/paths that call palloc()\n> while holding LWLocks, and I’m not sure that the overheads result in\n> visible negative performance impact particularly because the calling\n> to LockBufferForCleanup() is likely to accompany wait in the first\n> place.\n\nI'm also not sure that. palloc() for PS display, copy of process title, GetCurrentTimestamp() and ereport() of recovery conflict happen when we fail to acquire the lock. In this case we also call ResolveRecoveryConflictWithBufferPin() and wait to be signaled there. 
So compared to the wait time or overhead caused in ResolveRecoveryConflictWithBufferPin(), ISTM that the overhead caused by them would be relatively small. Also ereport() of recovery conflict can happen only when log_recovery_conflict_waits parameter is enabled and the deadlock timeout happens during the lock wait.\n\n\n> BTW I think calling to LockBufferForCleanup() in a critical\n> section is a bad idea for sure since it becomes uninterruptible.\n\nYes. And as far as I tested, currently LockBufferForCleanup() is not called in a critical section. Right?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Fri, 11 Feb 2022 00:47:50 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: Logging in LockBufferForCleanup()" } ]
[ { "msg_contents": "Hi,\n\nI think we can use the return value of XLogRecGetBlockTag instead of\nan explicit XLogRecHasBlockRef check, as XLogRecGetBlockTag would\nanyway return false in such a case. Also, in some places the macros\nXLogRecHasBlockRef and XLogRecHasBlockImage aren't being used; instead,\nthe checks are performed directly.\n\nAlthough the above change doesn't add any value or improve\nperformance, it makes better use of the existing code and removes some\nunnecessary/duplicate code. Attaching a small patch.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 10 Feb 2022 10:23:19 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Use return value of XLogRecGetBlockTag instead of explicit block ref\n checks and also use XLogRecHasBlockRef/Image macros instead of explicit\n checks" } ]
[ { "msg_contents": "Hi,\n\nIt seems like 'IN' isn't used for description of a function input\nparameter in the docs unlike 'OUT'. AFAICS this is consistent across\nthe docs except the function pg_freespace. Attaching a tiny patch to\nremove 'IN' there.\n\nThoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 10 Feb 2022 16:56:09 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Remove 'IN' from pg_freespace docs for consistency" } ]
[ { "msg_contents": "Hi,\n\nGiven that CheckpointWriteDelay() is really a hot code path i.e. gets\ncalled for every dirty buffer written to disk, here's an opportunity\nfor us to improve it by avoiding some function calls.\n\nFirstly, the AmCheckpointerProcess() macro check can be placed outside\nof the CheckpointWriteDelay() which avoids function calls (dirty\nbuffers written to disk * one function call cost) for a\nnon-checkpointer process(standalone backend) performing checkpoint.\n\nSecondly, remove the function ImmediateCheckpointRequested() and read\nthe ckpt_flags from the CheckpointerShmem with volatile qualifier.\nThis saves some LOC and cost = dirty buffers written to disk * one\nfunction call cost. The ImmediateCheckpointRequested() really does\nnothing great, it just reads from the shared memory without lock and\nchecks whether there's any immediate checkpoint request pending behind\nthe current one.\n\nAttaching a patch with the above changes. Please have a look at it.\n\nI did a small experiment[1] with a use case [2] on my dev system where\nI measured the total time spent in CheckpointWriteDelay() with and\nwithout patch, to my surprise, I didn't see much difference. It may be\nmy experiment is wrong, my dev box doesn't show much diff and others\nmay or may not notice the difference. Despite this, the patch proposed\nstill helps IMO as it saves a few LOC (16) and I'm sure it will also\nsave some time for standalone backend checkpoints.\n\nOther hackers may not agree with me on the readability (IMO, the patch\ndoesn't make it unreadable) or the diff that it creates with the\nprevious versions and so on. 
I'd rather argue that\nCheckpointWriteDelay() is really a hot code path in production\nenvironments and the patch proposed has some benefits.\n\nThoughts?\n\n[1] see \"write delay\" at the end of \"checkpoint complete\" message:\nHEAD:\n2022-02-08 05:56:45.551 UTC [651784] LOG: checkpoint starting: time\n2022-02-08 05:57:39.154 UTC [651784] LOG: checkpoint complete: wrote\n14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\nwrite=53.461 s, sync=0.027 s, total=53.604 s; sync files=22,\nlongest=0.016 s, average=0.002 s; distance=438865 kB, estimate=438865\nkB; write delay=53104.194 ms\n\n2022-02-08 05:59:24.173 UTC [652589] LOG: checkpoint starting: time\n2022-02-08 06:00:18.166 UTC [652589] LOG: checkpoint complete: wrote\n14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\nwrite=53.848 s, sync=0.030 s, total=53.993 s; sync files=22,\nlongest=0.017 s, average=0.002 s; distance=438865 kB, estimate=438865\nkB; write delay=53603.159 ms\n\nPATCHED:\n2022-02-08 06:07:26.286 UTC [662667] LOG: checkpoint starting: time\n2022-02-08 06:08:20.152 UTC [662667] LOG: checkpoint complete: wrote\n14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\nwrite=53.732 s, sync=0.026 s, total=53.867 s; sync files=22,\nlongest=0.016 s, average=0.002 s; distance=438865 kB, estimate=438865\nkB; write delay=53399.582 ms\n\n2022-02-08 06:10:17.554 UTC [663393] LOG: checkpoint starting: time\n2022-02-08 06:11:11.163 UTC [663393] LOG: checkpoint complete: wrote\n14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\nwrite=53.488 s, sync=0.023 s, total=53.610 s; sync files=22,\nlongest=0.018 s, average=0.002 s; distance=438865 kB, estimate=438865\nkB; write delay=53099.114 ms\n\n[2]\ncheckpoint_timeout = 60s\ncreate table t1(a1 int, b1 int);\n/* inserted 7mn rows */\ninsert into t1 select i, i*2 from generate_series(1, 7000000) i;\n\nRegards,\nBharath Rupireddy.", "msg_date": "Thu, 10 Feb 2022 17:31:55 +0530", "msg_from": "Bharath 
Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Refactor CheckpointWriteDelay()" }, { "msg_contents": "> Given that CheckpointWriteDelay() is really a hot code path i.e. gets\n> called for every dirty buffer written to disk, here's an opportunity\n> for us to improve it by avoiding some function calls.\n\nI have reviewed the patch and I agree that the patch improves the code.\n\n> Firstly, the AmCheckpointerProcess() macro check can be placed outside\n> of the CheckpointWriteDelay() which avoids function calls (dirty\n> buffers written to disk * one function call cost) for a\n> non-checkpointer process(standalone backend) performing checkpoint.\n\nIt definitely improves the code as it reduces one function call per\ndirty buffer write during non-checkpointer process even though we may\nnot see huge performance improvement during testing. But I am not in\nfavour of this change as many existing functions use this kind of\ncheck in the initial part of the function. The other reason is, if we\ncall this function in multiple places in future then we may end up\nhaving multiple checks or we may have to revert back the code.\n\n> Secondly, remove the function ImmediateCheckpointRequested() and read\n> the ckpt_flags from the CheckpointerShmem with volatile qualifier.\n> This saves some LOC and cost = dirty buffers written to disk * one\n> function call cost. 
The ImmediateCheckpointRequested() really does\n> nothing great, it just reads from the shared memory without lock and\n> checks whether there's any immediate checkpoint request pending behind\n> the current one.\n\n+1 for this change.\n\n> - * Perform the usual duties and take a nap, unless we're behind schedule,\n> - * in which case we just try to catch up as quickly as possible.\n> + * Perform the usual duties and take a nap, unless we're behind schedule or\n> + * an immediate checkpoint request is pending, in which case we just try to\n> + * catch up as quickly as possible.\n> + *\n> + * We don't need to acquire the ckpt_lck while reading ckpt_flags from\n> + * checkpointer shared memory, because we're only looking at a single flag\n> + * bit.\n\n+1 for the additional information. I feel the information related to\nthe pending shutdown request should also be added.\n\nThanks & Regards,\nNitin Jadhav\n\nOn Thu, Feb 10, 2022 at 5:32 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> Given that CheckpointWriteDelay() is really a hot code path i.e. gets\n> called for every dirty buffer written to disk, here's an opportunity\n> for us to improve it by avoiding some function calls.\n>\n> Firstly, the AmCheckpointerProcess() macro check can be placed outside\n> of the CheckpointWriteDelay() which avoids function calls (dirty\n> buffers written to disk * one function call cost) for a\n> non-checkpointer process(standalone backend) performing checkpoint.\n>\n> Secondly, remove the function ImmediateCheckpointRequested() and read\n> the ckpt_flags from the CheckpointerShmem with volatile qualifier.\n> This saves some LOC and cost = dirty buffers written to disk * one\n> function call cost. 
The ImmediateCheckpointRequested() really does\n> nothing great, it just reads from the shared memory without lock and\n> checks whether there's any immediate checkpoint request pending behind\n> the current one.\n>\n> Attaching a patch with the above changes. Please have a look at it.\n>\n> I did a small experiment[1] with a use case [2] on my dev system where\n> I measured the total time spent in CheckpointWriteDelay() with and\n> without patch, to my surprise, I didn't see much difference. It may be\n> my experiment is wrong, my dev box doesn't show much diff and others\n> may or may not notice the difference. Despite this, the patch proposed\n> still helps IMO as it saves a few LOC (16) and I'm sure it will also\n> save some time for standalone backend checkpoints.\n>\n> Other hackers may not agree with me on the readability (IMO, the patch\n> doesn't make it unreadable) or the diff that it creates with the\n> previous versions and so on. I'd rather argue that\n> CheckpointWriteDelay() is really a hot code path in production\n> environments and the patch proposed has some benefits.\n>\n> Thoughts?\n>\n> [1] see \"write delay\" at the end of \"checkpoint complete\" message:\n> HEAD:\n> 2022-02-08 05:56:45.551 UTC [651784] LOG: checkpoint starting: time\n> 2022-02-08 05:57:39.154 UTC [651784] LOG: checkpoint complete: wrote\n> 14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\n> write=53.461 s, sync=0.027 s, total=53.604 s; sync files=22,\n> longest=0.016 s, average=0.002 s; distance=438865 kB, estimate=438865\n> kB; write delay=53104.194 ms\n>\n> 2022-02-08 05:59:24.173 UTC [652589] LOG: checkpoint starting: time\n> 2022-02-08 06:00:18.166 UTC [652589] LOG: checkpoint complete: wrote\n> 14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\n> write=53.848 s, sync=0.030 s, total=53.993 s; sync files=22,\n> longest=0.017 s, average=0.002 s; distance=438865 kB, estimate=438865\n> kB; write delay=53603.159 ms\n>\n> PATCHED:\n> 
2022-02-08 06:07:26.286 UTC [662667] LOG: checkpoint starting: time\n> 2022-02-08 06:08:20.152 UTC [662667] LOG: checkpoint complete: wrote\n> 14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\n> write=53.732 s, sync=0.026 s, total=53.867 s; sync files=22,\n> longest=0.016 s, average=0.002 s; distance=438865 kB, estimate=438865\n> kB; write delay=53399.582 ms\n>\n> 2022-02-08 06:10:17.554 UTC [663393] LOG: checkpoint starting: time\n> 2022-02-08 06:11:11.163 UTC [663393] LOG: checkpoint complete: wrote\n> 14740 buffers (90.0%); 0 WAL file(s) added, 0 removed, 27 recycled;\n> write=53.488 s, sync=0.023 s, total=53.610 s; sync files=22,\n> longest=0.018 s, average=0.002 s; distance=438865 kB, estimate=438865\n> kB; write delay=53099.114 ms\n>\n> [2]\n> checkpoint_timeout = 60s\n> create table t1(a1 int, b1 int);\n> /* inserted 7mn rows */\n> insert into t1 select i, i*2 from generate_series(1, 7000000) i;\n>\n> Regards,\n> Bharath Rupireddy.\n\n\n", "msg_date": "Tue, 15 Feb 2022 23:23:57 +0530", "msg_from": "Nitin Jadhav <nitinjadhavpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Refactor CheckpointWriteDelay()" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 17:31:55 +0530, Bharath Rupireddy wrote:\n> Secondly, remove the function ImmediateCheckpointRequested() and read\n> the ckpt_flags from the CheckpointerShmem with volatile qualifier.\n> This saves some LOC and cost = dirty buffers written to disk * one\n> function call cost.\n\nThe compiler can just inline ImmediateCheckpointRequested(). We don't gain\nanything by inlining it ourselves.\n\n\n> I did a small experiment[1] with a use case [2] on my dev system where\n> I measured the total time spent in CheckpointWriteDelay() with and\n> without patch, to my surprise, I didn't see much difference. It may be\n> my experiment is wrong, my dev box doesn't show much diff and others\n> may or may not notice the difference. 
Despite this, the patch proposed\n> still helps IMO as it saves a few LOC (16) and I'm sure it will also\n> save some time for standalone backend checkpoints.\n\nI am not surprised that you're not seeing a benefit. Compared to the cost of\nwriting out a buffer, the cost of the function call here is negligible.\n\nI don't think this is an improvement at all. Imagine what happens if we add a\nsecond callsite to CheckpointWriteDelay() or\nImmediateCheckpointRequested()...\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 19 Feb 2022 13:50:59 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Refactor CheckpointWriteDelay()" } ]
[ { "msg_contents": "The provided link\n https://www.postgresql.org/docs/release/\n\nleads to\n https://www.postgresql.org/docs/release/14.2/\n\nwhich gives 'Not Found' for me (Netherlands)\n\n\nAt least one person on IRC reports it 'works' for them but it seems \nthere still something wrong..\n\n\nErik Rijkers\n\n\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:35:25 +0100", "msg_from": "Erik Rijkers <er@xs4all.nl>", "msg_from_op": true, "msg_subject": "faulty link" }, { "msg_contents": "It works well here (Czech republic).\n\nčt 10. 2. 2022 v 15:35 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n>\n> The provided link\n> https://www.postgresql.org/docs/release/\n>\n> leads to\n> https://www.postgresql.org/docs/release/14.2/\n>\n> which gives 'Not Found' for me (Netherlands)\n>\n>\n> At least one person on IRC reports it 'works' for them but it seems\n> there still something wrong..\n>\n>\n> Erik Rijkers\n>\n>\n>\n>\n>\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:48:29 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: faulty link" }, { "msg_contents": "\n> leads to\n> https://www.postgresql.org/docs/release/14.2/\n> \n> which gives 'Not Found' for me (Netherlands)\n\nSame here: Not Found. (Germany)\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:52:47 +0100", "msg_from": "walther@technowledgy.de", "msg_from_op": false, "msg_subject": "Re: faulty link" }, { "msg_contents": ">The provided link\n>   https://www.postgresql.org/docs/release/\n\n>leads to\n>   https://www.postgresql.org/docs/release/14.2/\n\n>which gives 'Not Found' for me (Netherlands)\n\nWorks fine for me in Germany\n\nRegards\nDaniel\n\n", "msg_date": "Thu, 10 Feb 2022 14:53:34 +0000", "msg_from": "\"Daniel Westermann (DWE)\" <daniel.westermann@dbi-services.com>", "msg_from_op": false, "msg_subject": "Re: faulty link" }, { "msg_contents": "čt 10. 2. 
2022 v 15:35 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n>\n> The provided link\n> https://www.postgresql.org/docs/release/\n>\n> leads to\n> https://www.postgresql.org/docs/release/14.2/\n>\n> which gives 'Not Found' for me (Netherlands)\n\nThinking about that again, the 14.2 release just happened. Could it be\njust a matter of propagating new release info to mirrors?\n\n>\n> At least one person on IRC reports it 'works' for them but it seems\n> there still something wrong..\n>\n>\n> Erik Rijkers\n>\n>\n>\n>\n>\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:54:42 +0100", "msg_from": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com>", "msg_from_op": false, "msg_subject": "Re: faulty link" }, { "msg_contents": "=?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> čt 10. 2. 2022 v 15:35 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n>> The provided link\n>> https://www.postgresql.org/docs/release/\n>> leads to\n>> https://www.postgresql.org/docs/release/14.2/\n>> which gives 'Not Found' for me (Netherlands)\n\n> Thinking about that again, the 14.2 release just happened. Could it be\n> just a matter of propagating new release info to mirrors?\n\nThe link works for me, too (USA). Stale cache seems like a reasonable\nexplanation for the OP's problem --- maybe clearing browser cache\nwould help?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 10 Feb 2022 10:23:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: faulty link" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n>> čt 10. 2. 2022 v 15:35 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n>>> The provided link\n>>> https://www.postgresql.org/docs/release/\n>>> leads to\n>>> https://www.postgresql.org/docs/release/14.2/\n>>> which gives 'Not Found' for me (Netherlands)\n>\n>> Thinking about that again, the 14.2 release just happened. 
Could it be\n>> just a matter of propagating new release info to mirrors?\n>\n> The link works for me, too (USA). Stale cache seems like a reasonable\n> explanation for the OP's problem --- maybe clearing browser cache\n> would help?\n\nI'm getting a 404 as well from London. After trying multiple times with\ncurl I did get one 200 response, but it's mostly 404s.\n\nIt looks like some of the mirrors have it, but not all:\n\n$ for h in $(dig +short -tA www.mirrors.postgresql.org); do echo -n \"$h: \"; curl -i -k -s -HHost:www.postgresql.org \"https://$h/docs/release/14.2/\" | grep ^HTTP; done\n 72.32.157.230: HTTP/2 200\n 87.238.57.232: HTTP/2 404\n 217.196.149.50: HTTP/2 200\n\n$ for h in $(dig +short -tAAAA www.mirrors.postgresql.org); do echo -n \"$h: \"; curl -i -k -s -HHost:www.postgresql.org \"https://[$h]/docs/release/14.2/\" | grep ^HTTP; done\n 2001:4800:3e1:1::230: HTTP/2 200\n 2a02:c0:301:0:ffff::32: HTTP/2 404\n 2a02:16a8:dc51::50: HTTP/2 200\n \n> \t\t\tregards, tom lane\n\n- ilmari\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:53:02 +0000", "msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>", "msg_from_op": false, "msg_subject": "Re: faulty link" }, { "msg_contents": "On Thu, 10 Feb 2022 at 15:53, Dagfinn Ilmari Mannsåker <ilmari@ilmari.org>\nwrote:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n> > =?UTF-8?B?Sm9zZWYgxaBpbcOhbmVr?= <josef.simanek@gmail.com> writes:\n> >> čt 10. 2. 2022 v 15:35 odesílatel Erik Rijkers <er@xs4all.nl> napsal:\n> >>> The provided link\n> >>> https://www.postgresql.org/docs/release/\n> >>> leads to\n> >>> https://www.postgresql.org/docs/release/14.2/\n> >>> which gives 'Not Found' for me (Netherlands)\n> >\n> >> Thinking about that again, the 14.2 release just happened. Could it be\n> >> just a matter of propagating new release info to mirrors?\n> >\n> > The link works for me, too (USA). 
Stale cache seems like a reasonable\n> > explanation for the OP's problem --- maybe clearing browser cache\n> > would help?\n>\n> I'm getting a 404 as well from London. After trying multiple times with\n> curl I did get one 200 response, but it's mostly 404s.\n>\n> It looks like some of the mirrors have it, but not all:\n>\n> $ for h in $(dig +short -tA www.mirrors.postgresql.org); do echo -n \"$h:\n> \"; curl -i -k -s -HHost:www.postgresql.org \"https://$h/docs/release/14.2/\"\n> | grep ^HTTP; done\n> 72.32.157.230: HTTP/2 200\n> 87.238.57.232: HTTP/2 404\n> 217.196.149.50: HTTP/2 200\n>\n> $ for h in $(dig +short -tAAAA www.mirrors.postgresql.org); do echo -n\n> \"$h: \"; curl -i -k -s -HHost:www.postgresql.org \"https://[$h]/docs/release/14.2/\"\n> | grep ^HTTP; done\n> 2001:4800:3e1:1::230: HTTP/2 200\n> 2a02:c0:301:0:ffff::32: HTTP/2 404\n> 2a02:16a8:dc51::50: HTTP/2 200\n>\n\nDespite the name, they're not actually mirrors. They're varnish caches. By\nthe looks of it one of them cached a 404 (probably someone tried to access\nthe new page before it really did exist). I've purged /docs/release now,\nand everything is returning 200.\n\n-- \nDave Page\nBlog: https://pgsnake.blogspot.com\nTwitter: @pgsnake\n\nEDB: https://www.enterprisedb.com\n", "msg_date": "Thu, 10 Feb 2022 16:07:18 +0000", "msg_from": "Dave Page <dpage@pgadmin.org>", "msg_from_op": false, "msg_subject": "Re: faulty link" } ]
[ { "msg_contents": "Hi hackers,\n\nPostgres first records transaction state in CLOG, then removes \ntransaction from procarray and finally releases locks.\nBut it can happen that transaction is marked as committed in CLOG, \nXMAX_COMMITTED bit is set in modified tuple but\nTransactionIdIsInProgress still returns true. As a result \"t_xmin %u is \nuncommitted in tuple (%u,%u) to be updated in table\"\nerror is reported.\n\nTransactionIdIsInProgress first checks for cached XID status:\n\n     /*\n      * We may have just checked the status of this transaction, so if it is\n      * already known to be completed, we can fall out without any access to\n      * shared memory.\n      */\n     if (TransactionIdIsKnownCompleted(xid))\n     {\n         xc_by_known_xact_inc();\n         return false;\n     }\n\nThis status cache is updated by TransactionLogFetch.\nSo once transaction status is fetched from CLOG, subsequent invocation \nof TransactionIdIsKnownCompleted will return true\neven if transaction is not removed from procarray.\n\nThe following stack illustrates how it can happen:\n\n#6  0x000055ab7758bbaf in TransactionLogFetch (transactionId=4482)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/transam/transam.c:88\n#7  TransactionLogFetch (transactionId=4482)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/transam/transam.c:52\n#8  0x000055ab7758bc7d in TransactionIdDidAbort \n(transactionId=transactionId@entry=4482)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/transam/transam.c:186\n#9  0x000055ab77812a43 in TransactionIdIsInProgress (xid=4482)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/storage/ipc/procarray.c:1587\n#10 0x000055ab7753de79 in HeapTupleSatisfiesVacuumHorizon \n(htup=htup@entry=0x7ffe562cec50, buffer=buffer@entry=1817,\n     
dead_after=dead_after@entry=0x7ffe562ceb14)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/heap/heapam_visibility.c:1281\n#11 0x000055ab7753e2c5 in HeapTupleSatisfiesNonVacuumable (buffer=1817, \nsnapshot=0x7ffe562cec70, htup=0x7ffe562cec50)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/heap/heapam_visibility.c:1449\n#12 HeapTupleSatisfiesVisibility (tup=tup@entry=0x7ffe562cec50, \nsnapshot=snapshot@entry=0x7ffe562cec70, buffer=buffer@entry=1817)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/heap/heapam_visibility.c:1804\n#13 0x000055ab7752f3b5 in heap_hot_search_buffer \n(tid=tid@entry=0x7ffe562cec2a, relation=relation@entry=0x7fa57fbf17c8, \nbuffer=buffer@entry=1817,\n     snapshot=snapshot@entry=0x7ffe562cec70, \nheapTuple=heapTuple@entry=0x7ffe562cec50, all_dead=all_dead@entry=0x0, \nfirst_call=true)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/heap/heapam.c:1802\n#14 0x000055ab775370d8 in heap_index_delete_tuples (rel=0x7fa57fbf17c8, \ndelstate=<optimized out>)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/heap/heapam.c:7480\n#15 0x000055ab7755767a in table_index_delete_tuples \n(delstate=0x7ffe562d0160, rel=0x2)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/include/access/tableam.h:1327\n#16 _bt_delitems_delete_check (rel=rel@entry=0x7fa57fbf4518, \nbuf=buf@entry=1916, heapRel=heapRel@entry=0x7fa57fbf17c8,\n     delstate=delstate@entry=0x7ffe562d0160)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/nbtree/nbtpage.c:1541\n#17 0x000055ab7754e475 in _bt_bottomupdel_pass \n(rel=rel@entry=0x7fa57fbf4518, buf=buf@entry=1916, \nheapRel=heapRel@entry=0x7fa57fbf17c8, newitemsz=20)\n     at 
\n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/nbtree/nbtdedup.c:406\n#18 0x000055ab7755010a in _bt_delete_or_dedup_one_page \n(rel=0x7fa57fbf4518, heapRel=0x7fa57fbf17c8, insertstate=0x7ffe562d0640,\n--Type <RET> for more, q to quit, c to continue without paging--\n     simpleonly=<optimized out>, checkingunique=<optimized out>, \nuniquedup=<optimized out>, indexUnchanged=true)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/nbtree/nbtinsert.c:2763\n#19 0x000055ab775548c3 in _bt_findinsertloc (heapRel=0x7fa57fbf17c8, \nstack=0x55ab7a1cde08, indexUnchanged=true, checkingunique=true,\n     insertstate=0x7ffe562d0640, rel=0x7fa57fbf4518)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/nbtree/nbtinsert.c:904\n#20 _bt_doinsert (rel=rel@entry=0x7fa57fbf4518, \nitup=itup@entry=0x55ab7a1cde30, \ncheckUnique=checkUnique@entry=UNIQUE_CHECK_YES,\n     indexUnchanged=indexUnchanged@entry=true, \nheapRel=heapRel@entry=0x7fa57fbf17c8)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/nbtree/nbtinsert.c:255\n#21 0x000055ab7755a931 in btinsert (rel=0x7fa57fbf4518, \nvalues=<optimized out>, isnull=<optimized out>, ht_ctid=0x55ab7a1cd068, \nheapRel=0x7fa57fbf17c8,\n     checkUnique=UNIQUE_CHECK_YES, indexUnchanged=true, \nindexInfo=0x55ab7a1cdbd0)\n     at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/access/nbtree/nbtree.c:199\n#22 0x000055ab7769ae59 in ExecInsertIndexTuples \n(resultRelInfo=resultRelInfo@entry=0x55ab7a1c88b8, \nslot=slot@entry=0x55ab7a1cd038,\n     estate=estate@entry=0x55ab7a1c8430, update=update@entry=true, \nnoDupErr=noDupErr@entry=false, specConflict=specConflict@entry=0x0,\n     arbiterIndexes=0x0) at \n/home/knizhnik/zenith.main/tmp_install/build/../../vendor/postgres/src/backend/executor/execIndexing.c:415\n\n\nheap_lock_tuple will not 
wait for transaction completion if \nHEAP_XMAX_COMMITTED is set in tuple and returns TM_Updated:\n\n         /*\n          * Time to sleep on the other transaction/multixact, if necessary.\n          *\n          * If the other transaction is an update/delete that's already\n          * committed, then sleeping cannot possibly do any good: if we're\n          * required to sleep, get out to raise an error instead.\n...\n\n\nThe problem is not so easy to reproduce. I got it when performing commits \nof transactions with a large number of subtransactions with several \nsynchronous replicas.\nBut it is possible to artificially introduce delay between \nRecordTransactionCommit() and ProcArrayEndTransaction():\n\ndiff --git a/src/backend/access/transam/xact.c \nb/src/backend/access/transam/xact.c\nindex ca6f6d57d3..d095fd13b0 100644\n--- a/src/backend/access/transam/xact.c\n+++ b/src/backend/access/transam/xact.c\n@@ -2187,6 +2187,7 @@ CommitTransaction(void)\n                  * durably commit.\n                  */\n                 latestXid = RecordTransactionCommit();\n+               pg_usleep(random() % 100000);\n         }\n         else\n         {\n\n\n\n\nThen the problem can be reproduced with the following pgbench script:\n\nupdate.sql:\n\\set aid random(1, 1000)\nselect do_update(:aid,100)\n\n\nupdate function:\ncreate or replace function do_update(id integer, level integer) returns \nvoid as $$\nbegin\n     begin\n         if level > 0 then\n             perform do_update(id, level-1);\n         else\n             update pgbench_accounts SET abalance = abalance + 1 WHERE \naid = id;\n         end if;\n     exception WHEN OTHERS THEN\n         raise notice '% %', SQLERRM, SQLSTATE;\n     end;\nend; $$ language plpgsql;\n\n\npgbench -i postgres\npgbench  -f update.sql -c 100 -T 100 -P 1 postgres\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 19:46:43 +0300", "msg_from": "Konstantin Knizhnik <knizhnik@garret.ru>", "msg_from_op": true, "msg_subject": "Race condition in 
TransactionIdIsInProgress" }, { "msg_contents": "\n\n> On 10 Feb 2022, at 21:46, Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n> As a result \"t_xmin %u is uncommitted in tuple (%u,%u) to be updated in table\"\n> error is reported.\n\nWow, cool, that seems to be a solution to one more mysterious corruption thread - “reporting TID/table with corruption error\" [0]. Thank you!\n\nNow it’s obvious that this is not real data corruption. Maybe let’s remove the corruption error code from the error? I had been literally woken at night by this code a few times in January.\n\nAnd do you have a plan for how to fix the actual issue?\n\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/202108191637.oqyzrdtnheir%40alvherre.pgsql\n\n\n\n", "msg_date": "Thu, 10 Feb 2022 22:33:56 +0500", "msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On 2022-Feb-10, Andrey Borodin wrote:\n\n> > On 10 Feb 2022, at 21:46, Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n> > As a result \"t_xmin %u is uncommitted in tuple (%u,%u) to be updated in table\"\n> > error is reported.\n> \n> Wow, cool, that seem to be a solution to one more mysterious\n> corruption thread - “reporting TID/table with corruption error\" [0].\n> Thank you!\n\nOoh, great find. I'll have a look at this soon. Thanks for CCing me.\n\n-- \nÁlvaro Herrera Valdivia, Chile — https://www.EnterpriseDB.com/\n\"I love the Postgres community. It's all about doing things _properly_. 
:-)\"\n(David Garamond)\n\n\n", "msg_date": "Thu, 10 Feb 2022 16:33:15 -0300", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On Thu, Feb 10, 2022 at 11:46 AM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n> Postgres first records state transaction in CLOG, then removes\n> transaction from procarray and finally releases locks.\n> But it can happen that transaction is marked as committed in CLOG,\n> XMAX_COMMITTED bit is set in modified tuple but\n> TransactionIdIsInProgress still returns true. As a result \"t_xmin %u is\n> uncommitted in tuple (%u,%u) to be updated in table\"\n> error is reported.\n\nThis is exactly why the HeapTupleSatisfiesFOO() functions all test\nTransactionIdIsInProgress() first and TransactionIdDidCommit() only if\nit returns false. I don't think it's really legal to test\nTransactionIdDidCommit() without checking TransactionIdIsInProgress()\nfirst. And I bet that's exactly what this code is doing...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:21:27 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 15:21:27 -0500, Robert Haas wrote:\n> On Thu, Feb 10, 2022 at 11:46 AM Konstantin Knizhnik <knizhnik@garret.ru> wrote:\n> > Postgres first records state transaction in CLOG, then removes\n> > transaction from procarray and finally releases locks.\n> > But it can happen that transaction is marked as committed in CLOG,\n> > XMAX_COMMITTED bit is set in modified tuple but\n> > TransactionIdIsInProgress still returns true. 
As a result \"t_xmin %u is\n> > uncommitted in tuple (%u,%u) to be updated in table\"\n> > error is reported.\n>\n> This is exactly why the HeapTupleSatisfiesFOO() functions all test\n> TransactionIdIsInProgress() first and TransactionIdDidCommit() only if\n> it returns false. I don't think it's really legal to test\n> TransactionIdDidCommit() without checking TransactionIdIsInProgress()\n> first. And I bet that's exactly what this code is doing...\n\nThe problem is that the TransactionIdDidAbort() call is coming from within\nTransactionIdIsInProgress().\n\nIf I understand correctly, the problem is that the xid cache stuff doesn't\ncorrectly work for subtransactions and breaks TransactionIdIsInProgress().\n\nFor the problem to occur you need an xid that is for a subtransaction that\nsuboverflowed and that is 'clog committed' but not 'procarray committed'.\n\nIf that xid is tested twice with TransactionIdIsInProgress(), you will get a\nbogus result on the second call within the same session.\n\nThe first TransactionIdIsInProgress() will go to\n\t/*\n\t * Step 4: have to check pg_subtrans.\n\t *\n\t * At this point, we know it's either a subtransaction of one of the Xids\n\t * in xids[], or it's not running. If it's an already-failed\n\t * subtransaction, we want to say \"not running\" even though its parent may\n\t * still be running. So first, check pg_xact to see if it's been aborted.\n\t */\n\txc_slow_answer_inc();\n\n\tif (TransactionIdDidAbort(xid))\n\t\treturn false;\n\nand fetch that xid. The TransactionLogFetch() will set cachedFetchXid, because\nthe xid's status is TRANSACTION_STATUS_COMMITTED. 
Because the transaction\ndidn't abort, TransactionIdIsInProgress() will go on to the following loop,\nsee the toplevel xid is in progress in the procarray and return true.\n\nThe second call to TransactionIdIsInProgress() will see that the fetched\ntransaction matches the cached transactionid and return false.\n\n\t/*\n\t * We may have just checked the status of this transaction, so if it is\n\t * already known to be completed, we can fall out without any access to\n\t * shared memory.\n\t */\n\tif (TransactionIdIsKnownCompleted(xid))\n\t{\n\t\txc_by_known_xact_inc();\n\t\treturn false;\n\t}\n\nWithout any improper uses of TransactionIdDidAbort()/TransactionIdDidCommit()\nbefore TransactionIdIsInProgress(). And other sessions will still return\ntrue. And even *this* session can return true again, if another transaction is\nchecked that ends up caching another transaction status in cachedFetchXid.\n\n\nIt looks to me like the TransactionIdIsKnownCompleted() path in\nTransactionIdIsInProgress() is just entirely broken. A prior\nTransactionLogFetch() can't be allowed to make subsequent\nTransactionIdIsInProgress() calls return wrong values.\n\n\nIf I understand the problem correctly, a SELECT\npg_xact_status(clog-committed-pgproc-in-progres) can make\nTransactionIdIsInProgress() return wrong values too. It's not just\nTransactionIdIsInProgress() itself.\n\n\nLooks like this problem goes all the way back to\n\ncommit 611b4393f22f2bb43135501cd6b7591299b6b453\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: 2008-03-11 20:20:35 +0000\n\n Make TransactionIdIsInProgress check transam.c's single-item XID status cache\n before it goes groveling through the ProcArray. In situations where the same\n recently-committed transaction ID is checked repeatedly by tqual.c, this saves\n a lot of shared-memory searches. And it's cheap enough that it shouldn't\n hurt noticeably when it doesn't help.\n Concept and patch by Simon, some minor tweaking and comment-cleanup by Tom.\n\n\nI think this may actually mean that the hot corruption problem fixed in\n\ncommit 18b87b201f7\nAuthor: Andres Freund <andres@anarazel.de>\nDate: 2021-12-10 20:12:26 -0800\n\n Fix possible HOT corruption when RECENTLY_DEAD changes to DEAD while pruning.\n\nfor 14/master is present in older branches too :(. Need to trace through the\nHTSV and pruning logic with a bit more braincells than currently available to\nbe sure.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 21:56:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 21:56:09 -0800, Andres Freund wrote:\n> I think this may actually mean that the hot corruption problem fixed in\n> \n> commit 18b87b201f7\n> Author: Andres Freund <andres@anarazel.de>\n> Date: 2021-12-10 20:12:26 -0800\n> \n> Fix possible HOT corruption when RECENTLY_DEAD changes to DEAD while pruning.\n> \n> for 14/master is present in older branches too :(. Need to trace through the\n> HTSV and pruning logic with a bit more braincells than currently available to\n> be sure.\n\nOn second thought, there's probably sufficiently more direct corruption this\ncan cause than corruption via hot pruning. Not that that's not a problem, but\n... Inconsistent TransactionIdIsInProgress() result can wreak havoc quite\nbroadly.\n\nLooks like syncrep will make this a lot worse, because it can drastically\nincrease the window between the TransactionIdCommitTree() and\nProcArrayEndTransaction() due to the SyncRepWaitForLSN() in between. But at\nleast it might make it easier to write tests exercising this scenario...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 10 Feb 2022 22:11:38 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On Fri, 11 Feb 2022 at 06:11, Andres Freund <andres@anarazel.de> wrote:\n\n> Looks like syncrep will make this a lot worse, because it can drastically\n> increase the window between the TransactionIdCommitTree() and\n> ProcArrayEndTransaction() due to the SyncRepWaitForLSN() in between. But at\n> least it might make it easier to write tests exercising this scenario...\n\nAgreed\n\nTransactionIdIsKnownCompleted(xid) is only broken because the single\nitem cache is set too early in some cases. The single item cache is\nimportant for performance, so we just need to be more careful about\nsetting the cache.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Fri, 11 Feb 2022 08:48:43 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On Fri, 11 Feb 2022 at 08:48, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n>\n> On Fri, 11 Feb 2022 at 06:11, Andres Freund <andres@anarazel.de> wrote:\n>\n> > Looks like syncrep will make this a lot worse, because it can drastically\n> > increase the window between the TransactionIdCommitTree() and\n> > ProcArrayEndTransaction() due to the SyncRepWaitForLSN() in between. But at\n> > least it might make it easier to write tests exercising this scenario...\n>\n> Agreed\n>\n> TransactionIdIsKnownCompleted(xid) is only broken because the single\n> item cache is set too early in some cases. The single item cache is\n> important for performance, so we just need to be more careful about\n> setting the cache.\n\nSomething like this... 
fix_cachedFetchXid.v1.patch prevents the cache\nbeing set, but this fails! Not worked out why, yet.\n\njust_remove_TransactionIdIsKnownCompleted_call.v1.patch\njust removes the known offending call, passes make check, but IMHO\nleaves the same error just as likely by other callers.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/", "msg_date": "Fri, 11 Feb 2022 13:48:59 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "Hi,\n\nOn 2022-02-11 13:48:59 +0000, Simon Riggs wrote:\n> On Fri, 11 Feb 2022 at 08:48, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> >\n> > On Fri, 11 Feb 2022 at 06:11, Andres Freund <andres@anarazel.de> wrote:\n> >\n> > > Looks lik syncrep will make this a lot worse, because it can drastically\n> > > increase the window between the TransactionIdCommitTree() and\n> > > ProcArrayEndTransaction() due to the SyncRepWaitForLSN() inbetween. But at\n> > > least it might make it easier to write tests exercising this scenario...\n> >\n> > Agreed\n> >\n> > TransactionIdIsKnownCompleted(xid) is only broken because the single\n> > item cache is set too early in some cases. The single item cache is\n> > important for performance, so we just need to be more careful about\n> > setting the cache.\n> \n> Something like this... fix_cachedFetchXid.v1.patch prevents the cache\n> being set, but this fails! 
Not worked out why, yet.\n\nI don't think it's safe to check !TransactionIdKnownNotInProgress() in\nTransactionLogFetch(), given that TransactionIdKnownNotInProgress() needs to\ndo TransactionLogFetch() internally.\n\n\n> just_remove_TransactionIdIsKnownCompleted_call.v1.patch\n\nI think this might contain a different diff than what the name suggests?\n\n\n> just removes the known offending call, passes make check, but IMHO\n> leaves the same error just as likely by other callers.\n\nWhich other callers are you referring to?\n\n\n\nTo me it seems that the \"real\" reason behind this specific issue being\nnontrivial to fix and behind the frequent error of calling\nTransactionIdDidCommit() before TransactionIdIsInProgress() is\nthat we've really made a mess out of the transaction status determination code\n/ API. IMO the original sin is requiring the complicated\n\nif (TransactionIdIsCurrentTransactionId(xid)\n...\nelse if (TransactionIdIsInProgress(xid)\n...\nelse if (TransactionIdDidCommit(xid)\n...\n\ndance at pretty much any place checking transaction status. The multiple calls\nfor the same xid is, I think, what pushed the caching down to the\nTransactionLogFetch level.\n\nISTM that we should introduce something like TransactionIdStatus(xid) that\nreturns\nXACT_STATUS_CURRENT\nXACT_STATUS_IN_PROGRESS\nXACT_STATUS_COMMITTED\nXACT_STATUS_ABORTED\naccompanied by a TransactionIdStatusMVCC(xid, snapshot) that checks\nXidInMVCCSnapshot() instead of TransactionIdIsInProgress().\n\nI think that'd also make visibility checks faster - we spend a good chunk of\ncycles doing unnecessary function calls and repeatedly gathering information.\n\n\nIt's also kind of weird that TransactionIdIsCurrentTransactionId() isn't more\noptimized - TransactionIdIsInProgress() knows it doesn't need to check\nanything before RecentXmin, but TransactionIdIsCurrentTransactionId()\ndoesn't. 
Nor does TransactionIdIsCurrentTransactionId() check if the xid is\nsmaller than GetTopTransactionIdIfAny() before bsearching through subxids.\n\nOf course anything along these lines would never be backpatchable...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Feb 2022 11:08:48 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "Hi,\n\nOn 2022-02-10 22:11:38 -0800, Andres Freund wrote:\n> Looks like syncrep will make this a lot worse, because it can drastically\n> increase the window between the TransactionIdCommitTree() and\n> ProcArrayEndTransaction() due to the SyncRepWaitForLSN() in between. But at\n> least it might make it easier to write tests exercising this scenario...\n\nFWIW, I've indeed reproduced this fairly easily with such a setup. A pgbench\nr/w workload that's been modified to start 70 savepoints at the start shows\n\npgbench: error: client 22 script 0 aborted in command 12 query 0: ERROR: t_xmin 3853739 is uncommitted in tuple (2,159) to be updated in table \"pgbench_branches\"\npgbench: error: client 13 script 0 aborted in command 12 query 0: ERROR: t_xmin 3954305 is uncommitted in tuple (2,58) to be updated in table \"pgbench_branches\"\npgbench: error: client 7 script 0 aborted in command 12 query 0: ERROR: t_xmin 4017908 is uncommitted in tuple (3,44) to be updated in table \"pgbench_branches\"\n\nafter a few minutes of running with a local, not slowed down, syncrep. Without\nany other artificial slowdowns or such.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Feb 2022 16:41:24 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "Hi,\n\nOn 2022-02-11 16:41:24 -0800, Andres Freund wrote:\n> FWIW, I've indeed reproduced this fairly easily with such a setup. 
A pgbench\n> r/w workload that's been modified to start 70 savepoints at the start shows\n> \n> pgbench: error: client 22 script 0 aborted in command 12 query 0: ERROR: t_xmin 3853739 is uncommitted in tuple (2,159) to be updated in table \"pgbench_branches\"\n> pgbench: error: client 13 script 0 aborted in command 12 query 0: ERROR: t_xmin 3954305 is uncommitted in tuple (2,58) to be updated in table \"pgbench_branches\"\n> pgbench: error: client 7 script 0 aborted in command 12 query 0: ERROR: t_xmin 4017908 is uncommitted in tuple (3,44) to be updated in table \"pgbench_branches\"\n> \n> after a few minutes of running with a local, not slowed down, syncrep. Without\n> any other artifical slowdowns or such.\n\nAnd this can easily be triggered even without subtransactions, in a completely\nreliable way.\n\nThe only reason I'm so far not succeeding in turning it into an\nisolationtester spec is that a transaction waiting for SyncRep doesn't count\nas waiting for isolationtester.\n\nBasically\n\nS1: BEGIN; $xid = txid_current(); UPDATE; COMMIT; <commit wait for syncrep>\nS2: SELECT pg_xact_status($xid);\nS2: UPDATE;\n\nsuffices, because the pg_xact_status() causes an xlog fetch, priming the xid\ncache, which then causes the TransactionIdIsInProgress() to take the early\nreturn path, despite the transaction still being in progress. 
Which then\nallows the update to proceed, despite the S1 not having \"properly committed\"\nyet.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Feb 2022 19:42:31 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On Fri, 11 Feb 2022 at 19:08, Andres Freund <andres@anarazel.de> wrote:\n\n> On 2022-02-11 13:48:59 +0000, Simon Riggs wrote:\n> > On Fri, 11 Feb 2022 at 08:48, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > >\n> > > On Fri, 11 Feb 2022 at 06:11, Andres Freund <andres@anarazel.de> wrote:\n> > >\n> > > > Looks lik syncrep will make this a lot worse, because it can drastically\n> > > > increase the window between the TransactionIdCommitTree() and\n> > > > ProcArrayEndTransaction() due to the SyncRepWaitForLSN() inbetween. But at\n> > > > least it might make it easier to write tests exercising this scenario...\n> > >\n> > > Agreed\n> > >\n> > > TransactionIdIsKnownCompleted(xid) is only broken because the single\n> > > item cache is set too early in some cases. The single item cache is\n> > > important for performance, so we just need to be more careful about\n> > > setting the cache.\n> >\n> > Something like this... fix_cachedFetchXid.v1.patch prevents the cache\n> > being set, but this fails! Not worked out why, yet.\n>\n> I don't think it's safe to check !TransactionIdKnownNotInProgress() in\n> TransactionLogFetch(), given that TransactionIdKnownNotInProgress() needs to\n> do TransactionLogFetch() internally.\n\nThat's not correct because you're confusing\nTransactionIdKnownNotInProgress(), which is a new function introduced\nby the patch, with the existing function\nTransactionIdIsKnownCompleted().\n\n\n> > just_remove_TransactionIdIsKnownCompleted_call.v1.patch\n>\n> I think this might contain a different diff than what the name suggests?\n\nNot at all, please don't be confused by the name. 
The patch removes\nthe call to TransactionIdIsKnownCompleted() from\nTransactionIdIsInProgress().\n\nI'm not sure it is possible to remove TransactionIdIsKnownCompleted()\nin backbranches.\n\n\n> > just removes the known offending call, passes make check, but IMHO\n> > leaves the same error just as likely by other callers.\n>\n> Which other callers are you referring to?\n\nI don't know of any, but we can't just remove a function in a\nbackbranch, can we?\n\n\n> To me it seems that the \"real\" reason behind this specific issue being\n> nontrivial to fix and behind the frequent error of calling\n> TransactionIdDidCommit() before TransactionIdIsInProgress() is\n> that we've really made a mess out of the transaction status determination code\n> / API. IMO the original sin is requiring the complicated\n>\n> if (TransactionIdIsCurrentTransactionId(xid)\n> ...\n> else if (TransactionIdIsInProgress(xid)\n> ...\n> else if (TransactionIdDidCommit(xid)\n> ...\n>\n> dance at pretty much any place checking transaction status.\n\nAgreed, it's pretty weird to have to call the functions in the right\norder or you get the wrong answer. 
Especially since we have no\ncross-check to ensure the correct sequence was followed.\n\n\n> The multiple calls\n> for the same xid is, I think, what pushed the caching down to the\n> TransactionLogFetch level.\n>\n> ISTM that we should introduce something like TransactionIdStatus(xid) that\n> returns\n> XACT_STATUS_CURRENT\n> XACT_STATUS_IN_PROGRESS\n> XACT_STATUS_COMMITTED\n> XACT_STATUS_ABORTED\n> accompanied by a TransactionIdStatusMVCC(xid, snapshot) that checks\n> XidInMVCCSnapshot() instead of TransactionIdIsInProgress().\n>\n> I think that'd also make visibility checks faster - we spend a good chunk of\n> cycles doing unnecessary function calls and repeatedly gathering information.\n>\n>\n> It's also kind of weird that TransactionIdIsCurrentTransactionId() isn't more\n> optimized - TransactionIdIsInProgress() knows it doesn't need to check\n> anything before RecentXmin, but TransactionIdIsCurrentTransactionId()\n> doesn't. Nor does TransactionIdIsCurrentTransactionId() check if the xid is\n> smaller than GetTopTransactionIdIfAny() before bsearching through subxids.\n>\n> Of course anything along these lines would never be backpatchable...\n\nAgreed\n\nI've presented a simple patch intended for backpatching. 
You may not\nlike the name, but please have another look at\n\"just_remove_TransactionIdIsKnownCompleted_call.v1.patch\".\nI believe it is backpatchable with minimal impact and without loss of\nperformance.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 12 Feb 2022 13:25:58 +0000", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "Hi,\n\nOn 2022-02-12 13:25:58 +0000, Simon Riggs wrote:\n> On Fri, 11 Feb 2022 at 19:08, Andres Freund <andres@anarazel.de> wrote:\n> \n> > On 2022-02-11 13:48:59 +0000, Simon Riggs wrote:\n> > > On Fri, 11 Feb 2022 at 08:48, Simon Riggs <simon.riggs@enterprisedb.com> wrote:\n> > > >\n> > > > On Fri, 11 Feb 2022 at 06:11, Andres Freund <andres@anarazel.de> wrote:\n> > > >\n> > > > > Looks lik syncrep will make this a lot worse, because it can drastically\n> > > > > increase the window between the TransactionIdCommitTree() and\n> > > > > ProcArrayEndTransaction() due to the SyncRepWaitForLSN() inbetween. But at\n> > > > > least it might make it easier to write tests exercising this scenario...\n> > > >\n> > > > Agreed\n> > > >\n> > > > TransactionIdIsKnownCompleted(xid) is only broken because the single\n> > > > item cache is set too early in some cases. The single item cache is\n> > > > important for performance, so we just need to be more careful about\n> > > > setting the cache.\n> > >\n> > > Something like this... fix_cachedFetchXid.v1.patch prevents the cache\n> > > being set, but this fails! 
Not worked out why, yet.\n> >\n> > I don't think it's safe to check !TransactionIdKnownNotInProgress() in\n> > TransactionLogFetch(), given that TransactionIdKnownNotInProgress() needs to\n> > do TransactionLogFetch() internally.\n> \n> That's not correct because you're confusing\n> TransactionIdKnownNotInProgress(), which is a new function introduced\n> by the patch, with the existing function\n> TransactionIdIsKnownCompleted().\n\nI don't think so. This call of the new TransactionIdKnownNotInProgress()\n\n> --- a/src/backend/access/transam/transam.c\n> +++ b/src/backend/access/transam/transam.c\n> @@ -73,6 +73,14 @@ TransactionLogFetch(TransactionId transactionId)\n> \t\treturn TRANSACTION_STATUS_ABORTED;\n> \t}\n> \n> +\t/*\n> +\t * Safeguard that we have called TransactionIsIdInProgress() before\n> +\t * checking commit log manager, to ensure that we do not cache the\n> +\t * result until the xid is no longer in the procarray at eoxact.\n> +\t */\n> +\tif (!TransactionIdKnownNotInProgress(transactionId))\n> +\t\treturn TRANSACTION_STATUS_IN_PROGRESS;\n> +\n> \t/*\n> \t * Get the transaction status.\n> \t */\n\nisn't right. Consider what happens to TransactionIdIsInProgress(x)'s call to\nTransactionIdDidAbort(x) at \"step 4\". TransactionIdDidAbort(x) will call\nTransactionLogFetch(x). cachedXidIsNotInProgress isn't yet set to x, so\nTransactionLogFetch() returns TRANSACTION_STATUS_IN_PROGRESS. Even if the\n(sub)transaction aborted.\n\nWhich explains why your first patch doesn't work.\n\n\n> > > just_remove_TransactionIdIsKnownCompleted_call.v1.patch\n> >\n> > I think this might contain a different diff than what the name suggests?\n> \n> Not at all, please don't be confused by the name. 
The patch removes\n> the call to TransactionIdIsKnownCompleted() from\n> TransactionIdIsInProgress().\n\nI guess for me \"just remove\" doesn't include adding a new cache...\n\n\n> I'm not sure it is possible to remove TransactionIdIsKnownCompleted()\n> in backbranches.\n\nI think it'd be fine if we needed to. Or we could just make it always return\nfalse or such.\n\n\n> > > just removes the known offending call, passes make check, but IMHO\n> > > leaves the same error just as likely by other callers.\n> >\n> > Which other callers are you referring to?\n> \n> I don't know of any, but we can't just remove a function in a\n> backbranch, can we?\n\nWe have in the past, if it's a \"sufficiently internal\" function.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 13:50:35 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On 12/02/2022 05:42, Andres Freund wrote:\n> On 2022-02-11 16:41:24 -0800, Andres Freund wrote:\n>> FWIW, I've indeed reproduced this fairly easily with such a setup. A pgbench\n>> r/w workload that's been modified to start 70 savepoints at the start shows\n>>\n>> pgbench: error: client 22 script 0 aborted in command 12 query 0: ERROR: t_xmin 3853739 is uncommitted in tuple (2,159) to be updated in table \"pgbench_branches\"\n>> pgbench: error: client 13 script 0 aborted in command 12 query 0: ERROR: t_xmin 3954305 is uncommitted in tuple (2,58) to be updated in table \"pgbench_branches\"\n>> pgbench: error: client 7 script 0 aborted in command 12 query 0: ERROR: t_xmin 4017908 is uncommitted in tuple (3,44) to be updated in table \"pgbench_branches\"\n>>\n>> after a few minutes of running with a local, not slowed down, syncrep. 
Without\n>> any other artifical slowdowns or such.\n> \n> And this can easily be triggered even without subtransactions, in a completely\n> reliable way.\n> \n> The only reason I'm so far not succeeding in turning it into an\n> isolationtester spec is that a transaction waiting for SyncRep doesn't count\n> as waiting for isolationtester.\n> \n> Basically\n> \n> S1: BEGIN; $xid = txid_current(); UPDATE; COMMIT; <commit wait for syncrep>\n> S2: SELECT pg_xact_status($xid);\n> S2: UPDATE;\n> \n> suffices, because the pg_xact_status() causes an xlog fetch, priming the xid\n> cache, which then causes the TransactionIdIsInProgress() to take the early\n> return path, despite the transaction still being in progress. Which then\n> allows the update to proceed, despite the S1 not having \"properly committed\"\n> yet.\n\nI started to improve isolationtester, to allow the spec file to specify \na custom query to check for whether the backend is blocked. But then I \nrealized that it's not quite enough: there is currently no way write the \n \"$xid = txid_current()\" step either. Isolationtester doesn't have \nvariables like that. Also, you need to set synchronous_standby_names to \nmake it block, which seems a bit weird in an isolation test.\n\nI wish we had an explicit way to inject delays or failures. We discussed \nit before \n(https://www.postgresql.org/message-id/flat/CANXE4TdxdESX1jKw48xet-5GvBFVSq%3D4cgNeioTQff372KO45A%40mail.gmail.com), \nbut I don't want to pick up that task right now.\n\nAnyway, I wrote a TAP test for this instead. See attached. I'm not sure \nif this is worth committing, the pg_xact_status() trick is quite special \nfor this particular bug.\n\nSimon's just_remove_TransactionIdIsKnownCompleted_call.v1.patch looks \ngood to me. 
Replying to the discussion on that:\n\nOn 12/02/2022 23:50, Andres Freund wrote:\n> On 2022-02-12 13:25:58 +0000, Simon Riggs wrote:\n>> I'm not sure it is possible to remove TransactionIdIsKnownCompleted()\n>> in backbranches.\n> \n> I think it'd be fine if we needed to. Or we could just make it always return\n> false or such.\n> \n> \n>>>> just removes the known offending call, passes make check, but IMHO\n>>>> leaves the same error just as likely by other callers.\n>>>\n>>> Which other callers are you referring to?\n>>\n>> I don't know of any, but we can't just remove a function in a\n>> backbranch, can we?\n> \n> We have in the past, if it's a \"sufficiently internal\" function.\n\nI think we should remove it in v15, and leave it as it is in \nbackbranches. Just add a comment to point out that the name is a bit \nmisleading, because it checks the clog rather than the proc array. It's \nnot inherently dangerous, and someone might have a legit use case for \nit. True, someone might also be doing this incorrect thing with it, but \nstill seems like the right balance to me.\n\nI think we also need to change pg_xact_status(), to also call \nTransactionIdIsInProgress() before TransactionIdDidCommit/Abort(). \nCurrently, if it returns \"committed\", and you start a new transaction, \nthe new transaction might not yet see the effects of that \"committed\" \ntransaction. 
With that, you cannot reproduce the original bug with \npg_xact_status() though, so there's no point in committing that test then.\n\nIn summary, I think we should:\n- commit and backpatch Simon's \njust_remove_TransactionIdIsKnownCompleted_call.v1.patch\n- fix pg_xact_status() to check TransactionIdIsInProgress()\n- in v15, remove TransactionIdIsKnownCompleted function altogether\n\nI'll try to get around to that in the next few days, unless someone \nbeats me to it.\n\n- Heikki", "msg_date": "Thu, 23 Jun 2022 22:03:27 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "Hi,\n\nOn 2022-06-23 22:03:27 +0300, Heikki Linnakangas wrote:\n> On 12/02/2022 05:42, Andres Freund wrote:\n> > The only reason I'm so far not succeeding in turning it into an\n> > isolationtester spec is that a transaction waiting for SyncRep doesn't count\n> > as waiting for isolationtester.\n> >\n> > Basically\n> >\n> > S1: BEGIN; $xid = txid_current(); UPDATE; COMMIT; <commit wait for syncrep>\n> > S2: SELECT pg_xact_status($xid);\n> > S2: UPDATE;\n> >\n> > suffices, because the pg_xact_status() causes an xlog fetch, priming the xid\n> > cache, which then causes the TransactionIdIsInProgress() to take the early\n> > return path, despite the transaction still being in progress. Which then\n> > allows the update to proceed, despite the S1 not having \"properly committed\"\n> > yet.\n>\n> I started to improve isolationtester, to allow the spec file to specify a\n> custom query to check for whether the backend is blocked. But then I\n> realized that it's not quite enough: there is currently no way write the\n> \"$xid = txid_current()\" step either. Isolationtester doesn't have variables\n> like that. 
Also, you need to set synchronous_standby_names to make it block,\n> which seems a bit weird in an isolation test.\n\nI don't think we strictly need $xid - it likely can be implemented via\nsomething like\n SELECT pg_xact_status(backend_xid) FROM pg_stat_activity WHERE application_name = 'testname/s1';\n\n\n> I wish we had an explicit way to inject delays or failures. We discussed it\n> before (https://www.postgresql.org/message-id/flat/CANXE4TdxdESX1jKw48xet-5GvBFVSq%3D4cgNeioTQff372KO45A%40mail.gmail.com),\n> but I don't want to pick up that task right now.\n\nUnderstandable :(\n\n\n> Anyway, I wrote a TAP test for this instead. See attached. I'm not sure if\n> this is worth committing, the pg_xact_status() trick is quite special for\n> this particular bug.\n\nYea, not sure either.\n\n\n> Simon's just_remove_TransactionIdIsKnownCompleted_call.v1.patch looks good\n> to me. Replying to the discussion on that:\n>\n> On 12/02/2022 23:50, Andres Freund wrote:\n> > On 2022-02-12 13:25:58 +0000, Simon Riggs wrote:\n> > > I'm not sure it is possible to remove TransactionIdIsKnownCompleted()\n> > > in backbranches.\n> >\n> > I think it'd be fine if we needed to. Or we could just make it always return\n> > false or such.\n> >\n> >\n> > > > > just removes the known offending call, passes make check, but IMHO\n> > > > > leaves the same error just as likely by other callers.\n> > > >\n> > > > Which other callers are you referring to?\n> > >\n> > > I don't know of any, but we can't just remove a function in a\n> > > backbranch, can we?\n> >\n> > We have in the past, if it's a \"sufficiently internal\" function.\n>\n> I think we should remove it in v15, and leave it as it is in backbranches.\n> Just add a comment to point out that the name is a bit misleading, because\n> it checks the clog rather than the proc array. It's not inherently\n> dangerous, and someone might have a legit use case for it. 
True, someone\n> might also be doing this incorrect thing with it, but still seems like the\n> right balance to me.\n>\n> I think we also need to change pg_xact_status(), to also call\n> TransactionIdIsInProgress() before TransactionIdDidCommit/Abort().\n> Currently, if it returns \"committed\", and you start a new transaction, the\n> new transaction might not yet see the effects of that \"committed\"\n> transaction. With that, you cannot reproduce the original bug with\n> pg_xact_status() though, so there's no point in committing that test then.\n>\n> In summary, I think we should:\n> - commit and backpatch Simon's\n> just_remove_TransactionIdIsKnownCompleted_call.v1.patch\n> - fix pg_xact_status() to check TransactionIdIsInProgress()\n> - in v15, remove TransationIdIsKnownCompleted function altogether\n>\n> I'll try to get around to that in the next few days, unless someone beats me\n> to it.\n\nMakes sense.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Thu, 23 Jun 2022 18:43:07 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On 24/06/2022 04:43, Andres Freund wrote:\n> On 2022-06-23 22:03:27 +0300, Heikki Linnakangas wrote:\n>> In summary, I think we should:\n>> - commit and backpatch Simon's\n>> just_remove_TransactionIdIsKnownCompleted_call.v1.patch\n>> - fix pg_xact_status() to check TransactionIdIsInProgress()\n>> - in v15, remove TransationIdIsKnownCompleted function altogether\n>>\n>> I'll try to get around to that in the next few days, unless someone beats me\n>> to it.\n> \n> Makes sense.\n\nThis is what I came up with for master. One difference from Simon's \noriginal patch is that I decided to not expose the \nTransactionIdIsKnownNotInProgress(), as there are no other callers of it \nin core, and it doesn't seem useful to extensions. 
I inlined it into the \ncaller instead.\n\nBTW, should we worry about XID wraparound with the cache? Could you have \na backend sit idle for 2^32 transactions, without making any \nTransactionIdIsKnownNotInProgress() calls? That's not new with this \npatch, though, it could happen with the single-item cache in transam.c, too.\n\n- Heikki", "msg_date": "Sat, 25 Jun 2022 12:18:51 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On Sat, 25 Jun 2022 at 10:18, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> On 24/06/2022 04:43, Andres Freund wrote:\n> > On 2022-06-23 22:03:27 +0300, Heikki Linnakangas wrote:\n> >> In summary, I think we should:\n> >> - commit and backpatch Simon's\n> >> just_remove_TransactionIdIsKnownCompleted_call.v1.patch\n> >> - fix pg_xact_status() to check TransactionIdIsInProgress()\n> >> - in v15, remove TransationIdIsKnownCompleted function altogether\n> >>\n> >> I'll try to get around to that in the next few days, unless someone beats me\n> >> to it.\n> >\n> > Makes sense.\n>\n> This is what I came up with for master. One difference from Simon's\n> original patch is that I decided to not expose the\n> TransactionIdIsKnownNotInProgress(), as there are no other callers of it\n> in core, and it doesn't seem useful to extensions. I inlined it into the\n> caller instead.\n\nLooks good, thanks.\n\n> BTW, should we worry about XID wraparound with the cache? Could you have\n> a backend sit idle for 2^32 transactions, without making any\n> TransactionIdIsKnownNotInProgress() calls? 
That's not new with this\n> patch, though, it could happen with the single-item cache in transam.c, too.\n\nYes, as a separate patch only in PG15+, imho.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n", "msg_date": "Sat, 25 Jun 2022 11:10:22 +0100", "msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" }, { "msg_contents": "On 25/06/2022 13:10, Simon Riggs wrote:\n> On Sat, 25 Jun 2022 at 10:18, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>>\n>> On 24/06/2022 04:43, Andres Freund wrote:\n>>> On 2022-06-23 22:03:27 +0300, Heikki Linnakangas wrote:\n>>>> In summary, I think we should:\n>>>> - commit and backpatch Simon's\n>>>> just_remove_TransactionIdIsKnownCompleted_call.v1.patch\n>>>> - fix pg_xact_status() to check TransactionIdIsInProgress()\n>>>> - in v15, remove TransationIdIsKnownCompleted function altogether\n>>>>\n>>>> I'll try to get around to that in the next few days, unless someone beats me\n>>>> to it.\n>>>\n>>> Makes sense.\n>>\n>> This is what I came up with for master. One difference from Simon's\n>> original patch is that I decided to not expose the\n>> TransactionIdIsKnownNotInProgress(), as there are no other callers of it\n>> in core, and it doesn't seem useful to extensions. I inlined it into the\n>> caller instead.\n> \n> Looks good, thanks.\n\nCommitted and backpatched. Thanks!\n\n- Heikki\n\n\n", "msg_date": "Mon, 27 Jun 2022 08:26:22 +0300", "msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Race condition in TransactionIdIsInProgress" } ]
[ { "msg_contents": "Hi hackers,\n\nI've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n\nhttps://gist.github.com/joelonsql/e5aa27f8cc9bd22b8999b7de8aee9d47\n\n*) It's not all, but 1041, compared to the 338 found on PGXN.\n\nMaybe it would be an idea to provide a lightweight solution,\ne.g. maintaining a simple curated list of repo urls,\npublished at postgresql.org <https://www.postgresql.org/> or wiki.postgresql.org,\nwith a simple form allowing missing repos to be suggested for insertion?\n\n/Joel", "msg_date": "Thu, 10 Feb 2022 21:18:46 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Thu, Feb 10, 2022 at 3:19 PM Joel Jacobson <joel@compiler.org> wrote:\n> I've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n>\n> https://gist.github.com/joelonsql/e5aa27f8cc9bd22b8999b7de8aee9d47\n>\n> *) It's not all, but 1041, compared to the 338 found on PGXN.\n>\n> Maybe it would be an idea to provide a lightweight solution,\n> e.g. maintaining a simple curated list of repo urls,\n> published at postgresql.org or wiki.postgresql.org,\n> with a simple form allowing missing repos to be suggested for insertion?\n\nI think a list like this is probably not useful without at least a\nbrief description of each one, and maybe some attempt at\ncategorization. If I want a PostgreSQL extension to bake tasty\nlasagne, I'm not going to go scroll through 1041 entries and hope\nsomething jumps out at me. 
I'm going to Google \"PostgreSQL lasagne\nextension\" and see if anything promising shows up. But if I had a list\nthat were organized by category, I might try looking for a relevant\ncategory and then reading the blurbs for each one...\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 10 Feb 2022 15:35:21 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On 2/10/22 15:35, Robert Haas wrote:\n> On Thu, Feb 10, 2022 at 3:19 PM Joel Jacobson <joel@compiler.org> wrote:\n>> I've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n>>\n>> https://gist.github.com/joelonsql/e5aa27f8cc9bd22b8999b7de8aee9d47\n>>\n>> *) It's not all, but 1041, compared to the 338 found on PGXN.\n>>\n>> Maybe it would be an idea to provide a lightweight solution,\n>> e.g. maintaining a simple curated list of repo urls,\n>> published at postgresql.org or wiki.postgresql.org,\n>> with a simple form allowing missing repos to be suggested for insertion?\n> \n> I think a list like this is probably not useful without at least a\n> brief description of each one, and maybe some attempt at\n> categorization. If I want a PostgreSQL extension to bake tasty\n> lasagne, I'm not going to go scroll through 1041 entries and hope\n> something jumps out at me. I'm going to Google \"PostgreSQL lasagne\n> extension\" and see if anything promising shows up. 
But if I had a list\n> that were organized by category, I might try looking for a relevant\n> category and then reading the blurbs for each one...\n\nAgreed.\n\nThe example I really like is the R project CRAN \"Task Views\" for their \npackages (currently the CRAN package repository has 18917 available \npackages):\n\nhttps://cloud.r-project.org/web/views/\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Thu, 10 Feb 2022 16:04:17 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Fri, Feb 11, 2022 at 5:19 Joel Jacobson <joel@compiler.org> wrote:\n>\n> Hi hackers,\n>\n> I've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n>\n> https://gist.github.com/joelonsql/e5aa27f8cc9bd22b8999b7de8aee9d47\n>\n> *) It's not all, but 1041, compared to the 338 found on PGXN.\n\nMore precisely it's a list of all? the repositories with PostgreSQL\nextensions on\nGitHub?\n\nFWIW I see a few of mine on there, which are indeed PostgreSQL extensions,\nbut are also mostly just random, mainly unmaintained non-production-ready\ncode I've dumped on GitHub in the unlikely event it's of any interest.\n\nThe only one I consider of possible general, viable use (and which is available\nas part of the community packaging infrastructure) is also listed on PGXN.\n\nOTOH not everything is on GitHub; PostGIS comes to mind.\n\n> Maybe it would be an idea to provide a lightweight solution,\n> e.g. maintaining a simple curated list of repo urls,\n> published at postgresql.org or wiki.postgresql.org,\n> with a simple form allowing missing repos to be suggested for insertion?\n\nThe wiki sounds like a good starting point, assuming someone is willing to\ncreate/curate/maintain the list. 
It would need weeding out of any\nextensions which\nare inactive/unmaintained, duplicates/copies/forks (e.g. I see three\ninstances of\nblackhole_fdw listed, but not the original repo, which is not even on\nGitHub) etc..\n\nRegards\n\nIan Barwick\n\n-- \nEnterpriseDB: https://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 09:22:01 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "Hi,\n\nOn Fri, Feb 11, 2022 at 09:22:01AM +0900, Ian Lawrence Barwick wrote:\n> 2022年2月11日(金) 5:19 Joel Jacobson <joel@compiler.org>:\n> >\n> > Hi hackers,\n> >\n> > I've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n> >\n> > https://gist.github.com/joelonsql/e5aa27f8cc9bd22b8999b7de8aee9d47\n> >\n> > *) It's not all, but 1041, compared to the 338 found on PGXN.\n> \n> The only one I consider of possible general, viable use (and which is available\n> as part of the community packaging infrastructure) is also listed on PGXN.\n\nAgreed, PGXN is already the official place for that and has all you need to\nproperly look for what you want AFAICT.\n\n> > Maybe it would be an idea to provide a lightweight solution,\n> > e.g. maintaining a simple curated list of repo urls,\n> > published at postgresql.org or wiki.postgresql.org,\n> > with a simple form allowing missing repos to be suggested for insertion?\n> \n> The wiki sounds like a good starting point, assuming someone is willing to\n> create/curate/maintain the list. It would need weeding out of any\n> extensions which\n> are inactive/unmaintained, duplicates/copies/forks (e.g. 
I see three\n> instances of\n> blackhole_fdw listed, but not the original repo, which is not even on\n> GitHub) etc..\n\nThere are already some repositories that try to gather some curated list of\nprojects around postgres, which probably already have the same problem with\nabandoned or simply moved projects.\n\nThe only real solution is to have authors keep publishing and updating their\ntools on PGXN.\n\n\n", "msg_date": "Fri, 11 Feb 2022 11:15:49 +0800", "msg_from": "Julien Rouhaud <rjuju123@gmail.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Thu, Feb 10, 2022 at 04:04:17PM -0500, Joe Conway wrote:\n> On 2/10/22 15:35, Robert Haas wrote:\n> >On Thu, Feb 10, 2022 at 3:19 PM Joel Jacobson <joel@compiler.org> wrote:\n> >>I've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n> >>\n> >>https://gist.github.com/joelonsql/e5aa27f8cc9bd22b8999b7de8aee9d47\n> >>\n> >>*) It's not all, but 1041, compared to the 338 found on PGXN.\n\nHow did you make the list? (I'd imagine doing it by searching for\nrepositories containing evidence like \\bpgxs\\b matches.)\n\n> >>Maybe it would be an idea to provide a lightweight solution,\n> >>e.g. maintaining a simple curated list of repo urls,\n> >>published at postgresql.org or wiki.postgresql.org,\n> >>with a simple form allowing missing repos to be suggested for insertion?\n> >\n> >I think a list like this is probably not useful without at least a\n> >brief description of each one, and maybe some attempt at\n> >categorization. If I want a PostgreSQL extension to bake tasty\n> >lasagne, I'm not going to go scroll through 1041 entries and hope\n> >something jumps out at me. I'm going to Google \"PostgreSQL lasagne\n> >extension\" and see if anything promising shows up. 
But if I had a list\n> >that were organized by category, I might try looking for a relevant\n> >category and then reading the blurbs for each one...\n\nWhen I change back-branch APIs and ABIs, I use a PGXN snapshot to judge the\nscope of affected modules. Supplementing the search with these git\nrepositories would help, even without further curation. I agree curation\nwould make the list more valuable for other use cases. Contributing to PGXN\nmay be the way to go, though.\n\n\n", "msg_date": "Thu, 10 Feb 2022 19:46:39 -0800", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Thu, Feb 10, 2022, at 21:35, Robert Haas wrote:\n> I think a list like this is probably not useful without at least a\n> brief description of each one, and maybe some attempt at\n> categorization.\n\n+1\n\nAs a first attempt, I've added the description from the Github-repos, and two categories to start the discussion:\n\n- Uncategorized\n- Foreign Data Wrappers\n\nThe categories are clickable and listed in the beginning, as a form of ToC.\n\n/Joel", "msg_date": "Mon, 21 Feb 2022 21:16:27 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Mon, Feb 21, 2022, at 21:16, Joel Jacobson wrote:\n> As a first attempt, I've added the description from the Github-repos, and two categories to start the discussion:\n>\n> - Uncategorized\n> 
- Foreign Data Wrappers\n\nSome more categories added:\n\n- Access Methods\n- Aggregate Functions\n- Data Types\n- Dictionaries\n- Procedural Languages\n\n/Joel\nOn Mon, Feb 21, 2022, at 21:16, Joel Jacobson wrote:> As a first attempt, I've added the description from the Github-repos, and two categories to start the discussion:>> - Uncategorized> - Foreign Data WrappersSome more categories added:- Access Methods- Aggregate Functions- Data Types- Dictionaries- Procedural Languages/Joel", "msg_date": "Mon, 21 Feb 2022 23:35:39 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "Hi Joel,\n\n> I've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n\nThe list is great. Thanks for doing this!\n\nJust curious, is it a one-time experiment or you are going to keep the\nlist in an actual state similarly to awesome-* lists on GitHub, or ...\n?\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n", "msg_date": "Tue, 22 Feb 2022 11:13:03 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Tue, Feb 22, 2022, at 09:13, Aleksander Alekseev wrote:\n> Hi Joel,\n>\n>> I've compiled a list of all* PostgreSQL EXTENSIONs in the world:\n>\n> The list is great. Thanks for doing this!\n\nGlad to hear!\n\n> Just curious, is it a one-time experiment or you are going to keep the\n> list in an actual state similarly to awesome-* lists on GitHub, or ...\n> ?\n\nUsers can report missing repos by leaving comments on the Gist, and I will then be notified and add them to the list.\nTwo users have contributed already.\n\n/Joel\nOn Tue, Feb 22, 2022, at 09:13, Aleksander Alekseev wrote:> Hi Joel,>>> I've compiled a list of all* PostgreSQL EXTENSIONs in the world:>> The list is great. 
Thanks for doing this!Glad to hear!> Just curious, is it a one-time experiment or you are going to keep the> list in an actual state similarly to awesome-* lists on GitHub, or ...> ?Users can report missing repos by leaving comments on the Gist, and I will then be notified and add them to the list.Two users have contributed already./Joel", "msg_date": "Wed, 23 Feb 2022 09:03:52 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Fri, Feb 11, 2022, at 01:22, Ian Lawrence Barwick wrote:\n> More precisely it's a list of all? the repositories with PostgreSQL\n> extensions on\n> GitHub?\n\nYes, the ambition is to list all repos at GitHub, and to manually add repos hosted at GitLab and other places.\n\n> OTOH not everything is on GitHub; PostGIS comes to mind.\n\nI've created a new category, \"Spatial and Geographic Objects\",\nand added PostGIS and all other PostGIS-related extensions to it.\n\n(The reason why it wasn't found automatically is because there is no .control-file is the root dir,\nand its Makefile didn't contain a PGXS line.)\n\n> The wiki sounds like a good starting point, assuming someone is willing to\n> create/curate/maintain the list. It would need weeding out of any\n> extensions which\n> are inactive/unmaintained, duplicates/copies/forks\n\nI agree, it needs to be curated, lots of noise.\nI'm working on it, hopefully others will join in.\n\n> (e.g. 
I see three\n> instances of\n> blackhole_fdw listed, but not the original repo, which is not even on\n> GitHub) etc..\n\nThanks, I've fixed blackhole_fdw manually:\n\n- https://bitbucket.org/adunstan/blackhole_fdw\n - https://github.com/chenquanzhao/blackhole_fdw\n\n/Joel\n\n\n", "msg_date": "Wed, 23 Feb 2022 09:52:18 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Fri, Feb 11, 2022, at 04:46, Noah Misch wrote:\n> How did you make the list? (I'd imagine doing it by searching for\n> repositories containing evidence like \\bpgxs\\b matches.)\n\nSearching Github for repos with a *.control file in the root dir and a Makefile containing ^PGXS\n\nHmm, now that you say it, maybe the ^PGXS regex should be case-insensitive,\nif pgxs can be written in e.g. lower case?\n\n/Joel\n\n\n", "msg_date": "Wed, 23 Feb 2022 10:00:03 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "\nOn 2/23/22 03:52, Joel Jacobson wrote:\n>\n>> (e.g. 
I see three\n>> instances of\n>> blackhole_fdw listed, but not the original repo, which is not even on\n>> GitHub) etc..\n> Thanks, I've fixed blackhole_fdw manually:\n>\n> - https://bitbucket.org/adunstan/blackhole_fdw\n> - https://github.com/chenquanzhao/blackhole_fdw\n>\n\n\nYeah, I have several others on bitbucket that might be of interest (in\nvarying states of doneness):\n\n\nA thin layer over the Redis API:\n\nhttps://bitbucket.org/adunstan/redis_wrapper\n\n\nGenerate a CSV line from a row:\n\nhttps://bitbucket.org/adunstan/row_to_csv\n\n\nClosed forms of discrete builtin ranges:\n\nhttps://bitbucket.org/adunstan/pg-closed-ranges\n\n\nFDW to generate random data:\n\nhttps://bitbucket.org/adunstan/rotfang-fdw\n\n\nComparison ops for JSON:\n\nhttps://bitbucket.org/adunstan/jsoncmp\n\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Wed, 23 Feb 2022 09:23:36 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Wed, Feb 23, 2022, at 6:00 AM, Joel Jacobson wrote:\n> On Fri, Feb 11, 2022, at 04:46, Noah Misch wrote:\n> > How did you make the list? (I'd imagine doing it by searching for\n> > repositories containing evidence like \\bpgxs\\b matches.)\n> \n> Searching Github for repos with a *.control file in the root dir and a Makefile containing ^PGXS\nInteresting. What's an extension? It is something that contains user-defined\nobjects. It would be good if your list was expanded to contain addons (modules)\nthat are basically plugins that don't create additional objects in the database\ne.g. an output plugin or a module that uses any hooks (such as auth_delay).\nThey generally don't provide control file (for example, wal2json). 
I don't know\nif can only rely on PGXS check because there are client programs that uses the\nPGXS infrastructure to build it.\n\n> Hmm, now that you say it, maybe the ^PGXS regex should be case-insensitive,\n> if pgxs can be written in e.g. lower case?\nMakefile variable names are case-sensitive. You cannot write pgxs or PgXs; it\nshould be PGXS.\n\n\n--\nEuler Taveira\nEDB https://www.enterprisedb.com/\n\nOn Wed, Feb 23, 2022, at 6:00 AM, Joel Jacobson wrote:On Fri, Feb 11, 2022, at 04:46, Noah Misch wrote:> How did you make the list?  (I'd imagine doing it by searching for> repositories containing evidence like \\bpgxs\\b matches.)Searching Github for repos with a *.control file in the root dir and a Makefile containing ^PGXSInteresting. What's an extension? It is something that contains user-definedobjects. It would be good if your list was expanded to contain addons (modules)that are basically plugins that don't create additional objects in the databasee.g. an output plugin or a module that uses any hooks (such as auth_delay).They generally don't provide control file (for example, wal2json). I don't knowif can only rely on PGXS check because there are client programs that uses thePGXS infrastructure to build it.Hmm, now that you say it, maybe the ^PGXS regex should be case-insensitive,if pgxs can be written in e.g. lower case?Makefile variable names are case-sensitive. You cannot write pgxs or PgXs; itshould be PGXS.--Euler TaveiraEDB   https://www.enterprisedb.com/", "msg_date": "Wed, 23 Feb 2022 11:33:31 -0300", "msg_from": "\"Euler Taveira\" <euler@eulerto.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On 2/23/22 09:33, Euler Taveira wrote:\n> On Wed, Feb 23, 2022, at 6:00 AM, Joel Jacobson wrote:\n>> On Fri, Feb 11, 2022, at 04:46, Noah Misch wrote:\n>> > How did you make the list?  
(I'd imagine doing it by searching for\n>> > repositories containing evidence like \\bpgxs\\b matches.)\n>>\n>> Searching Github for repos with a *.control file in the root dir and a \n>> Makefile containing ^PGXS\n> Interesting. What's an extension? It is something that contains user-defined\n> objects. It would be good if your list was expanded to contain addons \n> (modules)\n> that are basically plugins that don't create additional objects in the \n> database\n> e.g. an output plugin or a module that uses any hooks (such as auth_delay).\n> They generally don't provide control file (for example, wal2json). I \n> don't know\n> if can only rely on PGXS check because there are client programs that \n> uses the\n> PGXS infrastructure to build it.\n> \n>> Hmm, now that you say it, maybe the ^PGXS regex should be \n>> case-insensitive,\n>> if pgxs can be written in e.g. lower case?\n> Makefile variable names are case-sensitive. You cannot write pgxs or \n> PgXs; it\n> should be PGXS.\n\n\nWhat about scanning for \"PG_MODULE_MAGIC\"?\n\nJoe\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Wed, 23 Feb 2022 09:37:57 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "Hi Joe,\n\n> What about scanning for \"PG_MODULE_MAGIC\"?\n\nAn extension can be written without using C at all. BTW some extensions [1]\nare written in Rust these days.\n\n[1]: https://github.com/timescale/timescaledb-toolkit\n\n-- \nBest regards,\nAleksander Alekseev\n\nHi Joe,> What about scanning for \"PG_MODULE_MAGIC\"?An extension can be written without using C at all. 
BTW some extensions [1] are written in Rust these days.[1]: https://github.com/timescale/timescaledb-toolkit-- Best regards,Aleksander Alekseev", "msg_date": "Wed, 23 Feb 2022 17:52:53 +0300", "msg_from": "Aleksander Alekseev <aleksander@timescale.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On 2/23/22 09:52, Aleksander Alekseev wrote:\n> > What about scanning for \"PG_MODULE_MAGIC\"?\n> \n> An extension can be written without using C at all. BTW some extensions \n> [1] are written in Rust these days.\n\nSure, but scanning for PG_MODULE_MAGIC may well pick up repos that would \notherwise have been missed. I didn't say that should be the only filter \nused ¯\\_(ツ)_/¯\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development\n\n\n", "msg_date": "Wed, 23 Feb 2022 09:59:14 -0500", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" }, { "msg_contents": "On Wed, Feb 23, 2022, at 15:23, Andrew Dunstan wrote:\n> Yeah, I have several others on bitbucket that might be of interest (in\n> varying states of doneness):\n\nThanks, added.\n\n/Joel\n\n\n", "msg_date": "Wed, 23 Feb 2022 21:18:46 +0100", "msg_from": "\"Joel Jacobson\" <joel@compiler.org>", "msg_from_op": true, "msg_subject": "Re: List of all* PostgreSQL EXTENSIONs in the world" } ]
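The detection heuristics discussed in the thread above (a `*.control` file in the repository root, a case-sensitive `PGXS` match in the Makefile, and a `PG_MODULE_MAGIC` marker in C sources) can be sketched as a local classifier for an already checked-out repository. This is an illustrative sketch only: the function name, the returned set of signal labels, and the idea of combining all three checks into one function are assumptions of this example, not something the thread's GitHub search actually implemented.

```python
import re
from pathlib import Path

def looks_like_pg_extension(repo_root):
    """Heuristically classify a checked-out repository as a PostgreSQL
    extension or addon, using the signals discussed in the thread:
      1. a *.control file in the repository root,
      2. a Makefile line starting with the PGXS variable (matched
         case-sensitively, since Makefile variable names are
         case-sensitive),
      3. a PG_MODULE_MAGIC marker in any C source file.
    Returns the set of signal labels that matched (an assumption of
    this sketch, not an established convention).
    """
    root = Path(repo_root)
    signals = set()

    # Signal 1: control file in the root directory only.
    if any(root.glob("*.control")):
        signals.add("control-file")

    # Signal 2: Makefile referencing PGXS at the start of a line.
    makefile = root / "Makefile"
    if makefile.is_file():
        text = makefile.read_text(errors="replace")
        if re.search(r"^PGXS\b", text, flags=re.MULTILINE):
            signals.add("pgxs")

    # Signal 3: PG_MODULE_MAGIC anywhere in the C sources.
    for src in root.rglob("*.c"):
        if "PG_MODULE_MAGIC" in src.read_text(errors="replace"):
            signals.add("module-magic")
            break

    return signals
```

Under this sketch, a repository matching only the module-magic signal would be a candidate for the "addon" category Euler describes (for example an output plugin with no control file), while the control-file and PGXS signals correspond to the search Joel used.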
[ { "msg_contents": "I'd like to suggest a patch for reloption regression tests.\n\nThis patch tests case, that can be rarely met in actual life: when reloptions \nhave some illegal option set (as a result of malfunction or extension \ndowngrade or something), and user tries to remove this option by using RESET.\nCurrent postgres behaviour is to actually remove this option.\n\nLike:\n\nUPDATE pg_class SET reloptions = '{illegal_option=4}' \n WHERE oid = 'reloptions_test'::regclass;\n\nALTER TABLE reloptions_test RESET (illegal_option);\n\nWhy this should be tested:\n1. It is what postgres actually do now.\n2. This behaviour is reasonable. DB User can fix problem without updating \npg_class, having rights to change his own table.\n3. Better to get test alarm, if this behavior is accidentally changed. \n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Fri, 11 Feb 2022 12:51:22 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "[PATCH] minor reloption regression tests improvement" }, { "msg_contents": "I don't think this is worth spending time adding tests for. I get what\nyou're saying that this is at least semi-intentional behaviour and\ndesirable behaviour so it should have tests ensuring that it continues\nto work. But it's not documented behaviour and the test is basically\ntesting that the implementation is this specific implementation.\n\nI don't think the test is really a bad idea but neither is it really\nvery useful and I think it's not worth having people spend time\nreviewing and discussing. I'm leaning to saying this patch is\nrejected.\n\n\n", "msg_date": "Mon, 7 Mar 2022 12:04:49 -0500", "msg_from": "Greg Stark <stark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" }, { "msg_contents": "В письме от понедельник, 7 марта 2022 г. 
20:04:49 MSK пользователь Greg Stark \nнаписал:\n> I don't think this is worth spending time adding tests for. I get what\n> you're saying that this is at least semi-intentional behaviour and\n> desirable behaviour so it should have tests ensuring that it continues\n> to work. But it's not documented behaviour and the test is basically\n> testing that the implementation is this specific implementation.\n> \n> I don't think the test is really a bad idea but neither is it really\n> very useful and I think it's not worth having people spend time\n> reviewing and discussing. I'm leaning to saying this patch is\n> rejected.\n\nThis is a regression test. To test if behaviour brocken or not.\n\nActually I came to the idea of the test when I wrote my big reloption patch. \nI've broken this behaviour by mistake and did not notice it as it have not \nbeen properly tested. So I decided it would be goo to add test to it before \nadding and big patches.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su\n\n\n\n\n\n", "msg_date": "Mon, 07 Mar 2022 20:50:53 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" }, { "msg_contents": "This patch is hanging open in the March commitfest. There was a bit of back-and-forth on whether it should be rejected, but no clear statement on the issue, so I'm going to mark it Returned with Feedback. If you still feel strongly about this patch, please feel free to re-register it in the July commitfest.\r\n\r\nThanks,\r\n--Jacob", "msg_date": "Wed, 29 Jun 2022 21:38:48 +0000", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" }, { "msg_contents": "В письме от четверг, 30 июня 2022 г. 00:38:48 MSK пользователь Jacob Champion \nнаписал:\n> This patch is hanging open in the March commitfest. 
There was a bit of\n> back-and-forth on whether it should be rejected, but no clear statement on\n> the issue, so I'm going to mark it Returned with Feedback. If you still\n> feel strongly about this patch, please feel free to re-register it in the\n> July commitfest.\n\nHi! I am surely feel this patch is important. I have bigger patch \nhttps://commitfest.postgresql.org/38/3536/ and this test makes sense as a part \nof big work of options refactoring,\n\nI am also was strongly advised to commit things chopped into smaller parts, \nwhen possible. This test can be commit separately so I am doing it.\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Thu, 30 Jun 2022 06:47:48 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" }, { "msg_contents": "В письме от четверг, 30 июня 2022 г. 06:47:48 MSK пользователь Nikolay Shaplov \nнаписал:\n \n> Hi! I am surely feel this patch is important. I have bigger patch\n> https://commitfest.postgresql.org/38/3536/ and this test makes sense as a\n> part of big work of options refactoring,\n> \n> I am also was strongly advised to commit things chopped into smaller parts,\n> when possible. This test can be commit separately so I am doing it.\n\nLet me again explain why this test is importaint, so potential reviewers can \neasily find this information.\n\nTests are for developers. You change the code and see that something does not \nwork anymore, as it worked before.\nWhen you change the code, you should keep both documented and undocumented \nbehaviour. Because user's code can intentionally or accidentally use it. \n\nSo known undocumented behavior is better to be tested as well as documented \none. If you as developer want to change undocumented behavior, it is better to \ndo it consciously. 
This test insures that.\n\nI, personally hit this problem while working with reloption code. Older \nversion of my new patch changed this behaviour, and I even did not notice \nthat. So I decided to write this test to ensure, that nobody ever meet same \nproblem. \n\nAnd as I said before, I was strongly advised to commit in small parts, when \npossible. This test can be commit separately. \n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Thu, 30 Jun 2022 07:04:25 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" }, { "msg_contents": "On Wed, Jun 29, 2022 at 9:04 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> В письме от четверг, 30 июня 2022 г. 06:47:48 MSK пользователь Nikolay Shaplov\n> написал:\n>\n> > Hi! I am surely feel this patch is important. I have bigger patch\n> > https://commitfest.postgresql.org/38/3536/ and this test makes sense as a\n> > part of big work of options refactoring,\n> >\n> > I am also was strongly advised to commit things chopped into smaller parts,\n> > when possible. This test can be commit separately so I am doing it.\n>\n> Let me again explain why this test is importaint, so potential reviewers can\n> easily find this information.\n>\n> Tests are for developers. You change the code and see that something does not\n> work anymore, as it worked before.\n> When you change the code, you should keep both documented and undocumented\n> behaviour. Because user's code can intentionally or accidentally use it.\n\n[developer hat] Right, but I think Greg also pointed out the tradeoff\nhere, and my understanding was that he didn't feel that the tradeoff\nwas enough. If this is related to a bigger refactoring, it may be\neasier to argue the test's value if it's discussed there? 
(This could\nbe frustrating if you've just been told to split things up, sorry.)\n\n[CFM hat] Since you feel strongly about the patch, and we're short on\ntime before the commitfest starts, I have re-registered this. That way\nthere can be an explicit decision as opposed to a pocket veto by me.\n\n https://commitfest.postgresql.org/38/3747/\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Thu, 30 Jun 2022 16:16:24 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" }, { "msg_contents": "On 6/30/22 16:16, Jacob Champion wrote:\n> [CFM hat] Since you feel strongly about the patch, and we're short on\n> time before the commitfest starts, I have re-registered this. That way\n> there can be an explicit decision as opposed to a pocket veto by me.\n\n[CFM hat] Okay, with another CF come and gone without review I feel much\nmore confident about closing this as Returned with Feedback.\n\n[dev hat] Specifically I don't think this patch is reviewable alone; it\nneeds to be grouped with the functionality change that needed the\nadditional coverage. That way it'll be much easier for a reviewer to\ndecide whether 1) it's covering the right spots and 2) it's an overall\nuseful addition.\n\nThat doesn't mean you have to smash it into another commit; it can be a\nseparate test commit as part of a bigger patchset, and the commit\nmessage can include the motivation for why you wrote the new test.\n\nThanks,\n--Jacob\n\n\n", "msg_date": "Tue, 2 Aug 2022 14:26:28 -0700", "msg_from": "Jacob Champion <jchampion@timescale.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" }, { "msg_contents": "On 2022-Mar-07, Greg Stark wrote:\n\n> I don't think this is worth spending time adding tests for. 
I get what\n> you're saying that this is at least semi-intentional behaviour and\n> desirable behaviour so it should have tests ensuring that it continues\n> to work. But it's not documented behaviour and the test is basically\n> testing that the implementation is this specific implementation.\n> \n> I don't think the test is really a bad idea but neither is it really\n> very useful and I think it's not worth having people spend time\n> reviewing and discussing. I'm leaning to saying this patch is\n> rejected.\n\nI've pushed this patch; I agree with Greg that this is semi-intentional\nand desirable, even if not documented. (I doubt we would document it\nanyway, as having previously SET an invalid option is not something that\nhappens commonly; what reason would we have for documenting that it\nworks to RESET such an option?) As for the implementation being this\nspecific one, that's something already embedded in all of\nreloptions.sql, so I don't really mind that, if we do change it, we'll\nhave to rewrite pretty much the whole file and not \"the whole file\nexcept these three lines\".\n\nLastly, there is a long-running patch proposal to refactor reloptions\nquite significantly, so it'll be good to ensure that the new\nimplementation doesn't break the features we already have; and this one\ncorner is one thing Nikolay already said his patch had broken and failed\nto notice right away.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"La verdad no siempre es bonita, pero el hambre de ella sí\"\n\n\n", "msg_date": "Fri, 8 Dec 2023 12:19:22 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] minor reloption regression tests improvement" } ]
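The behaviour the proposed regression test pins down, that `ALTER TABLE ... RESET (illegal_option)` removes an entry from `pg_class.reloptions` even when the option is unknown to the server, can be modelled as a plain filter over the stored options array. The sketch below is a toy model for illustration only; the server's real reset logic in its reloptions code also deals with option namespaces and case-insensitive name matching, both of which are ignored here.

```python
def reset_reloption(reloptions, name):
    """Toy model of RESET (name) applied to a reloptions array such as
    ['fillfactor=13', 'illegal_option=4']: every entry whose option
    name matches is dropped, whether or not the option is one the
    server recognises. This mirrors the behaviour under test, where an
    invalid option injected into pg_class.reloptions can still be
    removed by the table owner with RESET.
    """
    prefix = name + "="
    return [opt for opt in reloptions
            if not (opt == name or opt.startswith(prefix))]
```

For example, under this model resetting `illegal_option` on `['fillfactor=13', 'illegal_option=4']` leaves only `['fillfactor=13']`, which is the outcome the test asserts the server keeps providing.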
[ { "msg_contents": "Hi,\n\nThe problem is that whenever we are going for streaming we always set\nXLogCtl->InstallXLogFileSegmentActive to true, but while switching\nfrom streaming to archive we do not always reset it so it hits\nassertion in some cases. Basically we reset it inside\nXLogShutdownWalRcv() but while switching from the streaming mode we\nonly call it conditionally if WalRcvStreaming(). But it is very much\npossible that even before we call WalRcvStreaming() the walreceiver\nmight have set alrcv->walRcvState to WALRCV_STOPPED. So now\nWalRcvStreaming() will return false. So I agree now we do not want to\nreally shut down the walreceiver but who will reset the flag?\n\nI just ran some tests on primary and attached the walreceiver to gdb\nand waited for it to exit with timeout and then the recovery process\nhit the assertion.\n\n2022-02-11 14:33:56.976 IST [60978] FATAL: terminating walreceiver\ndue to timeout\ncp: cannot stat\n‘/home/dilipkumar/work/PG/install/bin/wal_archive/00000002.history’:\nNo such file or directory\n2022-02-11 14:33:57.002 IST [60973] LOG: restored log file\n\"000000010000000000000003\" from archive\nTRAP: FailedAssertion(\"!XLogCtl->InstallXLogFileSegmentActive\", File:\n\"xlog.c\", Line: 3823, PID: 60973)\n\nI have just applied a quick fix and that solved the issue, basically\nif the last failed source was streaming and the WalRcvStreaming() is\nfalse then just reset this flag.\n\n@@ -12717,6 +12717,12 @@ WaitForWALToBecomeAvailable(XLogRecPtr\nRecPtr, bool randAccess,\n */\n if (WalRcvStreaming())\n XLogShutdownWalRcv();\n+ else\n+ {\n+\nLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n+\nXLogCtl->InstallXLogFileSegmentActive = false;\n+ LWLockRelease(ControlFileLock);\n+ }\n\n\n\n--\nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:32:45 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Assertion failure in 
WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Fri, Feb 11, 2022 at 3:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> Hi,\n>\n> The problem is that whenever we are going for streaming we always set\n> XLogCtl->InstallXLogFileSegmentActive to true, but while switching\n> from streaming to archive we do not always reset it so it hits\n> assertion in some cases. Basically we reset it inside\n> XLogShutdownWalRcv() but while switching from the streaming mode we\n> only call it conditionally if WalRcvStreaming(). But it is very much\n> possible that even before we call WalRcvStreaming() the walreceiver\n> might have set alrcv->walRcvState to WALRCV_STOPPED. So now\n> WalRcvStreaming() will return false. So I agree now we do not want to\n> really shut down the walreceiver but who will reset the flag?\n>\n> I just ran some tests on primary and attached the walreceiver to gdb\n> and waited for it to exit with timeout and then the recovery process\n> hit the assertion.\n>\n> 2022-02-11 14:33:56.976 IST [60978] FATAL: terminating walreceiver\n> due to timeout\n> cp: cannot stat\n> ‘/home/dilipkumar/work/PG/install/bin/wal_archive/00000002.history’:\n> No such file or directory\n> 2022-02-11 14:33:57.002 IST [60973] LOG: restored log file\n> \"000000010000000000000003\" from archive\n> TRAP: FailedAssertion(\"!XLogCtl->InstallXLogFileSegmentActive\", File:\n> \"xlog.c\", Line: 3823, PID: 60973)\n>\n> I have just applied a quick fix and that solved the issue, basically\n> if the last failed source was streaming and the WalRcvStreaming() is\n> false then just reset this flag.\n\nIIUC, the issue can happen while the walreceiver failed to get WAL\nfrom primary for whatever reasons and its status is not\nWALRCV_STOPPING or WALRCV_STOPPED, and the startup process moved ahead\nin WaitForWALToBecomeAvailable for reading from archive which ends up\nin this assertion failure. 
ITSM, a rare scenario and it depends on\nwhat walreceiver does between failure to get WAL from primary and\nupdating status to WALRCV_STOPPING or WALRCV_STOPPED.\n\nIf the above race condition is a serious problem, if one thinks at\nleast it is a problem at all, that needs to be fixed. I don't think\njust making InstallXLogFileSegmentActive false is enough. By looking\nat the comment [1], it doesn't make sense to move ahead for restoring\nfrom the archive location without the WAL receiver fully stopped.\nIMO, the real fix is to just remove WalRcvStreaming() and call\nXLogShutdownWalRcv() unconditionally. Anyways, we have the\nAssert(!WalRcvStreaming()); down below. I don't think it will create\nany problem.\n\n[1]\n /*\n * Before we leave XLOG_FROM_STREAM state, make sure that\n * walreceiver is not active, so that it won't overwrite\n * WAL that we restore from archive.\n */\n if (WalRcvStreaming())\n XLogShutdownWalRcv();\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 11 Feb 2022 18:21:53 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Fri, Feb 11, 2022 at 6:22 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Fri, Feb 11, 2022 at 3:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> >\n\n> IIUC, the issue can happen while the walreceiver failed to get WAL\n> from primary for whatever reasons and its status is not\n> WALRCV_STOPPING or WALRCV_STOPPED, and the startup process moved ahead\n> in WaitForWALToBecomeAvailable for reading from archive which ends up\n> in this assertion failure. 
ITSM, a rare scenario and it depends on\n> what walreceiver does between failure to get WAL from primary and\n> updating status to WALRCV_STOPPING or WALRCV_STOPPED.\n>\n> If the above race condition is a serious problem, if one thinks at\n> least it is a problem at all, that needs to be fixed.\n\nI don't think we can design a software which has open race conditions\neven though they are rarely occurring :)\n\nI don't think\n> just making InstallXLogFileSegmentActive false is enough. By looking\n> at the comment [1], it doesn't make sense to move ahead for restoring\n> from the archive location without the WAL receiver fully stopped.\n> IMO, the real fix is to just remove WalRcvStreaming() and call\n> XLogShutdownWalRcv() unconditionally. Anyways, we have the\n> Assert(!WalRcvStreaming()); down below. I don't think it will create\n> any problem.\n\nIf WalRcvStreaming() is returning false that means walreceiver is\nalready stopped so we don't need to shutdown it externally. I think\nlike we are setting this flag outside start streaming we can reset it\nalso outside XLogShutdownWalRcv. 
Or I am fine even if we call\nXLogShutdownWalRcv() because if walreceiver is stopped it will just\nreset the flag we want it to reset and it will do nothing else.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 18:30:43 +0530", "msg_from": "Dilip Kumar <dilipbalaut@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Fri, Feb 11, 2022 at 6:31 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Fri, Feb 11, 2022 at 6:22 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > On Fri, Feb 11, 2022 at 3:33 PM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n> > >\n>\n> > IIUC, the issue can happen while the walreceiver failed to get WAL\n> > from primary for whatever reasons and its status is not\n> > WALRCV_STOPPING or WALRCV_STOPPED, and the startup process moved ahead\n> > in WaitForWALToBecomeAvailable for reading from archive which ends up\n> > in this assertion failure. ITSM, a rare scenario and it depends on\n> > what walreceiver does between failure to get WAL from primary and\n> > updating status to WALRCV_STOPPING or WALRCV_STOPPED.\n> >\n> > If the above race condition is a serious problem, if one thinks at\n> > least it is a problem at all, that needs to be fixed.\n>\n> I don't think we can design a software which has open race conditions\n> even though they are rarely occurring :)\n\nYes.\n\n> I don't think\n> > just making InstallXLogFileSegmentActive false is enough. By looking\n> > at the comment [1], it doesn't make sense to move ahead for restoring\n> > from the archive location without the WAL receiver fully stopped.\n> > IMO, the real fix is to just remove WalRcvStreaming() and call\n> > XLogShutdownWalRcv() unconditionally. Anyways, we have the\n> > Assert(!WalRcvStreaming()); down below. 
I don't think it will create\n> > any problem.\n>\n> If WalRcvStreaming() is returning false that means walreceiver is\n> already stopped so we don't need to shutdown it externally. I think\n> like we are setting this flag outside start streaming we can reset it\n> also outside XLogShutdownWalRcv. Or I am fine even if we call\n> XLogShutdownWalRcv() because if walreceiver is stopped it will just\n> reset the flag we want it to reset and it will do nothing else.\n\nAs I said, I'm okay with just calling XLogShutdownWalRcv()\nunconditionally as it just returns in case walreceiver has already\nstopped and updates the InstallXLogFileSegmentActive flag to false.\n\nLet's also hear what other hackers have to say about this.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Fri, 11 Feb 2022 22:25:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "At Fri, 11 Feb 2022 22:25:49 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> > I don't think\n> > > just making InstallXLogFileSegmentActive false is enough. By looking\n> > > at the comment [1], it doesn't make sense to move ahead for restoring\n> > > from the archive location without the WAL receiver fully stopped.\n> > > IMO, the real fix is to just remove WalRcvStreaming() and call\n> > > XLogShutdownWalRcv() unconditionally. Anyways, we have the\n> > > Assert(!WalRcvStreaming()); down below. I don't think it will create\n> > > any problem.\n> >\n> > If WalRcvStreaming() is returning false that means walreceiver is\n> > already stopped so we don't need to shutdown it externally. I think\n> > like we are setting this flag outside start streaming we can reset it\n> > also outside XLogShutdownWalRcv. 
Or I am fine even if we call\n> XLogShutdownWalRcv() because if walreceiver is stopped it will just\n> reset the flag we want it to reset and it will do nothing else.\n> \n> As I said, I'm okay with just calling XLogShutdownWalRcv()\n> unconditionally as it just returns in case walreceiver has already\n> stopped and updates the InstallXLogFileSegmentActive flag to false.\n> \n> Let's also hear what other hackers have to say about this.\n\nFirstly, good catch:) And the direction seems right.\n\nIt seems like an overlook of cc2c7d65fc. We cannot install new wal\nsegments only while we're in archive recovery. Conversely, we must\nturn off it when entering archive recovery (not exiting streaming\nrecovery). So, *I* feel like to do that at the beginning of\nXLOG_FROM_ARCHIVE/PG_WAL rather than the end of XLOG_FROM_STREAM.\n\n(And I would like to remove XLogShutDownWalRcv() and turn off the flag\n in StartupXLOG explicitly, but it would be overdone.)\n\n--- a/src/backend/access/transam/xlog.c\n+++ b/src/backend/access/transam/xlog.c\n@@ -12800,6 +12800,16 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n \t\t\t\t */\n \t\t\t\tAssert(!WalRcvStreaming());\n \n+\t\t\t\t/*\n+\t\t\t\t * WAL segment installation conflicts with archive\n+\t\t\t\t * recovery. Make sure it is turned off. XLogShutdownWalRcv()\n+\t\t\t\t * does that but it is not done when the process has voluntary\n+\t\t\t\t * exited for example for replication timeout.\n+\t\t\t\t */\n+\t\t\t\tLWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n+\t\t\t\tXLogCtl->InstallXLogFileSegmentActive = false;\n+\t\t\t\tLWLockRelease(ControlFileLock);\n+\n \t\t\t\t/* Close any old file we might have open.
*/\n \t\t\t\tif (readFile >= 0)\n\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Mon, 14 Feb 2022 17:14:28 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Mon, Feb 14, 2022 at 1:44 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> At Fri, 11 Feb 2022 22:25:49 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in\n> > > I don't think\n> > > > just making InstallXLogFileSegmentActive false is enough. By looking\n> > > > at the comment [1], it doesn't make sense to move ahead for restoring\n> > > > from the archive location without the WAL receiver fully stopped.\n> > > > IMO, the real fix is to just remove WalRcvStreaming() and call\n> > > > XLogShutdownWalRcv() unconditionally. Anyways, we have the\n> > > > Assert(!WalRcvStreaming()); down below. I don't think it will create\n> > > > any problem.\n> > >\n> > > If WalRcvStreaming() is returning false that means walreceiver is\n> > > already stopped so we don't need to shutdown it externally. I think\n> > > like we are setting this flag outside start streaming we can reset it\n> > > also outside XLogShutdownWalRcv. Or I am fine even if we call\n> > > XLogShutdownWalRcv() because if walreceiver is stopped it will just\n> > > reset the flag we want it to reset and it will do nothing else.\n> >\n> > As I said, I'm okay with just calling XLogShutdownWalRcv()\n> > unconditionally as it just returns in case walreceiver has already\n> > stopped and updates the InstallXLogFileSegmentActive flag to false.\n> >\n> > Let's also hear what other hackers have to say about this.\n>\n> Firstly, good catch:) And the direction seems right.\n>\n> It seems like an overlook of cc2c7d65fc. We cannot install new wal\n> segments only while we're in archive recovery.
Conversely, we must\n> turn off it when entering archive recovery (not exiting streaming\n> recovery). So, *I* feel like to do that at the beginning of\n> XLOG_FROM_ARCHIVE/PG_WAL rather than the end of XLOG_FROM_STREAM.\n>\n> (And I would like to remove XLogShutDownWalRcv() and turn off the flag\n> in StartupXLOG explicitly, but it would be overdone.)\n>\n> --- a/src/backend/access/transam/xlog.c\n> +++ b/src/backend/access/transam/xlog.c\n> @@ -12800,6 +12800,16 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n> */\n> Assert(!WalRcvStreaming());\n>\n> + /*\n> + * WAL segment installation conflicts with archive\n> + * recovery. Make sure it is turned off. XLogShutdownWalRcv()\n> + * does that but it is not done when the process has voluntary\n> + * exited for example for replication timeout.\n> + */\n> + LWLockAcquire(ControlFileLock, LW_EXCLUSIVE);\n> + XLogCtl->InstallXLogFileSegmentActive = false;\n> + LWLockRelease(ControlFileLock);\n> +\n> /* Close any old file we might have open. */\n> if (readFile >= 0)\n\nToday I encountered the assertion failure [2] twice while working on\nanother patch [1]. The pattern seems to be that the walreceiver got\nkilled or crashed and set its state to WALRCV_STOPPING or\nWALRCV_STOPPED by the time the WAL state machine moves to archive and\nhence the following XLogShutdownWalRcv() code will not get hit:\n\n /*\n * Before we leave XLOG_FROM_STREAM state, make sure that\n * walreceiver is not active, so that it won't overwrite\n * WAL that we restore from archive.\n */\n if (WalRcvStreaming())\n ShutdownWalRcv();\n\nI agree with Kyotaro-san to reset InstallXLogFileSegmentActive before\nentering into the archive mode.
Hence I tweaked the code introduced by\nthe following commit a bit, the result v1 patch is attached herewith.\nPlease review it.\n\ncommit cc2c7d65fc27e877c9f407587b0b92d46cd6dd16\nAuthor: Noah Misch <noah@leadboat.com>\nDate: Mon Jun 28 18:34:56 2021 -0700\n\n Skip WAL recycling and preallocation during archive recovery.\n\n[1] https://www.postgresql.org/message-id/CALj2ACUYz1z6QPduGn5gguCkfd-ko44j4hKcOMtp6fzv9xEWgw%40mail.gmail.com\n[2]\n2022-08-11 05:10:29.051 UTC [1487086] FATAL: terminating walreceiver\ndue to timeout\ncp: cannot stat '/home/ubuntu/archived_wal/00000002.history': No such\nfile or directory\n2022-08-11 05:10:29.054 UTC [1487075] LOG: switched WAL source from\nstream to archive after failure\n2022-08-11 05:10:29.067 UTC [1487075] LOG: restored log file\n\"000000010000000000000034\" from archive\nTRAP: FailedAssertion(\"!IsInstallXLogFileSegmentActive()\", File:\n\"xlogrecovery.c\", Line: 4149, PID: 1487075)\npostgres: startup waiting for\n000000010000000000000034(ExceptionalCondition+0xd6)[0x559d20e6deb1]\npostgres: startup waiting for\n000000010000000000000034(+0x1e4e14)[0x559d20813e14]\npostgres: startup waiting for\n000000010000000000000034(+0x1e50c0)[0x559d208140c0]\npostgres: startup waiting for\n000000010000000000000034(+0x1e4354)[0x559d20813354]\npostgres: startup waiting for\n000000010000000000000034(+0x1e3946)[0x559d20812946]\npostgres: startup waiting for\n000000010000000000000034(+0x1dbc58)[0x559d2080ac58]\npostgres: startup waiting for\n000000010000000000000034(+0x1db3ce)[0x559d2080a3ce]\npostgres: startup waiting for\n000000010000000000000034(XLogReadAhead+0x3d)[0x559d2080aa23]\npostgres: startup waiting for\n000000010000000000000034(+0x1d9713)[0x559d20808713]\npostgres: startup waiting for\n000000010000000000000034(+0x1d909c)[0x559d2080809c]\npostgres: startup waiting for\n000000010000000000000034(XLogPrefetcherReadRecord+0xf9)[0x559d208092a2]\npostgres: startup waiting
for\n000000010000000000000034(+0x1e3427)[0x559d20812427]\npostgres: startup waiting for\n000000010000000000000034(PerformWalRecovery+0x465)[0x559d2080fd0f]\npostgres: startup waiting for\n000000010000000000000034(StartupXLOG+0x987)[0x559d207fc732]\npostgres: startup waiting for\n000000010000000000000034(StartupProcessMain+0xf8)[0x559d20bc3990]\npostgres: startup waiting for\n000000010000000000000034(AuxiliaryProcessMain+0x1ae)[0x559d20bb62cf]\npostgres: startup waiting for\n000000010000000000000034(+0x593020)[0x559d20bc2020]\npostgres: startup waiting for\n000000010000000000000034(+0x591ebb)[0x559d20bc0ebb]\npostgres: startup waiting for\n000000010000000000000034(+0x590907)[0x559d20bbf907]\n/lib/x86_64-linux-gnu/libc.so.6(+0x42520)[0x7fe518b26520]\n/lib/x86_64-linux-gnu/libc.so.6(__select+0xbd)[0x7fe518bff74d]\npostgres: startup waiting for\n000000010000000000000034(+0x58e1bd)[0x559d20bbd1bd]\npostgres: startup waiting for\n000000010000000000000034(PostmasterMain+0x14cc)[0x559d20bbcaa2]\npostgres: startup waiting for\n000000010000000000000034(+0x44a752)[0x559d20a79752]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7fe518b0dd90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7fe518b0de40]\npostgres: startup waiting for\n000000010000000000000034(_start+0x25)[0x559d206fe5b5]\n2022-08-11 05:10:29.161 UTC [1486413] LOG: startup process (PID\n1487075) was terminated by signal 6: Aborted\n2022-08-11 05:10:29.161 UTC [1486413] LOG: terminating any other\nactive server processes\n2022-08-11 05:10:29.169 UTC [1486413] LOG: shutting down due to\nstartup process failure\n2022-08-11 05:10:29.181 UTC [1486413] LOG: database system is shut down\n\n2022-08-11 05:34:54.234 UTC [1509079] LOG: switched WAL source from\nstream to archive after failure\n2022-08-11 05:34:54.234 UTC [1509079] LOG: current WAL source is\narchive and last WAL source is stream, standby mode 1\n2022-08-11 05:34:54.248 UTC [1509079] LOG: restored log file\n\"000000010000000000000003\" from
archive\nTRAP: FailedAssertion(\"!IsInstallXLogFileSegmentActive()\", File:\n\"xlogrecovery.c\", Line: 4155, PID: 1509079)\npostgres: startup waiting for\n000000010000000000000003(ExceptionalCondition+0xd6)[0x55e9cb1cbf2a]\npostgres: startup waiting for\n000000010000000000000003(+0x1e4e8d)[0x55e9cab71e8d]\npostgres: startup waiting for\n000000010000000000000003(+0x1e5139)[0x55e9cab72139]\npostgres: startup waiting for\n000000010000000000000003(+0x1e43cd)[0x55e9cab713cd]\npostgres: startup waiting for\n000000010000000000000003(+0x1e3946)[0x55e9cab70946]\npostgres: startup waiting for\n000000010000000000000003(+0x1dbc58)[0x55e9cab68c58]\npostgres: startup waiting for\n000000010000000000000003(+0x1db0b7)[0x55e9cab680b7]\npostgres: startup waiting for\n000000010000000000000003(XLogReadAhead+0x3d)[0x55e9cab68a23]\npostgres: startup waiting for\n000000010000000000000003(+0x1d9713)[0x55e9cab66713]\npostgres: startup waiting for\n000000010000000000000003(+0x1d909c)[0x55e9cab6609c]\npostgres: startup waiting for\n000000010000000000000003(XLogPrefetcherReadRecord+0xf9)[0x55e9cab672a2]\npostgres: startup waiting for\n000000010000000000000003(+0x1e3427)[0x55e9cab70427]\npostgres: startup waiting for\n000000010000000000000003(+0x1e4911)[0x55e9cab71911]\npostgres: startup waiting for\n000000010000000000000003(InitWalRecovery+0xaf4)[0x55e9cab6bc4e]\npostgres: startup waiting for\n000000010000000000000003(StartupXLOG+0x4af)[0x55e9cab5a25a]\npostgres: startup waiting for\n000000010000000000000003(StartupProcessMain+0xf8)[0x55e9caf21a09]\npostgres: startup waiting for\n000000010000000000000003(AuxiliaryProcessMain+0x1ae)[0x55e9caf14348]\npostgres: startup waiting for\n000000010000000000000003(+0x593099)[0x55e9caf20099]\npostgres: startup waiting for\n000000010000000000000003(PostmasterMain+0x1476)[0x55e9caf1aac5]\npostgres: startup waiting
for\n000000010000000000000003(+0x44a7cb)[0x55e9cadd77cb]\n/lib/x86_64-linux-gnu/libc.so.6(+0x29d90)[0x7f754c287d90]\n/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0x80)[0x7f754c287e40]\npostgres: startup waiting for\n000000010000000000000003(_start+0x25)[0x55e9caa5c5b5]\n2022-08-11 05:34:54.336 UTC [1509076] LOG: startup process (PID\n1509079) was terminated by signal 6: Aborted\n2022-08-11 05:34:54.336 UTC [1509076] LOG: aborting startup due to\nstartup process failure\n2022-08-11 05:34:54.337 UTC [1509076] LOG: database system is shut down\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/", "msg_date": "Thu, 11 Aug 2022 22:06:24 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Thu, Aug 11, 2022 at 10:06 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Today I encountered the assertion failure [2] twice while working on\n> another patch [1]. The pattern seems to be that the walreceiver got\n> killed or crashed and set its state to WALRCV_STOPPING or\n> WALRCV_STOPPED by the time the WAL state machine moves to archive and\n> hence the following XLogShutdownWalRcv() code will not get hit:\n>\n> /*\n> * Before we leave XLOG_FROM_STREAM state, make sure that\n> * walreceiver is not active, so that it won't overwrite\n> * WAL that we restore from archive.\n> */\n> if (WalRcvStreaming())\n> ShutdownWalRcv();\n>\n> I agree with Kyotaro-san to reset InstallXLogFileSegmentActive before\n> entering into the archive mode.
Hence I tweaked the code introduced by\n> the following commit a bit, the result v1 patch is attached herewith.\n> Please review it.\n\nI added it to the current commitfest to not lose track of it:\nhttps://commitfest.postgresql.org/39/3814/.\n\n-- \nBharath Rupireddy\nRDS Open Source Databases: https://aws.amazon.com/rds/postgresql/\n\n\n", "msg_date": "Mon, 15 Aug 2022 11:30:49 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Mon, Aug 15, 2022 at 11:30 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> On Thu, Aug 11, 2022 at 10:06 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Today I encountered the assertion failure [2] twice while working on\n> > another patch [1]. The pattern seems to be that the walreceiver got\n> > killed or crashed and set its state to WALRCV_STOPPING or\n> > WALRCV_STOPPED by the time the WAL state machine moves to archive and\n> > hence the following XLogShutdownWalRcv() code will not get hit:\n> >\n> > /*\n> > * Before we leave XLOG_FROM_STREAM state, make sure that\n> > * walreceiver is not active, so that it won't overwrite\n> > * WAL that we restore from archive.\n> > */\n> > if (WalRcvStreaming())\n> > ShutdownWalRcv();\n> >\n> > I agree with Kyotaro-san to reset InstallXLogFileSegmentActive before\n> > entering into the archive mode.
Hence I tweaked the code introduced by\n> > the following commit a bit, the result v1 patch is attached herewith.\n> > Please review it.\n>\n> I added it to the current commitfest to not lose track of it:\n> https://commitfest.postgresql.org/39/3814/.\n\nToday, I spent some more time on this issue, I modified the v1 patch\nposted upthread a bit - now resetting the InstallXLogFileSegmentActive\nonly when the WAL source switched to archive, not every time in\narchive mode.\n\nI'm attaching v2 patch here with, please review it further.\n\nJust for the records - there's another report of the assertion failure\nat [1], many thanks to Kyotaro-san for providing consistent\nreproducible steps.\n\n[1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Sat, 10 Sep 2022 07:52:01 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Sat, Sep 10, 2022 at 07:52:01AM +0530, Bharath Rupireddy wrote:\n> Today, I spent some more time on this issue, I modified the v1 patch\n> posted upthread a bit - now resetting the InstallXLogFileSegmentActive\n> only when the WAL source switched to archive, not every time in\n> archive mode.\n> \n> I'm attaching v2 patch here with, please review it further.\n> \n> Just for the records - there's another report of the assertion failure\n> at [1], many thanks to Kyotaro-san for providing consistent\n> reproducible steps.\n> \n> [1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n\nWell, the fact that cc2c7d6 is involved here makes this thread an open\nitem for PG15 as far as I can see, assigned to Noah (added now in\nCC).\n\nWhile reading
your last patch, I have found rather confusing that we\nonly reset InstallXLogFileSegmentActive when the current source is the\narchives and it does not match the old source. This code is already\ncomplicated, and I don't think that having more assumptions in its\ninternals is a good thing when it comes to its long-term maintenance.\nIn short, HEAD is rather conservative when it comes to set\nInstallXLogFileSegmentActive, switching it only when we request\nstreaming with RequestXLogStreaming(), but too aggressive when it\ncomes to reset it and we want something in the middle ground. FWIW, I\nfind better the approach taken by Horiguchi-san in [1] to reset the\nstate before we attempt to read WAL from the archives *or* pg_wal,\nafter we know that the last source has failed, where we know that we\nare not streaming yet (but recovery may be requested soon).\n\nSide note: I don't see much the point of having two routines named\nSetInstallXLogFileSegmentActive and ResetInstallXLogFileSegmentActive\nthat do the opposite thing.
We could just have one.\n\n[1]: https://www.postgresql.org/message-id/20220214.171428.735280610520122187.horikyota.ntt@gmail.com\n--\nMichael", "msg_date": "Mon, 12 Sep 2022 16:00:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Mon, Sep 12, 2022 at 12:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Sep 10, 2022 at 07:52:01AM +0530, Bharath Rupireddy wrote:\n> > Today, I spent some more time on this issue, I modified the v1 patch\n> > posted upthread a bit - now resetting the InstallXLogFileSegmentActive\n> > only when the WAL source switched to archive, not every time in\n> > archive mode.\n> >\n> > I'm attaching v2 patch here with, please review it further.\n> >\n> > Just for the records - there's another report of the assertion failure\n> > at [1], many thanks to Kyotaro-san for providing consistent\n> > reproducible steps.\n> >\n> > [1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n>\n> Well, the fact that cc2c7d6 is involved here makes this thread an open\n> item for PG15 as far as I can see, assigned to Noah (added now in\n> CC).\n\nThanks.\n\n> While reading your last patch, I have found rather confusing that we\n> only reset InstallXLogFileSegmentActive when the current source is the\n> archives and it does not match the old source. This code is already\n> complicated, and I don't think that having more assumptions in its\n> internals is a good thing when it comes to its long-term maintenance.\n> In short, HEAD is rather conservative when it comes to set\n> InstallXLogFileSegmentActive, switching it only when we request\n> streaming with RequestXLogStreaming(), but too aggressive when it\n> comes to reset it and we want something in the middle ground.
FWIW, I\n> find better the approach taken by Horiguchi-san in [1] to reset the\n> state before we attempt to read WAL from the archives *or* pg_wal,\n> after we know that the last source has failed, where we know that we\n> are not streaming yet (but recovery may be requested soon).\n>\n> [1]: https://www.postgresql.org/message-id/20220214.171428.735280610520122187.horikyota.ntt@gmail.com\n\nI think the fix at [1] is wrong. It basically resets\nInstallXLogFileSegmentActive for both XLOG_FROM_ARCHIVE and\nXLOG_FROM_PG_WAL, remember that we don't need WAL files preallocation\nand recycling just for archive. Moreover, if we were to reset just for\narchive there, it's too aggressive, meaning for every WAL record, we\ntake ControlFileLock in exclusive mode to reset it.\n\nIMO, doing it once when we switch the source to archive is the correct\nway because it avoids frequent ControlFileLock acquisitions and makes\nthe fix more intuitive. And we have a comment saying why we reset\nInstallXLogFileSegmentActive, if required we can point to the commit\ncc2c7d6 in the comments.\n\n> Side note: I don't see much the point of having two routines named\n> SetInstallXLogFileSegmentActive and ResetInstallXLogFileSegmentActive\n> that do the opposite thing. We could just have one.\n\nIt's an implementation choice.
However, I changed it in the v3 patch.\n\nPlease review the attached v3 patch further.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 12 Sep 2022 17:58:08 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Mon, Sep 12, 2022 at 05:58:08PM +0530, Bharath Rupireddy wrote:\n> On Mon, Sep 12, 2022 at 12:30 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Sat, Sep 10, 2022 at 07:52:01AM +0530, Bharath Rupireddy wrote:\n> > > Just for the records - there's another report of the assertion failure\n> > > at [1], many thanks to Kyotaro-san for providing consistent\n> > > reproducible steps.\n> > >\n> > > [1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n\nI plan to use that message's patch, because it guarantees WALRCV_STOPPED at\nthe code location being changed. Today, in the unlikely event of\n!WalRcvStreaming() due to WALRCV_WAITING or WALRCV_STOPPING, that code\nproceeds without waiting for WALRCV_STOPPED. I think WALRCV_STOPPING can't\nhappen there. Even if WALRCV_WAITING also can't happen, it's a simplicity win\nto remove the need to think about that. Dilip Kumar and Nathan Bossart also\nstated support for that design.\n\nIf WALRCV_WAITING or WALRCV_STOPPING can happen at that patch's code site, I\nperhaps should back-patch the change to released versions.
Does anyone know\nwhether one or both can happen?\n\n> > FWIW, I\n> > find better the approach taken by Horiguchi-san in [1] to reset the\n> > state before we attempt to read WAL from the archives *or* pg_wal,\n> > after we know that the last source has failed, where we know that we\n> > are not streaming yet (but recovery may be requested soon).\n> >\n> > [1]: https://www.postgresql.org/message-id/20220214.171428.735280610520122187.horikyota.ntt@gmail.com\n> \n> I think the fix at [1] is wrong. It basically resets\n> InstallXLogFileSegmentActive for both XLOG_FROM_ARCHIVE and\n> XLOG_FROM_PG_WAL, remember that we don't need WAL files preallocation\n> and recycling just for archive. Moreover, if we were to reset just for\n> archive there, it's too aggressive, meaning for every WAL record, we\n> take ControlFileLock in exclusive mode to reset it.\n> \n> IMO, doing it once when we switch the source to archive is the correct\n> way because it avoids frequent ControlFileLock acquisitions and makes\n> the fix more intuitive. And we have a comment saying why we reset\n> InstallXLogFileSegmentActive, if required we can point to the commit\n> cc2c7d6 in the comments.\n\nThe startup process maintains the invariant that archive recovery and\nstreaming recovery don't overlap in time; one stops before the other starts.\nAs long as it achieves that, I'm not aware of a cause to care whether the flag\nreset happens when streaming ends vs. when archive recovery begins. If the\nstartup process develops a defect such that it ceases to maintain that\ninvariant, \"when archive recovery begins\" does have the\nadvantage-or-disadvantage of causing a failure in non-assert builds. The\nother way gets only an assertion failure.
Michael Paquier or Bharath\nRupireddy, did you envision benefits other than that one?\n\n\n", "msg_date": "Mon, 12 Sep 2022 20:22:56 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Mon, Sep 12, 2022 at 08:22:56PM -0700, Noah Misch wrote:\n> I plan to use that message's patch, because it guarantees WALRCV_STOPPED at\n> the code location being changed. Today, in the unlikely event of\n> !WalRcvStreaming() due to WALRCV_WAITING or WALRCV_STOPPING, that code\n> proceeds without waiting for WALRCV_STOPPED. I think WALRCV_STOPPING can't\n> happen there. Even if WALRCV_WAITING also can't happen, it's a simplicity win\n> to remove the need to think about that. Dilip Kumar and Nathan Bossart also\n> stated support for that design.\n\nI did not notice this one. Sounds pretty much right.\n\n> If WALRCV_WAITING or WALRCV_STOPPING can happen at that patch's code site, I\n> perhaps should back-patch the change to released versions. Does anyone know\n> whether one or both can happen?\n\nHmm. I agree that this could be a good thing in the long-term.\n\n> The startup process maintains the invariant that archive recovery and\n> streaming recovery don't overlap in time; one stops before the other starts.\n> As long as it achieves that, I'm not aware of a cause to care whether the flag\n> reset happens when streaming ends vs. when archive recovery begins. If the\n> startup process develops a defect such that it ceases to maintain that\n> invariant, \"when archive recovery begins\" does have the\n> advantage-or-disadvantage of causing a failure in non-assert builds. The\n> other way gets only an assertion failure. Michael Paquier or Bharath\n> Rupireddy, did you envision benefits other than that one?\n\nI think that you are forgetting one case here though: a standby where\nstandby.signal is set without restore_command and with\nprimary_conninfo.
It could move on with recovery even if the WAL\nreceiver is unstable when something external pushes more WAL segments\nto the standby's pg_wal/. So my point of upthread is that we should\nmake sure that standby.signal+primary_conninfo resets the flag state\nwhen the WAL receiver stops streaming in this case as well.\n\nThe proposed v1-0001-Do-not-skip-calling-XLogShutdownWalRcv.patch\nwould achieve that, because it does not change the state of\nInstallXLogFileSegmentActive depending on the source being\nXLOG_FROM_ARCHIVE. So I'm fine with what you want to do.\n--\nMichael", "msg_date": "Tue, 13 Sep 2022 13:10:36 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Tue, Sep 13, 2022 at 8:52 AM Noah Misch <noah@leadboat.com> wrote:\n>\n> > > > [1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n>\n> I plan to use that message's patch, because it guarantees WALRCV_STOPPED at\n> the code location being changed. Today, in the unlikely event of\n> !WalRcvStreaming() due to WALRCV_WAITING or WALRCV_STOPPING, that code\n> proceeds without waiting for WALRCV_STOPPED.\n\nHm. That was the original fix [2] proposed and it works. The concern\nis that XLogShutdownWalRcv() does a bunch of work via ShutdownWalRcv()\n- it calls ConditionVariablePrepareToSleep(),\nConditionVariableCancelSleep() (has 2 lock acquisitions and\nreleases) and 1 function call WalRcvRunning() even for\nWALRCV_STOPPED case, all this is unnecessary IMO when we determine the\nwalreceiver's state is already WALRCV_STOPPED. I think we can do\nsomething like [3] to allay this concern and go with the fix proposed\nat [1] unconditionally calling XLogShutdownWalRcv().\n\n> If WALRCV_WAITING or WALRCV_STOPPING can happen at that patch's code site, I\n> perhaps should back-patch the change to released versions.
Does anyone know\n> whether one or both can happen?\n\nIMO, we must back-patch to the version where\ncc2c7d65fc27e877c9f407587b0b92d46cd6dd16 got introduced irrespective\nof any of the above happening.\n\nThoughts?\n\n[1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n[2] - https://www.postgresql.org/message-id/flat/CALj2ACUoBWbaFo_t0gew%2Bx6n0V%2BmpvB_23HLvsVD9abgCShV5A%40mail.gmail.com#e762ee94cbbe32a0b8c72c60793626b3\n[3] -\ndiff --git a/src/backend/replication/walreceiverfuncs.c\nb/src/backend/replication/walreceiverfuncs.c\nindex 90798b9d53..3487793c7a 100644\n--- a/src/backend/replication/walreceiverfuncs.c\n+++ b/src/backend/replication/walreceiverfuncs.c\n@@ -181,6 +181,7 @@ ShutdownWalRcv(void)\n WalRcvData *walrcv = WalRcv;\n pid_t walrcvpid = 0;\n bool stopped = false;\n+ bool was_stopped = false;\n\n /*\n * Request walreceiver to stop. Walreceiver will switch to\nWALRCV_STOPPED\n@@ -191,6 +192,7 @@ ShutdownWalRcv(void)\n switch (walrcv->walRcvState)\n {\n case WALRCV_STOPPED:\n+ was_stopped = true;\n break;\n case WALRCV_STARTING:\n walrcv->walRcvState = WALRCV_STOPPED;\n@@ -208,6 +210,10 @@ ShutdownWalRcv(void)\n }\n SpinLockRelease(&walrcv->mutex);\n\n+ /* Quick exit if walreceiver was already stopped. */\n+ if (was_stopped)\n+ return;\n+\n /* Unnecessary but consistent.
*/\n if (stopped)\n ConditionVariableBroadcast(&walrcv->walRcvStoppedCV);\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Tue, 13 Sep 2022 11:56:16 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "At Tue, 13 Sep 2022 11:56:16 +0530, Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com> wrote in \n> On Tue, Sep 13, 2022 at 8:52 AM Noah Misch <noah@leadboat.com> wrote:\n> >\n> > > > > [1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n> >\n> > I plan to use that message's patch, because it guarantees WALRCV_STOPPED at\n> > the code location being changed. Today, in the unlikely event of\n> > !WalRcvStreaming() due to WALRCV_WAITING or WALRCV_STOPPING, that code\n> > proceeds without waiting for WALRCV_STOPPED.\n> \n> Hm. That was the original fix [2] proposed and it works. The concern\n\n(Mmm. sorry for omitting that.)\n\n> is that XLogShutdownWalRcv() does a bunch of work via ShutdownWalRcv()\n> - it calls ConditionVariablePrepareToSleep(),\n\nAnyway the code path is executed in almost all cases because the same\nassertion fires otherwise. So I don't see a problem if we do the bunch\nof synchronization things also in that rare case. I'm not sure we\nwant to do [3].\n\n> > If WALRCV_WAITING or WALRCV_STOPPING can happen at that patch's code site, I\n> > perhaps should back-patch the change to released versions. Does anyone know\n> > whether one or both can happen?\n> \n> IMO, we must back-patch to the version where\n> cc2c7d65fc27e877c9f407587b0b92d46cd6dd16 got introduced irrespective\n> of any of the above happening.\n\nThat is, PG15?
I agree to that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Tue, 13 Sep 2022 19:26:29 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Tue, Sep 13, 2022 at 3:56 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n>\n> > is that XLogShutdownWalRcv() does a bunch of work via ShutdownWalRcv()\n> > - it calls ConditionVariablePrepareToSleep(),\n>\n> Anyway the code path is executed in almost all cases because the same\n> assertion fires otherwise. So I don't see a problem if we do the bunch\n> of synchronization things also in that rare case. I'm not sure we\n> want to do [3].\n\nIMO, we don't need to let ShutdownWalRcv() to do extra work of\nConditionVariablePrepareToSleep() - WalRcvRunning() -\nConditionVariableCancelSleep() for WALRCV_STOPPED cases when we know\nthe walreceiver status before - even though it doesn't have any\nproblems per se and at that place in the code the WALRCV_STOPPED cases\nare more frequent than any other walreceiver cases.\n\nHaving said that, let's also hear from other hackers.\n\n-- \nBharath Rupireddy\nPostgreSQL Contributors Team\nRDS Open Source Databases\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Thu, 15 Sep 2022 15:55:54 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Tue, Sep 13, 2022 at 11:56:16AM +0530, Bharath Rupireddy wrote:\n> On Tue, Sep 13, 2022 at 8:52 AM Noah Misch <noah@leadboat.com> wrote:\n> > > > > [1] - https://www.postgresql.org/message-id/flat/20220909.172949.2223165886970819060.horikyota.ntt%40gmail.com\n> >\n> > I plan to use that message's patch, because it guarantees WALRCV_STOPPED at\n> > the code location being changed.
Today, in the unlikely event of\n> > !WalRcvStreaming() due to WALRCV_WAITING or WALRCV_STOPPING, that code\n> > proceeds without waiting for WALRCV_STOPPED.\n\nPushed that way.\n\n> Hm. That was the original fix [2] proposed and it works. The concern\n> is that XLogShutdownWalRcv() does a bunch of work via ShutdownWalRcv()\n> - it calls ConditionVariablePrepareToSleep(),\n> ConditionVariableCancelSleep() (has lock 2 acquisitions and\n> requisitions) and 1 function call WalRcvRunning()) even for\n> WALRCV_STOPPED case, all this is unnecessary IMO when we determine the\n> walreceiver is state is already WALRCV_STOPPED.\n\nThat's fine. If we're reaching this code at high frequency, that implies\nwe're also forking walreceiver processes at high frequency. This code would\nbe a trivial part of the overall cost.\n\n> > If WALRCV_WAITING or WALRCV_STOPPING can happen at that patch's code site, I\n> > perhaps should back-patch the change to released versions. Does anyone know\n> > whether one or both can happen?\n\nIf anyone discovers such cases later, we can extend the back-patch then.\n\n> IMO, we must back-patch to the version where\n> cc2c7d65fc27e877c9f407587b0b92d46cd6dd16 got introduced irrespective\n> of any of the above happening.\n\nCorrect. 
The sentences were about *released* versions, not v15.\n\n\n", "msg_date": "Thu, 15 Sep 2022 06:58:43 -0700", "msg_from": "Noah Misch <noah@leadboat.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "On Thu, Sep 15, 2022 at 06:58:43AM -0700, Noah Misch wrote:\n> Pushed that way.\n\nThanks for taking care of it, Noah.\n--\nMichael", "msg_date": "Fri, 16 Sep 2022 09:05:46 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" }, { "msg_contents": "At Fri, 16 Sep 2022 09:05:46 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Sep 15, 2022 at 06:58:43AM -0700, Noah Misch wrote:\n> > Pushed that way.\n> \n> Thanks for taking care of it, Noah.\n\n+1. Thanks.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n", "msg_date": "Fri, 16 Sep 2022 10:45:54 +0900 (JST)", "msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Assertion failure in WaitForWALToBecomeAvailable state machine" } ]
[ { "msg_contents": "Add suport for server-side LZ4 base backup compression.\n\nLZ4 compression can be a lot faster than gzip compression, so users\nmay prefer it even if the compression ratio is not as good. We will\nwant pg_basebackup to support LZ4 compression and decompression on the\nclient side as well, and there is a pending patch for that, but it's\nby a different author, so I am committing this part separately for\nthat reason.\n\nJeevan Ladhe, reviewed by Tushar Ahuja and by me.\n\nDiscussion: http://postgr.es/m/CANm22Cg9cArXEaYgHVZhCnzPLfqXCZLAzjwTq7Fc0quXRPfbxA@mail.gmail.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/dab298471ff2f91f33bc25bfb73e435d3ab02148\n\nModified Files\n--------------\ndoc/src/sgml/protocol.sgml | 7 +-\ndoc/src/sgml/ref/pg_basebackup.sgml | 24 ++-\nsrc/backend/replication/Makefile | 1 +\nsrc/backend/replication/basebackup.c | 7 +-\nsrc/backend/replication/basebackup_lz4.c | 298 ++++++++++++++++++++++++++++++\nsrc/bin/pg_basebackup/pg_basebackup.c | 18 +-\nsrc/bin/pg_verifybackup/Makefile | 1 +\nsrc/bin/pg_verifybackup/t/008_untar.pl | 10 +-\nsrc/include/replication/basebackup_sink.h | 1 +\n9 files changed, 349 insertions(+), 18 deletions(-)", "msg_date": "Fri, 11 Feb 2022 13:54:36 +0000", "msg_from": "Robert Haas <rhaas@postgresql.org>", "msg_from_op": true, "msg_subject": "pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "On Fri, Feb 11, 2022 at 01:54:36PM +0000, Robert Haas wrote:\n> Add suport for server-side LZ4 base backup compression.\n> \n> LZ4 compression can be a lot faster than gzip compression, so users\n> may prefer it even if the compression ratio is not as good. 
We will\n> want pg_basebackup to support LZ4 compression and decompression on the\n> client side as well, and there is a pending patch for that, but it's\n> by a different author, so I am committing this part separately for\n> that reason.\n> \n> Jeevan Ladhe, reviewed by Tushar Ahuja and by me.\n\ncopperhead seems to be unhappy here:\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=copperhead&dt=2022-02-11%2021%3A52%3A48\n# Running: lz4 -d -m\n/home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/base.tar.lz4\nCan't exec \"lz4\": No such file or directory at\n/home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/../../../src/test/perl/PostgreSQL/Test/Utils.pm\nline 388.\nBail out! failed to execute command \"lz4 -d -m\n/home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/base.tar.lz4\":\nNo such file or directory\n### Stopping node \"primary\" using mode immediate\n\nPerhaps you'd better check that 'decompress_program' can be executed\nand skip things if the command is not around? Note that\n020_pg_receivewal.pl does an extra lz4 --version as a safety measure,\nbut 008_untar.pl does not.\n--\nMichael", "msg_date": "Sat, 12 Feb 2022 13:38:45 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "On Fri, Feb 11, 2022 at 11:39 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Perhaps you'd better check that 'decompress_program' can be executed\n> and skip things if the command is not around? Note that\n> 020_pg_receivewal.pl does an extra lz4 --version as a safety measure,\n> but 008_untar.pl does not.\n\nYou may be right, but I'm not entirely convinced. 
$ENV{'LZ4'} here is\nbeing set by make, and make is setting it to whatever configure found.\nIf configure found a version of the lz4 program that doesn't actually\nwork, isn't that configure's fault, or the system administrator's\nfault, rather than this test's fault?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 12 Feb 2022 07:24:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "Hi,\n\nOn 2022-02-12 13:38:45 +0900, Michael Paquier wrote:\n> Perhaps you'd better check that 'decompress_program' can be executed\n> and skip things if the command is not around? Note that\n> 020_pg_receivewal.pl does an extra lz4 --version as a safety measure,\n> but 008_untar.pl does not.\n\nActually executing the command doesn't seem useful? It doesn't seem useful to\nignore an executable that's not actually executable or such. Testing whether\nthe file exists makes sense though.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 14:06:43 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "On 2022-02-12 07:24:46 -0500, Robert Haas wrote:\n> You may be right, but I'm not entirely convinced. $ENV{'LZ4'} here is\n> being set by make, and make is setting it to whatever configure found.\n> If configure found a version of the lz4 program that doesn't actually\n> work, isn't that configure's fault, or the system administrator's\n> fault, rather than this test's fault?\n\nI don't think there's an actual configure check for the lz4 binary? 
Looks like\na static assignment in src/Makefile.global.in to me.\n\n\n", "msg_date": "Sat, 12 Feb 2022 14:09:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "On Sat, Feb 12, 2022 at 5:09 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-02-12 07:24:46 -0500, Robert Haas wrote:\n> > You may be right, but I'm not entirely convinced. $ENV{'LZ4'} here is\n> > being set by make, and make is setting it to whatever configure found.\n> > If configure found a version of the lz4 program that doesn't actually\n> > work, isn't that configure's fault, or the system administrator's\n> > fault, rather than this test's fault?\n>\n> I don't think there's an actual configure check for the lz4 binary? Looks like\n> a static assignment in src/Makefile.global.in to me.\n\nOh. That seems kind of dumb.\n\nIt does mean that trying to run it is all that we can do, though,\nbecause we don't have a full path.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sat, 12 Feb 2022 17:10:32 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Feb 12, 2022 at 5:09 PM Andres Freund <andres@anarazel.de> wrote:\n>> I don't think there's an actual configure check for the lz4 binary? Looks like\n>> a static assignment in src/Makefile.global.in to me.\n\n> Oh. 
That seems kind of dumb.\n\nIt looks to me like somebody figured it didn't need any more support\nthan gzip/bzip2, which is wrong on a couple of grounds:\n* hardly any modern platforms lack those, unlike lz4\n* we don't invoke either one of them during testing, only when\n you explicitly ask to make a compressed tarball\n\nI think adding an explicit PGAC_PATH_PROGS would be worth the cycles.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Feb 2022 17:16:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "On Sat, Feb 12, 2022 at 05:16:03PM -0500, Tom Lane wrote:\n> It looks to me like somebody figured it didn't need any more support\n> than gzip/bzip2, which is wrong on a couple of grounds:\n> * hardly any modern platforms lack those, unlike lz4\n> * we don't invoke either one of them during testing, only when\n> you explicitly ask to make a compressed tarball\n> \n> I think adding an explicit PGAC_PATH_PROGS would be worth the cycles.\n\nWell, this somebody is I, and the buildfarm did not blow up on any of\nthat so that looked rather fine. Adding a few cycles for this check\nis fine by me. What do you think of the attached?\n--\nMichael", "msg_date": "Sun, 13 Feb 2022 12:28:31 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sat, Feb 12, 2022 at 05:16:03PM -0500, Tom Lane wrote:\n>> I think adding an explicit PGAC_PATH_PROGS would be worth the cycles.\n\n> Well, this somebody is I, and the buildfarm did not blow up on any of\n> that so that looked rather fine.\n\nEh? copperhead for one is failing for exactly this reason:\n\nBailout called. 
Further testing stopped: failed to execute command \"lz4 -d -m /home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/base.tar.lz4\": No such file or directory\n\n> Adding a few cycles for this check\n> is fine by me. What do you think of the attached?\n\nLooks OK as far as it goes ... but do we need anything on the\nMSVC side?\n\nAlso, I can't help wondering why we have both GZIP_PROGRAM and GZIP\nvariables. I suppose that's a separate issue though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Feb 2022 23:12:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "On Sat, Feb 12, 2022 at 11:12 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Eh? copperhead for one is failing for exactly this reason:\n>\n> Bailout called. Further testing stopped: failed to execute command \"lz4 -d -m /home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/base.tar.lz4\": No such file or directory\n\nWell, that's because I didn't realize that LZ4 might be set to\nsomething that doesn't work at all. So Michael's thing worked, but\nbecause it (in my view) fixed the problem in a more surprising place\nthan I would have preferred, I made a commit later that turned out to\nbreak the buildfarm. So you can blame either one of us that you like.\n\nThanks, Michael, for preparing a patch.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 13 Feb 2022 11:13:51 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." 
}, { "msg_contents": "On Sat, Feb 12, 2022 at 11:12:50PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> Well, this somebody is I, and the buildfarm did not blow up on any of\n>> that so that looked rather fine.\n> \n> Eh? copperhead for one is failing for exactly this reason:\n> \n> Bailout called. Further testing stopped: failed to execute command\n> \"lz4 -d -m\n> /home/pgbf/buildroot/HEAD/pgsql.build/src/bin/pg_verifybackup/tmp_check/t_008_untar_primary_data/backup/server-backup/base.tar.lz4\":\n> No such file or directory\n\nWell, I have put in place a check in the TAP tests, that Robert has\nmanaged to break. It looks like it was wrong of me to assume that\nit would be fine as-is. Sorry about that.\n\n>> Adding a few cycles for this check\n>> is fine by me. What do you think of the attached?\n> \n> Looks OK as far as it goes ... but do we need anything on the\n> MSVC side?\n\nIt is possible to tweak the environment variable so one can unset it.\nIf we make things consistent with the configure check, we could tweak\nthe default to not be \"lz4\" if we don't have --with-lz4 in the MSVC\nconfiguration? I don't recall seeing in the wild an installer for LZ4\nwhere we would have the command but not the libraries, so perhaps\nthat's not worth the trouble, anyway, and we don't have any code paths\nin the tests where we use LZ4 without --with-lz4.\n\n> Also, I can't help wondering why we have both GZIP_PROGRAM and GZIP\n> variables. I suppose that's a separate issue though.\n\nFrom gzip's man page:\n\"The obsolescent environment variable GZIP can hold a set of default\noptions for gzip.\"\n\nThis means that saving the command into a variable with the same name\ninteracts with the command launch. 
This was discussed here:\nhttps://www.postgresql.org/message-id/nr63G0R6veCrLY30QMwxYA2aCawclWJopZiKEDN8HtKwIoIqZ79fVmSE2GxTyPD1Mk9FzRdfLCrc-BSGO6vTlDT_E2-aqtWeEiNn-qsDDzk=@pm.me\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 09:02:29 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." }, { "msg_contents": "On Sun, Feb 13, 2022 at 11:13:51AM -0500, Robert Haas wrote:\n> Well, that's because I didn't realize that LZ4 might be set to\n> something that doesn't work at all. So Michael's thing worked, but\n> because it (in my view) fixed the problem in a more surprising place\n> than I would have preferred, I made a commit later that turned out to\n> break the buildfarm. So you can blame either one of us that you like.\n> \n> Thanks, Michael, for preparing a patch.\n\nPatch that was slightly wrong, as the tests of pg_verifybackup still\nfailed once I moved away the lz4 command in my environment because LZ4\ngets set to an empty value. I have checked this case locally, and\napplied the patch to add the ./configure check, so copperhead should\nget back to green.\n\nA last thing, that has been mentioned by Andres upthread, is that we\nshould be able to remove the extra commands run with --version in the\ntests of pg_basebackup, as of the attached. I have not done that yet,\nas it seems better to wait for copperhead first, and the tests of\npg_basebackup run before pg_verifybackup so I don't want to mask one\nerror for another in case the buildfarm says something.\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 10:53:04 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." 
}, { "msg_contents": "On Mon, Feb 14, 2022 at 10:53:04AM +0900, Michael Paquier wrote:\n> A last thing, that has been mentioned by Andres upthread, is that we\n> should be able to remove the extra commands run with --version in the\n> tests of pg_basebackup, as of the attached. I have not done that yet,\n> as it seems better to wait for copperhead first, and the tests of\n> pg_basebackup run before pg_verifybackup so I don't want to mask one\n> error for another in case the buildfarm says something.\n\ncopperhead has reported back a couple of hours ago, and it has\nswitched back to green. Hence, I have moved on with the remaining\npiece and removed all those --version checks from the tests of\npg_basebackup and pg_receivewal.\n--\nMichael", "msg_date": "Tue, 15 Feb 2022 13:46:03 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pgsql: Add suport for server-side LZ4 base backup compression." } ]
[ { "msg_contents": "Hi,\n\nI was looking over the buildfarm results yesterday and this morning,\nand things seem to be mostly green, but not entirely. 'hippopotamus'\nwas failing yesterday with strange LDAP-related errors, and is now\ncomplaining about pgsql-Git-Dirty, which I assume means that someone\nis doing something funny on that machine manually. 'jay' has the same\nmaintainer and is also showing some failures.\n\nThe last failure on jay looks like this:\n\ntest pgp-armor ... ok 126 ms\ntest pgp-decrypt ... ok 192 ms\ntest pgp-encrypt ... ok 611 ms\ntest pgp-compression ... FAILED (test process exited with\nexit code 127) 2 ms\ntest pgp-pubkey-decrypt ... FAILED (test process exited with\nexit code 127) 2 ms\ntest pgp-pubkey-encrypt ... FAILED 9 ms\ntest pgp-info ... FAILED 8 ms\n\nIs there some maintenance happening on these machines that means we\nshould discount these failures, or is something broken?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 10:16:48 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": true, "msg_subject": "the build farm is ok, but not the hippopotamus (or the jay)" }, { "msg_contents": "On 2/11/22 16:16, Robert Haas wrote:\n> Hi,\n> \n> I was looking over the buildfarm results yesterday and this morning,\n> and things seem to be mostly green, but not entirely. 'hippopotamus'\n> was failing yesterday with strange LDAP-related errors, and is now\n> complaining about pgsql-Git-Dirty, which I assume means that someone\n> is doing something funny on that machine manually. 'jay' has the same\n> maintainer and is also showing some failures.\n> \n\nYes, those are \"my\" machines. I've been upgrading the OS on them, and \nthis is likely a fallout from that. 
Sorry about the noise.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Feb 2022 18:06:10 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: the build farm is ok, but not the hippopotamus (or the jay)" } ]
[ { "msg_contents": "Hello,\n\nWhen building Postgres using MSVC with Kerberos Version 4.1 and OpenSSL\n1.1.1l (both of them, using only one will raise no errors), I see errors\nlike:\n\n\"C:\\postgres\\pgsql.sln\" (default target) (1) ->\n\"C:\\postgres\\postgres.vcxproj\" (default target) (2) ->\n(ClCompile target) ->\n C:\\postgres\\src\\backend\\libpq\\be-secure-openssl.c(583,43): warning C4047:\n'function': 'X509_NAME *' differs in levels of indirection from 'int'\n[C:\\postgres\\pos\ntgres.vcxproj]\n\n\"C:\\postgres\\pgsql.sln\" (default target) (1) ->\n\"C:\\postgres\\postgres.vcxproj\" (default target) (2) ->\n(ClCompile target) ->\n C:\\postgres\\src\\backend\\libpq\\be-secure-openssl.c(74,35): error C2143:\nsyntax error: missing ')' before '(' [C:\\postgres\\postgres.vcxproj]\n\nThere is a comment in 'src/backend/libpq/be-secure-openssl.c' addressing\nthis issue, but I have to explicitly undefine X509_NAME. Please find\nattached a patch for so.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 11 Feb 2022 16:31:20 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "OpenSSL conflicts with wincrypt.h" }, { "msg_contents": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <juanjo.santamaria@gmail.com> writes:\n> There is a comment in 'src/backend/libpq/be-secure-openssl.c' addressing\n> this issue, but I have to explicitly undefine X509_NAME. Please find\n> attached a patch for so.\n\nUm ... why? 
Shouldn't the #undef in the OpenSSL headers take care\nof the problem?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Feb 2022 10:36:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: OpenSSL conflicts with wincrypt.h" }, { "msg_contents": "On Fri, Feb 11, 2022 at 4:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> =?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?= <\n> juanjo.santamaria@gmail.com> writes:\n> > There is a comment in 'src/backend/libpq/be-secure-openssl.c' addressing\n> > this issue, but I have to explicitly undefine X509_NAME. Please find\n> > attached a patch for so.\n>\n> Um ... why? Shouldn't the #undef in the OpenSSL headers take care\n> of the problem?\n>\n> After <openssl/ossl_typ.h> has been included, any inclusion of\n<wincrypt.h> will be troublesome. There is already something similar in\n'contrib/sslinfo/sslinfo.c'. This shouldn't be a problem while\ndefining WIN32_LEAN_AND_MEAN, but kerberos is directly including\n<wincrypt.h> in <win-mac.h>.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Tue, 15 Feb 2022 14:12:47 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: OpenSSL conflicts with wincrypt.h" },
{ "msg_contents": "\n\n> On 15 Feb 2022, at 14:12, Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:\n\n> After <openssl/ossl_typ.h> has been included, any inclusion of <wincrypt.h> will be troublesome.\n\nNot that it changes anything here, but FTR: in OpenSSL 3.0.0 and onwards this\nundef has been moved to <openssl/types.h>\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Tue, 15 Feb 2022 14:32:28 +0100", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: OpenSSL conflicts with wincrypt.h" } ]
[ { "msg_contents": "Hello,\n\nIf you execute pg_receivewal.exe with the option \"--compression-method\ngzip\" it will fail with no error. You can see an error in the db log:\n\n2022-02-10 11:46:32.725 CET [11664][walsender] [pg_receivewal][3/0:0] LOG:\ncould not receive data from client: An existing connection was forcibly\nclosed by the remote host.\n\nTracing the execution you can see:\n\nUnhandled exception at 0x00007FFEA78C1208 (ucrtbase.dll) in An invalid\nparameter was passed to a function that considers invalid parameters fatal.\n ucrtbase.dll!00007ff8e8ae36a2() Unknown\n> zlib1.dll!gz_comp(gz_state * state, int flush) Line 111 C\n zlib1.dll!gz_write(gz_state * state, const void * buf, unsigned __int64\nlen) Line 235 C\n pg_receivewal.exe!dir_write(void * f, const void * buf, unsigned __int64\ncount) Line 300 C\n pg_receivewal.exe!ProcessXLogDataMsg(pg_conn * conn, StreamCtl * stream,\nchar * copybuf, int len, unsigned __int64 * blockpos) Line 1150 C\n pg_receivewal.exe!HandleCopyStream(pg_conn * conn, StreamCtl * stream,\nunsigned __int64 * stoppos) Line 850 C\n pg_receivewal.exe!ReceiveXlogStream(pg_conn * conn, StreamCtl * stream)\nLine 605 C\n pg_receivewal.exe!StreamLog() Line 636 C\n pg_receivewal.exe!main(int argc, char * * argv) Line 1005 C\n\nThe problem comes from the file descriptor passed to gzdopen() in\n'src/bin/pg_basebackup/walmethods.c'. Using gzopen() instead, solves the\nissue without ifdefing for WIN32. Please find attached a patch for so.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Fri, 11 Feb 2022 16:33:11 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "pg_receivewal.exe unhandled exception in zlib1.dll" }, { "msg_contents": "Hi,\n\nOn 2022-02-11 16:33:11 +0100, Juan José Santamaría Flecha wrote:\n> If you execute pg_receivewal.exe with the option \"--compression-method\n> gzip\" it will fail with no error. 
You can see an error in the db log:\n> \n> 2022-02-10 11:46:32.725 CET [11664][walsender] [pg_receivewal][3/0:0] LOG:\n> could not receive data from client: An existing connection was forcibly\n> closed by the remote host.\n> \n> Tracing the execution you can see:\n> \n> Unhandled exception at 0x00007FFEA78C1208 (ucrtbase.dll) in An invalid\n> parameter was passed to a function that considers invalid parameters fatal.\n> ucrtbase.dll!00007ff8e8ae36a2() Unknown\n> > zlib1.dll!gz_comp(gz_state * state, int flush) Line 111 C\n> zlib1.dll!gz_write(gz_state * state, const void * buf, unsigned __int64\n> len) Line 235 C\n> pg_receivewal.exe!dir_write(void * f, const void * buf, unsigned __int64\n> count) Line 300 C\n> pg_receivewal.exe!ProcessXLogDataMsg(pg_conn * conn, StreamCtl * stream,\n> char * copybuf, int len, unsigned __int64 * blockpos) Line 1150 C\n> pg_receivewal.exe!HandleCopyStream(pg_conn * conn, StreamCtl * stream,\n> unsigned __int64 * stoppos) Line 850 C\n> pg_receivewal.exe!ReceiveXlogStream(pg_conn * conn, StreamCtl * stream)\n> Line 605 C\n> pg_receivewal.exe!StreamLog() Line 636 C\n> pg_receivewal.exe!main(int argc, char * * argv) Line 1005 C\n> \n> The problem comes from the file descriptor passed to gzdopen() in\n> 'src/bin/pg_basebackup/walmethods.c'. Using gzopen() instead, solves the\n> issue without ifdefing for WIN32. Please find attached a patch for so.\n\nI hit this as well. The problem is really caused by using a debug build of\npostgres vs a production build of zlib. The use different C runtimes, with\ndifferent file descriptors. 
A lot of resources in the windows world are\nunfortunately tied to the C runtime and that there can multiple C runtimes in\na single process.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Fri, 11 Feb 2022 07:50:44 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal.exe unhandled exception in zlib1.dll" }, { "msg_contents": "On Fri, Feb 11, 2022 at 07:50:44AM -0800, Andres Freund wrote:\n> On 2022-02-11 16:33:11 +0100, Juan José Santamaría Flecha wrote:\n>> The problem comes from the file descriptor passed to gzdopen() in\n>> 'src/bin/pg_basebackup/walmethods.c'. Using gzopen() instead, solves the\n>> issue without ifdefing for WIN32. Please find attached a patch for so.\n\nThis looks wrong to me as gzclose() would close() the file descriptor\nwe pass in via gzdopen(), and some code paths of walmethods.c rely on\nthis assumption so this would leak fds on repeated errors.\n\n> I hit this as well. The problem is really caused by using a debug build of\n> postgres vs a production build of zlib. The use different C runtimes, with\n> different file descriptors. A lot of resources in the windows world are\n> unfortunately tied to the C runtime and that there can multiple C runtimes in\n> a single process.\n\nIt may be worth warning about that at build time, or just document the\nmatter perhaps? I don't recall seeing any MSIs related to zlib that\nhave compile with the debug options.\n--\nMichael", "msg_date": "Mon, 14 Feb 2022 15:31:42 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal.exe unhandled exception in zlib1.dll" }, { "msg_contents": "On Mon, Feb 14, 2022 at 7:31 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Fri, Feb 11, 2022 at 07:50:44AM -0800, Andres Freund wrote:\n>\n> > I hit this as well. The problem is really caused by using a debug build\n> of\n> > postgres vs a production build of zlib. 
The use different C runtimes,\n> with\n> > different file descriptors. A lot of resources in the windows world are\n> > unfortunately tied to the C runtime and that there can multiple C\n> runtimes in\n> > a single process.\n>\n> It may be worth warning about that at build time, or just document the\n> matter perhaps? I don't recall seeing any MSIs related to zlib that\n> have compile with the debug options.\n>\n\nBuilding with a mix of debug and release components is not a good practice\nfor issues like this, but is not something that MSVC forbids. If this is\nsomething we want to discard altogether, we can check at build time with\n'dumpbin' to ensure that all dlls share the same vcruntime.dll.\n\nRegards,\n\nJuan José Santamaría Flecha", "msg_date": "Tue, 15 Feb 2022 11:21:42 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal.exe unhandled exception in zlib1.dll" },
{ "msg_contents": "Hi,\n\nOn 2022-02-15 11:21:42 +0100, Juan José Santamaría Flecha wrote:\n> Building with a mix of debug and release components is not a good practice\n> for issues like this, but is not something that MSVC forbids.\n\nIt's not a problem if you never need share memory, file descriptors, locales,\n... state across DLL boundaries...\n\n\n> If this is something we want to discard altogether, we can check at build\n> time with 'dumpbin' to ensure that all dlls share the same vcruntime.dll.\n\nI think these days it's mostly a question of ucrtbase.dll vs ucrtbased.dll,\nright?\n\nFWIW, I've looked at using either vcpkg or conan to more easily install /\nbuild dependencies on windows, in the right debug / release mode.\n\nThe former was easier, but unfortunately the zlib, lz4, ... packages don't\ninclude the gzip, lz4 etc binaries right now. That might be easy enough to fix\nwith an option. It does require building the dependencies locally.\n\nConan seemed to be a nicer architecture, but there's no builtin way to update\nto newer dependency versions, and non-versioned dependencies don't actually\nwork. 
It has pre-built dependencies for the common cases.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Feb 2022 08:25:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: pg_receivewal.exe unhandled exception in zlib1.dll" }, { "msg_contents": "On Tue, Feb 15, 2022 at 5:25 PM Andres Freund <andres@anarazel.de> wrote:\n\n>\n> > If this is something we want to discard altogether, we can check at build\n> > time with 'dumpbin' to ensure that all dlls share the same vcruntime.dll.\n>\n> I think these days it's mostly a question of ucrtbase.dll vs ucrtbased.dll,\n> right?\n>\n> Either one would do the trick, just to tell apart 'debug' from 'release'.\n\n\n> FWIW, I've looked at using either vcpkg or conan to more easily install\n> /\n> build dependencies on windows, in the right debug / release mode.\n>\n> The former was easier, but unfortunately the zlib, lz4, ... packages don't\n> include the gzip, lz4 etc binaries right now. That might be easy enough to\n> fix\n> with an option. It does require building the dependencies locally.\n>\n> Conan seemed to be a nicer architecture, but there's no builtin way to\n> update\n> to newer dependency versions, and non-versioned dependencies don't actually\n> work. 
It has pre-built dependencies for the common cases.\n>\n> I find that the repositories that are being currently proposed as a source\nfor the Windows depencies [1], will lead to outdated or stale packages in\nmany cases.\n\nI have been testing vcpkg, and my only grievances have been that some\npackages were terribly slow to install and a couple were missing: Kerberos\n(but its own project seems to be active [2]) and ossp-uuid (which is to be\nexpected).\n\nI will have to do some testing with Conan and see how both compare.\n\n[1] https://www.postgresql.org/docs/devel/install-windows-full.html\n[2] https://web.mit.edu/Kerberos/\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, Feb 15, 2022 at 5:25 PM Andres Freund <andres@anarazel.de> wrote:\n> If this is something we want to discard altogether, we can check at build\n> time with 'dumpbin' to ensure that all dlls share the same vcruntime.dll.\n\nI think these days it's mostly a question of ucrtbase.dll vs ucrtbased.dll,\nright?\nEither one would do the trick, just to tell apart 'debug' from 'release'. \nFWIW, I've looked at using either \n\nvcpkg   or conan to more easily install /\nbuild dependencies on windows, in the right debug / release mode.\n\nThe former was easier, but unfortunately the zlib, lz4, ... packages don't\ninclude the gzip, lz4 etc binaries right now. That might be easy enough to fix\nwith an option. It does require building the dependencies locally.\n\nConan seemed to be a nicer architecture, but there's no builtin way to update\nto newer dependency versions, and non-versioned dependencies don't actually\nwork. 
It has pre-built dependencies for the common cases.\nI find that the repositories that are being currently proposed as a source for the Windows depencies [1], will lead to outdated or stale packages in many cases.I have been testing vcpkg, and my only grievances have been that some packages were terribly slow to install and a couple were missing: Kerberos (but its own project seems to be active [2]) and ossp-uuid (which is to be expected).I will have to do some testing with Conan and see how both compare.[1] https://www.postgresql.org/docs/devel/install-windows-full.html[2] https://web.mit.edu/Kerberos/Regards,Juan José Santamaría Flecha", "msg_date": "Wed, 16 Feb 2022 12:20:54 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal.exe unhandled exception in zlib1.dll" }, { "msg_contents": "Please excuse my late reply.\n\nOn Wed, Feb 16, 2022 at 12:20 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n> On Tue, Feb 15, 2022 at 5:25 PM Andres Freund <andres@anarazel.de> wrote:\n>\n>>\n>> FWIW, I've looked at using either vcpkg or conan to more easily\n>> install /\n>>\n> build dependencies on windows, in the right debug / release mode.\n>>\n>> The former was easier, but unfortunately the zlib, lz4, ... packages don't\n>> include the gzip, lz4 etc binaries right now. That might be easy enough\n>> to fix\n>> with an option. It does require building the dependencies locally.\n>>\n>> Conan seemed to be a nicer architecture, but there's no builtin way to\n>> update\n>> to newer dependency versions, and non-versioned dependencies don't\n>> actually\n>> work. 
It has pre-built dependencies for the common cases.\n>>\n>> I will have to do some testing with Conan and see how both compare.\n>\n\nI have found a couple of additional issues when trying to build using\nConan, all libraries are static and there aren't any prebuilt packages for\nMSVC 2022 yet. Also for some specific packages:\n\n - zstd doesn't include the binaries either.\n - gettext packages only provides binaries, no libraries, nor headers.\n - openssl builds the binaries, but is missing some objects in the\nlibraries (error LNK2019: unresolved external symbol).\n - libxml builds the binaries, but is missing some objects in the\nlibraries (error LNK2019: unresolved external symbol).\n\n,Regards,\n\nJuan José Santamaría Flecha\n\nPlease excuse my late reply.On Wed, Feb 16, 2022 at 12:20 PM Juan José Santamaría Flecha <juanjo.santamaria@gmail.com> wrote:On Tue, Feb 15, 2022 at 5:25 PM Andres Freund <andres@anarazel.de> wrote:FWIW, I've looked at using either \n\nvcpkg   or conan to more easily install /\nbuild dependencies on windows, in the right debug / release mode.\n\nThe former was easier, but unfortunately the zlib, lz4, ... packages don't\ninclude the gzip, lz4 etc binaries right now. That might be easy enough to fix\nwith an option. It does require building the dependencies locally.\n\nConan seemed to be a nicer architecture, but there's no builtin way to update\nto newer dependency versions, and non-versioned dependencies don't actually\nwork. It has pre-built dependencies for the common cases.\nI will have to do some testing with Conan and see how both compare.I have found a couple of additional issues when trying to build using Conan, all libraries are static and there aren't any prebuilt packages for MSVC 2022 yet. Also for some specific packages:    - zstd doesn't include the binaries either.    - gettext packages only provides binaries, no libraries, nor headers.    
- openssl builds the binaries, but is missing some objects in the libraries (error LNK2019: unresolved external symbol).    - libxml builds the binaries, but is missing some objects in the libraries (error LNK2019: unresolved external symbol).,Regards,Juan José Santamaría Flecha", "msg_date": "Tue, 22 Feb 2022 10:20:23 +0100", "msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>", "msg_from_op": true, "msg_subject": "Re: pg_receivewal.exe unhandled exception in zlib1.dll" } ]
[ { "msg_contents": "Hi,\n\nhere's a small patch modifying postgres_fdw to use TABLESAMPLE to \ncollect sample when running ANALYZE on a foreign table. Currently the \nsampling happens locally, i.e. we fetch all data from the remote server \nand then pick rows. But that's hugely expensive for large relations \nand/or with limited network bandwidth, of course.\n\nAlexander Lepikhov mentioned this in [1], but it was actually proposed \nby Stephen in 2020 [2] but no patch even materialized.\n\nSo here we go. The patch does a very simple thing - it uses TABLESAMPLE \nto collect/transfer just a small sample from the remote node, saving \nboth CPU and network.\n\nAnd indeed, that helps quite a bit:\n\n---------------------------------------------------------------------\ncreate table t (a int);\nTime: 10.223 ms\n\ninsert into t select i from generate_series(1,10000000) s(i);\nTime: 552.207 ms\n\nanalyze t;\nTime: 310.445 ms\n\nCREATE FOREIGN TABLE ft (a int)\n SERVER foreign_server\n OPTIONS (schema_name 'public', table_name 't');\nTime: 4.838 ms\n\nanalyze ft;\nWARNING: SQL: DECLARE c1 CURSOR FOR SELECT a FROM public.t TABLESAMPLE \nSYSTEM(0.375001)\nTime: 44.632 ms\n\nalter foreign table ft options (sample 'false');\nTime: 4.821 ms\n\nanalyze ft;\nWARNING: SQL: DECLARE c1 CURSOR FOR SELECT a FROM public.t\nTime: 6690.425 ms (00:06.690)\n---------------------------------------------------------------------\n\n6690ms without sampling, and 44ms with sampling - quite an improvement. \nOf course, the difference may be much smaller/larger in other cases.\n\nNow, there's a couple issues ...\n\nFirstly, the FDW API forces a bit strange division of work between \ndifferent methods and duplicating some of it (and it's already mentioned \nin postgresAnalyzeForeignTable). 
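The sampling rate in that logged query — SYSTEM(0.375001) — is the number the patch has to derive from the desired sample size and the remote reltuples estimate. A minimal sketch of such a derivation (the function name and its clamping rules are illustrative assumptions, not the patch's actual code):

```c
#include <assert.h>

/*
 * Derive the percentage to put into "TABLESAMPLE SYSTEM(pct)" from the
 * number of rows ANALYZE wants (targrows) and the remote reltuples
 * estimate.  Returns -1 to mean "don't sample": either reltuples is
 * bogus (-1 or 0, i.e. never analyzed) or the table is already smaller
 * than the requested sample.  Illustrative sketch only.
 */
static double
analyze_sample_percentage(double reltuples, double targrows)
{
    double frac;

    if (reltuples <= 0)         /* no usable estimate */
        return -1.0;

    frac = targrows / reltuples * 100.0;

    if (frac >= 100.0)          /* sampling would not skip anything */
        return -1.0;

    return frac;
}
```

With reltuples = 10M and the usual 30000-row target this yields 0.3, the same order of magnitude as the rate visible in the log above.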
But that's minor and can be fixed.\n\nThe other issue is which sampling method to use - we have SYSTEM and \nBERNOULLI built in, and the tsm_system_rows as an extension (and _time, \nbut that's not very useful here). I guess we'd use one of the built-in \nones, because that'll work on more systems out of the box.\n\nBut that leads to the main issue - determining the fraction of rows to \nsample. We know how many rows we want to sample, but we have no idea how \nmany rows there are in total. We can look at reltuples, but we can't be \nsure how accurate / up-to-date that value is.\n\nThe patch just trusts it unless it's obviously bogus (-1, 0, etc.) and \napplies some simple sanity checks, but I wonder if we need to do more \n(e.g. look at relation size and adjust reltuples by current/relpages).\n\nFWIW this is yet a bit more convoluted when analyzing partitioned table \nwith foreign partitions, because we only ever look at relation sizes to \ndetermine how many rows to sample from each partition.\n\n\nregards\n\n[1] \nhttps://www.postgresql.org/message-id/bdb0bea2-a0da-1f1d-5c92-96ff90c198eb%40postgrespro.ru\n\n[2] \nhttps://www.postgresql.org/message-id/20200829162231.GE29590%40tamriel.snowman.net\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 11 Feb 2022 18:02:37 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> So here we go. 
The patch does a very simple thing - it uses TABLESAMPLE \n> to collect/transfer just a small sample from the remote node, saving \n> both CPU and network.\n\nThis is great if the remote end has TABLESAMPLE, but pre-9.5 servers\ndon't, and postgres_fdw is supposed to still work with old servers.\nSo you need some conditionality for that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:39:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On Fri, Feb 11, 2022 at 12:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> > So here we go. The patch does a very simple thing - it uses TABLESAMPLE\n> > to collect/transfer just a small sample from the remote node, saving\n> > both CPU and network.\n>\n> This is great if the remote end has TABLESAMPLE, but pre-9.5 servers\n> don't, and postgres_fdw is supposed to still work with old servers.\n> So you need some conditionality for that.\n\nI think it's going to be necessary to compromise on that at some\npoint. I don't, for example, think it would be reasonable for\npostgres_fdw to have detailed knowledge of which operators can be\npushed down as a function of the remote PostgreSQL version. Nor do I\nthink that we care about whether this works at all against, say,\nPostgreSQL 8.0. I'm not sure where it's reasonable to draw a line and\nsay we're not going to expend any more effort, and maybe 15 with 9.5\nis a small enough gap that we still care at least somewhat about\ncompatibility. 
But even that is not 100% obvious to me.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 12:43:31 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I think it's going to be necessary to compromise on that at some\n> point.\n\nSure. The existing postgres_fdw documentation [1] already addresses\nthis issue:\n\n postgres_fdw can be used with remote servers dating back to PostgreSQL\n 8.3. Read-only capability is available back to 8.1. A limitation\n however is that postgres_fdw generally assumes that immutable built-in\n functions and operators are safe to send to the remote server for\n execution, if they appear in a WHERE clause for a foreign table. Thus,\n a built-in function that was added since the remote server's release\n might be sent to it for execution, resulting in “function does not\n exist” or a similar error. This type of failure can be worked around\n by rewriting the query, for example by embedding the foreign table\n reference in a sub-SELECT with OFFSET 0 as an optimization fence, and\n placing the problematic function or operator outside the sub-SELECT.\n\nWhile I'm not opposed to moving those goalposts at some point,\nI think pushing them all the way up to 9.5 for this one easily-fixed\nproblem is not very reasonable.\n\nGiven other recent discussion, an argument could be made for moving\nthe cutoff to 9.2, on the grounds that it's too hard to test against\nanything older. 
But that still leaves us needing a version check\nif we want to use TABLESAMPLE.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/postgres-fdw.html#id-1.11.7.45.17\n\n\n", "msg_date": "Fri, 11 Feb 2022 13:06:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On Fri, Feb 11, 2022 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> While I'm not opposed to moving those goalposts at some point,\n> I think pushing them all the way up to 9.5 for this one easily-fixed\n> problem is not very reasonable.\n>\n> Given other recent discussion, an argument could be made for moving\n> the cutoff to 9.2, on the grounds that it's too hard to test against\n> anything older. But that still leaves us needing a version check\n> if we want to use TABLESAMPLE.\n\nOK, thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Fri, 11 Feb 2022 13:13:46 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 2/11/22 19:13, Robert Haas wrote:\n> On Fri, Feb 11, 2022 at 1:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> While I'm not opposed to moving those goalposts at some point,\n>> I think pushing them all the way up to 9.5 for this one easily-fixed\n>> problem is not very reasonable.\n>>\n>> Given other recent discussion, an argument could be made for moving\n>> the cutoff to 9.2, on the grounds that it's too hard to test against\n>> anything older. But that still leaves us needing a version check\n>> if we want to use TABLESAMPLE.\n> \n> OK, thanks.\n> \n\nYeah, I think checking server_version is fairly simple. 
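The check itself can be sketched as follows — the version number would come from libpq's PQserverVersion(), which reports e.g. 90500 for 9.5.0 and 150000 for 15.0; the helper below is hypothetical and only builds the SQL text:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Hypothetical helper, not postgres_fdw code: choose the remote
 * sampling construct based on the server version (as reported by
 * PQserverVersion()).  TABLESAMPLE exists only on 9.5 and later; on
 * older servers a random() filter still cuts the transfer volume,
 * although the remote side has to read the whole table.
 */
static const char *
deparse_analyze_sql(const char *relname, int server_version_num,
                    double sample_pct)
{
    static char sql[256];

    if (server_version_num >= 90500)
        snprintf(sql, sizeof(sql),
                 "DECLARE c1 CURSOR FOR SELECT * FROM %s "
                 "TABLESAMPLE BERNOULLI(%g)", relname, sample_pct);
    else
        snprintf(sql, sizeof(sql),
                 "DECLARE c1 CURSOR FOR SELECT * FROM %s "
                 "WHERE random() < %g", relname, sample_pct / 100.0);

    return sql;
}
```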
Which is mostly\nwhy I haven't done anything about that in the submitted patch, because\nthe other issues (what fraction to sample) seemed more important.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 11 Feb 2022 20:52:46 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Hello,\n \nI find it a great idea to use TABLESAMPLE in postgres_fdw ANALYZE.\nLet me offer you some ideas how to resolve you problems.\n \nTomas Vondra < tomas.vondra@enterprisedb.com > writes:\n> The other issue is which sampling method to use - we have SYSTEM and \n> BERNOULLI built in, and the tsm_system_rows as an extension (and _time,\n> but that's not very useful here). I guess we'd use one of the built-in\n> ones, because that'll work on more systems out of the box.\n \nIt’s hard to choose between speed and quality, but I think we need\nSYSTEM method here. This patch is for speeding-up ANALYZE,\nand SYSTEM method will faster than BERNOULLI on fraction\nvalues to 50%.\n \n> But that leads to the main issue - determining the fraction of rows to\n> sample. We know how many rows we want to sample, but we have no idea how\n> many rows there are in total. We can look at reltuples, but we can't be\n> sure how accurate / up-to-date that value is.\n>\n> The patch just trusts it unless it's obviously bogus (-1, 0, etc.) and\n> applies some simple sanity checks, but I wonder if we need to do more\n> (e.g. look at relation size and adjust reltuples by current/relpages).\n \nI found a query on Stackoverflow (it does similar thing to what\nestimate_rel_size does) that may help with it. 
So that makes\ntsm_system_rows unnecessary.\n \n----------------------------------------------------------------------\nSELECT\n    (CASE WHEN relpages = 0 THEN float8 '0'\n                ELSE reltuples / relpages END\n    * (pg_relation_size(oid) / pg_catalog.current_setting('block_size')::int)\n    )::bigint\nFROM pg_class c\nWHERE c.oid = 'tablename'::regclass;\n----------------------------------------------------------------------\n \nAnd one more refactoring note. New function deparseAnalyzeSampleSql\nduplicates function deparseAnalyzeSql (is previous in file deparse.c)\nexcept for the new last line. I guess it would be better to add new\nparameter — double sample_frac — to existing function\ndeparseAnalyzeSql and use it as a condition for adding\n\"TABLESAMPLE SYSTEM...\" to SQL query (set it to zero when\ndo_sample is false). Or you may also add do_sample as a parameter to\ndeparseAnalyzeSql, but as for me that’s redundantly.\n \nStackoverflow: https://stackoverflow.com/questions/7943233/fast-way-to-discover-the-row-count-of-a-table-in-postgresql\n \n--\nSofia Kopikova\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n \n \n \nHello, I find it a great idea to use TABLESAMPLE in postgres_fdw ANALYZE.Let me offer you some ideas how to resolve you problems. Tomas Vondra <tomas.vondra@enterprisedb.com> writes:> The other issue is which sampling method to use - we have SYSTEM and > BERNOULLI built in, and the tsm_system_rows as an extension (and _time,> but that's not very useful here). I guess we'd use one of the built-in> ones, because that'll work on more systems out of the box. It’s hard to choose between speed and quality, but I think we needSYSTEM method here. This patch is for speeding-up ANALYZE,and SYSTEM method will faster than BERNOULLI on fractionvalues to 50%. > But that leads to the main issue - determining the fraction of rows to> sample. 
We know how many rows we want to sample, but we have no idea how> many rows there are in total. We can look at reltuples, but we can't be> sure how accurate / up-to-date that value is.>> The patch just trusts it unless it's obviously bogus (-1, 0, etc.) and> applies some simple sanity checks, but I wonder if we need to do more> (e.g. look at relation size and adjust reltuples by current/relpages). I found a query on Stackoverflow (it does similar thing to whatestimate_rel_size does) that may help with it. So that makestsm_system_rows unnecessary. ----------------------------------------------------------------------SELECT    (CASE WHEN relpages = 0 THEN float8 '0'                ELSE reltuples / relpages END    * (pg_relation_size(oid) / pg_catalog.current_setting('block_size')::int)    )::bigintFROM pg_class cWHERE c.oid = 'tablename'::regclass;---------------------------------------------------------------------- And one more refactoring note. New function deparseAnalyzeSampleSqlduplicates function deparseAnalyzeSql (is previous in file deparse.c)except for the new last line. I guess it would be better to add newparameter — double sample_frac — to existing functiondeparseAnalyzeSql and use it as a condition for adding\"TABLESAMPLE SYSTEM...\" to SQL query (set it to zero whendo_sample is false). Or you may also add do_sample as a parameter todeparseAnalyzeSql, but as for me that’s redundantly. Stackoverflow: https://stackoverflow.com/questions/7943233/fast-way-to-discover-the-row-count-of-a-table-in-postgresql --Sofia KopikovaPostgres Professional: http://www.postgrespro.comThe Russian Postgres Company", "msg_date": "Wed, 16 Feb 2022 15:56:58 +0300", "msg_from": "=?UTF-8?B?U29maWEgS29waWtvdmE=?= <s.kopikova@postgrespro.ru>", "msg_from_op": false, "msg_subject": "\n =?UTF-8?B?UmU6IHBvc3RncmVzX2ZkdzogdXNpbmcgVEFCTEVTQU1QTEUgdG8gY29sbGVj?=\n =?UTF-8?B?dCByZW1vdGUgc2FtcGxl?=" }, { "msg_contents": "Hi,\n\nhere's a slightly updated version of the patch series. 
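The pg_class query quoted above is essentially the planner's density extrapolation; as a standalone sketch (hypothetical function, simplified — the real estimate_rel_size() also invents a density for never-analyzed tables):

```c
#include <assert.h>

/*
 * The same calculation the planner (and the pg_class query above)
 * performs: take the tuple density from the last ANALYZE,
 * reltuples / relpages, and scale it by the table's current size in
 * pages (pg_relation_size() / block_size), so a stale reltuples is
 * corrected for growth since the last ANALYZE.  Simplified sketch.
 */
static double
planner_style_reltuples(double reltuples, double relpages,
                        double current_pages)
{
    if (relpages <= 0)
        return 0.0;             /* no density to extrapolate from */

    return reltuples / relpages * current_pages;
}
```

For a table that has grown from 100 to 150 pages since ANALYZE recorded 10000 tuples, this yields 15000 rather than the stale 10000.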
The 0001 part\nadds tracking of server_version_num, so that it's possible to enable\nother features depending on it. In this case it's used to decide whether\nTABLESAMPLE is supported.\n\nThe 0002 part modifies the sampling. I realized we can do something\nsimilar even on pre-9.5 releases, by running \"WHERE random() < $1\". Not\nperfect, because it still has to read the whole table, but still better\nthan also sending it over the network.\n\nThere's a \"sample\" option for foreign server/table, which can be used to\ndisable the sampling if needed.\n\nA simple measurement on a table with 10M rows, on localhost.\n\n old: 6600ms\n random: 450ms\n tablesample: 40ms (system)\n tablesample: 200ms (bernoulli)\n\nLocal analyze takes ~190ms, so that's quite close.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 18 Feb 2022 14:28:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "\n\nOn 2022/02/18 22:28, Tomas Vondra wrote:\n> Hi,\n> \n> here's a slightly updated version of the patch series.\n\nThanks for updating the patches!\n\n\n> The 0001 part\n> adds tracking of server_version_num, so that it's possible to enable\n> other features depending on it.\n\nLike configure_remote_session() does, can't we use PQserverVersion() instead of implementing new function GetServerVersion()?\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n", "msg_date": "Tue, 22 Feb 2022 09:36:49 +0900", "msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 2/22/22 01:36, Fujii Masao wrote:\n> \n> \n> On 2022/02/18 22:28, Tomas Vondra wrote:\n>> Hi,\n>>\n>> here's a 
slightly updated version of the patch series.\n> \n> Thanks for updating the patches!\n> \n> \n>> The 0001 part\n>> adds tracking of server_version_num, so that it's possible to enable\n>> other features depending on it.\n> \n> Like configure_remote_session() does, can't we use PQserverVersion()\n> instead of implementing new function GetServerVersion()?\n> \n\nAh! My knowledge of libpq is limited, so I wasn't sure this function\nexists. It'll simplify the patch.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 22 Feb 2022 21:12:15 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 2/22/22 21:12, Tomas Vondra wrote:\n> On 2/22/22 01:36, Fujii Masao wrote:\n>>\n>>\n>> On 2022/02/18 22:28, Tomas Vondra wrote:\n>>> Hi,\n>>>\n>>> here's a slightly updated version of the patch series.\n>>\n>> Thanks for updating the patches!\n>>\n>>\n>>> The 0001 part\n>>> adds tracking of server_version_num, so that it's possible to enable\n>>> other features depending on it.\n>>\n>> Like configure_remote_session() does, can't we use PQserverVersion()\n>> instead of implementing new function GetServerVersion()?\n>>\n> \n> Ah! My knowledge of libpq is limited, so I wasn't sure this function\n> exists. 
It'll simplify the patch.\n> \n\nAnd here's the slightly simplified patch, without the part adding the\nunnecessary GetServerVersion() function.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Wed, 23 Feb 2022 00:51:24 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Hi,\n\nOn 2022-02-23 00:51:24 +0100, Tomas Vondra wrote:\n> And here's the slightly simplified patch, without the part adding the\n> unnecessary GetServerVersion() function.\n\nDoesn't apply anymore: http://cfbot.cputube.org/patch_37_3535.log\nMarked as waiting-on-author.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 18:18:20 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2022-02-23 00:51:24 +0100, Tomas Vondra wrote:\n>> And here's the slightly simplified patch, without the part adding the\n>> unnecessary GetServerVersion() function.\n\n> Doesn't apply anymore: http://cfbot.cputube.org/patch_37_3535.log\n> Marked as waiting-on-author.\n\nHere's a rebased version that should at least pass regression tests.\n\nI've not reviewed it in any detail, but:\n\n* I'm not really on board with defaulting to SYSTEM sample method,\nand definitely not on board with not allowing any other choice.\nWe don't know enough about the situation in a remote table to be\nconfident that potentially-nonrandom sampling is OK. So personally\nI'd default to BERNOULLI, which is more reliable and seems plenty fast\nenough given your upthread results. 
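The worry about potentially-nonrandom sampling is easy to make concrete with a toy model of page-level sampling (pure illustration, not how either TABLESAMPLE method is actually implemented):

```c
#include <assert.h>

/*
 * Toy model of why page-level (SYSTEM) sampling can mislead ANALYZE
 * when values correlate with physical order: here page p holds the
 * values p*100 .. p*100+99.  A SYSTEM-style sample that happens to
 * pick one page gets a mean far from the table's true mean of 4999.5,
 * while a row-level sample spread over the table (a stand-in for
 * BERNOULLI) lands close to it.  Not PostgreSQL code.
 */
#define NPAGES        100
#define ROWS_PER_PAGE 100

static double
mean_of_one_page(int page)          /* SYSTEM picking a single page */
{
    double sum = 0.0;
    for (int i = 0; i < ROWS_PER_PAGE; i++)
        sum += (double) (page * ROWS_PER_PAGE + i);
    return sum / ROWS_PER_PAGE;
}

static double
mean_of_every_nth(int step)         /* row-level sample across pages */
{
    double sum = 0.0;
    int n = 0;
    for (int v = 0; v < NPAGES * ROWS_PER_PAGE; v += step, n++)
        sum += (double) v;
    return sum / n;
}
```

A single-page sample reports a mean of 49.5 against a true mean of 4999.5, while the spread-out sample reports 4950 — the kind of skew SYSTEM risks on physically clustered data, and why BERNOULLI is the safer default despite being slower.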
It could be an idea to extend the\nsample option to be like \"sample [ = methodname ]\", if you want more\nflexibility, but I'd be happy leaving that for another time.\n\n* The code depending on reltuples is broken in recent server versions,\nand might give useless results in older ones too (if reltuples =\nrelpages = 0). Ideally we'd retrieve all of reltuples, relpages, and\npg_relation_size(rel), and do the same calculation the planner does.\nNot sure if pg_relation_size() exists far enough back though.\n\n* Copying-and-pasting all of deparseAnalyzeSql (twice!) seems pretty\nbletcherous. Why not call that and then add a WHERE clause to its\nresult, or just add some parameters to it so it can do that itself?\n\n* More attention to updating relevant comments would be appropriate,\neg here you've not bothered to fix the adjacent falsified comment:\n\n \t/* We've retrieved all living tuples from foreign server. */\n-\t*totalrows = astate.samplerows;\n+\tif (do_sample)\n+\t\t*totalrows = reltuples;\n+\telse\n+\t\t*totalrows = astate.samplerows;\n\n* Needs docs obviously. I'm not sure if the existing regression\ntesting covers the new code adequately or if we need more cases.\n\nHaving said that much, I'm going to leave it in Waiting on Author\nstate.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 16 Jul 2022 17:57:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 7/16/22 23:57, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n>> On 2022-02-23 00:51:24 +0100, Tomas Vondra wrote:\n>>> And here's the slightly simplified patch, without the part adding the\n>>> unnecessary GetServerVersion() function.\n> \n>> Doesn't apply anymore: http://cfbot.cputube.org/patch_37_3535.log\n>> Marked as waiting-on-author.\n> \n> Here's a rebased version that should at least pass regression tests.\n>\n\nThanks. 
I've been hacking on this over the past few days, and by\ncoincidence I've been improving exactly the stuff you've pointed out in\nthe review. 0001 is just the original patch rebased, 0002 includes all\nthe various changes.\n\n> I've not reviewed it in any detail, but:\n> \n> * I'm not really on board with defaulting to SYSTEM sample method,\n> and definitely not on board with not allowing any other choice.\n> We don't know enough about the situation in a remote table to be\n> confident that potentially-nonrandom sampling is OK. So personally\n> I'd default to BERNOULLI, which is more reliable and seems plenty fast\n> enough given your upthread results. It could be an idea to extend the\n> sample option to be like \"sample [ = methodname ]\", if you want more\n> flexibility, but I'd be happy leaving that for another time.\n> \n\nI agree, I came roughly to the same conclusion, so I replaced the simple\non/off option with these options:\n\noff - Disables the remote sampling, so we just fetch everything and do\nsampling on the local node, just like today.\n\nrandom - Remote sampling, but \"naive\" implementation using random()\nfunction. The advantage is this reduces the amount of data we need to\ntransfer, but it still reads the whole table. This should work for all\nserver versions, I believe.\n\nsystem - TABLESAMPLE system method.\n\nbernoulli - TABLESAMOLE bernoulli (default for 9.5+)\n\nauto - picks bernoulli on 9.5+, random on older servers.\n\nI'm not sure about custom TABLESAMPLE methods - that adds more\ncomplexity to detect if it's installed, it's trickier to decide what's\nthe best choice (for \"auto\" to make decide), and the parameter is also\ndifferent (e.g. system_rows uses number of rows vs. sampling rate).\n\n> * The code depending on reltuples is broken in recent server versions,\n> and might give useless results in older ones too (if reltuples =\n> relpages = 0). 
Ideally we'd retrieve all of reltuples, relpages, and\n> pg_relation_size(rel), and do the same calculation the planner does.\n> Not sure if pg_relation_size() exists far enough back though.\n> \n\nYes, I noticed that too, and the reworked code should deal with this\nreltuples=0 (by just disabling remote sampling).\n\nI haven't implemented the reltuples/relpages correction yet, but I don't\nthink compatibility would be an issue - deparseAnalyzeSizeSql() already\ncalls pg_relation_size(), after all.\n\nFWIW it seems a bit weird being so careful about adjusting reltuples,\nwhen acquire_inherited_sample_rows() only really looks at relpages when\ndeciding how many rows to sample from each partition. If our goal is to\nuse a more accurate reltuples, maybe we should do that in the first step\nalready. Otherwise we may end up build with a sample that does not\nreflect sizes of the partitions correctly.\n\nOf course, the sample rate also matters for non-partitioned tables.\n\n\n> * Copying-and-pasting all of deparseAnalyzeSql (twice!) seems pretty\n> bletcherous. Why not call that and then add a WHERE clause to its\n> result, or just add some parameters to it so it can do that itself?\n> \n\nRight. I ended up refactoring this into a single function, with a\n\"method\" parameter that determines if/how we do the remote sampling.\n\n> * More attention to updating relevant comments would be appropriate,\n> eg here you've not bothered to fix the adjacent falsified comment:\n> \n> \t/* We've retrieved all living tuples from foreign server. */\n> -\t*totalrows = astate.samplerows;\n> +\tif (do_sample)\n> +\t\t*totalrows = reltuples;\n> +\telse\n> +\t\t*totalrows = astate.samplerows;\n> \n\nYep, fixed.\n\n> * Needs docs obviously. 
I'm not sure if the existing regression\n> testing covers the new code adequately or if we need more cases.\n> \n\nYep, I added the \"sampling_method\" to postgres-fdw.sgml.\n\n> Having said that much, I'm going to leave it in Waiting on Author\n> state.\n\nThanks. I'll switch this to \"needs review\" now.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Mon, 18 Jul 2022 13:36:23 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> Thanks. I'll switch this to \"needs review\" now.\n\nOK, I looked through this, and attach some review suggestions in the\nform of a delta patch. (0001 below is your two patches merged, 0002\nis my delta.) A lot of the delta is comment-smithing, but not all.\n\nAfter reflection I think that what you've got, ie use reltuples but\ndon't try to sample if reltuples <= 0, is just fine. The remote\nwould only have reltuples <= 0 in a never-analyzed table, which\nshouldn't be a situation that persists for long unless the table\nis tiny. Also, if reltuples is in error, the way to bet is that\nit's too small thanks to the table having been enlarged. But\nan error in that direction doesn't hurt us: we'll overestimate\nthe required sample_frac and pull back more data than we need,\nbut we'll still end up with a valid sample of the right size.\nSo I doubt it's worth the complication to try to correct based\non relpages etc. (Note that any such correction would almost\ncertainly end in increasing our estimate of reltuples. But\nit's safer to have an underestimate than an overestimate.)\n\nI messed around with the sample_frac choosing logic slightly,\nto make it skip pointless calculations if we decide right off\nthe bat to disable sampling. 
That way we don't need to worry\nabout avoiding zero divide, nor do we have to wonder if any\nof the later calculations could misbehave.\n\nI left your logic about \"disable if saving fewer than 100 rows\"\nalone, but I have to wonder if using an absolute threshold rather\nthan a relative one is well-conceived. Sampling at a rate of\n99.9 percent seems pretty pointless, but this code is perfectly\ncapable of accepting that if reltuples is big enough. So\npersonally I'd do that more like\n\n\tif (sample_frac > 0.95)\n\t method = ANALYZE_SAMPLE_OFF;\n\nwhich is simpler and would also eliminate the need for the previous\nrange-clamp step. I'm not sure what the right cutoff is, but\nyour \"100 tuples\" constant is just as arbitrary.\n\nI rearranged the docs patch too. Where you had it, analyze_sampling\nwas between fdw_startup_cost/fdw_tuple_cost and the following para\ndiscussing them, which didn't seem to me to flow well at all. I ended\nup putting analyze_sampling in its own separate list. You could almost\nmake a case for giving it its own <sect3>, but I concluded that was\nprobably overkill.\n\nOne thing I'm not happy about, but did not touch here, is the expense\nof the test cases you added. On my machine, that adds a full 10% to\nthe already excessively long runtime of postgres_fdw.sql --- and I\ndo not think it's buying us anything. It is not this module's job\nto test whether bernoulli sampling works on partitioned tables.\nI think you should just add enough to make sure we exercise the\nrelevant code paths in postgres_fdw itself.\n\nWith these issues addressed, I think this'd be committable.\n\n\t\t\tregards, tom lane", "msg_date": "Mon, 18 Jul 2022 14:45:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "\n\nOn 7/18/22 20:45, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> Thanks. 
I'll switch this to \"needs review\" now.\n> \n> OK, I looked through this, and attach some review suggestions in the\n> form of a delta patch. (0001 below is your two patches merged, 0002\n> is my delta.) A lot of the delta is comment-smithing, but not all.\n> \n\nThanks!\n\n> After reflection I think that what you've got, ie use reltuples but\n> don't try to sample if reltuples <= 0, is just fine. The remote\n> would only have reltuples <= 0 in a never-analyzed table, which\n> shouldn't be a situation that persists for long unless the table\n> is tiny. Also, if reltuples is in error, the way to bet is that\n> it's too small thanks to the table having been enlarged. But\n> an error in that direction doesn't hurt us: we'll overestimate\n> the required sample_frac and pull back more data than we need,\n> but we'll still end up with a valid sample of the right size.\n> So I doubt it's worth the complication to try to correct based\n> on relpages etc. (Note that any such correction would almost\n> certainly end in increasing our estimate of reltuples. But\n> it's safer to have an underestimate than an overestimate.)\n> \n\nI mostly agree, particularly for the non-partitioned case.\n\nIf we want to improve sampling for partitioned cases (where the foreign\ntable is just one of many partitions), I think we'd have to rework how\nwe determine the sample size for each partition. Now we simply calculate\nthat from relpages, which seems quite fragile (different amounts of\nbloat, different tuple densities) and somewhat strange for FDW servers\nthat don't use the same \"page\" concept.\n\nSo it may easily happen we determine bogus sample sizes for each\npartition. 
The difficulties when calculating the sample_frac are just a\nsecondary issue.\n\nOTOH the concept of a \"row\" seems way more general, so perhaps\nacquire_inherited_sample_rows should use reltuples, and if we want to do\na correction it should happen at this stage already.\n\n\n> I messed around with the sample_frac choosing logic slightly,\n> to make it skip pointless calculations if we decide right off\n> the bat to disable sampling. That way we don't need to worry\n> about avoiding zero divide, nor do we have to wonder if any\n> of the later calculations could misbehave.\n> \n\nThanks.\n\n> I left your logic about \"disable if saving fewer than 100 rows\"\n> alone, but I have to wonder if using an absolute threshold rather\n> than a relative one is well-conceived. Sampling at a rate of\n> 99.9 percent seems pretty pointless, but this code is perfectly\n> capable of accepting that if reltuples is big enough. So\n> personally I'd do that more like\n> \n> \tif (sample_frac > 0.95)\n> \t method = ANALYZE_SAMPLE_OFF;\n> \n> which is simpler and would also eliminate the need for the previous\n> range-clamp step. I'm not sure what the right cutoff is, but\n> your \"100 tuples\" constant is just as arbitrary.\n> \n\nI agree there probably is not much difference between a threshold on\nsample_frac directly and one on the number of rows, at least in general. My\nreasoning for switching to \"100 rows\" is that in most cases the network\ntransfer is probably more costly than \"local costs\", and 5% may be quite\na few rows (particularly with higher statistics target). I guess the\nproper approach would be to do some simple costing, but that seems\nlike overkill.\n\n> I rearranged the docs patch too. Where you had it, analyze_sampling\n> was between fdw_startup_cost/fdw_tuple_cost and the following para\n> discussing them, which didn't seem to me to flow well at all. I ended\n> up putting analyze_sampling in its own separate list. 
You could almost\n> make a case for giving it its own <sect3>, but I concluded that was\n> probably overkill.\n> \n\nThanks.\n\n> One thing I'm not happy about, but did not touch here, is the expense\n> of the test cases you added. On my machine, that adds a full 10% to\n> the already excessively long runtime of postgres_fdw.sql --- and I\n> do not think it's buying us anything. It is not this module's job\n> to test whether bernoulli sampling works on partitioned tables.\n> I think you should just add enough to make sure we exercise the\n> relevant code paths in postgres_fdw itself.\n> \n\nRight, I should have commented on that. The purpose of those tests was\nverifying that if we change the sampling method on server/table, the\ngenerated query changes accordingly, etc. But that's a bit futile\nbecause we don't have a good way of verifying what query was used - it\nworked during development, as I added elog(WARNING).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Tue, 19 Jul 2022 21:07:08 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> I we want to improve sampling for partitioned cases (where the foreign\n> table is just one of many partitions), I think we'd have to rework how\n> we determine sample size for each partition. Now we simply calculate\n> that from relpages, which seems quite fragile (different amounts of\n> bloat, different tuple densities) and somewhat strange for FDW serves\n> that don't use the same \"page\" concept.\n\n> So it may easily happen we determine bogus sample sizes for each\n> partition. 
The difficulties when calculating the sample_frac is just a\n> secondary issue.\n\n> OTOH the concept of a \"row\" seems way more general, so perhaps\n> acquire_inherited_sample_rows should use reltuples, and if we want to do\n> correction it should happen at this stage already.\n\nYeah, there's definitely something to be said for changing that to be\nbased on rowcount estimates instead of physical size. I think it's\na matter for a different patch though, and not a reason to hold up\nthis one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 19 Jul 2022 15:27:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Hi!\n\nHere's an updated patch, including all the changes proposed by Tom in\nhis review. There were two more points left [1], namely:\n\n1) The useless test added to postgres_fdw.sql consuming a lot of time.\n\nI decided to just rip this out and replace it with a much simpler test,\nthat simply changes the sampling method and does ANALYZE. Ideally we'd\nalso verify we actually generated the right SQL query to be executed on\nthe remote server, but there doesn't seem to be a good way to do that\n(e.g. we can't do EXPLAIN to show the \"Remote SQL\" on the \"ANALYZE\").\n\nSo I guess just changing the method and running ANALYZE will have to be\nenough for now (coverage report should at least tell us we got to all\nthe SQL variants).\n\n\n2) the logic disabling sampling when the fraction gets too high\n\nI based the condition on the absolute number of \"unsampled\" rows, Tom\nsuggested maybe it should be just a simple condition on sample_frac\n(e.g. 95%). But neither of us was sure what would be a good value.\n\nSo I did a couple tests to get at least some idea what would be a good\nthreshold, and it turns out the threshold is mostly useless. 
I'll\ndiscuss the measurements I did in a bit (some of the findings are a bit\namusing or even hilarious, actually), but the main reasons are:\n\n- The sampling overhead is negligible (compared to transferring the\ndata, even on localhost), and the behavior is very smooth. So we almost\nnever exceed the cost of reading everything, unless sample_frac >= 0.99, and even\nthere it's a tiny difference. So there's almost no risk, I think.\n\n- For small tables it doesn't really matter. It's going to be fast\nanyway, and the difference between reading 10000 or 11000 rows is going\nto be just noise. Who cares if this takes 10 or 11 ms ...\n\n- For large tables, we'll never even get to these high sample_frac\nvalues. Imagine a table with 10M rows - the highest stats target we\nallow is 10k, and the sample size is 300 * target, so 3M rows. It\ndoesn't matter if the condition is 0.95 or 0.99, because for this table\nwe'll never ask for a sample above 30%.\n\n- For the tables in between it might be more relevant, but the simple\ntruth is that reading the row and sampling it remotely is way cheaper\nthan the network transfer, even if on localhost. The data suggest that\nreading+sampling a row costs ~0.2us at most, but sending it is ~1.5us\n(localhost) or ~5.5us (local network).\n\nSo I just removed the threshold from the patch, and we'll request\nsampling even with sample_frac=100% (if that happens).\n\n\nsampling test\n-------------\n\nI did a simple test to collect some data - create a table, and sample\nvarious fractions in the ways discussed in this patch - either locally\nor through an FDW.\n\nThis required a bit of care to ensure the sampling happens in the right\nplace (and e.g. we don't push the random() or tablesample down), which I\ndid by pointing the FDW table not to a remote table, but to a view with\nan optimization fence (OFFSET 0). See the run-local.sh script.\n\nThe script populates the table with different numbers of rows, samples\ndifferent fractions of it, etc. 
I also did this from a different\nmachine, to see what a bit more network latency would do (that's what\nrun-remote.sh is for).\n\n\nresults (fdw-sampling-test.pdf)\n-------------------------------\n\nThe attached PDF shows the results - the first page is for the foreign table\nin the same instance (i.e. localhost, latency ~0.01ms), the second page is\nfor the FDW pointing to a machine in the same network (latency ~0.1ms).\n\nThe left column is always the table \"directly\", the right column is through the\nFDW. On the x-axis is the fraction of the table we sample, the y-axis is\nduration in milliseconds. \"full\" means \"reading everything\" (i.e. what\nthe FDW does now); the other options should be clear, I think.\n\nThe first two dataset sizes (10k and 10M rows) are tiny (10M is ~2GB),\nand fit into RAM, which is 8GB. The 100M is ~21GB, so much larger.\n\nIn the \"direct\" (non-FDW) sampling, the various sampling methods start\nlosing to seqscan fairly soon - \"random\" is consistently slower,\n\"bernoulli\" starts losing at ~30%, \"system\" at ~80%. This is not very\nsurprising, particularly for bernoulli/random, which actually read all\nthe rows anyway. But the overhead is limited to ~30% on top of\nthe seqscan.\n\nBut in the \"FDW\" sampling (right column), it's an entirely different story,\nand all the methods clearly win over just transferring everything and\nonly then doing the sampling.\n\nWho cares if the remote sampling means we have to pay 0.2us\ninstead of 0.15us (per row), when the transfer costs 1.5us per row?\n\nThe 100M case shows an interesting behavior for the \"system\" method,\nwhich quickly spikes to ~2x of the \"full\" method when sampling ~20% of\nthe table, and then gradually improves again.\n\nMy explanation is that this is due to \"system\" making the I/O pattern\nincreasingly random, because it skips blocks in a way that makes\nreadahead impossible. 
And then as the fraction increases, it becomes\nmore sequential again.\n\nAll the other methods are pretty much equal to just scanning everything\nsequentially, and sampling rows one by one.\n\nThe \"system\" method in TABLESAMPLE would probably benefit from explicit\nprefetching, I guess. For ANALYZE this probably is not a huge issue, as we'll\nnever sample this large a fraction for large tables (for 100M rows we peak\nat ~3% with target 10000, which is way before the peak). And for smaller\ntables we're more likely to hit cache (which is why the smaller data\nsets don't have this issue). But for explicit TABLESAMPLE queries that\nmay not be the case.\n\nAlthough, ANALYZE uses something like \"system\" to sample rows too, no?\n\nHowever, even this is not an issue for the FDW case - in that case it\nstill clearly wins over the current \"local sample\" approach, because\ntransferring the data is so expensive, which makes the \"peak\" into a tiny\nhump.\n\nThe second page (different machine, local network) tells the same story,\nexcept that the differences are even clearer.\n\n\nANALYZE test\n------------\n\nSo I changed the patch, and did a similar test by running ANALYZE either\non the local or foreign table, using the different sampling methods.\nThis does not require the hacks to prevent pushdown etc., but it also\nmeans we can't determine sample_frac directly, only through the statistics\ntarget (which is capped to 10k).\n\nIn this case, \"local\" means ANALYZE on the local table (which you might\nthink of as the \"baseline\"), and \"off\" means reading all data without\nremote sampling.\n\nFor the two smaller data sets (10k and 10M rows), the benefits are\npretty obvious. We're very close to the \"local\" results, because we save\na lot on copying only some of the rows. 
For 10M we only get to ~30%\nbefore we hit target=10k, which is why we don't see it get closer to \"off\".\n\nBut now we get to the *hilarious* thing - if you look at the 10M result,\nyou may notice that *all* the sampling methods beat ANALYZE on the local\ntable.\n\nFor \"system\" (which wins from the very beginning) we might make some\nargument that the algorithm is simpler than what ANALYZE does, skips\nblocks differently, etc. - perhaps ...\n\nBut bernoulli/random are pretty much ideal sampling, reading/sampling\nall rows. And yet both methods start winning after crossing ~1% on this\ntable. In other words, it's about 3x faster to ANALYZE a table through an\nFDW than directly ;-)\n\nAnyway, those issues have an impact on this patch, I think. I believe the\nresults show what the patch does is reasonable.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Sat, 8 Oct 2022 01:43:56 +0200", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "This patch looks good to me. I have two very minor nits: The inflation of the sample size by 10% is arbitrary but it doesn't seem unreasonable or concerning. It just makes me curious if there are any known cases that motivated adding this logic. Secondly, if the earliest non-deprecated version of PostgreSQL supports sampling, then you could optionally remove the logic that tests for that. The affected lines should be unreachable.", "msg_date": "Thu, 15 Dec 2022 16:34:35 +0000", "msg_from": "James Finnerty <jfinnert@amazon.com>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "James Finnerty <jfinnert@amazon.com> writes:\n> This patch looks good to me. 
I have two very minor nits: The inflation\n> of the sample size by 10% is arbitrary but it doesn't seem unreasonable\n> or concerning. It just makes me curious if there are any known cases\n> that motivated adding this logic.\n\nI wondered why, too.\n\n> Secondly, if the earliest non-deprecated version of PostgreSQL supports\n> sampling, then you could optionally remove the logic that tests for\n> that. The affected lines should be unreachable.\n\nWe've tried to keep postgres_fdw compatible with quite ancient remote\nservers (I think the manual claims back to 8.3, though it's unlikely\nanyone's tested that far back lately). This patch should not move those\ngoalposts, especially if it's easy not to.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 15 Dec 2022 11:46:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 12/15/22 17:46, Tom Lane wrote:\n> James Finnerty <jfinnert@amazon.com> writes:\n>> This patch looks good to me. I have two very minor nits: The inflation\n>> of the sample size by 10% is arbitrary but it doesn't seem unreasonable\n>> or concerning. It just makes me curious if there are any known cases\n>> that motivated adding this logic.\n> \n> I wondered why, too.\n> \n\nNot sure about known cases, but the motivation is explained in the\ncomment before the 10% is applied.\n\nThe reltuples value is never going to be spot on, and we use that to\ndetermine what fraction of the table to sample (because all built-in\nTABLESAMPLE methods only accept a fraction to sample, not an expected number\nof rows).\n\nIf pg_class.reltuples is lower (than the actual row count), we'll end up\nwith a sample fraction that's too high. That's mostly harmless, as we'll then\ndiscard some of the rows locally.\n\nBut if pg_class.reltuples is higher, we'll end up sampling too few\nrows (less than targrows). This can happen e.g. 
after a bulk delete.\n\nYes, the 10% value is mostly arbitrary, and maybe it's pointless. How\nmuch may the stats change with a 10% larger sample? Probably not much, so\nis it really solving anything?\n\nAlso, maybe it's fixing the issue at the wrong level - if stale\nreltuples are an issue, maybe the right fix is making it more accurate\non the source instance. Decrease autovacuum_vacuum_scale_factor, or\nmaybe also look at pg_stat_all_tables or something.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Wed, 21 Dec 2022 13:08:08 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Pushed.\n\nAfter thinking about it a bit more I decided to rip out the 10% sampling\nrate inflation. Stale reltuples may be an issue, but I couldn't\nconvince myself this would be an improvement overall, or that this is\nthe right place to deal with it.\n\nFirstly, if reltuples is too low (i.e. when the table grew), everything\nis \"fine\" - we sample more than targrows and then remove some of that\nlocally. We end up with the same sample size as without this patch - which\nis not necessarily the right sample size (more about that in a minute),\nbut if it was good enough for now ... improving this bit was not the\nambition of this patch.\n\nIf reltuples is too high (e.g. right after a bulk DELETE) we may end up\nsampling too few rows. If it's 1/2 what it should be, we'll end up\nsampling only 15k rows instead of 30k rows. But it's unclear how much\nwe should adjust the value - 10% as was in the patch (kinda based on\nautovacuum_analyze_scale_factor being 10%)?\n\nI'd argue that's too low to make any difference - cases that are only\n~10% off should work fine (we don't have perfect stats anyway). 
And for\nthe really bad cases 10% is not going to make a difference.\n\nAnd if the reltuples is that off, it'll probably affect even queries\nplanned on the remote node (directly or pushed down), so I guess the\nright fix should be making sure reltuples is not actually off - make\nautovacuum more aggressive or whatever.\n\n\nA thing that bothers me is that calculating sample size for partitioned\ntables is a bit cyclic - we look at number of blocks, and divide the\nwhole sample based on that. And that's generally correlated with the\nreltuples, but OTOH it may also be wrong - and not necessarily in the\nsame way. For example you might delete 90% of a table, which will make\nthe table overrepresented in the sample. But then reltuples may be quite\naccurate thanks to recent vacuum - so fixing this by blindly adjusting\nthe sampling rate seems misguided. If this turns out to be an issue in\npractice, we need to rethink acquire_inherited_sample_rows as a whole\n(not just for foreign tables).\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 31 Dec 2022 00:03:08 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> After thinking about it a bit more I decided to rip out the 10% sampling\n> rate inflation.\n\n+1. I'm not sure if there's anything more we need to do there, but\nthat didn't seem like that was it.\n\nI notice that the committed patch still has a reference to that hack\nthough:\n\n+ * Ensure the sampling rate is between 0.0 and 1.0, even after the\n+ * 10% adjustment above. 
(Clamping to 0.0 is just paranoia.)\n\nClamping still seems like a wise idea, but the comment is just\nconfusing now.\n\nAlso, I wonder if there is any possibility of ANALYZE failing\nwith\n\nERROR: TABLESAMPLE clause can only be applied to tables and materialized views\n\nI think the patch avoids that, but only accidentally, because\nreltuples will be 0 or -1 for a view. Maybe it'd be a good\nidea to pull back relkind along with reltuples, and check\nthat too?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 30 Dec 2022 23:42:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "\n\nOn 12/31/22 05:42, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> After thinking about it a bit more I decided to rip out the 10% sampling\n>> rate inflation.\n> \n> +1. I'm not sure if there's anything more we need to do there, but\n> that didn't seem like that was it.\n> \n> I notice that the committed patch still has a reference to that hack\n> though:\n> \n> + * Ensure the sampling rate is between 0.0 and 1.0, even after the\n> + * 10% adjustment above. (Clamping to 0.0 is just paranoia.)\n> \n> Clamping still seems like a wise idea, but the comment is just\n> confusing now.\n> \n\nYeah, I missed that reference. Will fix.\n\n> Also, I wonder if there is any possibility of ANALYZE failing\n> with\n> \n> ERROR: TABLESAMPLE clause can only be applied to tables and materialized views\n> \n> I think the patch avoids that, but only accidentally, because\n> reltuples will be 0 or -1 for a view. Maybe it'd be a good\n> idea to pull back relkind along with reltuples, and check\n> that too?\n\nNot sure. I guess we can rely on reltuples being 0 or -1 in such cases,\nbut maybe it'd be good to at least mention that in a comment? 
We're not\ngoing to use other reltuples values for views etc.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 31 Dec 2022 15:16:38 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 12/31/22 05:42, Tom Lane wrote:\n>> ERROR: TABLESAMPLE clause can only be applied to tables and materialized views\n>> I think the patch avoids that, but only accidentally, because\n>> reltuples will be 0 or -1 for a view. Maybe it'd be a good\n>> idea to pull back relkind along with reltuples, and check\n>> that too?\n\n> Not sure. I guess we can rely on reltuples being 0 or -1 in such cases,\n> but maybe it'd be good to at least mention that in a comment? We're not\n> going to use other reltuples values for views etc.\n\nWould anyone ever point a foreign table at a sequence? I guess it\nwould be a weird use-case, but it's possible, or was till now.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 31 Dec 2022 12:23:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 12/31/22 18:23, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> On 12/31/22 05:42, Tom Lane wrote:\n>>> ERROR: TABLESAMPLE clause can only be applied to tables and materialized views\n>>> I think the patch avoids that, but only accidentally, because\n>>> reltuples will be 0 or -1 for a view. Maybe it'd be a good\n>>> idea to pull back relkind along with reltuples, and check\n>>> that too?\n> \n>> Not sure. I guess we can rely on reltuples being 0 or -1 in such cases,\n>> but maybe it'd be good to at least mention that in a comment? 
We're not\n>> going to use other reltuples values for views etc.\n> \n> Would anyone ever point a foreign table at a sequence? I guess it\n> would be a weird use-case, but it's possible, or was till now.\n> \n\nYeah, it's a weird use case. I can't quite imagine why anyone would do\nthat, but I guess the mere possibility is sufficient reason to add the\nrelkind check ...\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 1 Jan 2023 23:36:49 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Hi,\n\nhere's two minor postgres_fdw patches, addressing the two issues\ndiscussed last week.\n\n\n1) 0001-Fix-stale-comment-in-postgres_fdw.patch\n\nThe first one cleans up the comment referencing the sample rate\nadjustment. While doing that, I realized that without the adjustment we\nshould never get sample_frac outside the valid range, because we do:\n\n if ((reltuples <= 0) || (targrows >= reltuples))\n method = ANALYZE_SAMPLE_OFF;\n\nSo I removed the clamping, or rather replaced it with an assert.\n\n\n2) 0002-WIP-check-relkind-in-FDW-analyze.patch\n\nThe second patch adds the relkind check, so that we only issue\nTABLESAMPLE on valid relation types (tables, matviews). But I'm not sure\nwe actually need it - the example presented earlier was a foreign table\npointing to a sequence. But that actually works fine even without this\npatch, because for sequences we have reltuples=1, which disables\nsampling due to the check mentioned above (because targrows >= 1).\n\nThe other issue with this patch is that it seems wrong to check the\nrelkind value locally - instead we should check this on the remote\nserver, because that's where we'll run TABLESAMPLE. 
Currently there are\nno differences between versions, but what if we implement TABLESAMPLE\nfor another relkind in the future? Then we'll either fail to use\nsampling or (worse) we'll try using TABLESAMPLE for unsupported relkind,\ndepending on which of the servers has newer version.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Thu, 5 Jan 2023 21:43:42 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> The second patch adds the relkind check, so that we only issue\n> TABLESAMPLE on valid relation types (tables, matviews). But I'm not sure\n> we actually need it - the example presented earlier was foreign table\n> pointing to a sequence. But that actually works fine even without this\n> patch, because fore sequences we have reltuples=1, which disables\n> sampling due to the check mentioned above (because targrows >= 1).\n\nSeems pretty accidental still.\n\n> The other issue with this patch is that it seems wrong to check the\n> relkind value locally - instead we should check this on the remote\n> server, because that's where we'll run TABLESAMPLE. Currently there are\n> no differences between versions, but what if we implement TABLESAMPLE\n> for another relkind in the future? Then we'll either fail to use\n> sampling or (worse) we'll try using TABLESAMPLE for unsupported relkind,\n> depending on which of the servers has newer version.\n\nWe don't have a way to do that, and I don't think we need one. The\nworst case is that we don't try to use TABLESAMPLE when the remote\nserver is a newer version that would allow it. If we do extend the\nset of supported relkinds, we'd need a version-specific test in\npostgres_fdw so that we don't wrongly use TABLESAMPLE with an older\nremote server ... 
but that's hardly complicated, or a new concept.\n\nOn the other hand, if we let the code stand, the worst case is that\nANALYZE fails because we try to apply TABLESAMPLE in an unsupported\ncase, presumably with some remote relkind that doesn't exist today.\nI think that's clearly worse than not exploiting TABLESAMPLE\nalthough we could (with some other new remote relkind).\n\nAs far as the patch details go: I'd write 0001 as\n\n+ Assert(sample_frac >= 0.0 && sample_frac <= 1.0);\n\nThe way you have it seems a bit contorted. As for 0002, personally\nI'd rename the affected functions to reflect their expanded duties,\nand they *definitely* require adjustments to their header comments.\nFunctionally it looks fine though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Jan 2023 16:05:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "\n\nOn 1/5/23 22:05, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> The second patch adds the relkind check, so that we only issue\n>> TABLESAMPLE on valid relation types (tables, matviews). But I'm not sure\n>> we actually need it - the example presented earlier was foreign table\n>> pointing to a sequence. But that actually works fine even without this\n>> patch, because fore sequences we have reltuples=1, which disables\n>> sampling due to the check mentioned above (because targrows >= 1).\n> \n> Seems pretty accidental still.\n> \n>> The other issue with this patch is that it seems wrong to check the\n>> relkind value locally - instead we should check this on the remote\n>> server, because that's where we'll run TABLESAMPLE. Currently there are\n>> no differences between versions, but what if we implement TABLESAMPLE\n>> for another relkind in the future? 
Then we'll either fail to use\n>> sampling or (worse) we'll try using TABLESAMPLE for unsupported relkind,\n>> depending on which of the servers has newer version.\n> \n> We don't have a way to do that, and I don't think we need one. The\n> worst case is that we don't try to use TABLESAMPLE when the remote\n> server is a newer version that would allow it. If we do extend the\n> set of supported relkinds, we'd need a version-specific test in\n> postgres_fdw so that we don't wrongly use TABLESAMPLE with an older\n> remote server ... but that's hardly complicated, or a new concept.\n> \n> On the other hand, if we let the code stand, the worst case is that\n> ANALYZE fails because we try to apply TABLESAMPLE in an unsupported\n> case, presumably with some remote relkind that doesn't exist today.\n> I think that's clearly worse than not exploiting TABLESAMPLE\n> although we could (with some other new remote relkind).\n> \n\nOK\n\n> As far as the patch details go: I'd write 0001 as\n> \n> + Assert(sample_frac >= 0.0 && sample_frac <= 1.0);\n> \n> The way you have it seems a bit contorted.\n\nYeah, that's certainly cleaner.\n\n> As for 0002, personally\n> I'd rename the affected functions to reflect their expanded duties,\n> and they *definitely* require adjustments to their header comments.\n> Functionally it looks fine though.\n> \n\nNo argument about the header comments, ofc - I just haven't done that in\nthe WIP patch. 
As for the renaming, any suggestions for the new names?\nThe patch tweaks two functions in a way that affects what they return:\n\n1) deparseAnalyzeTuplesSql - adds relkind to the \"reltuples\" SQL\n\n2) postgresCountTuplesForForeignTable - adds can_sample flag\n\nThere are no callers outside postgresAcquireSampleRowsFunc, so what\nabout renaming them like this?\n\n deparseAnalyzeTuplesSql\n -> deparseAnalyzeInfoSql\n\n postgresCountTuplesForForeignTable\n -> postgresGetAnalyzeInfoForForeignTable\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Thu, 5 Jan 2023 22:47:23 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> There are no callers outside postgresAcquireSampleRowsFunc, so what\n> about renaming them like this?\n\n> deparseAnalyzeTuplesSql\n> -> deparseAnalyzeInfoSql\n\n> postgresCountTuplesForForeignTable\n> -> postgresGetAnalyzeInfoForForeignTable\n\nWFM.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 05 Jan 2023 16:56:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "I've pushed the comment/assert cleanup.\n\nHere's a cleaned up version of the relkind check. This is almost\nidentical to the patch from yesterday (plus the renames and updates of\ncomments for modified functions).\n\nThe one difference is that I realized the relkind check does not\nactually say we can't do sampling - it just means we can't use\nTABLESAMPLE to do it. 
We could still use \"random()\" ...\n\nFurthermore, I don't think we should silently disable sampling when the\nuser explicitly requests TABLESAMPLE by specifying bernoulli/system for\nthe table - IMHO it's less surprising to just fail in that case.\n\nSo we now do this:\n\n if (!can_tablesample && (method == ANALYZE_SAMPLE_AUTO))\n method = ANALYZE_SAMPLE_RANDOM;\n\nYes, we may still disable sampling when reltuples is -1, 0 or something\nlike that. But that's a condition that is expected for new relations and\nlikely to fix itself, which is not the case for relkind.\n\nOf course, all relkinds that don't support TABLESAMPLE currently have\nreltuples value that will disable sampling anyway (e.g. views have -1).\nSo we won't actually fall back to random() anyway, because we can't\ncalculate the sample fraction.\n\nThat's a bit annoying for foreign tables pointing at a view, which is a\nmore likely use case than table pointing at a sequence. And likely more\nof an issue, because views may return many rows (while sequences have\nonly a single row).\n\nBut I realized we could actually still do \"random()\" sampling:\n\n SELECT * FROM t ORDER BY random() LIMIT $X;\n\nwhere $X is the target number of sample rows for the table. 
Which\nwould result in plans like this (given sufficient work_mem values)\n\n QUERY PLAN\n -------------------------------------------------------------------\n Limit (actual rows=30000 loops=1)\n -> Sort (actual rows=30000 loops=1)\n Sort Key: (random())\n Sort Method: top-N heapsort Memory: 3916kB\n -> Append (actual rows=1000000 loops=1)\n\nEven with lower work_mem values this would likely be a win, due to\nsaving on network transfers.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company", "msg_date": "Fri, 6 Jan 2023 15:40:36 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> The one difference is that I realized the relkind check does not\n> actually say we can't do sampling - it just means we can't use\n> TABLESAMPLE to do it. We could still use \"random()\" ...\n\n> Furthermore, I don't think we should silently disable sampling when the\n> user explicitly requests TABLESAMPLE by specifying bernoulli/system for\n> the table - IMHO it's less surprising to just fail in that case.\n\nAgreed on both points. This patch looks good to me.\n\n> Of course, all relkinds that don't support TABLESAMPLE currently have\n> reltuples value that will disable sampling anyway (e.g. views have -1).\n> So we won't actually fallback to random() anyway, because we can't\n> calculate the sample fraction.\n> That's a bit annoying for foreign tables pointing at a view, which is a\n> more likely use case than table pointing at a sequence.\n\nRight, that's a case worth being concerned about.\n\n> But I realized we could actually still do \"random()\" sampling:\n> SELECT * FROM t ORDER BY random() LIMIT $X;\n\nHmm, interesting idea, but it would totally bollix our correlation\nestimates. 
Not sure that those are worth anything for remote views,\nbut still...\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 06 Jan 2023 11:58:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 1/6/23 17:58, Tom Lane wrote:\n> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>> The one difference is that I realized the relkind check does not\n>> actually say we can't do sampling - it just means we can't use\n>> TABLESAMPLE to do it. We could still use \"random()\" ...\n> \n>> Furthermore, I don't think we should silently disable sampling when the\n>> user explicitly requests TABLESAMPLE by specifying bernoulli/system for\n>> the table - IMHO it's less surprising to just fail in that case.\n> \n> Agreed on both points. This patch looks good to me.\n> \n\nGood, I'll get this committed. The \"ORDER BY random()\" idea is a possible\nimprovement, can be discussed on its own.\n\n>> Of course, all relkinds that don't support TABLESAMPLE currently have\n>> reltuples value that will disable sampling anyway (e.g. views have -1).\n>> So we won't actually fallback to random() anyway, because we can't\n>> calculate the sample fraction.\n>> That's a bit annoying for foreign tables pointing at a view, which is a\n>> more likely use case than table pointing at a sequence.\n> \n> Right, that's a case worth being concerned about.\n> \n>> But I realized we could actually still do \"random()\" sampling:\n>> SELECT * FROM t ORDER BY random() LIMIT $X;\n> \n> Hmm, interesting idea, but it would totally bollix our correlation\n> estimates. Not sure that those are worth anything for remote views,\n> but still...\n\nBut isn't that an issue that we already have? I mean, if we do ANALYZE\non a foreign table pointing to a view, we fetch all the results. 
But if\nthe view does not have a well-defined ORDER BY, a trivial plan change\nmay change the order of results and thus the correlation.\n\nActually, how is a correlation even defined for a view?\n\nIt's true this \"ORDER BY random()\" thing would make it less stable, as\nit would change the correlation on every run. Although - the calculated\ncorrelation is actually quite stable, because it's guaranteed to be\npretty close to 0 because we make the order random.\n\nMaybe that's actually better than the current state where it depends on\nthe plan? Why not look at the relkind and just set correlation to 0.0\nin such cases?\n\nBut if we want to restore that current behavior (where it depends on the\nactual query plan), we could do something like this:\n\n SELECT * FROM the_remote_view ORDER BY row_number() over ();\n\nBut yeah, this makes the remote sampling more expensive. Probably still\na win because of network costs, but not great.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Fri, 6 Jan 2023 23:41:48 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "On 1/6/23 23:41, Tomas Vondra wrote:\n> On 1/6/23 17:58, Tom Lane wrote:\n>> Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n>>> The one difference is that I realized the relkind check does not\n>>> actually say we can't do sampling - it just means we can't use\n>>> TABLESAMPLE to do it. We could still use \"random()\" ...\n>>\n>>> Furthermore, I don't think we should silently disable sampling when the\n>>> user explicitly requests TABLESAMPLE by specifying bernoulli/system for\n>>> the table - IMHO it's less surprising to just fail in that case.\n>>\n>> Agreed on both points. 
This patch looks good to me.\n>>\n> \n> Good, I'll get this committed.The \"ORDER BY random()\" idea is a possible\n> improvement, can be discussed on it's own.\n> \n\nPushed.\n\n>>> Of course, all relkinds that don't support TABLESAMPLE currently have\n>>> reltuples value that will disable sampling anyway (e.g. views have -1).\n>>> So we won't actually fallback to random() anyway, because we can't\n>>> calculate the sample fraction.\n>>> That's a bit annoying for foreign tables pointing at a view, which is a\n>>> more likely use case than table pointing at a sequence.\n>>\n>> Right, that's a case worth being concerned about.\n>>\n>>> But I realized we could actually still do \"random()\" sampling:\n>>> SELECT * FROM t ORDER BY random() LIMIT $X;\n>>\n>> Hmm, interesting idea, but it would totally bollix our correlation\n>> estimates. Not sure that those are worth anything for remote views,\n>> but still...\n> \n> But isn't that an issue that we already have? I mean, if we do ANALYZE\n> on a foreign table pointing to a view, we fetch all the results. But if\n> the view does not have a well-defined ORDER BY, a trivial plan change\n> may change the order of results and thus the correlation.\n> \n> Actually, how is a correlation even defined for a view?\n> \n> It's true this \"ORDER BY random()\" thing would make it less stable, as\n> it would change the correlation on every run. Although - the calculated\n> correlation is actually quite stable, because it's guaranteed to be\n> pretty close to 0 because we make the order random.\n> \n> Maybe that's actually better than the current state where it depends on\n> the plan? Why not to look at the relkind and just set correlation to 0.0\n> in such cases?\n> \n> But if we want to restore that current behavior (where it depends on the\n> actual query plan), we could do something like this:\n> \n> SELECT * FROM the_remote_view ORDER BY row_number() over ();\n> \n> But yeah, this makes the remote sampling more expensive. 
Probably still\n> a win because of network costs, but not great.\n> \n\nI've been thinking about this a bit more, and I'll consider submitting a\npatch for the next CF. IMHO it's probably better to accept correlation\nbeing 0 in these cases - it's more representative of what we know about\nthe view / plan output (or rather lack of knowledge).\n\nHowever, maybe views are not the best / most common example to think\nabout. I'd imagine it's much more common to reference a regular table,\nbut the table gets truncated / populated quickly, and/or the autovacuum\nworkers are busy so it takes time to update reltuples. But in this case\nit's also quite simple to fix the correlation by just ordering by ctid\n(which I guess we might do based on the relkind).\n\nThere's a minor issue with partitioned tables, with foreign partitions\npointing to remote views. This is kinda broken, because the sample size\nfor individual relations is determined based on relpages. But that's 0\nfor views, so these partitions get ignored when building the sample. But\nthat's a pre-existing issue.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sat, 7 Jan 2023 15:07:58 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": true, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> However, maybe views are not the best / most common example to think\n> about. I'd imagine it's much more common to reference a regular table,\n> but the table gets truncated / populated quickly, and/or the autovacuum\n> workers are busy so it takes time to update reltuples. 
But in this case\n> it's also quite simple to fix the correlation by just ordering by ctid\n> (which I guess we might do based on the relkind).\n\n> There's a minor issue with partitioned tables, with foreign partitions\n> pointing to remote views. This is kinda broken, because the sample size\n> for individual relations is determined based on relpages. But that's 0\n> for views, so these partitions get ignored when building the sample. But\n> that's a pre-existing issue.\n\nI wonder if we should stop consulting reltuples directly at all,\nand instead do\n\n\"EXPLAIN SELECT * FROM remote_table\"\n\nto get the remote planner's estimate of the number of rows involved.\nEven for a plain table, that's likely to be a better number.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 07 Jan 2023 10:20:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgres_fdw: using TABLESAMPLE to collect remote sample" } ]
[ { "msg_contents": "At PostGIS we've been thinking of ways to have better support, from\nPostgreSQL proper, to our upgrade strategy, which has always consisted\nin a SINGLE upgrade file good for upgrading from any older version.\n\nThe current lack of such support for EXTENSIONs forced us to install\na lot of files on disk and we'd like to stop doing that at some point\nin the future.\n\nThe proposal is to support wildcards for versions encoded in the\nfilename so that (for our case) a single wildcard could match \"any\nversion\". I've been thinking about the '%' character for that, to\nresemble the wildcard used for LIKE.\n\nHere's the proposal:\nhttps://lists.osgeo.org/pipermail/postgis-devel/2022-February/029500.html\n\nA very very short (and untested) patch which might (or\nmight not) support our case is attached.\n\nThe only problem with my proposal/patch would be the presence, on the\nwild, of PostgreSQL EXTENSION actually using the '%' character in\ntheir version strings, which is currently considered legit by\nPostgreSQL.\n\nHow do you feel about the proposal (which is wider than the patch) ?\n\n--strk; \n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html", "msg_date": "Fri, 11 Feb 2022 18:59:50 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "[PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> At PostGIS we've been thinking of ways to have better support, from\n> PostgreSQL proper, to our upgrade strategy, which has always consisted in\na\n> SINGLE upgrade file good for upgrading from any older version.\n> \n> The current lack of such support for EXTENSIONs forced us to install a lot\nof\n> files on disk and we'd like to stop doing that at some point in the\nfuture.\n> \n> The proposal is to support wildcards for versions encoded in the filename\nso\n> that (for our case) a single wildcard could match \"any version\". 
I've been\n> thinking about the '%' character for that, to resemble the wildcard used\nfor\n> LIKE.\n> \n> Here's the proposal:\n> https://lists.osgeo.org/pipermail/postgis-devel/2022-February/029500.html\n> \n> A very very short (and untested) patch which might (or might not) support\nour\n> case is attached.\n> \n> The only problem with my proposal/patch would be the presence, on the\nwild,\n> of PostgreSQL EXTENSION actually using the '%' character in their version\n> strings, which is currently considered legit by PostgreSQL.\n> \n> How do you feel about the proposal (which is wider than the patch) ?\n> \n> --strk;\n> \n> Libre GIS consultant/developer\n> https://strk.kbt.io/services.html\n[Regina Obe] \n\nJust a heads up about the above, Sandro has added it as a commitfest item\nwhich hopefully we can polish in time for PG16.\nhttps://commitfest.postgresql.org/38/3654/\n\nDoes anyone think this is such a horrible idea that we should abandon all\nhope?\n\nThe main impetus is that many extensions (postgis, pgRouting, and I'm sure\nothers) have over 300 extensions script files that are exactly the same.\nWe'd like to reduce this footprint significantly.\n\nstrk said the patch is crappy so don't look at it just yet. We'll work on\npolishing it. 
I'll review and provide docs for it.\n\nThanks,\nRegina\n\n\n\n\n\n\n", "msg_date": "Fri, 27 May 2022 17:37:12 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Fri, 2022-05-27 at 17:37 -0400, Regina Obe wrote:\n> > At PostGIS we've been thinking of ways to have better support, from\n> > PostgreSQL proper, to our upgrade strategy, which has always consisted in a\n> > SINGLE upgrade file good for upgrading from any older version.\n> > \n> > The current lack of such support for EXTENSIONs forced us to install a lot of\n> > files on disk and we'd like to stop doing that at some point in the future.\n> > \n> > The proposal is to support wildcards for versions encoded in the filename so\n> > that (for our case) a single wildcard could match \"any version\". I've been\n> > thinking about the '%' character for that, to resemble the wildcard used for\n> > LIKE.\n> > \n> > Here's the proposal:\n> > https://lists.osgeo.org/pipermail/postgis-devel/2022-February/029500.html\n> > \n> > A very very short (and untested) patch which might (or might not) support our\n> > case is attached.\n> > \n> > The only problem with my proposal/patch would be the presence, on the wild,\n> > of PostgreSQL EXTENSION actually using the '%' character in their version\n> > strings, which is currently considered legit by PostgreSQL.\n> > \n> > How do you feel about the proposal (which is wider than the patch) ?\n> \n> Just a heads up about the above, Sandro has added it as a commitfest item\n> which hopefully we can polish in time for PG16.\n> https://commitfest.postgresql.org/38/3654/\n> \n> Does anyone think this is such a horrible idea that we should abandon all\n> hope?\n> \n> The main impetus is that many extensions (postgis, pgRouting, and I'm sure\n> others) have over 300 extensions script files that are exactly the same.\n> We'd like to reduce this footprint 
significantly.\n> \n> strk said the patch is crappy so don't look at it just yet. We'll work on\n> polishing it. I'll review and provide docs for it.\n\nI don't think this idea is fundamentally wrong, but I have two worries:\n\n1. It would be a good idea to make sure that there is not both\n \"extension--%--2.0.sql\" and \"extension--1.0--2.0.sql\" present.\n Otherwise the behavior might be nondeterministic.\n\n2. What if you have a \"postgis--%--3.3.sql\", and somebody tries to upgrade\n their PostGIS 1.1 installation with it? 
Would that work?\n> Having a lower bound for a matching version might be a good idea,\n> although I have no idea how to do that.\n\nFollowing that reasoning, couldn't a rogue actor inject a fake file (perhaps\nbundled with another innocent looking extension) which takes precedence in\nwildcard matching?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n", "msg_date": "Sat, 28 May 2022 17:26:05 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Laurenz Albe <laurenz.albe@cybertec.at> writes:\n> I don't think this idea is fundamentally wrong, but I have two worries:\n\n> 1. It would be a good idea good to make sure that there is not both\n> \"extension--%--2.0.sql\" and \"extension--1.0--2.0.sql\" present.\n> Otherwise the behavior might be indeterministic.\n\nThe existing logic to find multi-step upgrade paths is going to make\nthis a very pressing problem. For example, if you provide both\nextension--%--2.0.sql and extension--%--2.1.sql, it's not at all\nclear whether the code would try to use both of those or just one\nto get from 1.x to 2.1.\n\n> 2. What if you have a \"postgis--%--3.3.sql\", and somebody tries to upgrade\n> their PostGIS 1.1 installation with it? Would that work?\n> Having a lower bound for a matching version might be a good idea,\n> although I have no idea how to do that.\n\nThe lack of upper bound is a problem too: what stops the system from\ntrying to use this to get from (say) 4.2 to 3.3, and if it does try that,\nwill the script produce a sane result? (Seems unlikely, as it couldn't\nknow what 4.x objects it ought to alter or remove.) But it's going to be\nvery hard to provide bounds, because we intentionally designed the\nextension system to not have an explicit concept of some versions being\nless than others.\n\nI'm frankly skeptical that this is a good idea at all. 
It seems\nto have come out of someone's willful refusal to use the system as\ndesigned, ie as a series of small upgrade scripts that each do just\none step. I don't feel an urgent need to cater to the\none-monster-script-that-handles-all-cases approach, because no one\nhas offered any evidence that that's really a better way. How would\nyou even write the conditional logic needed ... plpgsql DO blocks?\nTesting what? IIRC we don't expose any explicit knowledge of the\nold extension version number to the script.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 28 May 2022 11:37:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Sat, May 28, 2022 at 04:50:20PM +0200, Laurenz Albe wrote:\n> On Fri, 2022-05-27 at 17:37 -0400, Regina Obe wrote:\n> >\n> > https://lists.osgeo.org/pipermail/postgis-devel/2022-February/029500.html\n> > \n> > Does anyone think this is such a horrible idea that we should abandon all\n> > hope?\n> \n> I don't think this idea is fundamentally wrong, but I have two worries:\n> \n> 1. It would be a good idea good to make sure that there is not both\n> \"extension--%--2.0.sql\" and \"extension--1.0--2.0.sql\" present.\n> Otherwise the behavior might be indeterministic.\n\nI'd make sure to use extension--1.0--2.0.sql in that case (more\nspecific first).\n\n> 2. What if you have a \"postgis--%--3.3.sql\", and somebody tries to upgrade\n> their PostGIS 1.1 installation with it? Would that work?\n\nFor PostGIS in particular it will NOT work as the PostGIS upgrade\nscript checks for the older version and decides if the upgrade is\nvalid or not. 
This is the same upgrade code used for non-extension\ninstalls.\n\n> Having a lower bound for a matching version might be a good idea,\n> although I have no idea how to do that.\n\nI was thinking of a broader pattern matching support, like:\n\n postgis--3.%--3.3.sql\n\nBut it would be better to start simple and eventually if needed\nincrease the complexity ?\n\nAnother option could be specifying something in the control file,\nwhich would also probably be a good idea to still allow some\nextensions to use '%' in the version string (for example).\n\n--strk; \n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html\n\n\n", "msg_date": "Sat, 4 Jun 2022 11:20:55 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Sat, May 28, 2022 at 05:26:05PM +0200, Daniel Gustafsson wrote:\n> > On 28 May 2022, at 16:50, Laurenz Albe <laurenz.albe@cybertec.at> wrote:\n> \n> > I don't think this idea is fundamentally wrong, but I have two worries:\n> > \n> > 1. It would be a good idea good to make sure that there is not both\n> > \"extension--%--2.0.sql\" and \"extension--1.0--2.0.sql\" present.\n> > Otherwise the behavior might be indeterministic.\n> > \n> > 2. What if you have a \"postgis--%--3.3.sql\", and somebody tries to upgrade\n> > their PostGIS 1.1 installation with it? 
Would that work?\n> > Having a lower bound for a matching version might be a good idea,\n> > although I have no idea how to do that.\n> \n> Following that reasoning, couldn't a rogue actor inject a fake file (perhaps\n> bundled with another innocent looking extension) which takes precedence in\n> wildcard matching?\n\nI think whoever can write into the PostgreSQL extension folder will\nbe able to inject anything anyway....\n\n--strk;\n\n\n", "msg_date": "Sat, 4 Jun 2022 11:21:53 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Sat, May 28, 2022 at 11:37:30AM -0400, Tom Lane wrote:\n> Laurenz Albe <laurenz.albe@cybertec.at> writes:\n\n> > 2. What if you have a \"postgis--%--3.3.sql\", and somebody tries to upgrade\n> > their PostGIS 1.1 installation with it? Would that work?\n> > Having a lower bound for a matching version might be a good idea,\n> > although I have no idea how to do that.\n> \n> The lack of upper bound is a problem too: what stops the system from\n> trying to use this to get from (say) 4.2 to 3.3, and if it does try that,\n> will the script produce a sane result?\n\nThis is a very old problem we had before EXTENSION was even available\nin PostgreSQL, and so we solved this internally. The upgrade script\nfor PostGIS checks the version of the existing code and refuses to\ndowngrade (and refuses to upgrade if a dump/restore is required).\n\n> I'm frankly skeptical that this is a good idea at all. It seems\n> to have come out of someone's willful refusal to use the system as\n> designed, ie as a series of small upgrade scripts that each do just\n> one step. I don't feel an urgent need to cater to the\n> one-monster-script-that-handles-all-cases approach, because no one\n> has offered any evidence that that's really a better way. How would\n> you even write the conditional logic needed ... plpgsql DO blocks?\n> Testing what? 
IIRC we don't expose any explicit knowledge of the\n> old extension version number to the script.\n\nWe (PostGIS) do expose explicit knowledge of the old extension, and\nfor this reason I think the pattern-based logic should be\nenabled explicitly in the postgis.control file. It could be even less\ngeneric and more specific to a given extension need, if done\ncompletely inside the control file. For PostGIS all we need at the\nmoment is something like (in the control file):\n\n one_monster_upgrade_script = postgis--ANY--3.3.0.sql\n\n--strk;\n\n\n", "msg_date": "Sat, 4 Jun 2022 11:26:19 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "I'm attaching an updated version of the patch. This time the patch\nis tested. Nothing changes unless the .control file for the subject\nextension doesn't have a \"wildcard_upgrades = true\" statement.\n\nWhen wildcard upgrades are enabled, a file with a \"%\" symbol as\nthe \"source\" part of the upgrade path will match any version and\nwill be used if a specific version upgrade does not exist.\nThis means that in presence of the following files:\n\n postgis--3.0.0--3.2.0.sql\n postgis--%--3.2.0.sql\n\nThe first one will be used for going from 3.0.0 to 3.2.0.\n\nThis is the intention. 
The patch lacks automated tests and can\nprobably be improved.\n\nFor more context, a previous (non-working) version of this patch was\nsubmitted to commitfest: https://commitfest.postgresql.org/38/3654/\n\n--strk;\n\nOn Sat, Jun 04, 2022 at 11:20:55AM +0200, Sandro Santilli wrote:\n> On Sat, May 28, 2022 at 04:50:20PM +0200, Laurenz Albe wrote:\n> > On Fri, 2022-05-27 at 17:37 -0400, Regina Obe wrote:\n> > >\n> > > https://lists.osgeo.org/pipermail/postgis-devel/2022-February/029500.html\n> > > \n> > > Does anyone think this is such a horrible idea that we should abandon all\n> > > hope?\n> > \n> > I don't think this idea is fundamentally wrong, but I have two worries:\n> > \n> > 1. It would be a good idea good to make sure that there is not both\n> > \"extension--%--2.0.sql\" and \"extension--1.0--2.0.sql\" present.\n> > Otherwise the behavior might be indeterministic.\n> \n> I'd make sure to use extension--1.0--2.0.sql in that case (more\n> specific first).\n> \n> > 2. What if you have a \"postgis--%--3.3.sql\", and somebody tries to upgrade\n> > their PostGIS 1.1 installation with it? Would that work?\n> \n> For PostGIS in particular it will NOT work as the PostGIS upgrade\n> script checks for the older version and decides if the upgrade is\n> valid or not. 
This is the same upgrade code used for non-extension\n> installs.\n> \n> > Having a lower bound for a matching version might be a good idea,\n> > although I have no idea how to do that.\n> \n> I was thinking of a broader pattern matching support, like:\n> \n> postgis--3.%--3.3.sql\n> \n> But it would be better to start simple and eventually if needed\n> increase the complexity ?\n> \n> Another option could be specifying something in the control file,\n> which would also probably be a good idea to still allow some\n> extensions to use '%' in the version string (for example).\n> \n> --strk; \n> \n> Libre GIS consultant/developer\n> https://strk.kbt.io/services.html\n> \n> \n\n\n", "msg_date": "Thu, 15 Sep 2022 00:01:00 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "And now with the actual patch attached ... (sorry)\n\n--strk;\n\nOn Thu, Sep 15, 2022 at 12:01:04AM +0200, Sandro Santilli wrote:\n> I'm attaching an updated version of the patch. This time the patch\n> is tested. Nothing changes unless the .control file for the subject\n> extension doesn't have a \"wildcard_upgrades = true\" statement.\n> \n> When wildcard upgrades are enabled, a file with a \"%\" symbol as\n> the \"source\" part of the upgrade path will match any version and\n> will be used if a specific version upgrade does not exist.\n> This means that in presence of the following files:\n> \n> postgis--3.0.0--3.2.0.sql\n> postgis--%--3.2.0.sql\n> \n> The first one will be used for going from 3.0.0 to 3.2.0.\n> \n> This is the intention. 
The patch lacks automated tests and can\n> probably be improved.\n> \n> For more context, a previous (non-working) version of this patch was\n> submitted to commitfest: https://commitfest.postgresql.org/38/3654/\n> \n> --strk;\n> \n> On Sat, Jun 04, 2022 at 11:20:55AM +0200, Sandro Santilli wrote:\n> > On Sat, May 28, 2022 at 04:50:20PM +0200, Laurenz Albe wrote:\n> > > On Fri, 2022-05-27 at 17:37 -0400, Regina Obe wrote:\n> > > >\n> > > > https://lists.osgeo.org/pipermail/postgis-devel/2022-February/029500.html\n> > > > \n> > > > Does anyone think this is such a horrible idea that we should abandon all\n> > > > hope?\n> > > \n> > > I don't think this idea is fundamentally wrong, but I have two worries:\n> > > \n> > > 1. It would be a good idea good to make sure that there is not both\n> > > \"extension--%--2.0.sql\" and \"extension--1.0--2.0.sql\" present.\n> > > Otherwise the behavior might be indeterministic.\n> > \n> > I'd make sure to use extension--1.0--2.0.sql in that case (more\n> > specific first).\n> > \n> > > 2. What if you have a \"postgis--%--3.3.sql\", and somebody tries to upgrade\n> > > their PostGIS 1.1 installation with it? Would that work?\n> > \n> > For PostGIS in particular it will NOT work as the PostGIS upgrade\n> > script checks for the older version and decides if the upgrade is\n> > valid or not. 
This is the same upgrade code used for non-extension\n> > installs.\n> > \n> > > Having a lower bound for a matching version might be a good idea,\n> > > although I have no idea how to do that.\n> > \n> > I was thinking of a broader pattern matching support, like:\n> > \n> > postgis--3.%--3.3.sql\n> > \n> > But it would be better to start simple and eventually if needed\n> > increase the complexity ?\n> > \n> > Another option could be specifying something in the control file,\n> > which would also probably be a good idea to still allow some\n> > extensions to use '%' in the version string (for example).\n> > \n> > --strk;", "msg_date": "Thu, 15 Sep 2022 00:02:49 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Just to reiterate, the main impetus for this patch is to save PostGIS from\nshipping 100s of duplicate extension files for each release. \n\n> And now with the actual patch attached ... (sorry)\n> \n> --strk;\n> \n\nSandro, can you submit an updated version of this patch?\nI was testing it out and it looked good the first time.\n\nBut I retried just now testing against master, and it fails with \n\ncommands/extension.o: In function `file_exists':\npostgresql-git\\src\\backend\\commands/extension.c:3430: undefined reference to\n`AssertArg'\n\nIt seems that 2 days ago AssertArg and AssertState were removed.\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=b1099eca8f38f\nf5cfaf0901bb91cb6a22f909bc6\n\nSo your use of AssertArg needs to be replaced with Assert, I guess. \nI did that and was able to test again with a sample extension I made,\ncalled:\n\nwildtest\n\n1) The wildcard patch in its current state only does anything if \n\nwildcard_upgrades = true \n\nis in the control file. 
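For reference, the control file of my wildtest extension looked more or less like this (the comment and version values here are just my local test choices):

```
# wildtest.control
comment = 'test extension for wildcard upgrade scripts'
default_version = '2.2'
relocatable = true
wildcard_upgrades = true
```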
If it's false or missing, then the behavior of\nextension upgrades doesn't change.\n\n\n2) It only understands % as a complete wildcard for a version number.\n\nSo this is legal:\nwildtest--%--2.0.sql\n\nThis does nothing:\nwildtest--2%--2.0.sql\n\n3) I confirmed that if you have a path such as\n\nwildtest--2.0--2.2.sql\nwildtest--%--2.2.sql\n\nthen the exact match trumps the wildcard. In the above case, if I am on 2.0\nand going to 2.2, the wildtest--2.0--2.2.sql script is used instead of the\nwildtest--% one.\n\n4) It is not possible to downgrade with the wildcard. For example I had\npaths\nwildtest--%--2.1.sql \n\nand I was unable to go from a version 2.2 down to a version 2.1. I didn't\ncheck why that was so, but it's probably a good thing.\n\nIf everyone is okay with this patch, we'll go ahead and add tests and\ndocumentation to go with it.\n\nThanks,\nRegina\n\n\n\n\n\n\n\n", "msg_date": "Mon, 31 Oct 2022 01:55:05 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, Oct 31, 2022 at 01:55:05AM -0400, Regina Obe wrote:\n> \n> Sandro, can you submit an updated version of this patch.\n> I was testing it out and looked good first time.\n\nAttached an updated version of the patch.\n\n> If everyone is okay with this patch, we'll go ahead and add tests and\n> documentation to go with it.\n\nYes, please let us know if there's any chance of getting this\nmechanism approved, or if we should keep advertising workarounds for\nthis limitation (it's not just installing 100s of files, but also\nstill missing needed upgrade paths when doing so).\n\n\n--strk;", "msg_date": "Fri, 4 Nov 2022 18:31:58 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "And now a version of the patch including documentation and regression tests.\nAnything you see I 
should improve?\n\n--strk;\n\nOn Fri, Nov 04, 2022 at 06:31:58PM +0100, Sandro Santilli wrote:\n> On Mon, Oct 31, 2022 at 01:55:05AM -0400, Regina Obe wrote:\n> > \n> > Sandro, can you submit an updated version of this patch.\n> > I was testing it out and looked good first time.\n> \n> Attached updated version of the patch.\n> \n> > If everyone is okay with this patch, we'll go ahead and add tests and\n> > documentation to go with it.\n> \n> Yes please let us know if there's any chance of getting this\n> mechanism approved or if we should keep advertising works around\n> this limitation (it's not just installing 100s of files but also\n> still missing needed upgrade paths when doing so).\n> \n> \n> --strk;", "msg_date": "Wed, 9 Nov 2022 18:53:52 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, failed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nPURPOSE: \r\n\r\nThis feature is intended to let projects with many micro versions that do the same thing ship far fewer files, by allowing less granular upgrade steps.\r\nOne of the projects that will benefit from this is the PostGIS project. 
Currently we ship over 300 extension scripts per version, which are just symlinks to the same file.\r\nPart of the reason for that is that our changes are dependent on both the PostgreSQL version and the PostGIS version, so a simple upgrade that only considers, say, PostGIS 3.2.0--3.3.1 is not sufficient.\r\nAlso, much of the time our function definitions don't change in micros, yet we need to ship them to allow users to properly upgrade.\r\n\r\nThis feature will allow us to get rid of all these symlinks or 0-byte files.\r\n\r\nI've tested this feature against the PostgreSQL master branch on mingw64 gcc 8.1.\r\n\r\nBASIC FEATURES\r\n\r\n1) As stated, this feature only works if the .control file has a line:\r\n\r\nwildcard_upgrades = true\r\n\r\n2) It does nothing different if the line is missing or set to false.\r\n\r\n3) If there is such a line and there is no other path that takes an extension from its current version to the requested version, then it will use the wildcard script file.\r\n\r\nTESTING\r\n\r\nAUTOMATED TESTS\r\nI have confirmed tests are in place.\r\nHowever, the tests do not run when I do \r\nmake check (from the root source folder)\r\n\r\nor when I do\r\n\r\nmake installcheck-world\r\n\r\n\r\nTo run these tests, I had to\r\n\r\ncd src/test/modules/test_extensions\r\nmake check\r\n\r\nand see the output showing:\r\n\r\n============== running regression test queries ==============\r\ntest test_extensions ... ok 186 ms\r\ntest test_extdepend ... ok 97 ms\r\n\r\n\r\nI confirmed the tests are in test_extensions, which has always existed.\r\nSo I assume it's not being run in installcheck-world either because something is off with my configuration \r\nor because it's intentionally not run.\r\n\r\nMANUAL TESTS\r\n I also ran some ad hoc tests of my own to confirm the behavior. 
Using\r\nSET client_min_messages='DEBUG1';\r\n\r\nI confirmed that the wildcard script only gets called when there is no other path that will take the user to the new version.\r\nFor this I created my own extension,\r\nwildtest\r\n\r\n\r\n1) I confirmed it only understands % as a complete wildcard for a version number.\r\n\r\n\r\nSo this is legal:\r\nwildtest--%--2.0.sql\r\n\r\n\r\nThis does nothing:\r\nwildtest--2%--2.0.sql\r\n\r\n\r\n2) I confirmed that if you have a path such as\r\n\r\n\r\nwildtest--2.0--2.2.sql\r\nwildtest--%--2.2.sql\r\n\r\n\r\nthen the exact match trumps the wildcard. In the above case, if I am on 2.0\r\nand going to 2.2, the wildtest--2.0--2.2.sql script is used instead of the\r\nwildtest--% one.\r\n\r\n\r\n3) It is not possible to downgrade with the wildcard. For example I had\r\npaths\r\nwildtest--%--2.1.sql \r\n\r\n\r\nand I was unable to go from a version 2.2 down to a version 2.1.\r\n\r\nDOCUMENTATION\r\n\r\nI built the html docs and confirmed that the section of the docs that defines valid properties of extension control files now includes this line:\r\n\r\nwildcard_upgrades (boolean)\r\n\r\n    This parameter, if set to true (which is not the default), allows ALTER EXTENSION to consider a wildcard character % as matching any version of the extension. Such wildcard match will only be used when no perfect match is found for a version.\n\nThe new status of this patch is: Ready for Committer\n", "msg_date": "Thu, 10 Nov 2022 01:10:52 +0000", "msg_from": "LEO HSU <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Apologies. I made a mistake in my downgrade test.\r\nSo 3 should be:\r\n\r\n3) It is possible to downgrade with the wildcard. 
For example I had\r\npaths\r\nwildtest--%--2.1.sql \r\n\r\nand confirmed it used the downgrade path when going from 2.2 to 2.1", "msg_date": "Thu, 10 Nov 2022 15:23:31 +0000", "msg_from": "Regina Obe <r@pcorp.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Re: Tom Lane\n> The existing logic to find multi-step upgrade paths is going to make\n> this a very pressing problem. For example, if you provide both\n> extension--%--2.0.sql and extension--%--2.1.sql, it's not at all\n> clear whether the code would try to use both of those or just one\n> to get from 1.x to 2.1.\n\nfind_update_path already includes Dijkstra to avoid that problem.\n\n> I'm frankly skeptical that this is a good idea at all. It seems\n> to have come out of someone's willful refusal to use the system as\n> designed, ie as a series of small upgrade scripts that each do just\n> one step. I don't feel an urgent need to cater to the\n> one-monster-script-that-handles-all-cases approach, because no one\n> has offered any evidence that that's really a better way. How would\n> you even write the conditional logic needed ... plpgsql DO blocks?\n> Testing what? IIRC we don't expose any explicit knowledge of the\n> old extension version number to the script.\n\nI was just talking to strk about that on #postgis and here's what I\ntook away from the discussion:\n\nThe main objection seems to be that 500 identical (linked) files\naren't how the extension system is supposed to be used, and that this\nproposal merely changes that to a single foo--%--1.2.3.sql file which\nis again not how the system is supposed to be used. 
Instead, we should\ntry to find a small set of files that just change what needs to be\nchanged between the versions.\n\nFirst, it's 580 files because the same problem exists for 7 extensions\nshipped in postgis, so we are effectively talking about 80 files per\nextension.\n\nSo the question is, can we reduce that 80 to some small number. It\nturns out the answer is \"no\" for several reasons:\n\n* parallel branches: postgis maintains several major branches, and\n people can upgrade at any time. If we currently have 3.3.4 and\n 3.4.0, we will likely have a 3.3.4--3.4.0.sql file. Now when 3.3.5\n and 3.4.1 are released, we add 3.3.4--3.3.5 in the 3.3 branch, and a\n 3.4.0--3.4.1 in the 3.4 branch.\n\n But we will also have to provide a new 3.3.5--3.4.0 (or\n 3.3.5--3.4.1) file *in the 3.4 branch*. That means even if we decide\n that upgrades always need to go through a 3.4.0 pivot version, we\n have to update the 3.4 branch each time 3.3 is updated, because\n history isn't single-branch linear. Lots of extra files, which need\n to be stored in a non-natural location.\n\n If we wanted to use the \"proper\" update files here, we'd be left\n with something like (number of major versions)*(number of all minor\n versions in total) number of files, spread over several major\n versions, all interconnected.\n\n* Even if we disregard topology, we still have to provide *some*\n start-updates-here file for each minor version released ever before.\n Hence we are at 80. (The number might be lower for smaller\n extensions like address_standardizer, but postgis itself would\n change in each version, plus many of the other extensions in there.)\n\nChristoph\n\n\n", "msg_date": "Fri, 11 Nov 2022 16:59:06 +0100", "msg_from": "Christoph Berg <myon@debian.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Re: Sandro Santilli\n> I'm attaching an updated version of the patch. This time the patch\n> is tested. 
Nothing changes unless the .control file for the subject\n> extension doesn't have a \"wildcard_upgrades = true\" statement.\n> \n> When wildcard upgrades are enabled, a file with a \"%\" symbol as\n> the \"source\" part of the upgrade path will match any version and\n\nFwiw I believe wildcard_upgrades isn't necessary in the .control file.\nIf there are no % files, none would be used anyway, and if there are,\nit's clear it's meant as wildcard since % won't appear in any remotely\nsane version number.\n\nChristoph\n\n\n", "msg_date": "Fri, 11 Nov 2022 17:07:13 +0100", "msg_from": "Christoph Berg <cb@df7cb.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> Re: Sandro Santilli\n> > I'm attaching an updated version of the patch. This time the patch is\n> > tested. Nothing changes unless the .control file for the subject\n> > extension doesn't have a \"wildcard_upgrades = true\" statement.\n> >\n> > When wildcard upgrades are enabled, a file with a \"%\" symbol as the\n> > \"source\" part of the upgrade path will match any version and\n> \n> Fwiw I believe wildcard_upgrades isn't necessary in the .control file.\n> If there are no % files, none would be used anyway, and if there are, it's\nclear\n> it's meant as wildcard since % won't appear in any remotely sane version\n> number.\n> \n> Christoph\n\nI also like the idea of skipping the wildcard_upgrades syntax.\nThen there is no need to have a conditional control file for PG 16 vs. older\nversions.\n\nThanks,\nRegina\n\n\n\n\n\n", "msg_date": "Sun, 13 Nov 2022 23:46:50 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Sun, Nov 13, 2022 at 11:46:50PM -0500, Regina Obe wrote:\n> > Re: Sandro Santilli\n> > > I'm attaching an updated version of the patch. This time the patch is\n> > > tested. 
Nothing changes unless the .control file for the subject\n> > > extension doesn't have a \"wildcard_upgrades = true\" statement.\n> > >\n> > > When wildcard upgrades are enabled, a file with a \"%\" symbol as the\n> > > \"source\" part of the upgrade path will match any version and\n> > \n> > Fwiw I believe wildcard_upgrades isn't necessary in the .control file.\n> > If there are no % files, none would be used anyway, and if there are, it's\n> clear\n> > it's meant as wildcard since % won't appear in any remotely sane version\n> > number.\n> \n> I also like the idea of skipping the wildcard_upgrades syntax.\n> Then there is no need to have a conditional control file for PG 16 vs. older\n> versions.\n\nHere we go. Attached a version of the patch with no\n\"wildcard_upgrades\" controlling it.\n\n--strk;", "msg_date": "Thu, 17 Nov 2022 00:09:33 +0100", "msg_from": "'Sandro Santilli' <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> On Sun, Nov 13, 2022 at 11:46:50PM -0500, Regina Obe wrote:\n> > > Re: Sandro Santilli\n> > > > I'm attaching an updated version of the patch. This time the patch\n> > > > is tested. Nothing changes unless the .control file for the\n> > > > subject extension doesn't have a \"wildcard_upgrades = true\"\nstatement.\n> > > >\n> > > > When wildcard upgrades are enabled, a file with a \"%\" symbol as\n> > > > the \"source\" part of the upgrade path will match any version and\n> > >\n> > > Fwiw I believe wildcard_upgrades isn't necessary in the .control file.\n> > > If there are no % files, none would be used anyway, and if there\n> > > are, it's\n> > clear\n> > > it's meant as wildcard since % won't appear in any remotely sane\n> > > version number.\n> >\n> > I also like the idea of skipping the wildcard_upgrades syntax.\n> > Then there is no need to have a conditional control file for PG 16 vs.\n> > older versions.\n> \n> Here we go. 
Attached a version of the patch with no \"wildcard_upgrades\"\n> controlling it.\n> \n> --strk;\n\nI think you should increment the version number on the file name of this\npatch.\nYou had one earlier called 0001-...\nThe one before that was missing a version number entirely.\n\nMaybe call this 0003-...\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 16 Nov 2022 18:16:11 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Note that the patch is passing tests when using the autoconf build but\nfailing for meson builds.\n\nhttp://cfbot.cputube.org/sandro-santilli.html\n\nI imagine you need to make the corresponding change to ./meson.build\nthat you made to ./Makefile.\n\nhttps://wiki.postgresql.org/wiki/Meson_for_patch_authors\nhttps://wiki.postgresql.org/wiki/Meson\n\nYou can test under cirrusci from your github account, with instructions\nat: ./src/tools/ci/README\n\nCheers,\n-- \nJustin\n\n\n", "msg_date": "Wed, 16 Nov 2022 17:25:07 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Wed, Nov 16, 2022 at 05:25:07PM -0600, Justin Pryzby wrote:\n> Note that the patch is passing tests when using autoconf build but\n> failing for meson builds.\n\nThanks for pointing me in the right direction.\nI'm attaching an updated patch and will keep an eye on cirrusCI.\n\n--strk;", "msg_date": "Thu, 17 Nov 2022 10:57:34 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nI reviewed this patch 
https://www.postgresql.org/message-id/20221117095734.igldlk6kngr6ogim%40c19\r\n\r\nMost of it looks good to me. The only things that have changed since I last reviewed it, with the new patch, are:\r\n\r\n1) wildcard_upgrades=true is no longer needed in the control file and will break if present. This is an expected change, since we are now going strictly by the presence of % extension upgrade scripts, per suggestions.\r\n2) The meson build issue has been fixed. Cirrus is passing on that now. I'm still fiddling with my meson setup, so I didn't personally test this.\r\n\r\n3) The documentation has been updated to no longer include wildcard_upgrades as a variable for the control file,\r\n\r\nand now has this text describing its use:\r\n\r\n    The literal value % can be used as the old_version component in an extension update script for it to match any version. Such wildcard update scripts will only be used when no explicit path is found from old to target version.\r\n\r\nGiven that a suitable update script is available, the command ALTER EXTENSION UPDATE will update an installed extension to the specified new version. The update script is run in the same environment that CREATE EXTENSION provides for installation scripts: in particular, search_path is set up in the same way, and any new objects created by the script are automatically added to the extension. Also, if the script chooses to drop extension member objects, they are automatically dissociated from the extension. \r\n\r\nI built the html docs but ran into a lot of warnings I don't recall getting last time. 
I think this is just because my local doc setup is a bit messed up, or something else changed in the doc setup.\r\n\r\nMy main gripe with this patch is that Sandro did not increment its version number, so it has the same name as an old patch he submitted before.", "msg_date": "Tue, 22 Nov 2022 19:03:14 +0000", "msg_from": "Regina Obe <r@pcorp.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "I continue to think that this is a fundamentally bad idea. It creates\nall sorts of uncertainties about what is a valid update path and what\nis not. Restrictions like\n\n+ Such wildcard update\n+ scripts will only be used when no explicit path is found from\n+ old to target version.\n\nare just band-aids to try to cover up the worst problems.\n\nHave you considered the idea of instead inventing a \"\\include\" facility\nfor extension scripts? Then, if you want to use one-monster-script\nto handle different upgrade cases, you still need one script file for\neach supported upgrade step, but those can be one-liners including the\ncommon script file. Plus, such a facility could be of use to people\nwho want intermediate factorization solutions (that is, some sharing\nof code without buying all the way into one-monster-script).\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 09 Jan 2023 17:51:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> I continue to think that this is a fundamentally bad idea. 
It creates all\nsorts of\n> uncertainties about what is a valid update path and what is not.\nRestrictions\n> like\n> \n> + Such wildcard update\n> + scripts will only be used when no explicit path is found from\n> + old to target version.\n> \n> are just band-aids to try to cover up the worst problems.\n> \n> Have you considered the idea of instead inventing a \"\\include\" facility\nfor\n> extension scripts? Then, if you want to use one-monster-script to handle\n> different upgrade cases, you still need one script file for each supported\n> upgrade step, but those can be one-liners including the common script\nfile.\n> Plus, such a facility could be of use to people who want intermediate\n> factorization solutions (that is, some sharing of code without buying all\nthe\n> way into one-monster-script).\n> \n> \t\t\tregards, tom lane\n\nThe problem with an include is that it does nothing for us. We still need to\nship a ton of script files. We've already dealt with the file size issue.\n\nSo our PostGIS 3.4.0+ (not yet released) already kind of simulates include\nusing blank script files that have nothing in them but force the extension\nmachinery to upgrade the version to ANY to get to the required installed\nversion 3.4.0+.\n\nSo for example postgis--3.3.0--ANY.sql\nwould contain this:\n-- Just tag extension postgis version as \"ANY\"\n-- Installed by postgis 3.4.0dev\n-- Built on ...\n\nAnd the actual upgrade steps are in the file:\npostgis--ANY--3.4.0dev.sql\n\nSo when you are on 3.3.0 and want to upgrade to 3.4.0dev, it takes\n3.3.0 -> ANY -> 3.4.0dev.\n\nThe other option I had proposed was getting rid of the micro version, but I\ngot shot down on that argument -- with the PostGIS PSC complaining about people\nnot being able to upgrade a micro if perchance we have some security patch\nreleased in a 
micro.\n\nhttps://lists.osgeo.org/pipermail/postgis-devel/2022-June/029673.html\n\nhttps://lists.osgeo.org/pipermail/postgis-devel/2022-July/029713.html\n\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Mon, 9 Jan 2023 22:03:11 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, Jan 09, 2023 at 05:51:49PM -0500, Tom Lane wrote:\n\n> Have you considered the idea of instead inventing a \"\\include\" facility\n\n[...]\n\n> cases, you still need one script file for each supported upgrade step\n\nThat's exactly the problem we're trying to solve here.\nThe include support is nice in itself, but won't solve our problem.\n\n--strk;\n\n\n", "msg_date": "Tue, 10 Jan 2023 21:52:59 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Sandro Santilli <strk@kbt.io> writes:\n> On Mon, Jan 09, 2023 at 05:51:49PM -0500, Tom Lane wrote:\n>> ... you still need one script file for each supported upgrade step\n\n> That's exactly the problem we're trying to solve here.\n> The include support is nice in itself, but won't solve our problem.\n\nThe script-file-per-upgrade-path aspect solves a problem that you\nhave, whether you admit it or not; I think you simply aren't realizing\nthat because you have not had to deal with the consequences of\nyour proposed feature. Namely that you won't have any control\nover what the backend will try to do in terms of upgrade paths.\n\nAs an example, suppose that a database has foo 4.0 installed, and\nthe DBA decides to try to downgrade to 3.0. With the system as it\nstands, if you've provided foo--4.0--3.0.sql then the conversion\nwill go through, and presumably it will work because you tested that\nthat script does what it is intended to. 
If you haven't provided\nany such downgrade script, then ALTER EXTENSION UPDATE will say\n\"Sorry Dave, I'm afraid I can't do that\" and no harm is done.\n\nWith the proposed % feature, if foo--%--3.0.sql exists then the\nsystem will invoke it and expect the end result to be a valid\n3.0 installation, whether or not the script actually has any\nability to do a downgrade. Moreover, there isn't any very\ngood way to detect or prevent unsupported version transitions.\n(I suppose you could add code to look at pg_extension.extversion,\nbut I'm not sure if that works: it looks to me like we update that\nbefore we run the extension script. Besides which, if you have\nto add such code is that really better than having a number of\none-liner scripts implementing the same check declaratively?)\n\nIt gets worse though, because above I'm supposing that 4.0 at\nleast existed when this copy of foo--%--3.0.sql was made.\nSuppose that somebody fat-fingered a package upgrade, such that\nthe extension fileset available to a database containing foo 4.0\nnow corresponds to foo 3.0, and there's no knowledge of 4.0 at all\nin the extension scripts. The DBA trustingly issues ALTER EXTENSION\nUPDATE, which will conclude from foo.control that it should update to\n3.0, and invoke foo--%--3.0.sql to do it. Maybe the odds of success\nare higher than zero, but not by much; almost certainly you are\ngoing to end with an extension containing some leftover 4.0\nobjects, some 3.0 objects, and maybe some objects with properties\nthat don't exactly match either 3.0 or 4.0. Even if that state\nof affairs manages not to cause immediate problems, it'll surely\nbe a mess whenever somebody tries to re-update to 4.0 or later.\n\nSo I really think this is a case of \"be careful what you ask\nfor, you might get it\". Even if PostGIS is willing to put in\nthe amount of infrastructure legwork needed to make such a\ndesign bulletproof, I'm quite sure nobody else will manage\nto use such a thing successfully. 
I'd rather spend our\ndevelopment effort on a feature that has more than one use-case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 10 Jan 2023 18:50:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> Sandro Santilli <strk@kbt.io> writes:\n> > On Mon, Jan 09, 2023 at 05:51:49PM -0500, Tom Lane wrote:\n> >> ... you still need one script file for each supported upgrade step\n> \n> > That's exactly the problem we're trying to solve here.\n> > The include support is nice on itself, but won't solve our problem.\n> \n> The script-file-per-upgrade-path aspect solves a problem that you have,\n> whether you admit it or not; I think you simply aren't realizing that\nbecause\n> you have not had to deal with the consequences of your proposed feature.\n> Namely that you won't have any control over what the backend will try to\ndo in\n> terms of upgrade paths.\n> \n\nIt would be nice if there were some way to apply at least a range match to\nupgrades or explicitly state in the control file what paths should be\napplied for an upgrade.\nUltimately as Sandro mentioned, it's the 1000 files issue we are trying to\naddress.\n\nThe only way we can fix that in the current setup, is to move to a minor\nversion mode which means we can \nnever do micro updates.\n\n> As an example, suppose that a database has foo 4.0 installed, and the DBA\n> decides to try to downgrade to 3.0. With the system as it stands, if\nyou've\n> provided foo--4.0--3.0.sql then the conversion will go through, and\npresumably\n> it will work because you tested that that script does what it is intended\nto. 
If\n> you haven't provided any such downgrade script, then ALTER EXTENSION\n> UPDATE will say \"Sorry Dave, I'm afraid I can't do that\" and no harm is\ndone.\n> \nIn our case we've already addressed that in our script, because our upgrades\ndon't rely on what the\nextension model tells us is the version; ultimately what our\npostgis..version() tells us is the true determinant (one for the lib file\nand one for the script).\nBut you are right, that's a selfish way of thinking about it, because we have\ninternal plumbing to handle that issue already and other projects probably\ndon't.\n\nWhat I was hoping for was to have a section in the control file to say\nsomething like\n\nUpgrade patterns: 3.0.%--4.0.0.sql, 2.0.%--4.0.0.sql\n\n(and perhaps some logic to guarantee no way to match two patterns)\nSo you wouldn't be able to have a set of patterns like\n\nUpgrade patterns: %--4.0.0.sql, 3.0.%--4.0.0.sql, 2.0.%--4.0.0.sql\n\nThat would allow your proposed include something-or-other to work and for us\nto be able to use that, and still reduce our\nfile footprint.\n\nI'd even settle for absolute paths stated in the control file if we can\ndispense with the need for a file path to push you forward.\n\nIn that mode, your downgrade issue would never happen even with the way\npeople normally create scripts now.\n\n> So I really think this is a case of \"be careful what you ask for, you\nmight get it\".\n> Even if PostGIS is willing to put in the amount of infrastructure legwork\nneeded\n> to make such a design bulletproof, I'm quite sure nobody else will manage\nto\n> use such a thing successfully. 
I'd rather spend our development effort on\na\n> feature that has more than one use-case.\n> \n> \t\t\tregards, tom lane\n\nI think you are underestimating the number of extensions that have this\nissue and could benefit (agreed, not in the current incarnation of the patch).\nIt's not just PostGIS: it's pgRouting (has 56 files) and the h3 extension (has 37\nfiles in its last release, most of them doing nothing, except at the minor update\nsteps) that have the same issue (and both pgRouting and h3 do have little\nbitty script updates that follow the best-practice way of doing this).\nFor pgRouting alone there are 56 files for the last release (which could\neasily be reduced to about 5 files if the paths could be explicitly stated\nin the control file).\n\nYes, all those extensions should dispense with their dreams of having micro\nupdates (I honestly wish they would).\n\nPerhaps I'm a little obsessive, but it annoys me to see 1000s of files in my\nextension folder, when I know even if I followed best practice I only need\nlike 5 files for each extension.\n\nAnd as a packager, having to ship these files is even more annoying.\n\nThe reality is that for micros, no one ships new functions (or at least shouldn't),\nso all functions just need to be replaced, perhaps with the micro update\nfile you propose.\n\nHeck, if we could even have the option to itemize our own upgrade file paths\nexplicitly in the control file,\n\nLike:\n\npaths: 3.0.1--4.0.0 = 3--4.0.0.sql, 3.0.2--4.0.0 = 3--4.0.0.sql,\n2.0.2--4.0.0 = 2--4.0.0.sql\n\nI'd be happy, and PostgreSQL can do the math pretending there are the files it\nthinks it needs.\n\nSo if we could somehow rework this patch, perhaps adding something in the\ncontrol file to explicitly state the upgrade paths,\n\nI think that would satisfy your concern? And I'd be eternally grateful. 
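For concreteness, such a control-file section might look something like this (hypothetical syntax — nothing in PostgreSQL parses a section like this today, it only illustrates the mapping being asked for):

```
# postgis.control -- with a hypothetical upgrade-path map
default_version = '4.0.0'
paths = '3.0.1--4.0.0 = 3--4.0.0.sql, \
         3.0.2--4.0.0 = 3--4.0.0.sql, \
         2.0.2--4.0.0 = 2--4.0.0.sql'
```

Each entry names a version transition on the left and the one script file that services it on the right, so many transitions can share a single script without shipping one file per transition.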
\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Tue, 10 Jan 2023 23:09:23 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Jan 10, 2023 at 06:50:31PM -0500, Tom Lane wrote:\n\n\n> With the proposed % feature, if foo--%--3.0.sql exists then the\n> system will invoke it and expect the end result to be a valid\n> 3.0 installation, whether or not the script actually has any\n> ability to do a downgrade. \n\nIt is sane for the system to expect the end result\nto be a valid 3.0 installation if no exception is\nthrown by the script itself. \n\nIf we ship foo--%--3.0.sql we must have taken care of\nprotecting from unsupported downgrades/upgrades (and we did,\nhaving the script throw an exception if anything is unexpected).\n\n> (I suppose you could add code to look at pg_extension.extversion,\n\nWe actually added code looking at our own version-extracting\nfunction (which existed since before PostgreSQL added support\nfor extensions). Is the function not there ? Raise an exception.\nIs the function not owned by the extension ? Raise an exception.\nIn other cases -> trust the output of that function to tell what\nversion we're coming from, throw an exception if upgrade to the\ntarget version is unsupported.\n\n> to add such code is that really better than having a number of\n> one-liner scripts implementing the same check declaratively?)\n\nYes, it is, because no matter how many one-liner scripts we install\n(we currently install 87 * 6 such scripts), we always end up missing\nsome of them upon releasing a new bug-fix release in stable branches.\n\n> almost certainly you are\n> going to end with an extension containing some leftover 4.0\n> objects, some 3.0 objects, and maybe some objects with properties\n> that don't exactly match either 3.0 or 4.0. \n\nThis is already possible, depending on WHO writes those upgrade\nscripts. 
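For illustration, the kind of guard described above might be sketched like this at the top of a hypothetical catch-all script (the function name mimics PostGIS's own version-reporting function, and the version check is deliberately simplified — a real script would compare version components numerically):

```sql
-- Hypothetical prologue of a wildcard script such as postgis--%--3.4.0.sql:
-- refuse to run unless we can determine, and accept, the source version.
DO $$
DECLARE
  from_ver text;
BEGIN
  BEGIN
    from_ver := postgis_scripts_installed();
  EXCEPTION WHEN undefined_function THEN
    RAISE EXCEPTION 'cannot determine installed version, refusing to upgrade';
  END;
  IF from_ver NOT LIKE '2.5.%' AND from_ver NOT LIKE '3.%' THEN
    RAISE EXCEPTION 'upgrade from version % to 3.4.0 is not supported', from_ver;
  END IF;
END
$$;
```

Whether such a guard exists and fires is, of course, entirely up to the script author.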
This proposal just gives more expressiveness power to\nthose script authors.\n\n> So I really think this is a case of \"be careful what you ask\n> for, you might get it\". Even if PostGIS is willing to put in\n> the amount of infrastructure legwork needed to make such a\n> design bulletproof, I'm quite sure nobody else will manage\n> to use such a thing successfully.\n\nThis is why I initially made this something to be explicitly enabled\nby the .control file, which I can do again if it feels safer for you.\n\n> I'd rather spend our\n> development effort on a feature that has more than one use-case.\n\nUse case is any extension willing to support more than a single stable\nbranch while still allowing upgrading from newer-stable-bugfix-release\nto older-feature-release (ie: 3.2.10 -> 3.4.0 ). Does not seem so\nuncommon to me, for a big project. Maybe there aren't enough big\nextension-based projects out there ?\n\n--strk; \n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:10:40 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Jan 10, 2023 at 11:09:23PM -0500, Regina Obe wrote:\n\n> The only way we can fix that in the current setup, is to move to a minor\n> version mode which means we can \n> never do micro updates.\n\nOr just not with standard PostgreSQL syntax, because we can of course\ndo upgrades using the `SELECT postgis_extensions_upgrade()` call at\nthe moment, which, if you ask me, sounds MORE dangerous than the\nwildcard upgrade approach because the _implementation_ of that\nfunction will always be the OLD implementation, never the NEW one,\nso if bogus cannot be fixed by a new release w/out a way to upgrade\nthere...\n\n--strk;\n\n\n", "msg_date": "Wed, 11 Jan 2023 11:16:36 +0100", "msg_from": "'Sandro Santilli' <strk@kbt.io>", "msg_from_op": false, "msg_subject": 
"Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "This patch too is conflicting on meson.build. Are these two patches\ninterdependent?\n\nThis one looks a bit more controversial. I'm not sure if Tom has been\nconvinced and I haven't seen anyone else speak up.\n\nPerhaps this needs a bit more discussion of other options to solve\nthis issue. Maybe it can just be solved with multiple one-line scripts\nthat call to a master script?\n\n-- \nGregory Stark\nAs Commitfest Manager\n\n\n", "msg_date": "Mon, 6 Mar 2023 14:54:35 -0500", "msg_from": "\"Gregory Stark (as CFM)\" <stark.cfm@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> This patch too is conflicting on meson.build. Are these two patches\n> interdependent?\n> \n> This one looks a bit more controversial. I'm not sure if Tom has been\n> convinced and I haven't seen anyone else speak up.\n> \n> Perhaps this needs a bit more discussion of other options to solve this issue.\n> Maybe it can just be solved with multiple one-line scripts that call to a master\n> script?\n> \n> --\n> Gregory Stark\n> As Commitfest Manager\n\nThey edit the same file yes so yes conflicts are expected. 
\nThe wildcard one, Tom was not convinced, so I assume we'll need to \ngo a couple more rounds on it and hopefully we can get something that will not be so controversial.\n\nI don't think the schema qualifying required extension feature is controversial though so I think that should be able to go and we'll rebase our other patch after that.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Mon, 6 Mar 2023 16:37:39 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, Mar 06, 2023 at 02:54:35PM -0500, Gregory Stark (as CFM) wrote:\n> This patch too is conflicting on meson.build.\n\nI'm attaching a rebased version to this email.\n\n> Maybe it can just be solved with multiple one-line scripts\n> that call to a master script?\n\nNot really, as the problem we are trying to solve is the need\nto install many files, and the need to foresee any possible\nfuture bug-fix release we might want to support upgrades from.\n\nPostGIS is already installing zero-line scripts to upgrade\nfrom <old_version> to a virtual \"ANY\" version which we then\nuse to have a single ANY--<new_version> upgrade path, but we\nare still REQUIRED to install a file for any <old_version> we\nwant to allow upgrade from, which is what this patch is aimed\nat fixing.\n\n--strk;", "msg_date": "Tue, 7 Mar 2023 10:00:13 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Jan 10, 2023 at 6:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The script-file-per-upgrade-path aspect solves a problem that you\n> have, whether you admit it or not; I think you simply aren't realizing\n> that because you have not had to deal with the consequences of\n> your proposed feature. 
Namely that you won't have any control\n> over what the backend will try to do in terms of upgrade paths.\n>\n> As an example, suppose that a database has foo 4.0 installed, and\n> the DBA decides to try to downgrade to 3.0. With the system as it\n> stands, if you've provided foo--4.0--3.0.sql then the conversion\n> will go through, and presumably it will work because you tested that\n> that script does what it is intended to. If you haven't provided\n> any such downgrade script, then ALTER EXTENSION UPDATE will say\n> \"Sorry Dave, I'm afraid I can't do that\" and no harm is done.\n\nI mean, there is nothing to prevent the extension author from writing\na script which ensures that the extension membership after the\ndowngrade is exactly what it should be for version 3.0, regardless of\nwhat it was before. SQL is Turing-complete.\n\nThe thing that confuses me here is why the PostGIS folks are ending up\nwith so many files. We certainly don't have that problem with the\nextension that are being maintained in contrib, and I guess there is\nsome difference in versioning practice that is making it an issue for\nthem but not for us. I wish I understood what was going on there.\n\nBut that to one side, there is clearly a problem here, and I think\nPostgreSQL ought to provide some infrastructure to help solve it, even\nif the ultimate cause of that problem is that the PostGIS folks\nmanaged their extension versions in some less-than-ideal way. 
I can't\nshake the feeling that you're just holding your breath here and hoping\nthe problem goes away by itself, and judging by the responses, that\ndoesn't seem like it's going to happen.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 7 Mar 2023 12:39:34 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Tue, Jan 10, 2023 at 6:50 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> As an example, suppose that a database has foo 4.0 installed, and\n>> the DBA decides to try to downgrade to 3.0. With the system as it\n>> stands, if you've provided foo--4.0--3.0.sql then the conversion\n>> will go through, and presumably it will work because you tested that\n>> that script does what it is intended to. If you haven't provided\n>> any such downgrade script, then ALTER EXTENSION UPDATE will say\n>> \"Sorry Dave, I'm afraid I can't do that\" and no harm is done.\n\n> I mean, there is nothing to prevent the extension author from writing\n> a script which ensures that the extension membership after the\n> downgrade is exactly what it should be for version 3.0, regardless of\n> what it was before. SQL is Turing-complete.\n\nIt may be Turing-complete, but I seriously doubt that that's sufficient\nfor the problem I'm thinking of, which is to downgrade from an extension\nversion that you never heard of at the time the script was written.\nIn the worst case, that version might even contain objects of types\nyou never heard of and don't know how to drop.\n\nYou can imagine various approaches to deal with that; for instance,\nmaybe we could provide some kind of command to deal with dropping an\nobject identified by classid and objid, which the upgrade script\ncould scrape out of pg_depend. 
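For reference, the pg_depend scrape in question — listing the objects currently recorded as members of an extension — would look roughly like this (the extension name 'foo' is a placeholder):

```sql
-- Objects that belong to extension 'foo' (deptype 'e' = extension membership)
SELECT d.classid, d.objid, d.objsubid,
       pg_describe_object(d.classid, d.objid, d.objsubid) AS member
FROM pg_depend d
WHERE d.refclassid = 'pg_extension'::regclass
  AND d.refobjid = (SELECT oid FROM pg_extension WHERE extname = 'foo')
  AND d.deptype = 'e';
```

Turning that list into drops issued in dependency-aware order is the part for which no ready-made command exists.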
After writing even more code to issue\nthose drops in dependency-aware order, you could get on with modifying\nthe objects you do know about ... but maybe those now have properties\nyou never heard of and don't know how to adjust.\n\nWhether this is all theoretically possible is sort of moot. What\nI am maintaining is that no extension author is actually going to\nwrite such a script, indeed they probably won't trouble to write\nany downgrade-like actions at all. Which makes the proposed design\nmostly a foot-gun.\n\n> But that to one side, there is clearly a problem here, and I think\n> PostgreSQL ought to provide some infrastructure to help solve it, even\n> if the ultimate cause of that problem is that the PostGIS folks\n> managed their extension versions in some less-than-ideal way. I can't\n> shake the feeling that you're just holding your breath here and hoping\n> the problem goes away by itself, and judging by the responses, that\n> doesn't seem like it's going to happen.\n\nI'm not unsympathetic to the idea of trying to support multiple upgrade\npaths in one script. I just don't like this particular design for that,\nbecause it requires the extension author to make promises that nobody\nis actually going to deliver on.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Tue, 07 Mar 2023 14:13:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> The thing that confuses me here is why the PostGIS folks are ending up with\n> so many files. \n> We certainly don't have that problem with the extension that\n> are being maintained in contrib, and I guess there is some difference in\n> versioning practice that is making it an issue for them but not for us. I wish I\n> understood what was going on there.\n\nThe contrib files are minor versioned. 
PostGIS is micro versioned.\n\nSo we have for example postgis--3.3.0--3.3.1.sql\nAlso, we have 5 extensions that all ship micro versions, so multiply that issue by 5:\n\npostgis\npostgis_raster\npostgis_topology\npostgis_tiger_geocoder\npostgis_sfcgal\n\nThe other wrinkle we have that I don't think postgresql's contrib has is that we support\neach version across multiple PostgreSQL versions.\nSo for example our 3.3 series supports PostgreSQL 12-15 (with plans to also support 16).\n\n\n\n\n", "msg_date": "Tue, 7 Mar 2023 14:25:09 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> I'm not unsympathetic to the idea of trying to support multiple upgrade paths\n> in one script. I just don't like this particular design for that, because it\n> requires the extension author to make promises that nobody is actually going\n> to deliver on.\n> \n> \t\t\tregards, tom lane\n\nHow about the idea I mentioned, of revising the patch to read versioned upgrade paths from the control file\nrather than relying on said files to exist?\n\nhttps://www.postgresql.org/message-id/000201d92572%247dcd8260%2479688720%24%40pcorp.us\n\nEven better, we could have an additional control file, something like\n\npostgis--paths.control\n\nthat has separate lines to itemize those paths. 
It would be nice if we could allow wild-cards in that, but I could live without that if we can stop shipping 300 empty files.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Tue, 7 Mar 2023 14:39:07 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Mar 07, 2023 at 12:39:34PM -0500, Robert Haas wrote:\n\n> The thing that confuses me here is why the PostGIS folks are ending up\n> with so many files.\n\nWe want to allow PostGIS users to upgrade from ANY previous version,\nbecause different distribution or user habit may result in people\nupgrading from ANY older release to the newest.\n\nNow, PostGIS has released a total of 164 versions:\n\n curl -s https://download.osgeo.org/postgis/source/ |\n grep -c '.tar.gz<'\n\nIn *addition* to these, most developers end up installing a \"dev\"\nsuffixed version to distinguish it from the official releases,\nwhich means there's a total of 164*2 = 328 version *per extension*.\n\nNow, PostGIS doesn't install a single extension but multiple ones:\n\n - postgis\n - postgis_raster\n - postgis_topology\n - postgis_sfcgal\n - postgis_tiger_geocoder\n - address_standardizer\n\nSo we'd be up to 328 * 6 = 1968 files needed to upgrade from ANY\npostgis extension version to CURRENT version.\n\nThe numbers are slightly less because not all versions of PostGIS\ndid support EXTENSION (wasn't even available in early PostGIS versions)\n\n> We certainly don't have that problem with the\n> extension that are being maintained in contrib, and I guess there is\n> some difference in versioning practice that is making it an issue for\n> them but not for us. I wish I understood what was going on there.\n\nOur extension versioning has 3 numbers, but PostgreSQL doesn't really\ncare about this right ? 
It's just that we've had 328 different\nversions released and more are to come, and we want users to be able\nto upgrade from each one of them.\n\nI guess you may suggest that we do NOT increas the *EXTENSION* version\nnumber UNLESS something really changes at SQL level, but honestly I\ndoubt we EVER released a version with no changes at the SQL level\n(maybe in some occasion for really bad bugs which were only fixed at\nthe C level).\n\n--strk;\n\n\n", "msg_date": "Wed, 8 Mar 2023 13:27:36 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Mar 07, 2023 at 02:13:07PM -0500, Tom Lane wrote:\n\n> What I am maintaining is that no extension author is actually going\n> to write such a script, indeed they probably won't trouble to write\n> any downgrade-like actions at all. Which makes the proposed design\n> mostly a foot-gun.\n\nWhat I'm maintaining is that such authors should be warned about\nthe risk, and discouraged from installing any wildcard-containing\nscript UNLESS they deal with downgrade protection.\n\nPostGIS does deal with that kind of protection (yes, could be helped\nsomehow in doing that by PostgreSQL).\n\n> I'm not unsympathetic to the idea of trying to support multiple upgrade\n> paths in one script. I just don't like this particular design for that,\n> because it requires the extension author to make promises that nobody\n> is actually going to deliver on.\n\nWould you be ok with a stricter pattern matching ? 
Something like:\n\n postgis--3.3.%--3.3.ANY.sql\n postgis--3.3.ANY--3.4.0.sql\n\nWould that be easier to promise something about ?\n\n--strk;\n\n\n", "msg_date": "Wed, 8 Mar 2023 13:32:07 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Wed, Mar 8, 2023 at 7:27 AM Sandro Santilli <strk@kbt.io> wrote:\n> Now, PostGIS has released a total of 164 versions:\n\nThat is a LOT of versions. PostGIS must release really frequently, I guess?\n\n> I guess you may suggest that we do NOT increas the *EXTENSION* version\n> number UNLESS something really changes at SQL level, but honestly I\n> doubt we EVER released a version with no changes at the SQL level\n> (maybe in some occasion for really bad bugs which were only fixed at\n> the C level).\n\nWell, that's certainly the design intention here. The point of the\nextension version number from the point of PostgreSQL is to provide a\nway to cause SQL changes to happen via extension upgrade steps, so if\nthere's no SQL change required then the version number change has no\npurpose from PostgreSQL's point of view. But I do see a problem with\nthat theory, which is that you might quite reasonably want the\nextension version number and your own release version number to match.\nThat problem doesn't arise for the extensions bundled with core\nbecause every new core release has a new PostgreSQL version number,\nwhich is entirely sufficient to identify the fact that there may have\nbeen C code changes, so we have no motivation to bump the extension\nversion number unless we need a SQL change. This is true even across\nmajor releases -- an extension in contrib can go for years without an\nSQL version bump even though the C code may change repeatedly to\naccommodate new major releases.\n\nI wonder if a solution to this problem might be to provide some kind\nof a version map file. 
Let's suppose that the map file is a bunch of\nlines of the form X Y Z, where X, Y, and Z are version numbers. The\nsemantics could be: we (the extension authors) promise that if you\nwant to upgrade from X to Z, it suffices to run the script that knows\nhow to upgrade from Y to Z. This would address Tom's concern, because\nif you write a master upgrade script, you have to explicitly declare\nthe versions to which it applies. But it gives you a way to do that\nwhich does not require creating a bazillion empty files. Instead you\ncan just declare that if you're running the upgrade script from\n14.36.279 to 14.36.280, that also suffices for an upgrade from\n14.36.278 or 14.36.277 or 14.36.276 or ....\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Wed, 8 Mar 2023 08:27:29 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> I wonder if a solution to this problem might be to provide some kind of a\n> version map file. Let's suppose that the map file is a bunch of lines of the form\n> X Y Z, where X, Y, and Z are version numbers. The semantics could be: we (the\n> extension authors) promise that if you want to upgrade from X to Z, it suffices\n> to run the script that knows how to upgrade from Y to Z. This would address\n> Tom's concern, because if you write a master upgrade script, you have to\n> explicitly declare the versions to which it applies. But it gives you a way to do\n> that which does not require creating a bazillion empty files. 
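For concreteness, the resolution rule being proposed can be sketched in a few lines (hypothetical file naming and map format; this only illustrates the "X Y Z" semantics, it is not what any patch implements):

```python
def load_map(lines):
    """Parse 'X Y Z' map lines: an upgrade X -> Z is promised to be
    served by the Y--Z upgrade script."""
    alias = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        x, y, z = line.split()
        alias[(x, z)] = y
    return alias

def upgrade_script(old, new, available, alias, extname="foo"):
    """Pick the script file to run for old -> new, honoring the map."""
    direct = f"{extname}--{old}--{new}.sql"
    if direct in available:
        return direct
    y = alias.get((old, new))
    if y is not None:
        mapped = f"{extname}--{y}--{new}.sql"
        if mapped in available:
            return mapped
    return None  # no single-step path; real code would search multi-step chains
```

With a map line "14.36.278 14.36.279 14.36.280", an upgrade from 14.36.278 to 14.36.280 resolves to the 14.36.279--14.36.280 script, so the author declares coverage explicitly without shipping an empty file per old version.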
Instead you can\n> just declare that if you're running the upgrade script from\n> 14.36.279 to 14.36.280, that also suffices for an upgrade from\n> 14.36.278 or 14.36.277 or 14.36.276 or ....\n> \n> --\n> Robert Haas\n> EDB: http://www.enterprisedb.com\n\nThat's what I was proposing as a compromise.\nJust not sure what the appropriate name would be for the map file.\n\nOriginally I thought maybe we could put it in the .control, but decided \nit's better to have as a separate file that way we don't need to change the control and just add this extra file for PostgreSQL 16+. \n\nThen question arises if you have such a map file and you have files as well (the old way).\nWould we meld the two, so the map file would be used to simulate the file paths and these get added on as extra target paths or should we just assume if there is a map file, then that takes precedence over any paths inferred by files in the system.\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Wed, 8 Mar 2023 15:18:06 -0500", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Wed, Mar 08, 2023 at 08:27:29AM -0500, Robert Haas wrote:\n\n> I wonder if a solution to this problem might be to provide some kind\n> of a version map file. Let's suppose that the map file is a bunch of\n> lines of the form X Y Z, where X, Y, and Z are version numbers. 
The\n> semantics could be: we (the extension authors) promise that if you\n> want to upgrade from X to Z, it suffices to run the script that knows\n> how to upgrade from Y to Z.\n> This would address Tom's concern, because\n> if you write a master upgrade script, you have to explicitly declare\n> the versions to which it applies.\n\nThis would just move the problem from having 1968 files to having\nto write 1968 lines in control files, and does not solve the problem\nof allowing people with an hot-patched old version not being able to\nupgrade to a newer (bug-free) version released before the hot-patch.\n\nWe maintain multiple stable branches (5, at the moment: 2.5, 3.0, 3.1,\n3.2, 3.3) and would like to allow anyone running an old patched\nversion to still upgrade to a newer version released earlier.\n\n--strk;\n\n Libre GIS consultant/developer\n https://strk.kbt.io/services.html\n\n\n", "msg_date": "Mon, 13 Mar 2023 13:22:10 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Wed, Mar 08, 2023 at 03:18:06PM -0500, Regina Obe wrote:\n> \n> Then question arises if you have such a map file and you have files as well (the old way).\n\nOne idea I had in the past about the .control file was to\nadvertise an executable that would take current version and next\nversion and return a list of paths to SQL files to use to upgrade.\n\nThen our script could decide what to do, w/out Tom looking :P\n\n--strk;\n\n\n", "msg_date": "Mon, 13 Mar 2023 13:24:40 +0100", "msg_from": "'Sandro Santilli' <strk@kbt.io>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> > I wonder if a solution to this problem might be to provide some kind\n> > of a version map file. Let's suppose that the map file is a bunch of\n> > lines of the form X Y Z, where X, Y, and Z are version numbers. 
The\n> > semantics could be: we (the extension authors) promise that if you\n> > want to upgrade from X to Z, it suffices to run the script that knows\n> > how to upgrade from Y to Z.\n> > This would address Tom's concern, because if you write a master\n> > upgrade script, you have to explicitly declare the versions to which\n> > it applies.\n> \n> This would just move the problem from having 1968 files to having to write\n> 1968 lines in control files, \n\n1968 lines in one control file, is still much nicer than 1968 files in my\nbook.\n From a packaging standpoint also much cleaner, as that single file gets\nreplaced with each upgrade and no need to uninstall more junk from prior\ninstall.\n\n> We maintain multiple stable branches (5, at the moment: 2.5, 3.0, 3.1,\n3.2,\n> 3.3) and would like to allow anyone running an old patched version to\nstill\n> upgrade to a newer version released earlier.\n> \n> --strk;\n> \n> Libre GIS consultant/developer\n> https://strk.kbt.io/services.html\n\nThat said, I really would like a wildcard option for issues strk mentioned.\n\nI still see the main use-case as for those that micro version and for this\nuse case, they would need a way, not necessarily to have a single upgrade\nscript, but a script for each minor.\n\nSo something like \n\n3.2.%--3.4.0 = 3.2--3.4.0\n\nIn many cases, they don't even need anything done in an upgrade step, aside\nfrom moving the dial button on extension up a notch to coincide with the lib\nversion.\n\nIn our case, yes ours would be below because we already block downgrades\ninternally in our scripts\n\n%--3.4.0 = ANY--3.4.0\n\nOr we could move to a \n\n2.%--3.4.0 = 2--3.4.0\n3.%--3.4.0 = 3--3.4.0\n\nThanks,\nRegina\n\n\n\n\n\n\n", "msg_date": "Mon, 13 Mar 2023 14:48:56 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, Mar 13, 2023 at 02:48:56PM -0400, Regina Obe 
wrote:\n> \n> I still see the main use-case as for those that micro version and for this\n> use case, they would need a way, not necessarily to have a single upgrade\n> script, but a script for each minor.\n> \n> So something like \n> \n> 3.2.%--3.4.0 = 3.2--3.4.0\n\nI could implement this too if there's an agreement about it.\nFor now I'm attaching an updated patch with conflicts resolved.\n\n--strk;", "msg_date": "Tue, 28 Mar 2023 00:58:47 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "I want to chime in on the issue of lower-number releases that are released\nafter higher-number releases. The way I see this particular problem is that\nwe always put upgrade SQL files in release \"packages,\" and they obviously\nbecome static resources.\n\nWhile I [intentionally] overlook some details here, what if (as a\nconvention, for projects where it matters) we shipped extensions with\nnon-upgrade SQL files only, and upgrades were available as separate\ndownloads? This way, we're not tying releases themselves to upgrade paths.\nThis also requires no changes to Postgres.\n\nI know this may be a big delivery layout departure for well-established\nprojects; I also understand that this won't solve the problem of having to\nhave these files in the first place (though in many cases, they can be\nautomatically generated once, I suppose, if they are trivial).\n\n\nOn Tue, Jan 10, 2023 at 5:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> I continue to think that this is a fundamentally bad idea. It creates\n> all sorts of uncertainties about what is a valid update path and what\n> is not. 
Restrictions like\n>\n> + Such wildcard update\n> + scripts will only be used when no explicit path is found from\n> + old to target version.\n>\n> are just band-aids to try to cover up the worst problems.\n>\n> Have you considered the idea of instead inventing a \"\\include\" facility\n> for extension scripts? Then, if you want to use one-monster-script\n> to handle different upgrade cases, you still need one script file for\n> each supported upgrade step, but those can be one-liners including the\n> common script file. Plus, such a facility could be of use to people\n> who want intermediate factorization solutions (that is, some sharing\n> of code without buying all the way into one-monster-script).\n>\n> regards, tom lane\n>\n>\n>\n\n-- \nY.\n", "msg_date": "Mon, 3 Apr 2023 09:26:25 +0700", "msg_from": "Yurii Rashkovskii <yrashk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, Apr 03, 2023 at 09:26:25AM +0700, Yurii Rashkovskii wrote:\n> I want to chime in on the issue of lower-number releases that are released\n> after higher-number releases. The way I see this particular problem is that\n> we always put upgrade SQL files in release \"packages,\" and they obviously\n> become static resources.\n> \n> While I [intentionally] overlook some details here, what if (as a\n> convention, for projects where it matters) we shipped extensions with\n> non-upgrade SQL files only, and upgrades were available as separate\n> downloads? This way, we're not tying releases themselves to upgrade paths.\n> This also requires no changes to Postgres.\n\nThis is actually something that's on the plate, and we recently\nadded a --disable-extension-upgrades-install configure switch\nand a `install-extension-upgrades-from-known-versions` make target\nin PostGIS to help going in that direction. 
I guess the ball would\nnow be in the hands of packagers.\n\n> I know this may be a big delivery layout departure for well-established\n> projects; I also understand that this won't solve the problem of having to\n> have these files in the first place (though in many cases, they can be\n> automatically generated once, I suppose, if they are trivial).\n\nWe will now also be providing a `postgis` script for administration\nthat among other things will support a `install-extension-upgrades`\ncommand to install upgrade paths from specific old versions, but will\nnot hard-code any list of \"known\" old versions as such a list will\neasily become outdated.\n\nNote that all upgrade scripts installed by the Makefile target or by\nthe `postgis` scripts will only be empty upgrade paths from the source\nversion to the fake \"ANY\" version, as the ANY--<current> upgrade path\nwill still be the \"catch-all\" upgrade script.\n\n--strk;\n\n> On Tue, Jan 10, 2023 at 5:52 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > I continue to think that this is a fundamentally bad idea. It creates\n> > all sorts of uncertainties about what is a valid update path and what\n> > is not. Restrictions like\n> >\n> > + Such wildcard update\n> > + scripts will only be used when no explicit path is found from\n> > + old to target version.\n> >\n> > are just band-aids to try to cover up the worst problems.\n> >\n> > Have you considered the idea of instead inventing a \"\\include\" facility\n> > for extension scripts? Then, if you want to use one-monster-script\n> > to handle different upgrade cases, you still need one script file for\n> > each supported upgrade step, but those can be one-liners including the\n> > common script file. 
Plus, such a facility could be of use to people\n> > who want intermediate factorization solutions (that is, some sharing\n> > of code without buying all the way into one-monster-script).\n> >\n> > regards, tom lane\n> >\n> >\n> >\n> \n> -- \n> Y.\n\n\n", "msg_date": "Sun, 9 Apr 2023 22:46:29 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> On Mon, Apr 03, 2023 at 09:26:25AM +0700, Yurii Rashkovskii wrote:\n> > I want to chime in on the issue of lower-number releases that are\n> > released after higher-number releases. The way I see this particular\n> > problem is that we always put upgrade SQL files in release \"packages,\"\n> > and they obviously become static resources.\n> >\n> > While I [intentionally] overlook some details here, what if (as a\n> > convention, for projects where it matters) we shipped extensions with\n> > non-upgrade SQL files only, and upgrades were available as separate\n> > downloads? This way, we're not tying releases themselves to upgrade paths.\n> > This also requires no changes to Postgres.\n> \n> This is actually something that's on the plate, and we recently added a --\n> disable-extension-upgrades-install configure switch and a `install-extension-\n> upgrades-from-known-versions` make target in PostGIS to help going in that\n> direction. 
I guess the ball would now be in the hands of packagers.\n> \n> > I know this may be a big delivery layout departure for\n> > well-established projects; I also understand that this won't solve the\n> > problem of having to have these files in the first place (though in\n> > many cases, they can be automatically generated once, I suppose, if they are\n> trivial).\n> \n> We will now also be providing a `postgis` script for administration that among\n> other things will support a `install-extension-upgrades` command to install\n> upgrade paths from specific old versions, but will not hard-code any list of\n> \"known\" old versions as such a list will easily become outdated.\n> \n> Note that all upgrade scripts installed by the Makefile target or by the\n> `postgis` scripts will only be empty upgrade paths from the source version to\n> the fake \"ANY\" version, as the ANY--<current> upgrade path will still be the\n> \"catch-all\" upgrade script.\n> \n> --strk;\n> \n\nSounds like a user and packaging nightmare to me.\nHow is a packager to know which versions a user might have installed?\n\nAnd leaving that to users to run an extra command sounds painful. They barely know how to run\n\nALTER EXTENSION postgis UPDATE;\n\nOr pg_upgrade\n\nI much preferred the idea of just listing all our upgrade targets in the control file.\n\nThe main annoyance I have left is the 1000s of files that seem to grow forever.\n\nThis isn't just postgis. 
It's pgRouting, it's h3_pg, it's mobilitydb.\n\nI don't want to have to set this up for every single extension that does micro-updates.\nI just want a single file that can list all the target paths worst case.\n\nWe need to come up with a convention of how to describe a micro update, as it's really a problem with extensions that follow the pattern\n\nmajor.minor.micro\n\nIn almost all cases the minor upgrade script works for all micros of that minor and in postgis yes we have a single script that works for all cases.\n\nThanks,\nRegina\n\n\n\n\n\n\n \n\n\n\n", "msg_date": "Mon, 10 Apr 2023 23:09:40 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, Apr 10, 2023 at 11:09:40PM -0400, Regina Obe wrote:\n> > On Mon, Apr 03, 2023 at 09:26:25AM +0700, Yurii Rashkovskii wrote:\n> > > I want to chime in on the issue of lower-number releases that are\n> > > released after higher-number releases. The way I see this particular\n> > > problem is that we always put upgrade SQL files in release \"packages,\"\n> > > and they obviously become static resources.\n> > >\n> > > While I [intentionally] overlook some details here, what if (as a\n> > > convention, for projects where it matters) we shipped extensions with\n> > > non-upgrade SQL files only, and upgrades were available as separate\n> > > downloads? This way, we're not tying releases themselves to upgrade paths.\n> > > This also requires no changes to Postgres.\n> > \n> > This is actually something that's on the plate, and we recently added a --\n> > disable-extension-upgrades-install configure switch and a `install-extension-\n> > upgrades-from-known-versions` make target in PostGIS to help going in that\n> > direction. 
I guess the ball would now be in the hands of packagers.\n> > \n> > > I know this may be a big delivery layout departure for\n> > > well-established projects; I also understand that this won't solve the\n> > > problem of having to have these files in the first place (though in\n> > > many cases, they can be automatically generated once, I suppose, if they are\n> > trivial).\n> > \n> > We will now also be providing a `postgis` script for administration that among\n> > other things will support a `install-extension-upgrades` command to install\n> > upgrade paths from specific old versions, but will not hard-code any list of\n> > \"known\" old versions as such a list will easily become outdated.\n> > \n> > Note that all upgrade scripts installed by the Makefile target or by the\n> > `postgis` scripts will only be empty upgrade paths from the source version to\n> > the fake \"ANY\" version, as the ANY--<current> upgrade path will still be the\n> > \"catch-all\" upgrade script.\n> \n> Sounds like a user and packaging nightmare to me.\n> How is a packager to know which versions a user might have installed?\n\nIsn't this EXACTLY the same problem WE have ? The problem is we cannot\nknow in advance how many patch-level releases will come out, and we\ndon't want to hurry into cutting a new release in branches NOT having\na bug which was fixed in a stable branch...\n\nPackager might actually know better in that they could ONLY consider\nthe packages ever packaged by them.\n\nHey, best would be having support for wildcard wouldn't it ? 
\n\n> I much preferred the idea of just listing all our upgrade targets in the control file.\n\nAgain: how would we know all upgrade targets ?\n\n> We need to come up with a convention of how to describe a micro update,\n> as it's really a problem with extensions that follow the pattern\n\nI think it's a problem with extensions maintaining stable branches,\nas if the history was linear we would possibly need less files\n(although at this stage any number bigger than 1 would be too much for me)\n\n--strk;\n\n\n", "msg_date": "Tue, 11 Apr 2023 20:48:23 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> Packager might actually know better in that they could ONLY consider the\n> packages ever packaged by them.\n> \nI'm a special case packager cause I'm on the PostGIS project and I only\npackage postgis related extensions, but even I find this painful. \nBut for most packagers, I think they are juggling too many packages and too\nmany OS versions to micro manage the business of each package.\nIn my case my job is simple. I deal just with Windows and that doesn't\nchange from Windows version to Windows version (just PG specific).\nThink of upgrading from Debian 10 to Debian 12 - what would you as a PG\npackager expect people to be running and upgrading from?\nThey could be switching from say the main distro to the pgdg distro.\n\n> Hey, best would be having support for wildcard wouldn't it ?\n> \nFor PostGIS yes and any other extension that does nothing but add new\nfunctions or replaces existing ones. For others some minor handling would\nbe ideal, though I guess some other projects would be happy with a wildcard\n(e.g. 
pgRouting would prefer a wildcard) since most of the changes are just\nadditions of new functions or replacements of existing functions.\n\nFor something like h3-pg I think a simpler micro handling would be ideal,\nthough not sure.\nThey ship two extensions (one that is a bridge to postgis and their newest\ntakes advantage of postgis_raster too)\nhttps://github.com/zachasme/h3-pg/tree/main/h3_postgis/sql/updates\n\nhttps://github.com/zachasme/h3-pg/tree/main/h3/sql/updates \nMany of their upgrades are No-ops cause they really are just lib upgrades.\n\n\nI'm thinking maybe we should discuss these ideas with projects who would\nbenefit most from this:\n\n(many of which I'm familiar with because they are postgis offsprings and I\npackage or plan to package them - pgRouting, h3-pg, pgPointCloud,\nmobilityDb, \n\nNot PostGIS offspring:\n\nZomboDB - https://github.com/zombodb/zombodb/tree/master/sql - (most of\nthose are just changing versioning on the function call, so yah wildcard\nwould be cleaner there)\nTimescaleDB - https://github.com/timescale/timescaledb/tree/main/sql/updates\n( I don't think a wildcard would work here especially since they have some\ndowngrade paths, but is a useful example of a micro-level extension upgrade\npattern we should think about if we could make easier)\nhttps://github.com/timescale/timescaledb/tree/main/sql/updates \n\n\n> > I much preferred the idea of just listing all our upgrade targets in the\n> control file.\n> \n> Again: how would we know all upgrade targets ?\n> \nWe already do, remember you wrote it :)\n\nhttps://git.osgeo.org/gitea/postgis/postgis/src/branch/master/extensions/upg\nradeable_versions.mk\n\nYes it does require manual updating each release cycle (and putting in\nversions from just released stable branches). I can live with continuing\nwith that exercise. 
It was a nice to have but is not nearly as annoying as\n1000 scripts or as a packager trying to catalog what versions of packages\nhave I released that I need to worry about.\n\n\n> > We need to come up with a convention of how to describe a micro\n> > update, as it's really a problem with extensions that follow the\n> > pattern\n> \n> I think it's a problem with extensions maintaining stable branches, as if\nthe\n> history was linear we would possibly need less files (although at this\nstage\n> any number bigger than 1 would be too much for me)\n> \n> --strk;\n\nI'm a woman of compromise. Sure 1 file would be ideal, but \nI'd rather live with a big file listing all version upgrades than 1000 files\nwith the same information.\n\nIt's cleaner to read a single file than make sense of a pile of files.\n\n\n\n\n\n", "msg_date": "Tue, 11 Apr 2023 16:36:04 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Apr 11, 2023 at 04:36:04PM -0400, Regina Obe wrote:\n> \n> > Hey, best would be having support for wildcard wouldn't it ?\n> \n> I'm a woman of compromise. Sure 1 file would be ideal, but \n> I'd rather live with a big file listing all version upgrades\n> than 1000 files with the same information.\n\nPersonally I don't see the benefit of 1 big file vs. 
many 0-length\nfiles to justify the cost (time and complexity) of a PostgreSQL change,\nwith the corresponding cost of making use of this new functionality\nbased on PostgreSQL version.\n\nWe'd still have the problem of missing upgrade paths unless we release\na new version of PostGIS even if it's ONLY for the sake of updating\nthat 1 big file (or adding a new file, in the current situation).\n\n--strk;\n\n\n", "msg_date": "Tue, 11 Apr 2023 23:27:37 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> Personally I don't see the benefit of 1 big file vs. many 0-length files\nto justify\n> the cost (time and complexity) of a PostgreSQL change, with the\n> corresponding cost of making use of this new functionality based on\n> PostgreSQL version.\n> \n From a packaging stand-point 1 big file is better than tons of 0-length\nfiles. \nFewer files to uninstall and to double check when testing.\n\nAs to the added complexity agree it's more but in my mind worth it if we\ncould get rid of this mountain of files.\nBut my vote would be the wild-card solution as I think it would serve more\nthan just postgis need.\nAny project that rarely does anything but add, remove, or modify functions\ndoesn't really need multiple upgrade scripts and \nI think quite a few extensions fall in that boat.\n\n> We'd still have the problem of missing upgrade paths unless we release a\nnew\n> version of PostGIS even if it's ONLY for the sake of updating that 1 big\nfile (or\n> adding a new file, in the current situation).\n> \n> --strk;\n\nI think we always have more releases with newer stable versions than older\nstable versions.\nI can't remember a case when we had this ONLY issue.\nIf there is one fix in an older stable, version we usually wait for several\nmore before bothering with a release \nand then all the stables are released around the same time. 
So I don't see\nthe ONLY being a real problem.\n\n\n\n\n\n", "msg_date": "Tue, 11 Apr 2023 18:19:12 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Here are my thoughts of how this can work to satisfy our specific needs and\nthat of others who have many micro versions.\n\n1) We define an additional file. I'll call this a paths file\n\nSo for example postgis would have a \n\npostgis.paths file\n\nThe format of the path file would be of the form\n\n<version pattern1>,<version pattern2> => 3.3.0--3.4.0\n\nIt will also allow a wildcard option\n% => ANY--3.4.0.sql\n\nSo a postgis.paths with multiple lines might look like\n\n3.2.0,3.2.1 => 3.2.2--3.3.0\n3.3.% => 3.3--3.4.0\n% => ANY--3.4.0\n\n2) The order of precedence would be:\n \na) physical files are always used first\nb) If no physical path is present on disk, then it looks at a\n<component>.paths file to formulate virtual paths\nc) Longer mappings are preferred over shorter mappings\n\nSo that means the % => ANY--3.4.0 would be the path of last resort\n\nLet's say our current installed version of postgis is postgis VERSION 3.2.0\n\nThe above path formulated would be \n\n3.2.0 -> 3.3.0 -> 3.4.0\nThe simulated scripts used to get there would be\n\npostgis--3.2.2--3.3.0.sql\npostgis--3.3.0--3.4.0.sql\n\n\nThis however does not solve the issue of downgrades, which admittedly\npostgis is not concerned about since we have accounted for that in our\ninternal scripts.\n\nSo we might have an issue with having a bare %. 
If we don't allow a bare %\n\nThen our postgis patterns might look something like:\n\n3.%, 2.% => ANY --3.4.0\n\nWhich would mean 3.0.1, 3.0.2, 3.2.etc would all use the same script.\n\nWhich would still be a major improvement from what we have today and\nminimizes the likelihood of downgrade footguns.\n\nThoughts anyone?\n\n\n\n\n\n\n\n\n", "msg_date": "Thu, 13 Apr 2023 18:28:10 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> Here are my thoughts of how this can work to satisfy our specific needs\nand\n> that of others who have many micro versions.\n> \n> 1) We define an additional file. I'll call this a paths file\n> \n> So for example postgis would have a\n> \n> postgis.paths file\n> \n> The format of the path file would be of the form\n> \n> <version pattern1>,<version pattern2> => 3.3.0--3.4.0\n> \n> It will also allow a wildcard option\n> % => ANY--3.4.0.sql\n> \n> So a postgis.paths with multiple lines might look like\n> \n> 3.2.0,3.2.1 => 3.2.2--3.3.0\n> 3.3.% => 3.3--3.4.0\n> % => ANY--3.4.0\n> \n> 2) The order of precedence would be:\n> \n> a) physical files are always used first\n> b) If no physical path is present on disk, then it looks at a\n<component>.paths\n> file to formulate virtual paths\n> c) Longer mappings are preferred over shorter mappings\n> \n> So that means the % => ANY--3.4.0 would be the path of last resort\n> \n> Let's say our current installed version of postgis is postgis VERSION\n3.2.0\n> \n> The above path formulated would be\n> \n> 3.2.0 -> 3.3.0 -> 3.4.0\n> The simulated scripts used to get there would be\n> \n> postgis--3.2.2--3.3.0.sql\n> postgis--3.3.0--3.4.0.sql\n> \n> \n> This however does not solve the issue of downgrades, which admittedly\n> postgis is not concerned about since we have accounted for that in our\n> internal scripts.\n> \n> So we might have an issue with having a bare %. 
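As a rough illustration of the precedence rules in the quoted proposal (this is a sketch, not an implementation in PostgreSQL or the patch), the lookup could behave like the following. The `resolve_step` name, the in-memory representation of the `.paths` file, and the translation of the SQL-style `%` wildcard into a shell-style glob are all assumptions made for the example:

```python
# Sketch of the proposed <extension>.paths resolution rules:
# a) a physical <from>--<to> script on disk always wins,
# b) otherwise the paths file supplies a "virtual" path,
# c) longer (more specific) patterns beat shorter ones, so the
#    bare "%" mapping is the path of last resort.
import fnmatch

def resolve_step(installed, paths_map, physical_scripts):
    """Return the upgrade script to run from `installed`, or None."""
    # a) physical files are always used first
    for script in physical_scripts:
        if script.startswith(installed + "--"):
            return script
    # b) consult the paths file; c) prefer the longest matching pattern
    candidates = []
    for patterns, script in paths_map:
        for pat in patterns:
            # '%' behaves like the SQL LIKE wildcard; reuse glob '*'
            if fnmatch.fnmatchcase(installed, pat.replace("%", "*")):
                candidates.append((len(pat), script))
    if not candidates:
        return None
    return max(candidates)[1]

# The example mapping from the proposal above
paths_map = [
    (["3.2.0", "3.2.1"], "3.2.2--3.3.0"),
    (["3.3.%"], "3.3--3.4.0"),
    (["%"], "ANY--3.4.0"),
]

print(resolve_step("3.2.0", paths_map, []))  # 3.2.2--3.3.0
print(resolve_step("3.3.1", paths_map, []))  # 3.3--3.4.0
print(resolve_step("2.5.4", paths_map, []))  # ANY--3.4.0
```

With a physical `3.2.0--3.4.0` script installed, rule (a) would short-circuit the mapping entirely, which matches the "physical files are always used first" precedence.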
If we don't allow a bare %\n> \n> Then our postgis patterns might look something like:\n> \n> 3.%, 2.% => ANY --3.4.0\n> \n> Which would mean 3.0.1, 3.0.2, 3.2.etc would all use the same script.\n> \n> Which would still be a major improvement from what we have today and\n> minimizes the likelihood of downgrade footguns.\n> \n> Thoughts anyone?\n> \n\nMinor correction: scripts to get from 3.2.0 to 3.4.0 would be:\n\npostgis--3.2.2--3.3.0.sql\npostgis--3.3--3.4.0.sql\n\n\n\n", "msg_date": "Thu, 13 Apr 2023 18:32:55 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Hi All,\n\nI've done upgrade maintenance for multiple extensions now (TimescaleDB\ncore, Promscale) and I think the original suggestion (wildcard filenames\nwith control-file switch to enable) here is a good one.\n\nHaving one big upgrade file, at its core, enables you to move the\nlogic for \"what parts of the script to run for an upgrade from version V1\nto V2\" from a script that generates .sql files to Plpgsql in the form of\nsomething like:\n\nDO$$\n\n IF <guard> THEN\n\n<execute upgrade>\n\n <record metadata>\n\n END IF;\n\n$$;\n\nWhere the guard can check postgres catalogs, internal extension catalogs,\nor anything else the extension wants. What we have done in the past is\nrecord executions of non-idempotent migration scripts in an\nextension catalog table and simply check if that row exists in the guard.\nThis allows for much easier management of backporting patches, for\ninstance. Having the full power of Plpgsql makes all of this much easier\n(and more flexible) than in a build script that compiles various V1--v2.sql\nupgrade files.\n\nCurrently, when we did this in Promscale, we also added V1--v2.sql upgrade\nfiles as symlinks to \"the one big file\". 
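To make the guarded one-big-script flow above concrete, here is a minimal Python simulation of the same control structure (the real thing would be the PL/pgSQL DO block sketched in the message; the `run_step` helper, the step names, and the in-memory "migration log" standing in for an extension catalog table are invented for illustration):

```python
# Simulation of guarded, non-idempotent migration steps: each step checks
# a "migration log" before running, so the same catch-all script is safe
# to execute from any starting version.

applied = set()  # stands in for an extension catalog table of applied steps
log = []         # records what actually executed

def run_step(name, action):
    """Execute `action` only if `name` was never applied, then record it."""
    if name in applied:   # the IF <guard> THEN part of the DO block
        return False
    action()              # the <execute upgrade> part
    applied.add(name)     # the <record metadata> part
    return True

run_step("2023-04-add-col", lambda: log.append("ALTER TABLE ... ADD COLUMN"))
run_step("2023-04-add-col", lambda: log.append("ALTER TABLE ... ADD COLUMN"))
print(log)  # the second call is skipped: ['ALTER TABLE ... ADD COLUMN']
```

Rerunning the whole script is then harmless, which is what makes a single wildcard upgrade file workable for non-idempotent changes.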
Not the end of the world, but I\nthink a patch like the one suggested in this thread will make things a lot\nnicer.\n\nAs for Tom's concern about downgrades, I think it's valid but it's a case\nthat is easy to test for in Plpgsql and either handle or error. For\nexample, we use semver so testing for a downgrade at the top of the upgrade\nscript is trivial. I think this concern is a good reason to have this\nfunctionality enabled in the control file. That way, this issue can be\ndocumented and then only extensions that test for valid upgrades in the\nscript can enable this. Such an \"opt-in\" prevents people who haven't\nthought of the need to validate the version changes from using this by\naccident.\n\nI'll add two more related points:\n1) When the extension framework was first implemented I don't believe DO\nblocks existed, so handling upgrade logic in the script itself just wasn't\npossible. That's why I think rethinking some of the design now makes sense.\n2) This change also makes it easier for extensions that use versioned .so\nfiles (by that I mean uses extension-<version>.so rather than\nextension.so). Because such extensions can't really use the chaining\nproperty of the existing upgrade system and so need to write a\ndirect X--Y.sql migration file for every prior version X. I know the system\nwasn't designed for this, but in reality a lot of extensions do this.\nEspecially the more complex ones.\n\nHopefully this is helpful,\nMat", "msg_date": "Mon, 24 Apr 2023 13:06:24 -0400", "msg_from": "Mat Arye <mat@timescaledb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> This change also makes it easier for extensions that use versioned .so files (by that I mean uses extension-<version>.so rather than extension.so). \n> Because such extensions can't really use the chaining property of the existing upgrade system and so need to write a direct X--Y.sql migration file for \n> every prior version X. I know the system wasn't designed for this, but in reality a lot of extensions do this. Especially the more complex ones.\n\n> Hopefully this is helpful,\n> Mat\n\nThanks for the feedback. 
Yes this idea of versioned .so is also one of the reasons why we always have to run a replace on all functions.\n\nLike for example on the PostGIS side, we have the option of full minor (so the lib could be postgis-3.3.so or postgis-3.so)\n\nFor development I always include the minor version and the windows interim builds I build against our master branch have the minor.\nBut the idea is someone can upgrade to a stable version, so the script needs to always have the .so version in it to account for this shifting of options.\n\nThanks,\nRegina\t\n\n\n\n", "msg_date": "Mon, 24 Apr 2023 14:00:30 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, Apr 24, 2023 at 01:06:24PM -0400, Mat Arye wrote:\n> Hi All,\n> \n> I've done upgrade maintenance for multiple extensions now (TimescaleDB\n> core, Promscale) and I think the original suggestion (wildcard filenames\n> with control-file switch to enable) here is a good one.\n\nThanks for your comment, Mat.\n\nI'm happy to bring back the control-file switch if there's an\nagreement about that.\n\nThe rationale for me to add that was solely to be 100% sure about not\nbreaking any extension using the \"%\" character as an actual version.\n\nI never considered it a security measure because the author of the control\nfile is the same as the author of the upgrade scripts so shipping a\nwildcard upgrade script could be seen as a switch itself.\n\n[...]\n\n> As for Tom's concern about downgrades, I think it's valid but it's a case\n> that is easy to test for in Plpgsql and either handle or error. 
For\n> example, we use semver so testing for a downgrade at the top of the upgrade\n> script is trivial.\n\nI'd say it could be made even easier if PostgreSQL itself would provide\na variable (or other way to fetch it) for the \"source\" version of the extension\nduring execution of the \"source\"--\"target\" upgrade.\n\nI'm saying this because PostGIS fetches this version by calling a\nPostGIS function, but some extensions might have such a \"version\"\nfunction pointing to a C file that doesn't exist anymore at the time\nof upgrade and thus would be left unable to rely on\ncalling it.\n\n--strk;\n\n\n", "msg_date": "Thu, 27 Apr 2023 12:49:57 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> \n> > As for Tom's concern about downgrades, I think it's valid but it's a\n> > case that is easy to test for in Plpgsql and either handle or error.\n> > For example, we use semver so testing for a downgrade at the top of\n> > the upgrade script is trivial.\n> \n\nI was thinking about this more. The extension model as it stands doesn't\nallow an extension author to define a version hierarchy. 
We handle this\ninternally in our scripts to detect a downgrade attempt and stop it similar\nto what Mat is saying.\n\nMost of that is basically converting our version string to a numeric number\nwhich we largely do with a regex pattern.\n\nI was thinking if there were some way to codify that regex pattern in our\ncontrol file, then wild cards can only be applied if such a pattern is\ndefined and the \n\n%--<target version>\n\nThe % has to be numerically before the target version.\n\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Fri, 28 Apr 2023 08:10:00 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "(I'm the developer of ZomboDB and pgrx, which while not an extension per se, allows others to make extensions that then need upgrade scripts. So this topic is interesting to me.)\n\n> On Mar 13, 2023, at 2:48 PM, Regina Obe <lr@pcorp.us> wrote:\n> \n>>> I wonder if a solution to this problem might be to provide some kind\n>>> of a version map file. Let's suppose that the map file is a bunch of\n>>> lines of the form X Y Z, where X, Y, and Z are version numbers. The\n>>> semantics could be: we (the extension authors) promise that if you\n>>> want to upgrade from X to Z, it suffices to run the script that knows\n>>> how to upgrade from Y to Z.\n>>> This would address Tom's concern, because if you write a master\n>>> upgrade script, you have to explicitly declare the versions to which\n>>> it applies.\n>> \n>> This would just move the problem from having 1968 files to having to write\n>> 1968 lines in control files, \n> \n> 1968 lines in one control file, is still much nicer than 1968 files in my\n> book.\n> From a packaging standpoint also much cleaner, as that single file gets\n> replaced with each upgrade and no need to uninstall more junk from prior\n> install.\n\n\nI tend to agree with this. I like this mapping file idea. 
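The downgrade check discussed above (converting a `major.minor.micro` version string to something numerically comparable with a regex, then refusing a lower target) can be sketched as follows. The regex, the function names, and the error message are assumptions for illustration, not what PostGIS actually ships:

```python
# Sketch of a downgrade guard: parse major.minor.micro into a tuple and
# refuse to "upgrade" to a numerically lower version. An extension's real
# upgrade script would run the equivalent test in PL/pgSQL at the top of
# its catch-all script.
import re

_VER = re.compile(r"^(\d+)\.(\d+)\.(\d+)")

def version_key(v):
    """Turn '3.2.1' into the comparable tuple (3, 2, 1)."""
    m = _VER.match(v)
    if not m:
        raise ValueError(f"unparseable version: {v}")
    return tuple(int(g) for g in m.groups())

def check_upgrade(old, new):
    """Raise if the requested path is actually a downgrade."""
    if version_key(new) < version_key(old):
        raise ValueError(f"downgrade {old} -> {new} is not supported")

check_upgrade("3.2.1", "3.4.0")   # fine, no error
try:
    check_upgrade("3.4.0", "3.2.1")
except ValueError as e:
    print(e)  # downgrade 3.4.0 -> 3.2.1 is not supported
```

Because the comparison is on integer tuples rather than strings, `10.0.0` correctly sorts after `9.9.9`, which a plain string comparison would get wrong.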
Allowing an extension to declare what to use (Z) to get from X to Y is great. In theory that's not much different than having a bunch of individual files, but in practice an extension likely could pare their actual file set down to just a few, and dealing with less files is a big win. And then the mapping file would allow the extension author to make the tree as complex as necessary.\n\n(I have some hand-wavy ideas for pgrx and autogenerating upgrade scripts and a file like this could help quite a bit. Rust and rust-adjacent things prefer explicitness)\n\nIf this \"wildcard in a filename\" idea existed back in 2015 when I started ZomboDB I'm not sure I'd have used it, and I'm not sure I'd want to switch to using it now. The ambiguities are too great when what I want is something that's obvious. \n\nPrimarily, ZDB purposely doesn't support downgrade paths so I wouldn't want to use a pattern that implies it does.\n\nZomboDB has 137 releases over the past 8 years. Each one of those adds an upgrade script from OLDVER--NEWVER. Prior to the Rust rewrite, these were all hand-generated, and sometimes were just zero bytes. I've since developed tooling to auto-generate upgrade scripts by diffing the full schemas for OLDVER and NEWVER and emitting whatever DDL can move OLDVER to NEWVER. In practice this usually generates an upgrade script that just replaces 1 function... the \"zdb.version()\" function. Of course, I've had to hand-edit these on occasion as well -- this is not a perfect system.\n\nIt might just be the nature of our extensions, but I don't recall ever needing DO statements in an upgrade script. The extension.sql has a few, one of which is to optionally enable PostGIS support! haha ZDB is fairly complex too. Hundreds of functions, dozens of types, an IAM implementation, a dozen views, a few tables, some operators. 
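In the common degenerate case described above, an auto-generated upgrade script reduces to a single CREATE OR REPLACE of the version function. A toy generator of that script might look like this (names and layout are made up for illustration, not ZomboDB's actual tooling):

```python
# Toy generator for the "no schema changes" upgrade script: the only
# statement emitted is a CREATE OR REPLACE of the extension's version()
# function, so the script text reflects the new release number.
def version_bump_script(schema, new_version):
    return (
        f"CREATE OR REPLACE FUNCTION {schema}.version() RETURNS text\n"
        f"LANGUAGE sql IMMUTABLE AS $$ SELECT '{new_version}'::text $$;\n"
    )
```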
I also don't see a lot of changes to ZDB's extension schema nowadays -- new releases are usually fixing some terrible bug.\n\n(As an aside, I wish Postgres would show the line number in whatever .sql file when an ERROR occurs during CREATE EXTENSION or ALTER EXTENSION UPDATE. That'd be a huge QoL improvement for me -- maybe it's my turn to put a patch together)\n\nJust some musings from a guy hanging out here on the fringes of Postgres extension development. Of the things I've seen in this thread I really like the mapping file idea and I don't have any original thoughts on the subject.\n\neric\n\n\n\n", "msg_date": "Fri, 28 Apr 2023 10:03:13 -0400", "msg_from": "Eric Ridge <eebbrr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Fri, Apr 28, 2023 at 10:03 AM Eric Ridge <eebbrr@gmail.com> wrote:\n> ZomboDB has 137 releases over the past 8 years.\n\nDang.\n\nThis may be one of those cases where the slow pace of change for\nextensions shipped with PostgreSQL gives those of us who are\ndevelopers for PostgreSQL itself a sort of tunnel vision. 
The system\nthat we have in core works well for those extensions, which get a new\nversion at most once a year and typically much less often than that.\nI'm not sure we'd be so sanguine about the status quo if we had 137\nreleases out there.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Mon, 1 May 2023 12:12:14 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "\n> On May 1, 2023, at 12:12 PM, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Fri, Apr 28, 2023 at 10:03 AM Eric Ridge <eebbrr@gmail.com> wrote:\n>> ZomboDB has 137 releases over the past 8 years.\n> \n> Dang.\n> \n> This may be one of those cases where the slow pace of change for\n> extensions shipped with PostgreSQL gives those of us who are\n> developers for PostgreSQL itself a sort of tunnel vision.\n\nCould be I'm a terrible programmer too. Could be Elasticsearch is a PITA to deal with.\n\nFWIW, outside of major ZDB releases, most of those have little-to-zero schema changes. But that doesn't negate the fact each release needs its own upgrade.sql script. I'm working on a point release right this moment and it won't have any schema changes but it'll have an upgrade.sql script.\n\nMaybe related, pgrx (formerly pgx) has had 77 releases since June 2020, but that's mostly us just slowly trying to wrap Postgres internals in Rust, bug fixing, and such. Getting support for a new major Postgres release usually isn't *that* hard -- a day or two of work, depending on how drastically y'all changed an internal API. `List` and `FunctionCallInfo(Base)Data` come to mind as minor horrors for us, hahaha. 
Rust at least makes it straightforward for us to have divergent implementations and have the decisions solved at compile time.\n\n> The system\n> that we have in core works well for those extensions, which get a new\n> version at most once a year and typically much less often than that.\n> I'm not sure we'd be so sanguine about the status quo if we had 137\n> releases out there.\n\nI don't lose sleep over it but it is a lot of bookkeeping. The nice thing about Postgres' extension system is that we can do what we want without regard to the project's release schedule. In general, y'all nailed the extension system back in the day.\n\nI feel for the PostGIS folks as they've clearly got more, hmm, complete Postgres version support than ZDB does, and I'd definitely want to support anything that would make their lives easier. But I also don't want to see something that pgrx users might want to adopt that might turn out to be bad for them.\n\nI have a small list of things I'd love to see changed wrt extensions but I'm just the idea guy and don't have much in the way of patches to offer. I also have some radical ideas about annotating the sources themselves, but I don't feel like lobbing a grenade today.\n\neric\n\n", "msg_date": "Mon, 1 May 2023 14:25:24 -0400", "msg_from": "Eric Ridge <eebbrr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "Eric Ridge <eebbrr@gmail.com> writes:\n> FWIW, outside of major ZDB releases, most of those have little-to-zero schema changes. But that doesn't negate the fact each release needs its own upgrade.sql script. I'm working on a point release right this moment and it won't have any schema changes but it'll have an upgrade.sql script.\n\nHmm ... 
our model for the in-core extensions has always been that you\ndon't need a new upgrade script unless the SQL declarations change.\n\nAdmittedly, that means that the script version number isn't a real\nhelpful guide to which version of the .so you're dealing with.\nWe expect the .so's own minor version number to suffice for that,\nbut I realize that that might not be the most user-friendly answer.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 01 May 2023 16:24:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "\n> On May 1, 2023, at 4:24 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Eric Ridge <eebbrr@gmail.com> writes:\n>> FWIW, outside of major ZDB releases, most of those have little-to-zero schema changes. But that doesn't negate the fact each release needs its own upgrade.sql script. I'm working on a point release right this moment and it won't have any schema changes but it'll have an upgrade.sql script.\n> \n> Hmm ... our model for the in-core extensions has always been that you\n> don't need a new upgrade script unless the SQL declarations change.\n\nThat's a great model when the schemas only change once every few years or never.\n\n> Admittedly, that means that the script version number isn't a real\n> helpful guide to which version of the .so you're dealing with.\n\nIt isn't. ZDB, and I think (at least) PostGIS, have their own \"version()\" function. 
Keeping everything the same version keeps me \"sane\" and eliminates a class of round-trip questions with users.\n\nSo when I say \"little-to-zero schema changes\" I mean that at least the version() function changes -- it's a `LANGUAGE sql` function for easy human inspection.\n\n> We expect the .so's own minor version number to suffice for that,\n> but I realize that that might not be the most user-friendly answer.\n\nOne of my desires is that the on-disk .so's filename be associated with the pg_extension entry and not Each. Individual. Function. There's a few extensions that like to version the on-disk .so's filename which means a CREATE OR REPLACE for every function on every extension version bump. That forces an upgrade script even if the schema didn't technically change and also creates the need for bespoke tooling around extension.sql and upgrade.sql scripts.\n\nBut I don't want to derail this thread.\n\neric\n\n", "msg_date": "Mon, 1 May 2023 17:05:08 -0400", "msg_from": "Eric Ridge <eebbrr@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> It isn't. ZDB, and I think (at least) PostGIS, have their own \"version()\"\nfunction.\n> Keeping everything the same version keeps me \"sane\" and eliminates a class\n> of round-trip questions with users.\n> \nYes we have several version numbers and yes we too like to keep the\nextension version the same, cause it's the only thing that is universally\nconsistent across all PostgreSQL extensions.\n\nYes we have our own internal version functions. One for PostGIS (has the\ntrue lib version file) and a script version and we have logic to make sure\nthey are aligned. 
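Alignment and downgrade checks of the kind described in this thread boil down to normalizing version strings with a regex so they can be compared numerically. The sketch below shows that logic in Python purely for illustration — the real guards in extensions like PostGIS live in SQL/PLpgSQL, and the three-part "major.minor.patch" format is an assumption here:

```python
import re

# Normalize a "major.minor.patch" version string into a comparable tuple;
# anything that does not match the assumed pattern is rejected outright.
_VERSION_RE = re.compile(r"^(\d+)\.(\d+)\.(\d+)")

def version_key(v):
    m = _VERSION_RE.match(v)
    if not m:
        raise ValueError(f"unparsable version: {v!r}")
    return tuple(int(g) for g in m.groups())

def check_not_downgrade(installed, target):
    # A wildcard upgrade script would run a guard like this first and
    # refuse to "upgrade" to an older version.
    if version_key(target) < version_key(installed):
        raise ValueError(f"refusing downgrade {installed} -> {target}")
```

Comparing tuples rather than raw strings avoids the classic trap where "3.10.1" sorts before "3.9.2" lexicographically.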
We even have versions for our dependencies (PROJ, GEOS,\nGDAL) cause behavior of PostGIS changes based on versions of those.\nThis goes for each extension we package that has a lib file (postgis,\npostgis_raster, postgis_topology, postgis_sfcgal).\n\nIn addition to that we also have a version for PostgreSQL (that the scripts\nwere installed on). To catch cases when a pg_upgrade is needed to go from\n3.3.0 to 3.3.0.\nYes we need same version upgrades (particularly because of pg_upgrade).\nSandro and I were talking about this. This is something we do in our\npostgis_extensions_upgrade() (basically forcing an upgrade to a version\nthat does nothing, so we can force an upgrade to 3.3.0 again) to make up for\nthis limitation in the extension model.\n\nThe reason for that is features get exposed based on version of PostgreSQL\nyou are running.\nSo in 3.3.0 we leveraged the new gist fast indexing build, which is only\nenabled for users running PostgreSQL 15 and above.\n\nWhat usually happens is someone has PostGIS 3.3.0 say on PG 14, they\npg_upgrade to PG 15 but they are running with PG 14 scripts.\nSo they are not taking advantage of the new PG 15 features until they do a \n\nSELECT postgis_extensions_upgrade();\n\nSo this is why we need the DO commands for scenarios like this.\n\n> One of my desires is that the on-disk .so's filename be associated with\nthe\n> pg_extension entry and not Each. Individual. Function. 
There's a few\n> extensions that like to version the on-disk .so's filename which means a\n> CREATE OR REPLACE for every function on every extension version bump.\n> That forces an upgrade script even if the schema didn't technically change\nand\n> also creates the need for bespoke tooling around extension.sql and\n> upgrade.sql scripts.\n> \n> But I don't want to derail this thread.\n> \n> eric\n\nThis is more or less the reason why we had to do CREATE OR REPLACE for all\nour functions.\nIn the past we minor versioned our lib files so we had postgis-2.4,\npostgis-2.5.\n\nAt 3.0 after much in-fighting (battle between convenience of developers vs.\nregular old users just wanting to use PostGIS and my frustration trying to\nhold peoples hands thru pg_upgrade), we settled on major version for the lib\nfile, with option for developers to still keep the minor.\n\nSo default install will be postgis-3 for life of 3 series and become\npostgis-4 when 4 comes along (hopefully not for another 10 years).\nCompletely stripping the version we decided not to do cause with the change\nwe have a whole baggage of legacy functions we needed to stub as we removed\nthem so pg_upgrade will work more or less seamlessly. So come postgis-4\nthese stubs will be removed.\n\nMany of our CI bots, however, do use the minor versionings 3.1, 3.2, 3.3\netc, cause it's easier to test upgrades and do regressions.\nAnd many PostGIS developers do the same. So a replace of all functions is\nstill largely needed. 
This is one of the reasons the whole chained upgrade\npath never worked for us and why we settled on one script to handle\neverything.\n\n\n\n\n", "msg_date": "Mon, 1 May 2023 17:50:49 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, May 1, 2023 at 11:05 PM Eric Ridge <eebbrr@gmail.com> wrote:\n\n>\n> > We expect the .so's own minor version number to suffice for that,\n> > but I realize that that might not be the most user-friendly answer.\n>\n> One of my desires is that the on-disk .so's filename be associated with\n> the pg_extension entry and not Each. Individual. Function. There's a few\n> extensions that like to version the on-disk .so's filename which means a\n> CREATE OR REPLACE for every function on every extension version bump. That\n> forces an upgrade script even if the schema didn't technically change and\n> also creates the need for bespoke tooling around extension.sql and\n> upgrade.sql scripts.\n>\n\nI understand the potential pain with this (though I suppose MODULE_PATHNAME\ncan somewhat alleviate it). However, I'd like to highlight the fact\nthat, besides the fact that control files contain a reference to a .so\nfile, there's very little in the way of actual dependency of the extension\nmechanism on shared libraries, as that part relies on functions being able\nto use C language for their implementation. Nothing I am aware of would\nstop you from having multiple .so files in a given extension (for one\nreason or another reason), and I think that's actually a great design,\nincidentally or not. This does allow for a great deal of flexibility and an\nescape hatch for when the straightforward case doesn't work [1]\n\nIf the extension subsystem wasn't replacing MODULE_PATHNAME, I don't think\nthere would be a reason for having `module_pathname` in the control file.\nIt doesn't preload the file or call anything from it. 
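Concretely, `module_pathname` only feeds a textual substitution: when the server runs an extension script, occurrences of the MODULE_PATHNAME token are replaced with the control-file value before execution, and nothing is loaded eagerly. A simplified model of that step (a sketch, not the actual server code):

```python
# Simplified model of what running an extension script does with
# module_pathname: the MODULE_PATHNAME token in the script text is
# replaced by the control-file value; no shared library is preloaded.
def substitute_module_pathname(script_text, module_pathname):
    return script_text.replace("MODULE_PATHNAME", module_pathname)
```

In real scripts the token already appears inside quotes (`AS 'MODULE_PATHNAME', 'funcname'`), so the substitution is a plain text replacement.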
It's what `create\nfunction` in the scripts will do. And that's actually great.\n\nI am referencing this not because I don't think you know this but to try\nand capture the higher-level picture here.\n\nThis doesn't mean we shouldn't improve the UX/DX of one of the common and\nstraightforward cases (having a single .so file with no other\ncomplications) where possible.\n\n[1]\nhttps://yrashk.com/blog/2023/04/10/avoiding-postgres-extensions-limitations/\n-- \nhttps://omnigr.es\nYurii.", "msg_date": "Tue, 2 May 2023 08:15:39 +0200", "msg_from": "Yurii Rashkovskii <yrashk@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "For me, it would be a big help if you could specify a simple from/to \nrange as the source version:\nmyext--1.0.0-1.9.9--2.0.0.sql\nmyext--2.0.0-2.9.9--3.0.0.sql\nmyext--3.0.0-3.2.3--3.2.4.sql\n\nThe idea of % wildcard in my opinion is fully contained in the from/to \nrange.\n\nThe name \"ANY\" should also be allowed as the source version.\nSome extensions are a collection of CREATE OR REPLACE FUNCTION. All \nupgrades are one and the same SQL file.\n\nFor example: address_standardizer_data_us--ANY--3.2.4.sql\n-- \nPrzemysław Sztoch | Mobile +48 509 99 00 66", "msg_date": "Thu, 18 May 2023 23:14:52 +0200", "msg_from": "=?UTF-8?Q?Przemys=c5=82aw_Sztoch?= <przemyslaw@sztoch.pl>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Thu, May 18, 2023 at 11:14:52PM +0200, Przemysław Sztoch wrote:\n> For me, it would be a big help if you could specify a simple from/to range\n> as the source version:\n> myext--1.0.0-1.9.9--2.0.0.sql\n> myext--2.0.0-2.9.9--3.0.0.sql\n> myext--3.0.0-3.2.3--3.2.4.sql\n> \n> The idea of % wildcard in my opinion is fully contained in the from/to\n> range.\n\nYes, it will be once partial matching is implemented (in its current\nstate the patch only allows the wildcard to cover the whole version,\nnot just a portion, but the idea was to move to partial matches too).\n\n> The name \"ANY\" should also be allowed as the source version.\n\nIt is. The only character with a special meaning, in my patch, would\nbe the \"%\" character, and would ONLY be special if the extension has\nset the wildcard_upgrades directive to \"true\" in the .control file.\n\n> Some extensions are a collection of CREATE OR REPLACE FUNCTION. 
All upgrades\n> are one and the same SQL file.\n\nYes, this is the case for PostGIS, and that's why we're looking\nforward to adding support for this wildcard.\n\n--strk;\n\n\n", "msg_date": "Wed, 31 May 2023 19:13:50 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Thu, Apr 27, 2023 at 12:49:57PM +0200, Sandro Santilli wrote:\n> On Mon, Apr 24, 2023 at 01:06:24PM -0400, Mat Arye wrote:\n> > Hi All,\n> > \n> > I've done upgrade maintenance for multiple extensions now (TimescaleDB\n> > core, Promscale) and I think the original suggestion (wildcard filenames\n> > with control-file switch to enable) here is a good one.\n> \n> Thanks for your comment, Mat.\n> \n> I'm happy to bring back the control-file switch if there's an\n> agreement about that.\n\nI'm attaching an up-to-date version of the patch with the control-file\nswitch back in, so there's an explicit choice by extension developers.\n\n--strk;", "msg_date": "Wed, 31 May 2023 21:07:43 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> On 31 May 2023, at 21:07, Sandro Santilli <strk@kbt.io> wrote:\n> On Thu, Apr 27, 2023 at 12:49:57PM +0200, Sandro Santilli wrote:\n\n>> I'm happy to bring back the control-file switch if there's an\n>> agreement about that.\n> \n> I'm attaching an up-to-date version of the patch with the control-file\n> switch back in, so there's an explicit choice by extension developers.\n\nThis version fails the extension test since the comment from the .sql file is\nmissing from test_extensions.out. 
Can you fix that in a rebase such that the\nCFBot can have a green build of this patch?\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Wed, 28 Jun 2023 10:29:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> On 28 Jun 2023, at 10:29, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 31 May 2023, at 21:07, Sandro Santilli <strk@kbt.io> wrote:\n>> On Thu, Apr 27, 2023 at 12:49:57PM +0200, Sandro Santilli wrote:\n> \n>>> I'm happy to bring back the control-file switch if there's an\n>>> agreement about that.\n>> \n>> I'm attaching an up-to-date version of the patch with the control-file\n>> switch back in, so there's an explicit choice by extension developers.\n> \n> This version fails the extension test since the comment from the .sql file is\n> missing from test_extensions.out. Can you fix that in a rebase such that the\n> CFBot can have a green build of this patch?\n\nWith no update to the thread and the patch still not applying I'm marking this\nreturned with feedback. Please feel free to resubmit to a future CF when there\nis a new version of the patch.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 20:24:15 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Aug 1, 2023 at 2:24 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> returned with feedback. 
Please feel free to resubmit to a future CF when there\n> is a new version of the patch.\n\nIsn't the real problem here that there's no consensus on what to do?\nOr to put a finer point on it, that Tom seems adamantly opposed to any\ndesign that would solve the problem the patch authors are worried\nabout?\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Aug 2023 14:45:01 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> On 1 Aug 2023, at 20:45, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Tue, Aug 1, 2023 at 2:24 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n>> returned with feedback. Please feel free to resubmit to a future CF when there\n>> is a new version of the patch.\n> \n> Isn't the real problem here that there's no consensus on what to do?\n\nI don't disagree with that, but there is nothing preventing that discussion to\ncontinue here or on other threads. The fact that consensus is that far away\nand no patch that applies exist seems to me to indicate that a new CF entry is\na better option than pushing forward.\n\n--\nDaniel Gustafsson\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 21:23:14 +0200", "msg_from": "Daniel Gustafsson <daniel@yesql.se>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "> > On 1 Aug 2023, at 20:45, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > On Tue, Aug 1, 2023 at 2:24 PM Daniel Gustafsson <daniel@yesql.se>\n> wrote:\n> >> returned with feedback. Please feel free to resubmit to a future CF\n> >> when there is a new version of the patch.\n> >\n> > Isn't the real problem here that there's no consensus on what to do?\n> \n> I don't disagree with that, but there is nothing preventing that discussion to\n> continue here or on other threads. 
The fact that consensus is that far away\n> and no patch that applies exist seems to me to indicate that a new CF entry is\n> a better option than pushing forward.\n> \n> --\n> Daniel Gustafsson\n\nWe are still interested. We'll clean up in the next week or so. Been tied up with PostGIS release cycle.\n\nTo Robert's point, we are losing faith in coming up with a solution that everyone agrees is workable.\nWhat I thought Sandro had so far in his last patch seemed like a good compromise to me.\nIf we get it back to the point of passing the CF Bot, can we continue on this thread or are we just wasting our time?\n\nThanks,\nRegina\n\n\n\n", "msg_date": "Tue, 1 Aug 2023 15:59:53 -0400", "msg_from": "\"Regina Obe\" <lr@pcorp.us>", "msg_from_op": false, "msg_subject": "RE: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Aug 1, 2023 at 3:23 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> I don't disagree with that, but there is nothing preventing that discussion to\n> continue here or on other threads. 
The fact that consensus is that far away\n> and no patch that applies exist seems to me to indicate that a new CF entry is\n> a better option than pushing forward.\n\nFair point, thanks.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 1 Aug 2023 15:59:55 -0400", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Tue, Aug 01, 2023 at 08:24:15PM +0200, Daniel Gustafsson wrote:\n> > On 28 Jun 2023, at 10:29, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > \n> >> On 31 May 2023, at 21:07, Sandro Santilli <strk@kbt.io> wrote:\n> >> On Thu, Apr 27, 2023 at 12:49:57PM +0200, Sandro Santilli wrote:\n> > \n> >>> I'm happy to bring back the control-file switch if there's an\n> >>> agreement about that.\n> >> \n> >> I'm attaching an up-to-date version of the patch with the control-file\n> >> switch back in, so there's an explicit choice by extension developers.\n> > \n> > This version fails the extension test since the comment from the .sql file is\n> > missing from test_extensions.out. Can you fix that in a rebase such that the\n> > CFBot can have a green build of this patch?\n> \n> With no update to the thread and the patch still not applying I'm marking this\n> returned with feedback. 
Please feel free to resubmit to a future CF when there\n> is a new version of the patch.\n\nUpdated version of the patch is attached, regresses cleanly.\nWill try to submit this to future CF from the website, let me know\nif I'm doing it wrong.\n\n--strk;", "msg_date": "Mon, 7 Aug 2023 15:55:32 +0200", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Mon, 7 Aug 2023 at 19:25, Sandro Santilli <strk@kbt.io> wrote:\n>\n> On Tue, Aug 01, 2023 at 08:24:15PM +0200, Daniel Gustafsson wrote:\n> > > On 28 Jun 2023, at 10:29, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > >\n> > >> On 31 May 2023, at 21:07, Sandro Santilli <strk@kbt.io> wrote:\n> > >> On Thu, Apr 27, 2023 at 12:49:57PM +0200, Sandro Santilli wrote:\n> > >\n> > >>> I'm happy to bring back the control-file switch if there's an\n> > >>> agreement about that.\n> > >>\n> > >> I'm attaching an up-to-date version of the patch with the control-file\n> > >> switch back in, so there's an explicit choice by extension developers.\n> > >\n> > > This version fails the extension test since the comment from the .sql file is\n> > > missing from test_extensions.out. Can you fix that in a rebase such that the\n> > > CFBot can have a green build of this patch?\n> >\n> > With no update to the thread and the patch still not applying I'm marking this\n> > returned with feedback. 
Please feel free to resubmit to a future CF when there\n> > is a new version of the patch.\n>\n> Updated version of the patch is attached, regresses cleanly.\n> Will try to submit this to future CF from the website, let me know\n> if I'm doing it wrong.\n\nOne of tests has failed in CFBot at [1] with:\n\n+++ /tmp/cirrus-ci-build/build/testrun/test_extensions/regress/results/test_extensions.out\n2023-12-21 02:24:07.738726000 +0000\n@@ -449,17 +449,20 @@\n -- Test wildcard based upgrade paths\n --\n CREATE EXTENSION test_ext_wildcard1;\n+ERROR: extension \"test_ext_wildcard1\" is not available\n+DETAIL: Could not open extension control file\n\"/tmp/cirrus-ci-build/build/tmp_install/usr/local/pgsql/share/extension/test_ext_wildcard1.control\":\nNo such file or directory.\n+HINT: The extension must first be installed on the system where\nPostgreSQL is running.\n SELECT ext_wildcard1_version();\n- ext_wildcard1_version\n------------------------\n- 1.0\n-(1 row)\n-\n+ERROR: function ext_wildcard1_version() does not exist\n+LINE 1: SELECT ext_wildcard1_version();\n\nMore details available at [2].\n\n[1] - https://cirrus-ci.com/task/6090742435676160\n[2] - https://api.cirrus-ci.com/v1/artifact/task/6090742435676160/testrun/build/testrun/test_extensions/regress/regression.diffs\n\nRegards,\nVignesh\n\n\n", "msg_date": "Sun, 7 Jan 2024 12:36:29 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Sun, 7 Jan 2024 at 12:36, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Mon, 7 Aug 2023 at 19:25, Sandro Santilli <strk@kbt.io> wrote:\n> >\n> > On Tue, Aug 01, 2023 at 08:24:15PM +0200, Daniel Gustafsson wrote:\n> > > > On 28 Jun 2023, at 10:29, Daniel Gustafsson <daniel@yesql.se> wrote:\n> > > >\n> > > >> On 31 May 2023, at 21:07, Sandro Santilli <strk@kbt.io> wrote:\n> > > >> On Thu, Apr 27, 2023 at 12:49:57PM +0200, Sandro Santilli wrote:\n> > > >\n> 
> > >>> I'm happy to bring back the control-file switch if there's an\n> > > >>> agreement about that.\n> > > >>\n> > > >> I'm attaching an up-to-date version of the patch with the control-file\n> > > >> switch back in, so there's an explicit choice by extension developers.\n> > > >\n> > > > This version fails the extension test since the comment from the .sql file is\n> > > > missing from test_extensions.out. Can you fix that in a rebase such that the\n> > > > CFBot can have a green build of this patch?\n> > >\n> > > With no update to the thread and the patch still not applying I'm marking this\n> > > returned with feedback. Please feel free to resubmit to a future CF when there\n> > > is a new version of the patch.\n> >\n> > Updated version of the patch is attached, regresses cleanly.\n> > Will try to submit this to future CF from the website, let me know\n> > if I'm doing it wrong.\n>\n> One of tests has failed in CFBot at [1] with:\n>\n> +++ /tmp/cirrus-ci-build/build/testrun/test_extensions/regress/results/test_extensions.out\n> 2023-12-21 02:24:07.738726000 +0000\n> @@ -449,17 +449,20 @@\n> -- Test wildcard based upgrade paths\n> --\n> CREATE EXTENSION test_ext_wildcard1;\n> +ERROR: extension \"test_ext_wildcard1\" is not available\n> +DETAIL: Could not open extension control file\n> \"/tmp/cirrus-ci-build/build/tmp_install/usr/local/pgsql/share/extension/test_ext_wildcard1.control\":\n> No such file or directory.\n> +HINT: The extension must first be installed on the system where\n> PostgreSQL is running.\n> SELECT ext_wildcard1_version();\n> - ext_wildcard1_version\n> ------------------------\n> - 1.0\n> -(1 row)\n> -\n> +ERROR: function ext_wildcard1_version() does not exist\n> +LINE 1: SELECT ext_wildcard1_version();\n\nWith no update to the thread and the tests still failing I'm marking\nthis as returned with feedback. 
Please feel free to resubmit to the\nnext CF when there is a new version of the patch.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 1 Feb 2024 16:31:26 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" }, { "msg_contents": "On Thu, Feb 01, 2024 at 04:31:26PM +0530, vignesh C wrote:\n> \n> With no update to the thread and the tests still failing I'm marking\n> this as returned with feedback. Please feel free to resubmit to the\n> next CF when there is a new version of the patch.\n\nOk, no problem. Thank you.\n\n--strk;", "msg_date": "Thu, 1 Feb 2024 18:58:09 +0100", "msg_from": "Sandro Santilli <strk@kbt.io>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Support % wildcard in extension upgrade filenames" } ]
[ { "msg_contents": "The attached patch fixes an overflow bug in DecodeInterval when applying\nthe units week, decade, century, and millennium. The overflow check logic\nwas modelled after the overflow check at the beginning of `int\ntm2interval(struct pg_tm *tm, fsec_t fsec, Interval *span);` in timestamp.c.\n\nThis is my first patch, so apologies if I screwed up the process somewhere.\n\n- Joe Koshakow", "msg_date": "Fri, 11 Feb 2022 14:48:21 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> The attached patch fixes an overflow bug in DecodeInterval when applying\n> the units week, decade, century, and millennium. The overflow check logic\n> was modelled after the overflow check at the beginning of `int\n> tm2interval(struct pg_tm *tm, fsec_t fsec, Interval *span);` in timestamp.c.\n\nGood catch, but I don't think that tm2interval code is best practice\nanymore. Rather than bringing \"double\" arithmetic into the mix,\nyou should use the overflow-detecting arithmetic functions in\nsrc/include/common/int.h. The existing code here is also pretty\nfaulty in that it doesn't notice addition overflow when combining\nmultiple units. 
So for example, instead of\n\n\ttm->tm_mday += val * 7;\n\nI think we should write something like\n\n\tif (pg_mul_s32_overflow(val, 7, &tmp))\n\t return DTERR_FIELD_OVERFLOW;\n\tif (pg_add_s32_overflow(tm->tm_mday, tmp, &tm->tm_mday))\n\t return DTERR_FIELD_OVERFLOW;\n\nPerhaps some macros could be used to make this more legible?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:55:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:\n> Joseph Koshakow <koshy44(at)gmail(dot)com> writes:\n> > The attached patch fixes an overflow bug in DecodeInterval when applying\n> > the units week, decade, century, and millennium. The overflow check logic\n> > was modelled after the overflow check at the beginning of `int\n> > tm2interval(struct pg_tm *tm, fsec_t fsec, Interval *span);` in timestamp.c.\n>\n>\n> Good catch, but I don't think that tm2interval code is best practice\n> anymore. Rather than bringing \"double\" arithmetic into the mix,\n> you should use the overflow-detecting arithmetic functions in\n> src/include/common/int.h. The existing code here is also pretty\n> faulty in that it doesn't notice addition overflow when combining\n> multiple units. So for example, instead of\n>\n>\n> tm->tm_mday += val * 7;\n>\n>\n> I think we should write something like\n>\n>\n> if (pg_mul_s32_overflow(val, 7, &tmp))\n> return DTERR_FIELD_OVERFLOW;\n> if (pg_add_s32_overflow(tm->tm_mday, tmp, &tm->tm_mday))\n> return DTERR_FIELD_OVERFLOW;\n>\n>\n> Perhaps some macros could be used to make this more legible?\n>\n>\n> regards, tom lane\n>\n>\n> @postgresql\n\nThanks for the feedback Tom, I've attached an updated patch with\nyour suggestions. 
Feel free to rename the horribly named macro.\n\nAlso while fixing this I noticed that fractional intervals can also\ncause an overflow issue.\npostgres=# SELECT INTERVAL '0.1 months 2147483647 days';\n interval\n------------------\n -2147483646 days\n(1 row)\nI haven't looked into it, but it's probably a similar cause.", "msg_date": "Fri, 11 Feb 2022 16:58:09 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Fri, Feb 11, 2022 at 4:58 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:\n> > Joseph Koshakow <koshy44(at)gmail(dot)com> writes:\n> > > The attached patch fixes an overflow bug in DecodeInterval when applying\n> > > the units week, decade, century, and millennium. The overflow check logic\n> > > was modelled after the overflow check at the beginning of `int\n> > > tm2interval(struct pg_tm *tm, fsec_t fsec, Interval *span);` in timestamp.c.\n> >\n> >\n> > Good catch, but I don't think that tm2interval code is best practice\n> > anymore. Rather than bringing \"double\" arithmetic into the mix,\n> > you should use the overflow-detecting arithmetic functions in\n> > src/include/common/int.h. The existing code here is also pretty\n> > faulty in that it doesn't notice addition overflow when combining\n> > multiple units. So for example, instead of\n> >\n> >\n> > tm->tm_mday += val * 7;\n> >\n> >\n> > I think we should write something like\n> >\n> >\n> > if (pg_mul_s32_overflow(val, 7, &tmp))\n> > return DTERR_FIELD_OVERFLOW;\n> > if (pg_add_s32_overflow(tm->tm_mday, tmp, &tm->tm_mday))\n> > return DTERR_FIELD_OVERFLOW;\n> >\n> >\n> > Perhaps some macros could be used to make this more legible?\n> >\n> >\n> > regards, tom lane\n> >\n> >\n> > @postgresql\n>\n> Thanks for the feedback Tom, I've attached an updated patch with\n> your suggestions. 
Feel free to rename the horribly named macro.\n>\n> Also while fixing this I noticed that fractional intervals can also\n> cause an overflow issue.\n> postgres=# SELECT INTERVAL '0.1 months 2147483647 days';\n> interval\n> ------------------\n> -2147483646 days\n> (1 row)\n> I haven't looked into it, but it's probably a similar cause.\n\nHey Tom,\n\nI was originally going to fix the fractional issue in a follow-up\npatch, but I took a look at it and decided to make a new patch\nwith both fixes. I tried to make the handling of fractional and\nnon-fractional units consistent. I've attached the patch below.\n\nWhile investigating this, I've noticed a couple more overflow\nissues with Intervals, but I think they're best handled in separate\npatches. I've included the ones I've found in this email.\n\n postgres=# CREATE TEMP TABLE INTERVAL_TBL_OF (f1 interval);\n CREATE TABLE\n postgres=# INSERT INTO INTERVAL_TBL_OF (f1) VALUES ('0.1 weeks\n2147483647 hrs');\n INSERT 0 1\n postgres=# SELECT * FROM INTERVAL_TBL_OF;\n ERROR: interval out of range\n\n postgres=# SELECT justify_days(INTERVAL '2147483647 months 2147483647 days');\n justify_days\n -----------------------------------\n -172991738 years -4 mons -23 days\n (1 row)\n\nCheers,\nJoe Koshakow", "msg_date": "Sat, 12 Feb 2022 21:16:05 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Hi,\n\nOn 2022-02-12 21:16:05 -0500, Joseph Koshakow wrote:\n> I've attached the patch below.\n\nAny reason for using int return types?\n\n\n> +/* As above, but initial val produces years */\n> +static int\n> +AdjustYears(int val, struct pg_tm *tm, int multiplier)\n> +{\n> +\tint\t\t\tyears;\n> +\tif (pg_mul_s32_overflow(val, multiplier, &years))\n> +\t\treturn 1;\n> +\tif (pg_add_s32_overflow(tm->tm_year, years, &tm->tm_year))\n> +\t\treturn 1;\n> +\treturn 0;\n> }\n\nparticularly since the pg_*_overflow stuff uses 
bool?\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 19:51:50 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sat, Feb 12, 2022 at 10:51 PM Andres Freund <andres@anarazel.de> wrote:\n> Any reason for using int return types?\n>\n> particularly since the pg_*_overflow stuff uses bool?\n\nI chose int return types to keep all these methods\nconsistent with DecodeInterval, which returns a\nnon-zero int to indicate an error. Though I wasn't sure\nif an int or bool would be best, so I'm happy to change\nto bool if people think that's better.\n\nAlso I'm realizing now that I've incorrectly been using the\nnumber of the patch to indicate the version, instead of just\nsticking a v3 to the front. So sorry about that, all the patches\nI sent in this thread are the same patch, just different versions.\n\n- Joe Koshakow\n\n\n", "msg_date": "Sun, 13 Feb 2022 09:35:47 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Actually found an additional overflow issue in\nDecodeInterval. I've updated the patch with the\nfix. Specifically it prevents trying to negate a field\nif it equals INT_MIN. Let me know if this is best\nput into another patch.\n\n- Joe Koshakow", "msg_date": "Sun, 13 Feb 2022 14:34:34 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Hi,\n\nOn 2022-02-13 09:35:47 -0500, Joseph Koshakow wrote:\n> I chose int return types to keep all these methods\n> consistent with DecodeInterval, which returns a\n> non-zero int to indicate an error.\n\nThat's different, because it actually returns different types of errors. 
IMO\nthat difference is actually reason to use a bool for the new cases, because\nthen it's a tad clearer that they don't return DTERR_*.\n\n> Though I wasn't sure\n> if an int or bool would be best, so I'm happy to change\n> to bool if people think that's better.\n\n+1 for bool.\n\n\n> Also I'm realizing now that I've incorrectly been using the\n> number of the patch to indicate the version, instead of just\n> sticking a v3 to the front. So sorry about that, all the patches\n> I sent in this thread are the same patch, just different versions.\n\nNo worries ;)\n\n\nDo we want to consider backpatching these fixes? If so, I'd argue for skipping\n10, because it doesn't have int.h stuff yet. There's also the issue with\npotentially breaking indexes / constraints? Not that goes entirely away across\nmajor versions...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 13 Feb 2022 12:30:37 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Do we want to consider backpatching these fixes?\n\nI wasn't thinking that we should. 
It's a behavioral change and\npeople might not be pleased to have it appear in minor releases.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Feb 2022 15:38:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Attached is a new version switching from ints to bools, as requested.\n\n- Joe Koshakow", "msg_date": "Sun, 13 Feb 2022 17:12:54 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sun, Feb 13, 2022 at 5:12 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> Attached is a new version switching from ints to bools, as requested.\n>\n> - Joe Koshakow\n\nTom, Andres,\n\nThanks for your feedback! I'm not super familiar with the process,\nshould I submit this thread to the commitfest or just leave it as is?\n\nThanks,\nJoe Koshakow\n\n\n", "msg_date": "Tue, 15 Feb 2022 06:44:40 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Hi,\n\nOn 2022-02-15 06:44:40 -0500, Joseph Koshakow wrote:\n> Thanks for your feedback! I'm not super familiar with the process,\n> should I submit this thread to the commitfest or just leave it as is?\n\nSubmit it to the CF - then we get an automatic test on a few platforms. I\nthink we can apply it quickly after that...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Tue, 15 Feb 2022 08:28:09 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Ok, so I've attached a patch with my final unprompted changes. It\ncontains the following two changes:\n\n1. I found some more overflows with the ISO8601 formats and have\nincluded some fixes.\n2. I reverted the overflow checks for the seconds field. 
It's actually a\nbit more complicated than I thought. For example consider the following\nquery:\n    postgres=# SELECT INTERVAL '0.99999999 min 2147483647 sec';\n          interval\n    ----------------------\n     -596523:13:09.000001\n    (1 row)\nThis query will overflow the tm_sec field of the struct pg_tm, however\nit's not actually out of range for the Interval. I'm not sure the best\nway to handle this right now, but I think it would be best to leave it\nfor a future patch. Perhaps the time related fields in struct pg_tm\nneed to be changed to 64 bit ints.\n\n- Joe Koshakow", "msg_date": "Thu, 17 Feb 2022 21:45:45 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Copying over a related thread here to have info all in one place.\nIt's a little fragmented so sorry about that.\n\n\nOn Fri, Feb 18, 2022 at 11:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > When formatting the output of an Interval, we call abs() on the hours\n> > field of the Interval. Calling abs(INT_MIN) returns back INT_MIN\n> > causing the output to contain two '-' characters. The attached patch\n> > fixes that issue by special casing INT_MIN hours.\n>\n> Good catch, but it looks to me like three out of the four formats in\n> EncodeInterval have variants of this problem --- there are assumptions\n> throughout that code that we can compute \"-x\" or \"abs(x)\" without\n> fear. 
Not much point in fixing only one symptom.\n>\n> Also, I notice that there's an overflow hazard upstream of here,\n> in interval2tm:\n>\n> regression=# select interval '214748364 hours' * 11;\n> ERROR: interval out of range\n> regression=# \\errverbose\n> ERROR: 22008: interval out of range\n> LOCATION: interval2tm, timestamp.c:1982\n>\n> There's no good excuse for not being able to print a value that\n> we computed successfully.\n>\n> I wonder if the most reasonable fix would be to start using int64\n> instead of int arithmetic for the values that are potentially large.\n> I doubt that we'd be taking much of a performance hit on modern\n> hardware.\n>\n> regards, tom lane\n\nJoseph Koshakow <koshy44@gmail.com> writes:\n> On Fri, Feb 18, 2022 at 11:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I wonder if the most reasonable fix would be to start using int64\n>> instead of int arithmetic for the values that are potentially large.\n>> I doubt that we'd be taking much of a performance hit on modern\n>> hardware.\n\n> That's an interesting idea. I've always assumed that the range of the\n> time fields of Intervals was 2147483647 hours 59 minutes\n> 59.999999 seconds to -2147483648 hours -59 minutes\n> -59.999999 seconds. However the only reason that we can't support\n> the full range of int64 microseconds is because the struct pg_tm fields\n> are only ints. 
If we increase those fields to int64 then we'd be able to\n> support the full int64 range for microseconds as well as implicitly fix\n> some of the overflow issues in DecodeInterval and EncodeInterval.\n\nOn Sat, Feb 19, 2022 at 1:52 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > On Sat, Feb 19, 2022 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I think that messing with struct pg_tm might have too many side-effects.\n> >> However, pg_tm isn't all that well adapted to intervals in the first\n> >> place, so it'd make sense to make a new struct specifically for interval\n> >> decoding.\n>\n> > Yeah that's a good idea, pg_tm is used all over the place. Is there a\n> > reason we need an intermediate structure to convert to and from?\n> > We could just convert directly to an Interval in DecodeInterval and\n> > print directly from an Interval in EncodeInterval.\n>\n> Yeah, I thought about that too, but if you look at the other callers of\n> interval2tm, you'll see this same set of issues. I'm inclined to keep\n> the current code structure and just fix the data structure. Although\n> interval2tm isn't *that* big, I don't think four copies of its logic\n> would be an improvement.\n>\n> > Also I originally created a separate thread and patch because I\n> > thought this wouldn't be directly related to my other patch,\n> > https://commitfest.postgresql.org/37/3541/. However I think with these\n> > discussed changes it's directly related. So I think it's best to close\n> > this thread and patch and copy this conversation to the other thread.\n>\n> Agreed.\n>\n> regards, tom lane\n\n\n", "msg_date": "Sat, 19 Feb 2022 14:26:17 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Attached is a patch of my first pass. The to_char method isn't finished\nand I need to add a bunch of tests, but everything else is in place. 
It\nended up being a fairly large change in case anyone wants to take a look at\nthe changes so far.\n\nOne thing I noticed is that interval.c has a ton of code copied and pasted\nfrom other files. The code seemed out of date from those other files, so\nI tried to bring it up to date and add my changes.\n\n- Joe Koshakow", "msg_date": "Sat, 19 Feb 2022 19:16:54 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> Attached is a patch of my first pass. The to_char method isn't finished\n> and I need to add a bunch of tests, but everything else is in place. It\n> ended up being a fairly large change in case anyone wants to take a look at\n> the changes so far.\n\nCouple of quick comments:\n\n* You've assumed in a number of places that long == int64. This is wrong\non many platforms. Current project style for use of int64 in printf-type\ncalls is to cast the argument to (long long) explicitly and use \"%lld\" (or\n\"%llu\", etc). As for strtoint_64, surely that functionality exists\nsomewhere already (hopefully with fewer bugs than this has).\n\n* I think that tools like Coverity will complain about how you've\ngot both calls that check the result of AdjustFractSeconds (etc)\nand calls that don't. You might consider coding the latter like\n\n+ /* this can't overflow: */\n+ (void) AdjustFractSeconds(fval, itm, fsec, SECS_PER_DAY);\n\nin hopes of (a) silencing those warnings and (b) making the implicit\nassumption clear to readers.\n\n* I'm not entirely buying use of pg_time_t, rather than just int64,\nin struct itm. I think the point of this struct is to get away\nfrom datetime-specific datatypes. Also, if pg_time_t ever changed\nwidth, it'd create issues for users of this struct. 
I also note\na number of places in the patch that are assuming that these fields\nare int64 not something else.\n\n* I'm a little inclined to rename interval2tm/tm2interval to\ninterval2itm/itm2interval, as I think it'd be confusing for those\nfunction names to refer to a typedef they no longer use.\n\n* The uses of tm2itm make me a bit itchy. Is that sweeping\nupstream-of-there overflow problems under the rug?\n\n* // comments are not project style, either. While pgindent will\nconvert them to /* ... */ style, the results might not be pleasing.\n\n> One thing I noticed is that interval.c has a ton of code copied and pasted\n> from other files. The code seemed out of date from those other files, so\n> I tried to bring it up to date and add my changes.\n\nWe haven't actually maintained ecpg's copies of backend datatype-specific\ncode in many years. While bringing that stuff up to speed might be\nworthwhile (or perhaps not, given the lack of complaints), I'd see it\nas a separate project.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 20 Feb 2022 18:37:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sun, Feb 20, 2022 at 6:37 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Couple of quick comments:\n\nThanks for the comments Tom, I'll work on fixing these and submit a\nnew patch.\n\n> * The uses of tm2itm make me a bit itchy. Is that sweeping\n> upstream-of-there overflow problems under the rug?\n\nI agree, I'm not super happy with that approach. In fact\nI'm pretty sure it will cause queries like\n  SELECT INTERVAL '2147483648:00:00';\nto overflow upstream, even though queries like\n  SELECT INTERVAL '2147483648 hours';\nwould not. The places tm2itm is being used are\n  * After DecodeTime\n  * In interval_to_char.\nThe more general issue is how to share code with\nfunctions that are doing almost identical things but use\npg_tm instead of the new pg_itm? 
I'm not really sure what\nthe best solution is right now but I will think about it. If\nanyone has suggestions though, feel free to chime in.\n\n- Joe Koshakow\n\n\n", "msg_date": "Sun, 20 Feb 2022 21:53:56 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Hi All,\n\nSorry for the delay in the new patch, I've attached my most recent\npatch to this email. I ended up reworking a good portion of my previous\npatch so below I've included some reasons why, notes on my current\napproach, and some pro/cons to the approach.\n\n* The main reason for the rework had to do with double conversions and\nshared code.\n\n* The existing code for rounding had a lot of int to double\ncasting and vice versa. I *think* that doubles are able to completely\nrepresent the range of ints. However doubles are not able to represent\nthe full range of int64. After making the change I started noticing\na lot of lossy behavior. One thought I had was to change the doubles\nto long doubles, but I wasn't able to figure out if long doubles could\ncompletely represent the range of int64. Especially since their size\nvaries depending on the architecture. Does anyone know the answer to\nthis?\n\n* I ended up creating two intermediate data structures for Intervals.\nOne for decoding and one for everything else. I'll go into more detail\nbelow.\n* One common benefit was that they both contain a usec field which\nmeans that the Interval methods no longer need to carry around a\nseparate fsec argument.\n* The obvious con here is that Intervals require two unique\nintermediate data structures, while all other date/time types\ncan share a single intermediate data structure. I find this to\nbe a bit clunky.\n\n* pg_itm_in is the struct used for Interval decoding. 
It's very similar\nto pg_tm, except all of the time related fields are collapsed into a\nsingle `int64 usec` field.\n* The biggest benefit of this was that all int64-double conversions\nare limited to a single function, AdjustFractMicroseconds. Instead\nof fractional units flowing down over every single time field, they\nonly need to flow down into the single `int64 usec` field.\n* Overflows are caught much earlier in the decoding process which\nhelps avoid wasted work.\n* I found that the decoding code became simpler for time fields,\nthough this is a bit subjective.\n\n* pg_itm is the struct used for all other Interval functionality. It's\nvery similar to pg_tm, except the tm_hour field is converted from int\nto int64 and an `int tm_usec` field was added.\n* When encoding and working with Intervals, we almost always want\nto break the time field out into hours, min, sec, usec. So it's\nhelpful to have a common place to do this, instead of every\nfunction duplicating this code.\n* When breaking the time fields out, a single field will never\ncontain a value greater than could have fit in the next unit\nhigher. Meaning that minutes will never be greater than 60, seconds\nwill be never greater than 60, and usec will never be greater than\n1,000. So hours is actually the only field that needs to be int64\nand the rest can be an int.\n* This also helps limit the impact to shared code (see below).\n\n* There's some shared code between Intervals and other date/time types.\nSpecifically the DecodeTime function and the datetime_to_char_body\nfunction. These functions take in a `struct pg_tm` and a `fsec_t fsec`\n(fsec_t is just an alias for int32) which allows them to be re-used by\nall date/time types. The only difference now between pg_tm and pg_itm\nis the tm_hour field size (the tm_usec field in pg_itm can be used as\nthe fsec). 
So to get around this I changed the function signatures to\ntake a `struct pg_tm`, `fsec_t fsec`, and an `int64 hour` argument.\nIt's up to the caller to provide the correct hour field. Intervals can\neasily convert pg_itm to a pg_tm, fsec, and hour. It's honestly a bit\nerror-prone since those functions have to explicitly ignore the\npg_tm->tm_hour field and use the provided hour argument instead, but I\ncouldn't think of a better less intrusive solution. If anyone has a\nbetter idea, please don't hesitate to bring it up.\n\n* This partly existed in the previous patch, but I just wanted to\nrestate it. All modifications to pg_itm_in during decoding are done via\nhelper functions that check for overflow. All invocations of these\nfunctions either return an error on overflow or explicitly state why an\noverflow is impossible.\n\n* I completely rewrote the ParseISO8601Number function to try and avoid\ndouble to int64 conversions. I tried to model it after the parsing done\nin DecodeInterval, though I would appreciate extra scrutiny here.\n\n- Joe Koshakow", "msg_date": "Sun, 6 Mar 2022 11:14:25 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "I just realized another issue today. It may have been obvious from one\nof Tom's earlier messages, but I'm just now putting the pieces\ntogether.\nOn Fri, Feb 18, 2022 at 11:44 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Also, I notice that there's an overflow hazard upstream of here,\n> in interval2tm:\n>\n> regression=# select interval '214748364 hours' * 11;\n> ERROR: interval out of range\n> regression=# \\errverbose\n> ERROR: 22008: interval out of range\n> LOCATION: interval2tm, timestamp.c:1982\n>\n> There's no good excuse for not being able to print a value that\n> we computed successfully.\n\nScenarios like this can properly decode the interval, but actually\nerror out when encoding the interval. 
As a consequence you can insert\nthe value successfully into a table, but any attempt to query the table\nthat includes the \"bad interval\" value in the result will cause an\nerror. Below I've demonstrated an example:\n\npostgres=# CREATE TABLE tbl (i INTERVAL);\nCREATE TABLE\npostgres=# INSERT INTO tbl VALUES ('1 day'), ('3 months'), ('2 years');\nINSERT 0 3\npostgres=# SELECT * FROM tbl;\ni\n---------\n1 day\n3 mons\n2 years\n(3 rows)\n\npostgres=# INSERT INTO tbl VALUES ('2147483647 hours 60 minutes');\nINSERT 0 1\npostgres=# SELECT * FROM tbl;\nERROR: interval out of range\n\nThis would seriously reduce the usability of any table that contains one\nof these \"bad interval\" values.\n\nMy patch actually fixes this issue, but I just wanted to call it out\nbecause it might be relevant when reviewing.\n\n\n", "msg_date": "Mon, 7 Mar 2022 19:00:44 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> [ v8-0001-Check-for-overflow-when-decoding-an-interval.patch ]\n\nThis isn't applying per the cfbot; looks like it got sideswiped\nby 9e9858389. Here's a quick rebase. I've not reviewed it, but\nI did notice (because git was in my face about this) that it's\ngot whitespace issues. 
Please try to avoid unnecessary whitespace\n> changes ... pgindent will clean those up, but it makes reviewing\n> harder.\n\nSorry about that, I didn't have my IDE set up quite right and\nnoticed a little too late that I had some auto-formatting turned\non. Thanks for doing the rebase, did it end up fixing\nthe whitespace issues? If not I'll go through the patch and try\nand fix them all.\n\n- Joe Koshakow\n\n\n", "msg_date": "Wed, 23 Mar 2022 20:27:25 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> Sorry about that, I didn't have my IDE set up quite right and\n> noticed a little too late that I had some auto-formatting turned\n> on. Thanks for doing the rebase, did it end up fixing\n> the whitespace issues? If not I'll go through the patch and try\n> and fix them all.\n\nNo, I just fixed the merge failure.\n\nOur standard way to clean up whitespace issues and make sure code\nmeets our layout conventions is to run pgindent over it [1].\nFor this particular patch, that might be too much, because it will\nreindent the sections that you added braces around, making the patch\nharder to review. So maybe the best bet is to leave well enough\nalone and expect the committer to re-pgindent before pushing it.\nHowever, if you spot any diff hunks where there's just a whitespace\nchange, getting rid of those would be appreciated.\n\n\t\t\tregards, tom lane\n\n[1] https://wiki.postgresql.org/wiki/Running_pgindent_on_non-core_code_or_development_code\n\n\n", "msg_date": "Wed, 23 Mar 2022 21:20:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> * The existing code for rounding had a lot of int to double\n> casting and vice versa. 
I *think* that doubles are able to completely\n> represent the range of ints. However doubles are not able to represent\n> the full range of int64. After making the change I started noticing\n> a lot of lossy behavior. One thought I had was to change the doubles\n> to long doubles, but I wasn't able to figure out if long doubles could\n> completely represent the range of int64. Especially since their size\n> varies depending on the architecture. Does anyone know the answer to\n> this?\n\nI agree that relying on long double is not a great plan. However,\nI'm not seeing where there's a problem. AFAICS the revised code\nonly uses doubles to represent fractions from the input, ie if you\nwrite \"123.456 hours\" then the \".456\" is carried around for awhile\nas a float. This does not seem likely to pose any real-world\nproblem; do you have a counterexample?\n\nAnyway, I've spent today reviewing the code and cleaning up things\nI didn't like, and attached is a v10. I almost feel that this is\ncommittable, but there is one thing that is bothering me. The\npart of DecodeInterval that does strange things with signs in the\nINTSTYLE_SQL_STANDARD case (starting about line 3400 in datetime.c\nbefore this patch, or line 3600 after) used to separately force the\nhour, minute, second, and microsecond fields to negative.\nNow it forces the merged tm_usec field to negative. It seems to\nme that this could give a different answer than before, if the\nh/m/s/us values had been of different signs before they got merged.\nHowever, I don't think that that situation is possible in SQL-spec-\ncompliant input, so it may not be a problem. Again, a counterexample\nwould be interesting.\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 01 Apr 2022 20:06:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "I wrote:\n> ... 
I almost feel that this is\n> committable, but there is one thing that is bothering me. The\n> part of DecodeInterval that does strange things with signs in the\n> INTSTYLE_SQL_STANDARD case (starting about line 3400 in datetime.c\n> before this patch, or line 3600 after) used to separately force the\n> hour, minute, second, and microsecond fields to negative.\n> Now it forces the merged tm_usec field to negative. It seems to\n> me that this could give a different answer than before, if the\n> h/m/s/us values had been of different signs before they got merged.\n> However, I don't think that that situation is possible in SQL-spec-\n> compliant input, so it may not be a problem. Again, a counterexample\n> would be interesting.\n\nAs best I can tell, the case isn't reachable with spec-compliant input,\nbut it's easy to demonstrate an issue if you set intervalstyle to\nsql_standard and then put in Postgres-format input. Historically,\nyou got\n\nregression=# show intervalstyle;\n IntervalStyle \n---------------\n postgres\n(1 row)\n\nregression=# select '-23 hours 45 min 12.34 sec'::interval;\n interval \n--------------\n -22:14:47.66\n(1 row)\n\n(because by default the field signs are taken as independent)\n\nregression=# set intervalstyle = sql_standard ;\nSET\nregression=# select '-23 hours 45 min 12.34 sec'::interval;\n interval \n--------------\n -23:45:12.34\n(1 row)\n\nHowever, with this patch both cases produce \"-22:14:47.66\",\nbecause we already merged the differently-signed fields and\nDecodeInterval can't tease them apart again. Perhaps we could\nget away with changing this nonstandard corner case, but I'm\npretty uncomfortable with that thought --- I don't think\nrandom semantics changes are within the charter of this patch.\n\nI think the patch can be salvaged, though. 
I like the concept\nof converting all the sub-day fields to microseconds immediately,\nbecause it avoids a host of issues, so I don't want to give that up.\nWhat I'm going to look into is detecting the sign-adjustment-needed\ncase up front (which is easy enough, since it's looking at the\ninput data not the conversion results) and then forcing the\nindividual field values negative before we accumulate them into\nthe pg_itm_in struct.\n\nMeanwhile, the fact that we didn't detect this issue immediately\nshows that there's a gap in our regression tests. So the *first*\nthing I'm gonna do is push a patch to add test cases like what\nI showed above.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Apr 2022 13:20:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Fri, Apr 1, 2022 at 8:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > * The existing code for rounding had a lot of int to double\n> > casting and vice versa. I *think* that doubles are able to completely\n> > represent the range of ints. However doubles are not able to represent\n> > the full range of int64. After making the change I started noticing\n> > a lot of lossy behavior. One thought I had was to change the doubles\n> > to long doubles, but I wasn't able to figure out if long doubles could\n> > completely represent the range of int64. Especially since their size\n> > varies depending on the architecture. Does anyone know the answer to\n> > this?\n>\n> I agree that relying on long double is not a great plan. However,\n> I'm not seeing where there's a problem. AFAICS the revised code\n> only uses doubles to represent fractions from the input, ie if you\n> write \"123.456 hours\" then the \".456\" is carried around for awhile\n> as a float. 
This does not seem likely to pose any real-world\n> problem; do you have a counterexample?\n\nYeah, you're correct, I don't think there is any problem with just\nusing double. I don't exactly remember why I thought long double\nwas necessary in the revised code. I probably just confused\nmyself because it would have been necessary with the old\nrounding code, but not the revised code.\n\n> Anyway, I've spent today reviewing the code and cleaning up things\n> I didn't like, and attached is a v10.\n\nThanks so much for the review and updates!\n\n> I think the patch can be salvaged, though. I like the concept\n> of converting all the sub-day fields to microseconds immediately,\n> because it avoids a host of issues, so I don't want to give that up.\n> What I'm going to look into is detecting the sign-adjustment-needed\n> case up front (which is easy enough, since it's looking at the\n> input data not the conversion results) and then forcing the\n> individual field values negative before we accumulate them into\n> the pg_itm_in struct.\n\nThis sounds like a very reasonable and achievable approach\nto me.\n\n- Joe Koshakow\n\n\n", "msg_date": "Sat, 2 Apr 2022 13:29:32 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sat, Apr 2, 2022 at 1:29 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Fri, Apr 1, 2022 at 8:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Joseph Koshakow <koshy44@gmail.com> writes:\n> > > * The existing code for rounding had a lot of int to double\n> > > casting and vice versa. I *think* that doubles are able to completely\n> > > represent the range of ints. However doubles are not able to represent\n> > > the full range of int64. After making the change I started noticing\n> > > a lot of lossy behavior. 
One thought I had was to change the doubles\n> > > to long doubles, but I wasn't able to figure out if long doubles could\n> > > completely represent the range of int64. Especially since their size\n> > > varies depending on the architecture. Does anyone know the answer to\n> > > this?\n> >\n> > I agree that relying on long double is not a great plan. However,\n> > I'm not seeing where there's a problem. AFAICS the revised code\n> > only uses doubles to represent fractions from the input, ie if you\n> > write \"123.456 hours\" then the \".456\" is carried around for awhile\n> > as a float. This does not seem likely to pose any real-world\n> > problem; do you have a counterexample?\n>\n> Yeah, you're correct, I don't think there is any problem with just\n> using double. I don't exactly remember why I thought long double\n> was necessary in the revised code. I probably just confused\n> myself because it would have been necessary with the old\n> rounding code, but not the revised code.\n\nOk I actually remember now, the issue is with the rounding\ncode in AdjustFractMicroseconds.\n\n> frac *= scale;\n> usec = (int64) frac;\n>\n> /* Round off any fractional microsecond */\n> frac -= usec;\n> if (frac > 0.5)\n> usec++;\n> else if (frac < -0.5)\n> usec--;\n\nI believe it's possible for `frac -= usec;` to result in a value greater\nthan 1 or less than -1 due to the lossiness of int64 to double\nconversions. Then we'd incorrectly round in one direction. 
I don't\nhave a concrete counter example, but at worst we'd end up with a\nresult that's a couple of microseconds off, so it's probably not a huge\ndeal.\n\nIf I'm right about the above, and we care enough to fix it, then I think\nit can be fixed with the following:\n\n> frac *= scale;\n> usec = (int64) frac;\n>\n> /* Remove non fractional part from frac */\n> frac -= (double) usec;\n> /* Adjust for lossy conversion from int64 to double */\n> while (frac < 0 && frac < -1)\n> frac++;\n> while (frac > 0 && frac > 1)\n> frac--;\n>\n> /* Round off any fractional microsecond */\n> if (frac > 0.5)\n> usec++;\n> else if (frac < -0.5)\n> usec--;\n\n- Joe Koshakow\n\n\n", "msg_date": "Sat, 2 Apr 2022 14:22:58 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sat, Apr 2, 2022 at 2:22 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Sat, Apr 2, 2022 at 1:29 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n> >\n> > On Fri, Apr 1, 2022 at 8:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >\n> > > Joseph Koshakow <koshy44@gmail.com> writes:\n> > > > * The existing code for rounding had a lot of int to double\n> > > > casting and vice versa. I *think* that doubles are able to completely\n> > > > represent the range of ints. However doubles are not able to represent\n> > > > the full range of int64. After making the change I started noticing\n> > > > a lot of lossy behavior. One thought I had was to change the doubles\n> > > > to long doubles, but I wasn't able to figure out if long doubles could\n> > > > completely represent the range of int64. Especially since their size\n> > > > varies depending on the architecture. Does anyone know the answer to\n> > > > this?\n> > >\n> > > I agree that relying on long double is not a great plan. However,\n> > > I'm not seeing where there's a problem. 
AFAICS the revised code\n> > > only uses doubles to represent fractions from the input, ie if you\n> > > write \"123.456 hours\" then the \".456\" is carried around for awhile\n> > > as a float. This does not seem likely to pose any real-world\n> > > problem; do you have a counterexample?\n> >\n> > Yeah, you're correct, I don't think there is any problem with just\n> > using double. I don't exactly remember why I thought long double\n> > was necessary in the revised code. I probably just confused\n> > myself because it would have been necessary with the old\n> > rounding code, but not the revised code.\n>\n> Ok I actually remember now, the issue is with the rounding\n> code in AdjustFractMicroseconds.\n>\n> > frac *= scale;\n> > usec = (int64) frac;\n> >\n> > /* Round off any fractional microsecond */\n> > frac -= usec;\n> > if (frac > 0.5)\n> > usec++;\n> > else if (frac < -0.5)\n> > usec--;\n>\n> I believe it's possible for `frac -= usec;` to result in a value greater\n> than 1 or less than -1 due to the lossiness of int64 to double\n> conversions. Then we'd incorrectly round in one direction. 
I don't\n> have a concrete counter example, but at worst we'd end up with a\n> result that's a couple of microseconds off, so it's probably not a huge\n> deal.\n>\n> If I'm right about the above, and we care enough to fix it, then I think\n> it can be fixed with the following:\n>\n> > frac *= scale;\n> > usec = (int64) frac;\n> >\n> > /* Remove non fractional part from frac */\n> > frac -= (double) usec;\n> > /* Adjust for lossy conversion from int64 to double */\n> > while (frac < 0 && frac < -1)\n> > frac++;\n> > while (frac > 0 && frac > 1)\n> > frac--;\n> >\n> > /* Round off any fractional microsecond */\n> > if (frac > 0.5)\n> > usec++;\n> > else if (frac < -0.5)\n> > usec--;\n\n\nSorry, those should be inclusive comparisons\n> frac *= scale;\n> usec = (int64) frac;\n>\n> /* Remove non fractional part from frac */\n> frac -= (double) usec;\n> /* Adjust for lossy conversion from int64 to double */\n> while (frac < 0 && frac <= -1)\n> frac++;\n> while (frac > 0 && frac >= 1)\n> frac--;\n>\n> /* Round off any fractional microsecond */\n> if (frac > 0.5)\n> usec++;\n> else if (frac < -0.5)\n> usec--;\n\n\n", "msg_date": "Sat, 2 Apr 2022 14:32:31 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Fri, Apr 1, 2022 at 8:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I think the patch can be salvaged, though. 
I like the concept\n> of converting all the sub-day fields to microseconds immediately,\n> because it avoids a host of issues, so I don't want to give that up.\n> What I'm going to look into is detecting the sign-adjustment-needed\n> case up front (which is easy enough, since it's looking at the\n> input data not the conversion results) and then forcing the\n> individual field values negative before we accumulate them into\n> the pg_itm_in struct.\n\nI took a stab at this issue and the attached patch (which would be\napplied on top of your v10 patch) seems to fix the issue. Feel\nfree to ignore it if you're already working on a fix.\n\n- Joe", "msg_date": "Sat, 2 Apr 2022 14:55:13 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> Ok I actually remember now, the issue is with the rounding\n> code in AdjustFractMicroseconds.\n> ...\n> I believe it's possible for `frac -= usec;` to result in a value greater\n> than 1 or less than -1 due to the lossiness of int64 to double\n> conversions.\n\nI think it's not, at least not for the interesting range of possible\nvalues in this code. Given that abs(frac) < 1 to start with, the\nabs value of usec can't exceed the value of scale, which is at most \nUSECS_PER_DAY so it's at most 37 or so bits, which is well within\nthe exact range for any sane implementation of double. It would\ntake a very poor floating-point implementation to not get the right\nanswer here. (And we're largely assuming IEEE-compliant floats these\ndays.)\n\nAnyway, the other issue indeed turns out to be easy to fix.\nAttached is a v11 that deals with that. 
If the cfbot doesn't\ncomplain about it, I'll push this later today.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 02 Apr 2022 15:08:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> I took a stab at this issue and the attached patch (which would be\n> applied on top of your v10 patch) seems to fix the issue. Feel\n> free to ignore it if you're already working on a fix.\n\nYou really only need to flip val/fval in one place. More to the\npoint, there's also the hh:mm:ss paths to deal with; see my v11.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Apr 2022 15:10:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sat, Apr 2, 2022 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > Ok I actually remember now, the issue is with the rounding\n> > code in AdjustFractMicroseconds.\n> > ...\n> > I believe it's possible for `frac -= usec;` to result in a value greater\n> > than 1 or less than -1 due to the lossiness of int64 to double\n> > conversions.\n>\n> I think it's not, at least not for the interesting range of possible\n> values in this code. Given that abs(frac) < 1 to start with, the\n> abs value of usec can't exceed the value of scale, which is at most\n> USECS_PER_DAY so it's at most 37 or so bits, which is well within\n> the exact range for any sane implementation of double. It would\n> take a very poor floating-point implementation to not get the right\n> answer here. (And we're largely assuming IEEE-compliant floats these\n> days.)\n\nAh, I see. 
That makes sense to me.\n\nOn Sat, Apr 2, 2022 at 3:10 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > I took a stab at this issue and the attached patch (which would be\n> > applied on top of your v10 patch) seems to fix the issue. Feel\n> > free to ignore it if you're already working on a fix.\n>\n> You really only need to flip val/fval in one place. More to the\n> point, there's also the hh:mm:ss paths to deal with; see my v11.\n\nGood point. Thanks again for all the help!\n\n- Joe Koshakow\n\n\n", "msg_date": "Sat, 2 Apr 2022 15:51:38 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> On Sat, Apr 2, 2022 at 3:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I think it's not, at least not for the interesting range of possible\n>> values in this code. Given that abs(frac) < 1 to start with, the\n>> abs value of usec can't exceed the value of scale, which is at most\n>> USECS_PER_DAY so it's at most 37 or so bits, which is well within\n>> the exact range for any sane implementation of double. It would\n>> take a very poor floating-point implementation to not get the right\n>> answer here. (And we're largely assuming IEEE-compliant floats these\n>> days.)\n\n> Ah, I see. That makes sense to me.\n\nCool. I've pushed the patch.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 02 Apr 2022 16:14:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "I wrote:\n> Cool. I've pushed the patch.\n\nHmm ... 
buildfarm's not entirely happy [1][2][3]:\n\ndiff -U3 /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/expected/interval.out /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/results/interval.out\n--- /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/expected/interval.out\t2022-04-03 04:56:32.000000000 +0000\n+++ /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/results/interval.out\t2022-04-03 05:23:00.000000000 +0000\n@@ -1465,7 +1465,7 @@\n LINE 1: select interval 'PT2562047788.1:00:54.775807';\n ^\n select interval 'PT2562047788:01.:54.775807';\n- ERROR: interval field value out of range: \"PT2562047788:01.:54.775807\"\n+ ERROR: invalid input syntax for type interval: \"PT2562047788:01.:54.775807\"\n LINE 1: select interval 'PT2562047788:01.:54.775807';\n ^\n -- overflowing with fractional fields - SQL standard format\n\nWhat do you make of that? I'm betting that strtod() works a\nbit differently on those old platforms, but too tired to\nlook closer tonight.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2022-04-03%2004%3A56%3A34\n[2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2022-04-03%2000%3A51%3A50\n[3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2022-04-03%2000%3A32%3A10\n\n\n", "msg_date": "Sun, 03 Apr 2022 03:09:07 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sun, Apr 3, 2022 at 3:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Cool. I've pushed the patch.\n>\n> Hmm ... 
buildfarm's not entirely happy [1][2][3]:\n>\n> diff -U3 /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/expected/interval.out /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/results/interval.out\n> --- /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/expected/interval.out 2022-04-03 04:56:32.000000000 +0000\n> +++ /home/nm/farm/gcc64/HEAD/pgsql.build/src/test/regress/results/interval.out 2022-04-03 05:23:00.000000000 +0000\n> @@ -1465,7 +1465,7 @@\n> LINE 1: select interval 'PT2562047788.1:00:54.775807';\n> ^\n> select interval 'PT2562047788:01.:54.775807';\n> - ERROR: interval field value out of range: \"PT2562047788:01.:54.775807\"\n> + ERROR: invalid input syntax for type interval: \"PT2562047788:01.:54.775807\"\n> LINE 1: select interval 'PT2562047788:01.:54.775807';\n> ^\n> -- overflowing with fractional fields - SQL standard format\n>\n> What do you make of that? I'm betting that strtod() works a\n> bit differently on those old platforms, but too tired to\n> look closer tonight.\n>\n> regards, tom lane\n>\n> [1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sungazer&dt=2022-04-03%2004%3A56%3A34\n> [2] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hoverfly&dt=2022-04-03%2000%3A51%3A50\n> [3] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=anole&dt=2022-04-03%2000%3A32%3A10\n\nI think I know that the issue is. It's with `ParseISO8601Number` and\nthe minutes field \"1.\".\nPreviously that function parsed the entire field into a single double,\nso \"1.\" would\nbe parsed into 1.0. Now we try to parse the integer and decimal parts\nseparately. So\nwe first parse \"1\" into 1 and then fail to \".\" into anything because\nit's not a valid decimal.\n\nWhat's interesting is that I believe this syntax, \"1.\", always would\nhave failed for\nnon-ISO8601 Interval. 
It was only previously valid with ISO8601 intervals.\n\n- Joe Koshakow\n\n\n", "msg_date": "Sun, 3 Apr 2022 11:23:07 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> On Sun, Apr 3, 2022 at 3:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hmm ... buildfarm's not entirely happy [1][2][3]:\n\n> I think I know that the issue is. It's with `ParseISO8601Number` and\n> the minutes field \"1.\".\n> Previously that function parsed the entire field into a single double,\n> so \"1.\" would\n> be parsed into 1.0. Now we try to parse the integer and decimal parts\n> separately. So\n> we first parse \"1\" into 1 and then fail to \".\" into anything because\n> it's not a valid decimal.\n\nInteresting point, but then why doesn't it fail everywhere?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Apr 2022 11:44:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "I wrote:\n> Joseph Koshakow <koshy44@gmail.com> writes:\n>> I think I know that the issue is. It's with `ParseISO8601Number` and\n>> the minutes field \"1.\".\n>> Previously that function parsed the entire field into a single double,\n>> so \"1.\" would\n>> be parsed into 1.0. Now we try to parse the integer and decimal parts\n>> separately. So\n>> we first parse \"1\" into 1 and then fail to \".\" into anything because\n>> it's not a valid decimal.\n\n> Interesting point, but then why doesn't it fail everywhere?\n\nOh ... a bit of testing says that strtod() on an empty string\nsucceeds (returning zero) on Linux, but fails with EINVAL on\nAIX. 
The latter is a lot less surprising than the former,\nso we'd better cope.\n\n(Reading POSIX with an eagle eye, it looks like both behaviors\nare allowed per spec: this is why you have to check that endptr\nwas advanced to be sure everything is kosher.)\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Apr 2022 12:03:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sun, Apr 3, 2022 at 12:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Joseph Koshakow <koshy44@gmail.com> writes:\n> >> I think I know that the issue is. It's with `ParseISO8601Number` and\n> >> the minutes field \"1.\".\n> >> Previously that function parsed the entire field into a single double,\n> >> so \"1.\" would\n> >> be parsed into 1.0. Now we try to parse the integer and decimal parts\n> >> separately. So\n> >> we first parse \"1\" into 1 and then fail to \".\" into anything because\n> >> it's not a valid decimal.\n>\n> > Interesting point, but then why doesn't it fail everywhere?\n>\n> Oh ... a bit of testing says that strtod() on an empty string\n> succeeds (returning zero) on Linux, but fails with EINVAL on\n> AIX. The latter is a lot less surprising than the former,\n> so we'd better cope.\n>\n> (Reading POSIX with an eagle eye, it looks like both behaviors\n> are allowed per spec: this is why you have to check that endptr\n> was advanced to be sure everything is kosher.)\n>\n> regards, tom lane\n\nI'm not sure I follow exactly. Where would we pass an empty\nstring to strtod()? 
Wouldn't we be passing a string with a\nsingle character of '.'?\n\nEither way, from reading the man pages though it seems\nthat strtod() has the same behavior on any invalid input in\nLinux, return 0 and don't advance endptr.\n\nSo I think we need to check that endptr has moved both after\nthe call to strtoi64() and strtod().\n\n- Joe Koshakow\n\n\n", "msg_date": "Sun, 3 Apr 2022 12:22:12 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> On Sun, Apr 3, 2022 at 12:03 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oh ... a bit of testing says that strtod() on an empty string\n>> succeeds (returning zero) on Linux, but fails with EINVAL on\n>> AIX. The latter is a lot less surprising than the former,\n>> so we'd better cope.\n\n> I'm not sure I follow exactly. Where would we pass an empty\n> string to strtod()? Wouldn't we be passing a string with a\n> single character of '.'?\n\nOh, I was thinking that we passed \"cp + 1\" to strtod, but that\nwas just caffeine deprivation. You're right, what we are asking\nit to parse is \".\" not \"\". 
The result is the same though:\nper testing, AIX sets EINVAL and Linux doesn't.\n\n> So I think we need to check that endptr has moved both after\n> the call to strtoi64() and strtod().\n\nI'm not sure we need to do that explicitly, given that there's\na check later as to whether endptr is pointing at \\0; that will\nfail if endptr wasn't advanced.\n\nThe fix I was loosely envisioning was to check for cp[1] == '\\0'\nand not bother calling strtod() in that case.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 03 Apr 2022 12:30:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sun, Apr 3, 2022 at 12:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Joseph Koshakow <koshy44@gmail.com> writes:\n> > So I think we need to check that endptr has moved both after\n> > the call to strtoi64() and strtod().\n>\n> I'm not sure we need to do that explicitly, given that there's\n> a check later as to whether endptr is pointing at \\0; that will\n> fail if endptr wasn't advanced.\n>\n> The fix I was loosely envisioning was to check for cp[1] == '\\0'\n> and not bother calling strtod() in that case.\n\nAh, ok I see what you mean. I agree an approach like that should\nwork, but I don't actually think cp is null terminated in this case. 
The\nentire Interval is passed to DecodeISO8601Interval() as one big\nstring, so the specific number we're parsing may be somewhere\nin the middle.\n\nIf we just do the opposite and check isdigit(cp[1]) and only call\nstrtod() in that case I think it should work.\n\n- Joe Koshakow\n\n\n", "msg_date": "Sun, 3 Apr 2022 12:44:46 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sun, Apr 3, 2022 at 12:44 PM Joseph Koshakow <koshy44@gmail.com> wrote:\n>\n> On Sun, Apr 3, 2022 at 12:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > Joseph Koshakow <koshy44@gmail.com> writes:\n> > > So I think we need to check that endptr has moved both after\n> > > the call to strtoi64() and strtod().\n> >\n> > I'm not sure we need to do that explicitly, given that there's\n> > a check later as to whether endptr is pointing at \\0; that will\n> > fail if endptr wasn't advanced.\n> >\n> > The fix I was loosely envisioning was to check for cp[1] == '\\0'\n> > and not bother calling strtod() in that case.\n>\n> Ah, ok I see what you mean. I agree an approach like that should\n> work, but I don't actually think cp is null terminated in this case. The\n> entire Interval is passed to DecodeISO8601Interval() as one big\n> string, so the specific number we're parsing may be somewhere\n> in the middle.\n>\n> If we just do the opposite and check isdigit(cp[1]) and only call\n> strtod() in that case I think it should work.\n>\n> - Joe Koshakow\n\nHow does this patch look? I don't really have any way to test it on\nAIX.\n\n- Joe Koshakow", "msg_date": "Sun, 3 Apr 2022 13:00:48 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> How does this patch look? 
I don't really have any way to test it on\n> AIX.\n\nThat buildfarm machine is pretty slow, so I'm not in a hurry to test\nit manually either. However, now that we realize the issue is about\nwhether strtod(\".\") produces EINVAL or not, I think we need to fix\nall the places in datetime.c that are risking that. After a bit of\nhacking I have the attached. (I think that the call sites for\nstrtoint and its variants are not at risk of passing empty strings,\nso there's not need for concern there.)\n\nBTW, the way you had it coded would allow 'P.Y0M3DT4H5M6S', which\nI don't think we want to allow --- at least, that's rejected by v14\non my machine.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 03 Apr 2022 15:06:55 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in DecodeInterval" }, { "msg_contents": "On Sun, Apr 3, 2022 at 3:06 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That buildfarm machine is pretty slow, so I'm not in a hurry to test\n> it manually either. However, now that we realize the issue is about\n> whether strtod(\".\") produces EINVAL or not, I think we need to fix\n> all the places in datetime.c that are risking that. After a bit of\n> hacking I have the attached. (I think that the call sites for\n> strtoint and its variants are not at risk of passing empty strings,\n> so there's not need for concern there.)\n>\n> BTW, the way you had it coded would allow 'P.Y0M3DT4H5M6S', which\n> I don't think we want to allow --- at least, that's rejected by v14\n> on my machine.\n\n\nOh yeah, good catch. Your patch seems like it should\nfix all the issues. Thanks again for the help!\n\n- Joe Koshakow\n\n\n", "msg_date": "Sun, 3 Apr 2022 18:19:54 -0400", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in DecodeInterval" } ]
[ { "msg_contents": "With yesterday’s release of PostgreSQL 11.15, 12.10, and 13.6 \n(presumably 10.20 and 14.2 as well), Zulip’s test suite started failing \nwith “variable not found in subplan target list” errors from PostgreSQL \non a table that has a PGroonga index. I found the following \nreproduction recipe from a fresh database:\n\npsql (11.15 (Debian 11.15-1.pgdg100+1))\nType \"help\" for help.\n\ntest=# CREATE EXTENSION pgroonga;\nCREATE EXTENSION\ntest=# CREATE TABLE t AS SELECT CAST(c AS text) FROM generate_series(1,\n10000) AS c;\nSELECT 10000\ntest=# CREATE INDEX t_c ON t USING pgroonga (c);\nCREATE INDEX\ntest=# VACUUM t;\nVACUUM\ntest=# SELECT COUNT(*) FROM t;\nERROR: variable not found in subplan target list\n\nI filed https://github.com/pgroonga/pgroonga/issues/203, and confirmed \nwith a Git bisection of PostgreSQL that this error first appears with \nec36745217 (REL_11_15~42) “Fix index-only scan plans, take 2.” I’m \naware that this likely just exposed a previously hidden PGroonga bug, \nbut I figure PostgreSQL developers might want to know about this anyway \nand help come up with the right fix. The PGroonga author suggested I \nstart a thread here:\n\nhttps://github.com/pgroonga/pgroonga/issues/203#issuecomment-1036708841\n> Thanks for investigating this!\n> \n> Our CI is also broken with new PostgreSQL:\n> https://github.com/pgroonga/pgroonga/runs/5149762901?check_suite_focus=true\n> \n> https://www.postgresql.org/message-id/602391641208390%40iva4-92c901fae84c.qloud-c.yandex.net\n> says partial-retrieval index-only scan but our case is\n> non-retrievable index-only scan. In non-retrievable index-only scan,\n> the error is occurred.\n> \n> We asked about non-retrievable index-only scan on the PostgreSQL\n> mailing list in the past:\n> https://www.postgresql.org/message-id/5148.1584372043%40sss.pgh.pa.us\n> We thought non-retrievable index-only scan should not be used but\n> PostgreSQL may use it as a valid plan. 
So I think that our case\n> should be supported with postgres/postgres@ec36745 “Fix index-only > scan plans, take 2.”\n> \n> Could you ask about this case on the PostgreSQL mailing list\n> https://www.postgresql.org/list/pgsql-hackers/ ?\n> \n> The following patch fixes our case and PostgreSQL's test cases are\n> still passed but a review by the original author is needed:\n> \n> --- postgresql-11.15.orig/src/backend/optimizer/plan/setrefs.c\t2022-02-08 06:20:23.000000000 +0900\n\n> +++ postgresql-11.15/src/backend/optimizer/plan/setrefs.c\t2022-02-12 07:32:20.355567317 +0900\n\n> @@ -1034,7 +1034,7 @@\n\n> \t{\n\n> \t\tTargetEntry *indextle = (TargetEntry *) lfirst(lc);\n\n> \n\n> -\t\tif (!indextle->resjunk)\n\n> +\t\tif (!indextle->resjunk || indextle->expr->type == T_Var)\n\n> \t\t\tstripped_indextlist = lappend(stripped_indextlist, indextle);\n\n> \t}\n\n> \n\n> ```\n\n\nAnders\n\n\n", "msg_date": "Fri, 11 Feb 2022 15:41:01 -0800", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?PGroonga_index-only_scan_problem_with_yesterday=e2=80=99s?=\n =?UTF-8?Q?_PostgreSQL_updates?=" }, { "msg_contents": "Anders Kaseorg <andersk@mit.edu> writes:\n> With yesterday’s release of PostgreSQL 11.15, 12.10, and 13.6 \n> (presumably 10.20 and 14.2 as well), Zulip’s test suite started failing \n> with “variable not found in subplan target list” errors from PostgreSQL \n> on a table that has a PGroonga index.\n\nPossibly same issue I just fixed?\n\nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e5691cc9170bcd6c684715c2755d919c5a16fea2\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Fri, 11 Feb 2022 18:57:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re:\n =?UTF-8?Q?PGroonga_index-only_scan_problem_with_yesterday=e2=80=99s?=\n =?UTF-8?Q?_PostgreSQL_updates?=" }, { "msg_contents": "On 2/11/22 15:57, Tom Lane wrote:\n> Possibly same issue I just fixed?\n> \n> 
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e5691cc9170bcd6c684715c2755d919c5a16fea2\n\nYeah, that does seem to fix my test cases. Thanks!\n\nAny chance this will make it into minor releases sooner than three \nmonths from now? I know extra minor releases are unusual, but this \nregression will be critical at least for self-hosted Zulip users and \npresumably other PGroonga users.\n\nAnders\n\n\n", "msg_date": "Fri, 11 Feb 2022 16:51:53 -0800", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Re=3a_PGroonga_index-only_scan_problem_with_yesterday?=\n =?UTF-8?Q?=e2=80=99s_PostgreSQL_updates?=" }, { "msg_contents": "Anders Kaseorg <andersk@mit.edu> writes:\n> On 2/11/22 15:57, Tom Lane wrote:\n>> Possibly same issue I just fixed?\n>> https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=e5691cc9170bcd6c684715c2755d919c5a16fea2\n\n> Yeah, that does seem to fix my test cases. Thanks!\n\n> Any chance this will make it into minor releases sooner than three \n> months from now? I know extra minor releases are unusual, but this \n> regression will be critical at least for self-hosted Zulip users and \n> presumably other PGroonga users.\n\nI don't know that we'd go that far ... maybe if another bad problem\nturns up. In the meantime, though, I do have a modest suggestion:\nit would not be too hard to put some defenses in place against another\nsuch bug. The faulty commit was in our tree for a month and nobody\nreported a problem, which is annoying. Do you want to think about doing\nyour testing against git branch tips, rather than the last released\nversions? Making a new build every few days would probably be plenty\nfast enough.\n\nAn even slicker answer would be to set up a PG buildfarm machine\nthat, in addition to the basic tests, builds PGroonga against the\nnew PG sources and runs your tests. 
Andrew's machine \"crake\" does\nthat for RedisFDW and BlackholeFDW, and the source code for at least\nthe latter module is in the buildfarm client distribution.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Feb 2022 12:25:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: =?UTF-8?Q?Re=3a_PGroonga_index-only_scan_problem_with_yesterday?=\n =?UTF-8?Q?=e2=80=99s_PostgreSQL_updates?=" }, { "msg_contents": "On 2/12/22 09:25, Tom Lane wrote:\n> I don't know that we'd go that far ... maybe if another bad problem\n> turns up. In the meantime, though, I do have a modest suggestion:\n> it would not be too hard to put some defenses in place against another\n> such bug. The faulty commit was in our tree for a month and nobody\n> reported a problem, which is annoying. Do you want to think about doing\n> your testing against git branch tips, rather than the last released\n> versions? Making a new build every few days would probably be plenty\n> fast enough.\n> \n> An even slicker answer would be to set up a PG buildfarm machine\n> that, in addition to the basic tests, builds PGroonga against the\n> new PG sources and runs your tests. Andrew's machine \"crake\" does\n> that for RedisFDW and BlackholeFDW, and the source code for at least\n> the latter module is in the buildfarm client distribution.\n\nI’m a Zulip developer, not a PGroonga developer, but this sounds like a \ngood idea to me, so I opened a PR to add the PostgreSQL 15 snapshot \nbuild to PGroonga’s test matrix.\n\nhttps://github.com/pgroonga/pgroonga/pull/204\n\nThe current snapshot build in the PGDG APT repository is \n15~~devel~20220208.0530-1~541.gitba15f16, predating this fix, despite \nthe documentation that they should be built every 6 hours \n(https://wiki.postgresql.org/wiki/Apt/FAQ#Development_snapshots). 
But \nit seems the tests have other failures against this snapshot build that \nwill need to be investigated.\n\nAnders\n\n\n", "msg_date": "Sat, 12 Feb 2022 12:36:26 -0800", "msg_from": "Anders Kaseorg <andersk@mit.edu>", "msg_from_op": true, "msg_subject": "=?UTF-8?Q?Re=3a_PGroonga_index-only_scan_problem_with_yesterday?=\n =?UTF-8?Q?=e2=80=99s_PostgreSQL_updates?=" }, { "msg_contents": "\nOn 2/12/22 12:25, Tom Lane wrote:\n>\n> An even slicker answer would be to set up a PG buildfarm machine\n> that, in addition to the basic tests, builds PGroonga against the\n> new PG sources and runs your tests. Andrew's machine \"crake\" does\n> that for RedisFDW and BlackholeFDW, and the source code for at least\n> the latter module is in the buildfarm client distribution.\n\n\nIt occurred to me that this wasn't very well documented, so I created\nsome docco:\n<https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto#Testing_Additional_Software>\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Feb 2022 16:02:37 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "=?UTF-8?Q?Re=3a_PGroonga_index-only_scan_problem_with_yesterday?=\n =?UTF-8?Q?=e2=80=99s_PostgreSQL_updates?=" }, { "msg_contents": "Am Montag, dem 14.02.2022 um 16:02 -0500 schrieb Andrew Dunstan:\n> It occurred to me that this wasn't very well documented, so I created\n> some docco:\n> <\n> https://wiki.postgresql.org/wiki/PostgreSQL_Buildfarm_Howto#Testing_Ad\n> ditional_Software>\n\nThx Andrew, i wasn't aware of this, too! 
I'm going to use that for the\nInformix FDW, since it has got rotten over the time a little and i'd\nlike to get some better code testing for it.\n\n\tBernd\n\n\n\n", "msg_date": "Wed, 16 Feb 2022 15:00:05 +0100", "msg_from": "Bernd Helmle <mailings@oopsware.de>", "msg_from_op": false, "msg_subject": "Re: PGroonga index-only scan problem with\n =?UTF-8?Q?yesterday=E2=80=99s?= PostgreSQL updates" } ]
[ { "msg_contents": "Hi hackers.\n\nI'm attaching a patch that adds some new test cases for tab completion of psql.\n\nThis is my first patch that I'm sending here so let me know if I'm doing something wrong.\n\n--\nMatheus Alcantara", "msg_date": "Sat, 12 Feb 2022 20:17:29 +0000", "msg_from": "Matheus Alcantara <mths.dev@pm.me>", "msg_from_op": true, "msg_subject": "[PATCH] Add tests for psql tab completion" }, { "msg_contents": "Matheus Alcantara <mths.dev@pm.me> writes:\n> I'm attaching a patch that adds some new test cases for tab completion of psql.\n\nWhat exactly is the motivation for these particular tests?\n\nI believe that most of tab-complete.c is already covered, outside\nof the giant if-else chain at the heart of psql_completion().\nIt's hard to summon interest in trying to cover every branch of\nthat chain; but if we're to add coverage of just a few more,\nwhich ones and why? The ones you've chosen to hit in this\npatch don't seem especially interesting.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:01:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add tests for psql tab completion" }, { "msg_contents": "[ Please keep the mailing list cc'd ]\n\nMatheus Alcantara <mths.dev@pm.me> writes:\n> On Monday, February 14th, 2022 at 17:01, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What exactly is the motivation for these particular tests?\n\n> I was studying the source code and looking for projects that I could contribute so I decided\n> to start with tests, so I ran coverage and started with files that had little coverage, realized\n> that psql tab completion ones had little coverage so I decided to add some tests, I tried to\n> start with the simplest.\n\n> I understand that the patch may not be as much of a need, I just wanted to try and help with something.\n> Do you think there would be other tests that should be done? 
I would like to try to contribute.\n\nThere's certainly lots of places that could use more test coverage.\nBut I think that making a meaningful difference in tab-complete.c\nwould require writing test cases to hit most of the if-else branches,\nwhich doesn't seem very profitable either in terms of test-writing\neffort or in terms of the cycles that'd be spent on running those\ntests forevermore. We try to be thrifty about how much work is\ndone by check-world, because it's a real advantage for development\nthat that takes a small number of minutes and not hours. I'm not\nreally seeing that covering more of tab-complete would buy much.\n\nAs for areas that *do* need more coverage, the first one that\nI come across in looking through the coverage report is GIST\nindex build: gistbuild.c is only showing 45% coverage, and\ngistbuildbuffers.c a fat zero [1]. We've looked at that before [2]\nbut not made much progress on developing an adequately cheap test\ncase. Maybe you could pick up where that thread left off? Or if that\ndoesn't seem interesting to you, there's lots of other possibilities.\nI'd suggest getting some buy-in from this list on what to work on\nbefore you start, though.\n\n\t\t\tregards, tom lane\n\n[1] https://coverage.postgresql.org/src/backend/access/gist/index.html\n\n[2] https://www.postgresql.org/message-id/flat/10261.1588705157%40sss.pgh.pa.us#46b998e6585f0bf0fd7b75703b43decb\n\n\n", "msg_date": "Mon, 14 Feb 2022 16:19:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Add tests for psql tab completion" } ]
[ { "msg_contents": "Is there any check for warnings from new code, other than those buildfarm\nmembers with -Werror ?\n\nIt'd be better to avoid warnings, allowing members to use -Werror, rather than\nto allow/ignore warnings, which preclude that possibility. A circular problem.\n\nI checked for warnings on master during \"check\" phase by opening many browser\ntabs...\n\nI'm of the impression that some people have sql access to BF logs. It seems\nlike it'd be easy to make a list of known/old/false-positive warnings and\nignore warnings matching (member,file,regex) to detect new warnings.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=caiman&dt=2022-02-12%2006%3A00%3A30&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=moonjelly&dt=2022-02-12%2001%3A06%3A22&stg=make\nrawhide prerelease / gcc trunk\npg_basebackup.c:1261:35: warning: storing the address of local variable archive_filename in progress_filename [-Wdangling-pointer=]\n\t=> new in 23a1c6578 - looks like a real error\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=wrasse&dt=2022-02-12%2005%3A08%3A48&stg=make\nOracle Developer Studio\n\"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/libpq/auth.c\", line 104: warning: initialization type mismatch\n\"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/commands/copy.c\", line 206: warning: true part of ?: is unused and is nonconstant\n\"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/utils/fmgr/fmgr.c\", line 400: warning: assignment type mismatch:\n\"/export/home/nm/farm/studio64v12_6/HEAD/pgsql.build/../pgsql/src/backend/access/common/reloptions.c\", line 742: warning: argument #2 is incompatible with prototype:\n\t=> \"Size relopt_struct_size\" looks wrong since 911e7020 
(v13)\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=haddock&dt=2022-02-12%2000%3A23%3A17&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=hake&dt=2022-02-12%2003%3A00%3A35&stg=make\nllumos \"rolling release\"\npg_dump_sort.c:1232:3: warning: format not a string literal and no format arguments [-Wformat-security]\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=prairiedog&dt=2022-02-12%2009%3A52%3A07&stg=make\nOSX/PPC\nA lot of these, but some from new code (babbbb59)\npg_receivewal.c:288: warning: 'wal_compression_method' may be used uninitialized in this function\npg_receivewal.c:289: warning: 'ispartial' may be used uninitialized in this function\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=gharial&dt=2022-02-12%2012%3A36%3A42&stg=make\nHPUX\ncompress_io.c:532:4: warning: implicit declaration of function gzopen64 [-Wimplicit-function-declaration]\npg_backup_archiver.c:1521:4: warning: implicit declaration of function gzopen64 [-Wimplicit-function-declaration]\npg_backup_archiver.c:1521:11: warning: assignment makes pointer from integer without a cast [enabled by default]\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=anole&dt=2022-02-12%2005%3A46%3A44&stg=make\nHPUX\n\"heapam_handler.c\", line 2434: warning #2940-D: missing return statement at end of non-void function \"heapam_scan_sample_next_tuple\"\n\"regc_lex.c\", line 762: warning #2940-D: missing return statement at end of non-void function \"lexescape\"\n\"fmgr.c\", line 400: warning #2513-D: a value of type \"void *\" cannot be assigned to an entity of type \"PGFunction\"\n\"bbstreamer_gzip.c\", line 92: warning #2223-D: function \"gzopen64\" declared implicitly\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=tern&dt=2022-02-12%2006%3A48%3A14&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=sungazer&dt=2022-02-12%2009%3A26%3A58&stg=make\nppc\nauth.c:1876:6: warning: 
implicit declaration of function 'getpeereid'; did you mean 'getpgid'? [-Wimplicit-function-declaration]\nmissing #include ?\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=pollock&dt=2022-02-12%2004%3A35%3A53&stg=make\nGCC10/illumos\npg_shmem.c:326:33: warning: passing argument 1 of 'shmdt' from incompatible pointer type [-Wincompatible-pointer-types]\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=pogona&dt=2022-02-12%2004%3A10%3A03&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=desmoxytes&dt=2022-02-12%2018%3A24%3A03&stg=make\nLLVM: warning: void* memcpy(void*, const void*, size_t) writing to an object of type struct std::pair<void*, long unsigned int> with no trivial copy-assignment; use copy-assignment or copy-initialization instead [-Wclass-memaccess]\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=rorqual&dt=2022-02-12%2018%3A23%3A39&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=serinus&dt=2022-02-12%2005%3A00%3A08&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=calliphoridae&dt=2022-02-12%2004%3A07%3A11&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=culicidae&dt=2022-02-12%2004%3A07%3A13&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=piculet&dt=2022-02-12%2004%3A07%3A25&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=francolin&dt=2022-02-12%2004%3A07%3A44&stg=make\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=lorikeet&dt=2022-02-12%2002%3A11%3A43&stg=make\ndebian sid/gcc snapshot\nnumerous: variable ... 
might be clobbered by longjmp or vfork [-Wclobbered]\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_stage_log.pl?nm=jacana&dt=2022-02-12%2007%3A36%3A32&stg=make\na huge number of these: this statement may fall through [-Wimplicit-fallthrough=]\n\n-- \nJustin\n\n\n", "msg_date": "Sat, 12 Feb 2022 15:13:16 -0600", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": true, "msg_subject": "buildfarm warnings" }, { "msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> Is there any check for warnings from new code, other than those buildfarm\n> members with -Werror ?\n\nI periodically grep the buildfarm logs for interesting warnings.\nThere are a lot of uninteresting ones :-(, so I'm not sure how\nautomatable that'd be. I do use a prefiltering script, which\nright now says there are\n13917 total warnings from 41 buildfarm members\nof which it thinks all but 350 are of no interest. Most of those\n350 are of no interest either, but I didn't bother to make\nfilter rules for them yet ...\n\n> It'd be better to avoid warnings, allowing members to use -Werror, rather than\n> to allow/ignore warnings, which preclude that possibility.\n\nThe ones I classify as \"uninteresting\" are mostly from old/broken\ncompilers. I don't think it'd be profitable to try to convince\nprairiedog's 17-year-old compiler that we don't have uninitialized\nvariables, for instance.\n\n> I'm of the impression that some people have sql access to BF logs.\n\nYeah. I'm not sure exactly what the access policy is for that machine;\nI know we'll give out logins to committers, but not whether there's\nany precedent for non-committers.\n\n> pg_basebackup.c:1261:35: warning: storing the address of local variable archive_filename in progress_filename [-Wdangling-pointer=]\n> \t=> new in 23a1c6578 - looks like a real error\n\nI saw that one a few days ago but didn't get around to looking\nmore closely yet. 
It does look fishy, but it might be okay\ndepending on when the global variable can be accessed.\n\nAnother new one that maybe should be silenced is\n\n/mnt/resource/bf/build/skink-master/HEAD/pgsql.build/../pgsql/src/backend/access/heap/vacuumlazy.c:1129:41: warning: 'freespace' may be used uninitialized in this function [-Wmaybe-uninitialized]\n\nOnly skink and frogfish are showing that, though.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sat, 12 Feb 2022 16:42:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "Hi,\n\nOn 2022-02-12 16:42:03 -0500, Tom Lane wrote:\n> Another new one that maybe should be silenced is\n>\n> /mnt/resource/bf/build/skink-master/HEAD/pgsql.build/../pgsql/src/backend/access/heap/vacuumlazy.c:1129:41: warning: 'freespace' may be used uninitialized in this function [-Wmaybe-uninitialized]\n>\n> Only skink and frogfish are showing that, though.\n\nskink uses -O1 to reduce the overhead of valgrind a bit (iirc that shaved off\na couple of hours) without making backtraces completely unreadable. Which\nexplains why it shows different warnings than most other animals...\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 14:03:03 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> Is there any check for warnings from new code, other than those buildfarm\n>> members with -Werror ?\n\n> I periodically grep the buildfarm logs for interesting warnings.\n> There are a lot of uninteresting ones :-(, so I'm not sure how\n> automatable that'd be. 
I do use a prefiltering script, which\n> right now says there are\n> 13917 total warnings from 41 buildfarm members\n> of which it thinks all but 350 are of no interest.\n\nThis conversation motivated me to take a fresh pass over said filtering\nscript, and one thing I notice that it's been ignoring is these complaints\nfrom all the AIX animals:\n\nld: 0711-224 WARNING: Duplicate symbol: .xml_is_well_formed\nld: 0711-224 WARNING: Duplicate symbol: xml_is_well_formed\n\nwhile building contrib/xml2. Evidently this is because xml2 defines\na symbol also defined in the core. We cannot rename xml2's function\nwithout a user-visible ABI break, but it would be pretty painless\nto rename the core function (at the C level, not SQL level).\nSo I propose we do that. I think that when I put in that pattern,\nthere were more problems, but this seems to be all that's left.\n\nAnother thing I've been ignoring on the grounds that I-don't-feel-\nlike-fixing-this is that the Solaris and OpenIndiana machines whine\nabout every single reference from loadable modules to the core code,\neg this from haddock while building contrib/fuzzystrmatch:\n\nUndefined\t\t\tfirst referenced\n symbol \t\t\t in file\ncstring_to_text dmetaphone.o\nvarstr_levenshtein fuzzystrmatch.o\ntext_to_cstring dmetaphone.o\nerrstart_cold fuzzystrmatch.o\nerrmsg fuzzystrmatch.o\npalloc dmetaphone.o\nExceptionalCondition fuzzystrmatch.o\nerrfinish fuzzystrmatch.o\nvarstr_levenshtein_less_equal fuzzystrmatch.o\nrepalloc dmetaphone.o\nerrcode fuzzystrmatch.o\nerrmsg_internal fuzzystrmatch.o\npg_detoast_datum_packed dmetaphone.o\nld: warning: symbol referencing errors\n\nPresumably, this means that the Solaris linker would really like\nto see the executable the module will load against, analogously to\nthe BE_DLLLIBS settings we use on some other platforms such as macOS.\nIt evidently still works without that, but we might be losing\nperformance from an extra indirection or the like. 
And in any case,\nthis amount of noise would be very distracting if you wanted to do\nactual development on that platform. I'm not especially interested\nin tracking down the correct incantation, but I'd gladly push a patch\nif someone submits one.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Feb 2022 12:10:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "I wrote:\n> This conversation motivated me to take a fresh pass over said filtering\n> script, and one thing I notice that it's been ignoring is these complaints\n> from all the AIX animals:\n\n> ld: 0711-224 WARNING: Duplicate symbol: .xml_is_well_formed\n> ld: 0711-224 WARNING: Duplicate symbol: xml_is_well_formed\n\n> while building contrib/xml2. Evidently this is because xml2 defines\n> a symbol also defined in the core. We cannot rename xml2's function\n> without a user-visible ABI break, but it would be pretty painless\n> to rename the core function (at the C level, not SQL level).\n> So I propose we do that.\n\nI tried to do that, but it blew up in my face, because it turns out that\nactually contrib/xml2's extension script has a by-C-name reference to\nthe *core* function:\n\n-- deprecated old name for xml_is_well_formed\nCREATE FUNCTION xml_valid(text) RETURNS bool\nAS 'xml_is_well_formed'\nLANGUAGE INTERNAL STRICT STABLE PARALLEL SAFE;\n\nThe situation with the instance in xml2 is explained by its comment:\n\n * Note: this has been superseded by a core function. We still have to\n * have it in the contrib module so that existing SQL-level references\n * to the function won't fail; but in normal usage with up-to-date SQL\n * definitions for the contrib module, this won't be called.\n\nSo we're faced with a dilemma: we can't rename either instance without\nbreaking something. The only way to get rid of the warnings seems to be\nto decide that the copy in xml2 has outlived its usefulness and remove\nit. 
I don't think that's a hard argument to make: that version has been\nobsolete since 9.1 (a0b7b717a), meaning that only pre-extensions versions\nof xml2 could need it. In fact, we pulled the trigger on it once before,\nin 2016 (20540710e), and then changed our minds not because anyone\nlobbied to put it back but just because we gave up on the PGDLLEXPORT\nrearrangement that prompted the change then.\n\nI think that getting rid of these build warnings is sufficient reason\nto drop this ancient compatibility function, and now propose that\nwe do that.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Feb 2022 13:59:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "xml_is_well_formed (was Re: buildfarm warnings)" }, { "msg_contents": "\nOn 2/12/22 16:42, Tom Lane wrote:\n>> I'm of the impression that some people have sql access to BF logs.\n> Yeah. I'm not sure exactly what the access policy is for that machine;\n> I know we'll give out logins to committers, but not whether there's\n> any precedent for non-committers.\n\n\nYeah, I don't think it's wider than that. The sysadmins own these\ninstances, so the policy is really up to them.\n\n\ncheers\n\n\nandrew\n\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 13 Feb 2022 18:31:57 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "\nOn 2/13/22 13:59, Tom Lane wrote:\n> So we're faced with a dilemma: we can't rename either instance without\n> breaking something. The only way to get rid of the warnings seems to be\n> to decide that the copy in xml2 has outlived its usefulness and remove\n> it. I don't think that's a hard argument to make: that version has been\n> obsolete since 9.1 (a0b7b717a), meaning that only pre-extensions versions\n> of xml2 could need it. 
In fact, we pulled the trigger on it once before,\n> in 2016 (20540710e), and then changed our minds not because anyone\n> lobbied to put it back but just because we gave up on the PGDLLEXPORT\n> rearrangement that prompted the change then.\n>\n> I think that getting rid of these build warnings is sufficient reason\n> to drop this ancient compatibility function, and now propose that\n> we do that.\n>\n> \t\t\t\n\n\n+1\n\n\ncheers\n\n\nandrew\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Sun, 13 Feb 2022 18:34:00 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: xml_is_well_formed (was Re: buildfarm warnings)" }, { "msg_contents": "On Mon, Feb 14, 2022 at 6:10 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Another thing I've been ignoring on the grounds that I-don't-feel-\n> like-fixing-this is that the Solaris and OpenIndiana machines whine\n> about every single reference from loadable modules to the core code,\n> eg this from haddock while building contrib/fuzzystrmatch:\n>\n> Undefined first referenced\n> symbol in file\n> cstring_to_text dmetaphone.o\n> varstr_levenshtein fuzzystrmatch.o\n> text_to_cstring dmetaphone.o\n> errstart_cold fuzzystrmatch.o\n> errmsg fuzzystrmatch.o\n> palloc dmetaphone.o\n> ExceptionalCondition fuzzystrmatch.o\n> errfinish fuzzystrmatch.o\n> varstr_levenshtein_less_equal fuzzystrmatch.o\n> repalloc dmetaphone.o\n> errcode fuzzystrmatch.o\n> errmsg_internal fuzzystrmatch.o\n> pg_detoast_datum_packed dmetaphone.o\n> ld: warning: symbol referencing errors\n>\n> Presumably, this means that the Solaris linker would really like\n> to see the executable the module will load against, analogously to\n> the BE_DLLLIBS settings we use on some other platforms such as macOS.\n> It evidently still works without that, but we might be losing\n> performance from an extra indirection or the like. 
And in any case,\n> this amount of noise would be very distracting if you wanted to do\n> actual development on that platform. I'm not especially interested\n> in tracking down the correct incantation, but I'd gladly push a patch\n> if someone submits one.\n\nI took a quick look at this on an OpenIndiana vagrant image, using the\nGNU compiler and [Open]Solaris's linker. It seems you can't use -l\nfor this (it only likes files called libX.Y) but the manual[1] says\nthat \"shared objects that reference symbols from an application can\nuse the -z defs option, together with defining the symbols by using an\nextern mapfile directive\", so someone might need to figure out how to\nbuild and feed in such a file...\n\n[1] https://docs.oracle.com/cd/E26502_01/html/E26507/chapter2-90421.html\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:29:08 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "\nOn 2/12/22 16:13, Justin Pryzby wrote:\n> Is there any check for warnings from new code, other than those buildfarm\n> members with -Werror ?\n\n\nI had forgotten about this :-) but a few years ago I provided a\ncheck_warnings setting (and a --check-warnings command line parameter).\nIt's processed if set in the main build steps (\"make\", \"make contrib\"\nand \"make_test_modules\") as well as some modules that check external\nsoftware. 
But it hasn't been widely advertised so it's probably in\nhardly any use, if at all.\n\n\ncheers\n\n\nandrew\n\n\n--\nAndrew Dunstan\nEDB: https://www.enterprisedb.com\n\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:11:52 -0500", "msg_from": "Andrew Dunstan <andrew@dunslane.net>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "I wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n>> pg_basebackup.c:1261:35: warning: storing the address of local variable archive_filename in progress_filename [-Wdangling-pointer=]\n>> => new in 23a1c6578 - looks like a real error\n\n> I saw that one a few days ago but didn't get around to looking\n> more closely yet. It does look fishy, but it might be okay\n> depending on when the global variable can be accessed.\n\nI got around to looking at this, and it absolutely is a bug.\nThe test scripts don't reach it, because they don't exercise -P\nmode, let alone -P -v which is what you need to get progress_report()\nto try to print the filename. However, if you modify the test\nto do so, then you find that the output differs depending on\nwhether or not you add \"progress_filename = NULL;\" at the bottom\nof CreateBackupStreamer(). So the variable is continuing to be\nreferenced, not only after it goes out of scope within that\nfunction, but even after the function returns entirely.\n(Interestingly, the output isn't obvious garbage, at least not\non my machine; so somehow the array's part of the stack is not\ngetting overwritten very soon.)\n\nA quick-and-dirty fix is\n\n-\t\tprogress_filename = archive_filename;\n+\t\tprogress_filename = pg_strdup(archive_filename);\n\nbut perhaps this needs more thought. 
How long is that filename\nactually reasonable to show in the progress reports?\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Feb 2022 12:08:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "On Thu, Feb 17, 2022 at 12:08 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Justin Pryzby <pryzby@telsasoft.com> writes:\n> >> pg_basebackup.c:1261:35: warning: storing the address of local variable archive_filename in progress_filename [-Wdangling-pointer=]\n> >> => new in 23a1c6578 - looks like a real error\n>\n> > I saw that one a few days ago but didn't get around to looking\n> > more closely yet. It does look fishy, but it might be okay\n> > depending on when the global variable can be accessed.\n>\n> I got around to looking at this, and it absolutely is a bug.\n> The test scripts don't reach it, because they don't exercise -P\n> mode, let alone -P -v which is what you need to get progress_report()\n> to try to print the filename. However, if you modify the test\n> to do so, then you find that the output differs depending on\n> whether or not you add \"progress_filename = NULL;\" at the bottom\n> of CreateBackupStreamer(). So the variable is continuing to be\n> referenced, not only after it goes out of scope within that\n> function, but even after the function returns entirely.\n> (Interestingly, the output isn't obvious garbage, at least not\n> on my machine; so somehow the array's part of the stack is not\n> getting overwritten very soon.)\n>\n> A quick-and-dirty fix is\n>\n> - progress_filename = archive_filename;\n> + progress_filename = pg_strdup(archive_filename);\n>\n> but perhaps this needs more thought. How long is that filename\n> actually reasonable to show in the progress reports?\n\nMan, you couldn't have asked a question that made me any happier.\nPreserving that behavior was one of the most annoying parts of this\nwhole refactoring. 
The server always sends a series of tar files, but\nthe pre-existing behavior is that progress reports show the archive\nname with -Ft and the name of the archive member with -Fp. I think\nthat's sort of useful, but it's certainly annoying from an\nimplementation perspective because we only know the name of the\narchive member if we're extracting the tarfile, possibly after\ndecompressing it, whereas the name of the archive itself is easily\nknown. This results in annoying gymnastics to try to update the global\nvariable that we use to store this information from very different\nplaces depending on what the user requested, and evidently I messed\nthat up, and also, why in the world does this code think that \"more\nglobal variables\" is the right answer to every problem?\n\nI'm not totally convinced that it's OK to just hit this progress\nreporting stuff with a hammer and remove some functionality in the\ninterest of simplifying the code. But I will shed no tears if that's\nthe direction other people would like to go.\n\nIf not, I think that your quick-and-dirty fix is about right, except\nthat we probably need to do it every place where we set\nprogress_filename, and we should arrange to free the memory later.\nWith -Ft, you won't have enough archives to matter, but with -Fp, you\nmight have enough archive members to matter.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 12:58:02 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> If not, I think that your quick-and-dirty fix is about right, except\n> that we probably need to do it every place where we set\n> progress_filename, and we should arrange to free the memory later.\n> With -Ft, you won't have enough archives to matter, but with -Fp, you\n> might have enough archive members to matter.\n\nYeah, I came to the same conclusion 
while out doing some errands.\nThere's no very convincing reason to believe that what's passed to\nprogress_update_filename has got adequate lifespan either, or that\nthat would remain true even if it's true today, or that testing\nwould detect such a problem (even if we had any, ahem). The file\nnames I was seeing in testing just now tended to contain fragments\nof temporary directory names, which'd be mighty hard to tell from\nrandom garbage in any automated way:\n\n 3/22774 kB (0%), 0/2 tablespaces (...pc1/PG_15_202202141/5/16392_init)\n 3/22774 kB (0%), 1/2 tablespaces (...pc1/PG_15_202202141/5/16392_init)\n22785/22785 kB (100%), 1/2 tablespaces (...t_T8u0/backup1/global/pg_control)\n\nSo I think we should make progress_update_filename strdup each\ninput while freeing the last value, and insist that every update\ngo through that logic. (This is kind of a lot of cycles to spend\nin support of an optional mode, but maybe we could do it only if\nshowprogress && verbose.) I'll go make it so.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Thu, 17 Feb 2022 14:36:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "On Thu, Feb 17, 2022 at 2:36 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, I came to the same conclusion while out doing some errands.\n> There's no very convincing reason to believe that what's passed to\n> progress_update_filename has got adequate lifespan either, or that\n> that would remain true even if it's true today, or that testing\n> would detect such a problem (even if we had any, ahem). 
The file\n> names I was seeing in testing just now tended to contain fragments\n> of temporary directory names, which'd be mighty hard to tell from\n> random garbage in any automated way:\n>\n> 3/22774 kB (0%), 0/2 tablespaces (...pc1/PG_15_202202141/5/16392_init)\n> 3/22774 kB (0%), 1/2 tablespaces (...pc1/PG_15_202202141/5/16392_init)\n> 22785/22785 kB (100%), 1/2 tablespaces (...t_T8u0/backup1/global/pg_control)\n>\n> So I think we should make progress_update_filename strdup each\n> input while freeing the last value, and insist that every update\n> go through that logic. (This is kind of a lot of cycles to spend\n> in support of an optional mode, but maybe we could do it only if\n> showprogress && verbose.) I'll go make it so.\n\nOK, sounds good, thanks. I couldn't (and still can't) think of a good\nway of testing the progress-reporting code either. I mean I guess if\nyou could convince pg_basebackup not to truncate the filenames, maybe\nby convincing it that your terminal is as wide as your garage door,\nthen you could capture the output and do some tests against it. But I\nfeel like the test code would be two orders of magnitude larger than\nthe code it intends to exercise, and I'm not sure it would be entirely\nrobust, either.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 15:22:08 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "On 2022-02-17 15:22:08 -0500, Robert Haas wrote:\n> OK, sounds good, thanks. I couldn't (and still can't) think of a good\n> way of testing the progress-reporting code either. I mean I guess if\n> you could convince pg_basebackup not to truncate the filenames, maybe\n> by convincing it that your terminal is as wide as your garage door,\n> then you could capture the output and do some tests against it. 
But I\n> feel like the test code would be two orders of magnitude larger than\n> the code it intends to exercise, and I'm not sure it would be entirely\n> robust, either.\n\nHow about just running pg_basebackup with --progress in one or two of the\ntests? Of course that's not testing very much, but at least it verifies not\ncrashing...\n\n\n", "msg_date": "Thu, 17 Feb 2022 12:51:41 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "On Thu, Feb 17, 2022 at 3:51 PM Andres Freund <andres@anarazel.de> wrote:\n> On 2022-02-17 15:22:08 -0500, Robert Haas wrote:\n> > OK, sounds good, thanks. I couldn't (and still can't) think of a good\n> > way of testing the progress-reporting code either. I mean I guess if\n> > you could convince pg_basebackup not to truncate the filenames, maybe\n> > by convincing it that your terminal is as wide as your garage door,\n> > then you could capture the output and do some tests against it. But I\n> > feel like the test code would be two orders of magnitude larger than\n> > the code it intends to exercise, and I'm not sure it would be entirely\n> > robust, either.\n>\n> How about just running pg_basebackup with --progress in one or two of the\n> tests? Of course that's not testing very much, but at least it verifies not\n> crashing...\n\nTrue. That would be easy enough.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Thu, 17 Feb 2022 15:57:14 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" }, { "msg_contents": "On Thu, Feb 17, 2022 at 3:57 PM Robert Haas <robertmhaas@gmail.com> wrote:\n> True. That would be easy enough.\n\nI played around with this a bit, and of course it is easy enough to\nadd --progress with or without --verbose to a few tests, but when I\nreverted 62cb7427d1e491faf8612a82c2e3711a8cd65422 it didn't make any\ntests fail. 
So then I tried sticking --progress --verbose into\n@pg_basebackup_defs so all the tests would run with it, and that did\nmake some tests fail, but the same ones fail with and without the\npatch. So it doesn't seem like we would have caught this particular\nproblem via this testing method no matter what we did in detail.\n\nIf we just want to splatter a few --progress switches around for the\nheck of it, we could do something like the attached. But I don't know\nif this is best: it seems highly arguable what to do in detail (and\nalso not worth arguing about, so if someone else feels motivated to do\nsomething different than this, or the same as this, fine by me).\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com", "msg_date": "Fri, 18 Feb 2022 14:56:13 -0500", "msg_from": "Robert Haas <robertmhaas@gmail.com>", "msg_from_op": false, "msg_subject": "Re: buildfarm warnings" } ]
[ { "msg_contents": "Hi,\n\nI don't know what happened here, but I my little script that scrapes\nfor build farm assertion failures hasn't seen this one before now.\nIt's on a 15 year old macOS release and compiled with 15 year old GCC\nversion, for some added mystery.\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-12%2022%3A07%3A21\n\n2022-02-13 00:09:52.859 CET [92423:92] pg_regress/replorigin\nSTATEMENT: SELECT data FROM\npg_logical_slot_get_changes('regression_slot', NULL, NULL,\n'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences',\n'0');\nTRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n\"reorderbuffer.c\", Line: 1173, PID: 92423)\n0 postgres 0x00592dd0 ExceptionalCondition + 160\\\\0\n1 postgres 0x00391c40\nReorderBufferGetRelids + 304\\\\0\n2 postgres 0x00394794\nReorderBufferAllocate + 1044\\\\0\n3 postgres 0x003834dc heap_decode + 76\\\\0\n4 postgres 0x003829c8\nLogicalDecodingProcessRecord + 168\\\\0\n5 postgres 0x0038a1d4\npg_logical_emit_message_text + 1876\\\\0\n6 postgres 0x00241810\nExecMakeTableFunctionResult + 576\\\\0\n7 postgres 0x002576c4\nExecInitFunctionScan + 1444\\\\0\n8 postgres 0x00242140 ExecScan + 896\\\\0\n9 postgres 0x002397cc standard_ExecutorRun + 540\\\\0\n10 postgres 0x00427940\nEnsurePortalSnapshotExists + 832\\\\0\n11 postgres 0x00428e84 PortalRun + 644\\\\0\n12 postgres 0x004252e0 pg_plan_queries + 3232\\\\0\n13 postgres 0x00426270 PostgresMain + 3424\\\\0\n14 postgres 0x0036b9ec PostmasterMain + 8060\\\\0\n15 postgres 0x0029f24c main + 1356\\\\0\n16 postgres 0x00002524 start + 68\\\\0\n\n\n", "msg_date": "Sun, 13 Feb 2022 12:57:28 +1300", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": true, "msg_subject": "FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n \"reorderbuffer.c\", Line: 1173, ..." 
}, { "msg_contents": "Hi, \n\nOn February 12, 2022 3:57:28 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n>Hi,\n>\n>I don't know what happened here, but I my little script that scrapes\n>for build farm assertion failures hasn't seen this one before now.\n>It's on a 15 year old macOS release and compiled with 15 year old GCC\n>version, for some added mystery.\n>\n>https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-12%2022%3A07%3A21\n>\n>2022-02-13 00:09:52.859 CET [92423:92] pg_regress/replorigin\n>STATEMENT: SELECT data FROM\n>pg_logical_slot_get_changes('regression_slot', NULL, NULL,\n>'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences',\n>'0');\n>TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n>\"reorderbuffer.c\", Line: 1173, PID: 92423)\n>0 postgres 0x00592dd0 ExceptionalCondition + 160\\\\0\n>1 postgres 0x00391c40\n>ReorderBufferGetRelids + 304\\\\0\n>2 postgres 0x00394794\n>ReorderBufferAllocate + 1044\\\\0\n>3 postgres 0x003834dc heap_decode + 76\\\\0\n>4 postgres 0x003829c8\n>LogicalDecodingProcessRecord + 168\\\\0\n>5 postgres 0x0038a1d4\n>pg_logical_emit_message_text + 1876\\\\0\n>6 postgres 0x00241810\n>ExecMakeTableFunctionResult + 576\\\\0\n>7 postgres 0x002576c4\n>ExecInitFunctionScan + 1444\\\\0\n>8 postgres 0x00242140 ExecScan + 896\\\\0\n>9 postgres 0x002397cc standard_ExecutorRun + 540\\\\0\n>10 postgres 0x00427940\n>EnsurePortalSnapshotExists + 832\\\\0\n>11 postgres 0x00428e84 PortalRun + 644\\\\0\n>12 postgres 0x004252e0 pg_plan_queries + 3232\\\\0\n>13 postgres 0x00426270 PostgresMain + 3424\\\\0\n>14 postgres 0x0036b9ec PostmasterMain + 8060\\\\0\n>15 postgres 0x0029f24c main + 1356\\\\0\n>16 postgres 0x00002524 start + 68\\\\0\n\nI think that may be the bug that Tomas fixed a short while ago. He said the skip empty xacts stuff didn't yet work for sequences. That's only likely to be hit in slower machines...\n\nAndres\n-- \nSent from my Android device with K-9 Mail. 
Please excuse my brevity.\n\n\n", "msg_date": "Sat, 12 Feb 2022 16:28:42 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n \"reorderbuffer.c\", Line: 1173, ..." }, { "msg_contents": "\n\nOn 2/13/22 01:28, Andres Freund wrote:\n> Hi, \n> \n> On February 12, 2022 3:57:28 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n>> Hi,\n>>\n>> I don't know what happened here, but I my little script that scrapes\n>> for build farm assertion failures hasn't seen this one before now.\n>> It's on a 15 year old macOS release and compiled with 15 year old GCC\n>> version, for some added mystery.\n>>\n>> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=locust&dt=2022-02-12%2022%3A07%3A21\n>>\n>> 2022-02-13 00:09:52.859 CET [92423:92] pg_regress/replorigin\n>> STATEMENT: SELECT data FROM\n>> pg_logical_slot_get_changes('regression_slot', NULL, NULL,\n>> 'include-xids', '0', 'skip-empty-xacts', '1', 'include-sequences',\n>> '0');\n>> TRAP: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n>> \"reorderbuffer.c\", Line: 1173, PID: 92423)\n>> 0 postgres 0x00592dd0 ExceptionalCondition + 160\\\\0\n>> 1 postgres 0x00391c40\n>> ReorderBufferGetRelids + 304\\\\0\n>> 2 postgres 0x00394794\n>> ReorderBufferAllocate + 1044\\\\0\n>> 3 postgres 0x003834dc heap_decode + 76\\\\0\n>> 4 postgres 0x003829c8\n>> LogicalDecodingProcessRecord + 168\\\\0\n>> 5 postgres 0x0038a1d4\n>> pg_logical_emit_message_text + 1876\\\\0\n>> 6 postgres 0x00241810\n>> ExecMakeTableFunctionResult + 576\\\\0\n>> 7 postgres 0x002576c4\n>> ExecInitFunctionScan + 1444\\\\0\n>> 8 postgres 0x00242140 ExecScan + 896\\\\0\n>> 9 postgres 0x002397cc standard_ExecutorRun + 540\\\\0\n>> 10 postgres 0x00427940\n>> EnsurePortalSnapshotExists + 832\\\\0\n>> 11 postgres 0x00428e84 PortalRun + 
644\\\\0\n>> 12 postgres 0x004252e0 pg_plan_queries + 3232\\\\0\n>> 13 postgres 0x00426270 PostgresMain + 3424\\\\0\n>> 14 postgres 0x0036b9ec PostmasterMain + 8060\\\\0\n>> 15 postgres 0x0029f24c main + 1356\\\\0\n>> 16 postgres 0x00002524 start + 68\\\\0\n> \n> I think that may be the bug that Tomas fixed a short while ago. He\n> said the skip empty xacts stuff didn't yet work for sequences. That's\n> only likely to be hit in slower machines...\n> \n\nI'm not sure how could this be related to the issue fixed by b779d7d8fd.\nThat's merely about handling of empty xacts in the output plugin, it has\nlittle (nothing) to do with reorderbuffer - the failures was simply\nabout (not) printing BEGIN/COMMIT for empty xacts.\n\nSo why would it be triggering an Assert in reorderbuffer? Also, there\nwas a successful run with the test_decoding changes 80901b3291 (and also\nwith sequence decoding infrastructure committed a couple days ago) ...\n\nDo we have access to the machine? If yes, it'd be interesting to run the\ntests before the fix, and see how often it fails. Also, printing the LSN\nwould be interesting, so that we can correlate it to WAL.\n\nBTW it's probably just a coincidence, but the cfbot does show a strange\nfailure on the macOS machine [1], as if we did not wait for all changes\nto be applied by replica. I thought it's because of the issue with\ntracking LSN for sequences [2], but this is with a reworked test that\ndoes not do just nextval(). So I'm wondering if these two things may be\nrelated ... 
both things happened on macOS only.\n\n\n[1] https://cirrus-ci.com/task/6299472241098752\n\n[2]\nhttps://www.postgresql.org/message-id/712cad46-a9c8-1389-aef8-faf0203c9be9%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 13 Feb 2022 13:27:08 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n \"reorderbuffer.c\", Line: 1173, ..." }, { "msg_contents": "On 2/13/22 13:27, Tomas Vondra wrote:\n> \n> ...\n>\n> BTW it's probably just a coincidence, but the cfbot does show a strange\n> failure on the macOS machine [1], as if we did not wait for all changes\n> to be applied by replica. I thought it's because of the issue with\n> tracking LSN for sequences [2], but this is with a reworked test that\n> does not do just nextval(). So I'm wondering if these two things may be\n> related ... both things happened on macOS only.\n>\n\nI take this back - this failure was just a thinko in the ROLLBACK test,\nI've posted an updated patch fixing this.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 13 Feb 2022 14:11:36 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n \"reorderbuffer.c\", Line: 1173, ..." }, { "msg_contents": "Hi,\n\nOn 2022-02-13 13:27:08 +0100, Tomas Vondra wrote:\n> I'm not sure how could this be related to the issue fixed by b779d7d8fd.\n> That's merely about handling of empty xacts in the output plugin, it has\n> little (nothing) to do with reorderbuffer - the failures was simply\n> about (not) printing BEGIN/COMMIT for empty xacts.\n\nI'd only read your commit message - which didn't go into that much detail. 
I\njust saw that the run didn't yet include that commit and that the failed\ncommand specified skip-empty-xacts 1, which your commit described as fixing...\n\n\n> So why would it be triggering an Assert in reorderbuffer? Also, there\n> was a successful run with the test_decoding changes 80901b3291 (and also\n> with sequence decoding infrastructure committed a couple days ago) ...\n\nIt's clearly a bug only happen at a lower likelihood...\n\n\n> Do we have access to the machine? If yes, it'd be interesting to run the\n> tests before the fix, and see how often it fails. Also, printing the LSN\n> would be interesting, so that we can correlate it to WAL.\n\nWe really should provide assert infrastructure that can do comparisons while\nalso printing values... If C just had proper generics :(.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 13 Feb 2022 11:24:13 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: FailedAssertion(\"prev_first_lsn < cur_txn->first_lsn\", File:\n \"reorderbuffer.c\", Line: 1173, ..." } ]
[ { "msg_contents": "Hi,\n\nThis thread started at https://www.postgresql.org/message-id/20220213021746.GM31460%40telsasoft.com\nbut is mostly independent, so I split the thread off\n\nOn 2022-02-12 20:17:46 -0600, Justin Pryzby wrote:\n> On Sat, Feb 12, 2022 at 06:00:44PM -0800, Andres Freund wrote:\n> > I bet using COW file copies would speed up our own regression tests noticeably\n> > - on slower systems we spend a fair bit of time and space creating template0\n> > and postgres, with the bulk of the data never changing.\n> > \n> > Template databases are also fairly commonly used by application developers to\n> > avoid the cost of rerunning all the setup DDL & initial data loading for\n> > different tests. Making that measurably cheaper would be a significant win.\n> \n> +1\n> \n> I ran into this last week and was still thinking about proposing it.\n> \n> Would this help CI\n\nIt could theoretically help linux - but currently I think the filesystem for\nCI is ext4, which doesn't support FICLONE. I assume it'd help macos, but I\ndon't know the performance characteristics of copyfile(). I don't think any of\nthe other OSs have working reflink / file clone support.\n\nYou could prototype it for CI on macos by using the \"template initdb\" patch\nand passing -c to cp.\n\nOn linux it might be worth using copy_file_range(), if supported, if not file\ncloning. But that's kind of an even more separate topic...\n\n\n> or any significant fraction of buildfarm ?\n\nNot sure how many are on new enough linux / mac to benefit and use a suitable\nfilesystem. There are a few animals with slow-ish storage but running fairly\nnew linux. Don't think we can see the FS. Those would likely benefit the most.\n\n\n> Or just tests run locally on supporting filesystems.\n\nProbably depends on your storage subsystem. 
If not that fast, and running\ntests concurrently, it'd likely help.\n\n\nOn my workstation, with lots of cores and very fast storage, using the initdb\ncaching patch modified to do cp --reflink=never / always yields the following\ntime for concurrent check-world (-j40 PROVE_FLAGS=-j4):\n\ncp --reflink=never:\n\n96.64user 61.74system 1:04.69elapsed 244%CPU (0avgtext+0avgdata 97544maxresident)k\n0inputs+34124296outputs (2584major+7247038minor)pagefaults 0swaps\npcheck-world-success\n\ncp --reflink=always:\n\n91.79user 56.16system 1:04.21elapsed 230%CPU (0avgtext+0avgdata 97716maxresident)k\n189328inputs+16361720outputs (2674major+7229696minor)pagefaults 0swaps\npcheck-world-success\n\nSeems roughly stable across three runs.\n\n\nJust comparing the time for cp -r of a fresh initdb'd cluster:\ncp -a --reflink=never\nreal\t0m0.043s\nuser\t0m0.000s\nsys\t0m0.043s\ncp -a --reflink=always\nreal\t0m0.021s\nuser\t0m0.004s\nsys\t0m0.018s\n\nso that's a pretty nice win.\n\n\n> Note that pg_upgrade already supports copy/link/clone. (Obviously, link\n> wouldn't do anything desirable for CREATE DATABASE).\n\nYea. 
We'd likely have to move relevant code into src/port.\n\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sat, 12 Feb 2022 19:37:30 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "using file cloning in create database / initdb" }, { "msg_contents": "On Sat, Feb 12, 2022 at 07:37:30PM -0800, Andres Freund wrote:\n> On 2022-02-12 20:17:46 -0600, Justin Pryzby wrote:\n> > On Sat, Feb 12, 2022 at 06:00:44PM -0800, Andres Freund wrote:\n> > > I bet using COW file copies would speed up our own regression tests noticeably\n> > > - on slower systems we spend a fair bit of time and space creating template0\n> > > and postgres, with the bulk of the data never changing.\n> > > \n> > > Template databases are also fairly commonly used by application developers to\n> > > avoid the cost of rerunning all the setup DDL & initial data loading for\n> > > different tests. Making that measurably cheaper would be a significant win.\n> > \n> > +1\n> > \n> > I ran into this last week and was still thinking about proposing it.\n> > \n> > Would this help CI\n> \n> It could theoretically help linux - but currently I think the filesystem for\n> CI is ext4, which doesn't support FICLONE. I assume it'd help macos, but I\n> don't know the performance characteristics of copyfile(). 
I don't think any of\n> the other OSs have working reflink / file clone support.\n> \n> You could prototype it for CI on macos by using the \"template initdb\" patch\n> and passing -c to cp.\n\nYes, copyfile() in CREATE DATABASE seems to help cirrus/darwin a bit.\nhttps://cirrus-ci.com/task/5277139049119744\n\nOn xfs:\npostgres=# CREATE DATABASE new3 TEMPLATE postgres STRATEGY FILE_COPY ;\n2022-07-31 00:21:28.350 CDT [2347] LOG: checkpoint starting: immediate force wait flush-all\n...\nCREATE DATABASE\nTime: 1296.243 ms (00:01.296)\n\npostgres=# CREATE DATABASE new4 TEMPLATE postgres STRATEGY FILE_CLONE;\n2022-07-31 00:21:38.697 CDT [2347] LOG: checkpoint starting: immediate force wait flush-all\n...\nCREATE DATABASE\nTime: 167.152 ms\n\n-- \nJustin", "msg_date": "Mon, 1 Aug 2022 13:15:22 -0500", "msg_from": "Justin Pryzby <pryzby@telsasoft.com>", "msg_from_op": false, "msg_subject": "Re: using file cloning in create database / initdb" }, { "msg_contents": "On Tue, Aug 2, 2022 at 6:15 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> On Sat, Feb 12, 2022 at 07:37:30PM -0800, Andres Freund wrote:\n> > It could theoretically help linux - but currently I think the filesystem for\n> > CI is ext4, which doesn't support FICLONE. I assume it'd help macos, but I\n> > don't know the performance characteristics of copyfile(). I don't think any of\n> > the other OSs have working reflink / file clone support.\n\nJust BTW, I think Solaris (on its closed source ZFS) can also do\nthis[1], but I doubt anyone will be along soon to write the patch for\nthat.\n\nMore interestingly to me at least, it looks like OpenZFS is getting\nready to ship its reflink feature, called BRT (block reference\ntracking). 
I guess it'll just work on Linux and macOS, and for\nFreeBSD there may be a new syscall to patch into this code...\n\n+ if ((src_fd = open(src, O_RDONLY | PG_BINARY, 0)) < 0)\n+ return 1;\n\nWhy would you return 1, and not -1 (system call style), for failure?\n\nHmm, I don't think we can do plain open() from a regular backend\nwithout worrying about EMFILE/ENFILE (a problem that pg_upgrade\ndoesn't have to deal with). Perhaps copydir() should double-wrap the\ncall to clone_file() in ReserveExternalFD()/ReleaseExternalFD()?\n\n[1] https://blogs.oracle.com/solaris/post/reflink3c-what-is-it-why-do-i-care-and-how-can-i-use-it\n\n\n", "msg_date": "Tue, 2 Aug 2022 10:13:59 +1200", "msg_from": "Thomas Munro <thomas.munro@gmail.com>", "msg_from_op": false, "msg_subject": "Re: using file cloning in create database / initdb" } ]
[ { "msg_contents": "Hi,\n\nIn the code comments, it is being used as \"Startup process\" in the\nmiddle of the sentences whereas in most of the places \"startup\nprocess\" is used. Also, \"WAL Sender\" is being used instead of \"WAL\nsender\". Let's be consistent across the docs and code comments.\n\nAttaching a patch. Thoughts?\n\nRegards,\nBharath Rupireddy.", "msg_date": "Sun, 13 Feb 2022 13:43:23 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Consistently use \"startup process\"/\"WAL sender\" instead of \"Startup\n process\"/\"WAL Sender\" in comments and docs." }, { "msg_contents": "On Sun, Feb 13, 2022 at 3:13 PM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n>\n> Hi,\n>\n> In the code comments, it is being used as \"Startup process\" in the\n> middle of the sentences whereas in most of the places \"startup\n> process\" is used. Also, \"WAL Sender\" is being used instead of \"WAL\n> sender\". Let's be consistent across the docs and code comments.\n\nFWIW, docs need to hold to a higher standard than code comments.\n\n-- \nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Sun, 13 Feb 2022 19:19:28 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Consistently use \"startup process\"/\"WAL sender\" instead of\n \"Startup process\"/\"WAL Sender\" in comments and docs." }, { "msg_contents": "On Sun, Feb 13, 2022 at 5:49 PM John Naylor\n<john.naylor@enterprisedb.com> wrote:\n>\n> On Sun, Feb 13, 2022 at 3:13 PM Bharath Rupireddy\n> <bharath.rupireddyforpostgres@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > In the code comments, it is being used as \"Startup process\" in the\n> > middle of the sentences whereas in most of the places \"startup\n> > process\" is used. Also, \"WAL Sender\" is being used instead of \"WAL\n> > sender\". 
Let's be consistent across the docs and code comments.\n>\n> FWIW, docs need to hold to a higher standard than code comments.\n\nThanks. In general, I agree that the docs and error/log messages\n(user-facing) need to be of higher standard, but at the same time code\ncomments too IMHO. Because many hackers/developers consider code\ncomments a great place to learn the internals, being consistent there\ndoes no harm.\n\nRegards,\nBharath Rupireddy.\n\n\n", "msg_date": "Mon, 14 Feb 2022 09:41:52 +0530", "msg_from": "Bharath Rupireddy <bharath.rupireddyforpostgres@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Consistently use \"startup process\"/\"WAL sender\" instead of\n \"Startup process\"/\"WAL Sender\" in comments and docs." }, { "msg_contents": "On Mon, Feb 14, 2022 at 11:12 AM Bharath Rupireddy\n<bharath.rupireddyforpostgres@gmail.com> wrote:\n\n> > FWIW, docs need to hold to a higher standard than code comments.\n>\n> Thanks. In general, I agree that the docs and error/log messages\n> (user-facing) need to be of higher standard, but at the same time code\n> comments too IMHO. Because many hackers/developers consider code\n> comments a great place to learn the internals, being consistent there\n> does no harm.\n\ngit log and git blame are more useful when they are free of minor\nstylistic changes. Of course, corrections and clarity are different\nmatters that are worth fixing in the comments. I've pushed the doc\nfixes, thanks for the patch!\n\n\n--\nJohn Naylor\nEDB: http://www.enterprisedb.com\n\n\n", "msg_date": "Tue, 15 Feb 2022 14:35:41 +0700", "msg_from": "John Naylor <john.naylor@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: Consistently use \"startup process\"/\"WAL sender\" instead of\n \"Startup process\"/\"WAL Sender\" in comments and docs." } ]
[ { "msg_contents": "I mentioned this issue briefly in another thread, but the\njustify_interval, justify_days, and justify_hours functions\nall have the potential to overflow. The attached patch fixes\nthis issue.\n\nCheers,\nJoe Koshakow", "msg_date": "Sun, 13 Feb 2022 13:28:38 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Fix overflow in justify_interval related functions" }, { "msg_contents": "On Sun, Feb 13, 2022 at 01:28:38PM -0500, Joseph Koshakow wrote:\n> +SELECT justify_hours(interval '2147483647 days 24 hrs');\n> +ERROR: interval out of range\n\nThe docs [0] claim that the maximum value for interval is 178 million\nyears, but this test case is only ~6 million. Should we instead rework the\nlogic to avoid overflow for this case?\n\n[0] https://www.postgresql.org/docs/devel/datatype-datetime.html\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 10:35:40 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "Nathan Bossart <nathandbossart@gmail.com> writes:\n> On Sun, Feb 13, 2022 at 01:28:38PM -0500, Joseph Koshakow wrote:\n>> +SELECT justify_hours(interval '2147483647 days 24 hrs');\n>> +ERROR: interval out of range\n\n> The docs [0] claim that the maximum value for interval is 178 million\n> years, but this test case is only ~6 million. Should we instead rework the\n> logic to avoid overflow for this case?\n\nI think the docs are misleading you on this point. The maximum\nvalue of the months part of an interval is 2^31 months or\nabout 178Myr, but what we're dealing with here is days, which\nlikewise caps at 2^31 days. 
justify_hours is not chartered\nto transpose up to months, so it can't avoid that limit.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 14 Feb 2022 13:55:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "On Mon, Feb 14, 2022 at 01:55:56PM -0500, Tom Lane wrote:\n> Nathan Bossart <nathandbossart@gmail.com> writes:\n>> On Sun, Feb 13, 2022 at 01:28:38PM -0500, Joseph Koshakow wrote:\n>>> +SELECT justify_hours(interval '2147483647 days 24 hrs');\n>>> +ERROR: interval out of range\n> \n>> The docs [0] claim that the maximum value for interval is 178 million\n>> years, but this test case is only ~6 million. Should we instead rework the\n>> logic to avoid overflow for this case?\n> \n> I think the docs are misleading you on this point. The maximum\n> value of the months part of an interval is 2^31 months or\n> about 178Myr, but what we're dealing with here is days, which\n> likewise caps at 2^31 days. justify_hours is not chartered\n> to transpose up to months, so it can't avoid that limit.\n\nMakes sense. So we could likely avoid it for justify_interval, but the\nothers are at the mercy of the interval implementation.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 11:15:09 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "On Mon, Feb 14, 2022 at 2:15 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> Makes sense. 
So we could likely avoid it for justify_interval, but the\n> others are at the mercy of the interval implementation.\n\nI'm not entirely sure what you mean by \"it\", but for both\njustify_interval and justify_days this commit throws an error if we\ntry to set the months field above 2^31.\n\n> +SELECT justify_days(interval '2147483647 months 30 days');\n> +ERROR: interval out of range\n> +SELECT justify_interval(interval '2147483647 months 30 days');\n> +ERROR: interval out of range\n\nThat's what these are testing.\n\n- Joe Koshakow\n\n\n", "msg_date": "Mon, 14 Feb 2022 16:57:07 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "On Mon, Feb 14, 2022 at 04:57:07PM -0500, Joseph Koshakow wrote:\n> On Mon, Feb 14, 2022 at 2:15 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n>> Makes sense. So we could likely avoid it for justify_interval, but the\n>> others are at the mercy of the interval implementation.\n> \n> I'm not entirely sure what you mean by \"it\", but for both\n> justify_interval and justify_days this commit throws an error if we\n> try to set the months field above 2^31.\n> \n>> +SELECT justify_days(interval '2147483647 months 30 days');\n>> +ERROR: interval out of range\n>> +SELECT justify_interval(interval '2147483647 months 30 days');\n>> +ERROR: interval out of range\n> \n> That's what these are testing.\n\nI think it's possible to avoid overflow in justify_interval() in some cases\nby pre-justifying the days. 
I've attached a patch to demonstrate.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com", "msg_date": "Mon, 14 Feb 2022 16:59:18 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "On Mon, Feb 14, 2022 at 7:59 PM Nathan Bossart <nathandbossart@gmail.com> wrote:\n> I think it's possible to avoid overflow in justify_interval() in some cases\n> by pre-justifying the days. I've attached a patch to demonstrate.\n>\n> --\n> Nathan Bossart\n> Amazon Web Services: https://aws.amazon.com\n\nGood catch, I didn't think about that. Though if you are pre-justifying\nthe days, then I don't think it's possible for the second addition to\ndays to overflow. The maximum amount the days field could be after\nthe first justification is 29. I think we can simplify it further to the\nattached patch.\n\n- Joe", "msg_date": "Mon, 14 Feb 2022 20:35:43 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "On Mon, Feb 14, 2022 at 08:35:43PM -0500, Joseph Koshakow wrote:\n> Good catch, I didn't think about that. Though if you are pre-justifying\n> the days, then I don't think it's possible for the second addition to\n> days to overflow. The maximum amount the days field could be after\n> the first justification is 29. I think we can simplify it further to the\n> attached patch.\n\nLGTM. I've marked this one as ready-for-committer. 
It's a little weird\nthat justify_hours() and justify_days() can overflow in cases where there\nis still a valid interval representation, but as Tom noted, those functions\nhave specific charters to follow.\n\n> +\t\tereport(ERROR,\n> +\t\t\t\t(errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> +\t\t\t\terrmsg(\"interval out of range\")));\n\nnitpick: I think there is ordinarily an extra space before errmsg() so that\nit lines up with errcode().\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Mon, 14 Feb 2022 20:23:06 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "On Mon, Feb 14, 2022 at 11:23 PM Nathan Bossart\n<nathandbossart@gmail.com> wrote:\n> It's a little weird\n> that justify_hours() and justify_days() can overflow in cases where there\n> is still a valid interval representation, but as Tom noted, those functions\n> have specific charters to follow.\n\nYes it is a bit weird, but this follows the same behavior as adding\nIntervals. The following query overflows:\n postgres=# SELECT interval '2147483647 days' + interval '1 day';\n ERROR: interval out of range\nEven though the following query does not:\n postgres=# SELECT justify_days(interval '2147483647 days') + interval '1 day';\n ?column?\n -----------------------------\n 5965232 years 4 mons 8 days\n (1 row)\n\nThe reason is, as Tom mentioned, that Interval's months, days, and\ntime (microseconds) are stored separately. They are only combined\nduring certain scenarios such as testing for equality, ordering,\njustify_* methods, etc. 
I think the idea behind it is that not every month\nhas 30 days and not every day has 24 hrs, though I'm not sure\n.\n> > + ereport(ERROR,\n> > + (errcode(ERRCODE_DATETIME_VALUE_OUT_OF_RANGE),\n> > + errmsg(\"interval out of range\")));\n>\n> nitpick: I think there is ordinarily an extra space before errmsg() so that\n> it lines up with errcode().\n\nI've attached a patch to add the space. Thanks so much for your review\nand comments!\n\n- Joe Koshakow", "msg_date": "Tue, 15 Feb 2022 06:42:23 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "Just checking because I'm not very familiar with the process,\nare there any outstanding items that I need to do for this patch?\n\n- Joe Koshakow\n\n\n", "msg_date": "Fri, 25 Feb 2022 10:30:57 -0500", "msg_from": "Joseph Koshakow <koshy44@gmail.com>", "msg_from_op": true, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "On Fri, Feb 25, 2022 at 10:30:57AM -0500, Joseph Koshakow wrote:\n> Just checking because I'm not very familiar with the process,\n> are there any outstanding items that I need to do for this patch?\n\nUnless someone has additional feedback, I don't think so.\n\n-- \nNathan Bossart\nAmazon Web Services: https://aws.amazon.com\n\n\n", "msg_date": "Fri, 25 Feb 2022 11:04:35 -0800", "msg_from": "Nathan Bossart <nathandbossart@gmail.com>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in justify_interval related functions" }, { "msg_contents": "Joseph Koshakow <koshy44@gmail.com> writes:\n> [ v4-0001-Check-for-overflow-in-justify_interval-functions.patch ]\n\nPushed. 
I added a comment explaining why the one addition in\ninterval_justify_interval doesn't require an overflow check.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Mon, 28 Feb 2022 15:38:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fix overflow in justify_interval related functions" } ]
[ { "msg_contents": "Hi,\n\nSometime last year I was surprised to see (not on a public list unfortunately)\nthat bookindex.html is 657kB, with > 200kB just being repetitions of\nxmlns=\"http://www.w3.org/1999/xhtml\" xmlns:xlink=\"http://www.w3.org/1999/xlink\"\n\nReminded of this, due to a proposal to automatically generate docs as part of\ncfbot runs (which'd be fairly likely to update bookindex.html), I spent a few\npainful hours last night trying to track this down.\n\n\nThe reason for the two xmlns= are different. The\nxmlns=\"http://www.w3.org/1999/xhtml\" is afaict caused by confusion on our\npart.\n\nSome of our stylesheets use\nxmlns=\"http://www.w3.org/TR/xhtml1/transitional\"\nothers use\nxmlns=\"http://www.w3.org/1999/xhtml\"\n\nIt's noteworthy that the docbook xsl stylesheets end up with\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\nso it's a bit pointless to reference http://www.w3.org/TR/xhtml1/transitional\nafaict.\n\nAdding xmlns=\"http://www.w3.org/1999/xhtml\" to stylesheet-html-common.xsl gets\nrid of xmlns=\"http://www.w3.org/TR/xhtml1/transitional\" in bookindex specific\ncontent.\n\nChanging stylesheet.xsl from transitional to http://www.w3.org/1999/xhtml gets\nrid of xmlns=\"http://www.w3.org/TR/xhtml1/transitional\" in navigation/footer.\n\nOf course we should likely change all http://www.w3.org/TR/xhtml1/transitional\nreferences, rather than just the one necessary to get rid of the xmlns= spam.\n\n\nSo far, so easy. It took me way longer to understand what's causing the\nall the xmlns:xlink= appearances.\n\nFor a long time I was misdirected because if I remove the <xsl:template\nname=\"generate-basic-index\"> in stylesheet-html-common.xsl, the number of\nxmlns:xlink drastically reduces to a handful. Which made me think that their\nexistance is somehow our fault. And I tried and tried to find the cause.\n\nBut it turns out that this originally is caused by a still existing buglet in\nthe docbook xsl stylesheets, specifically autoidx.xsl. 
It doesn't omit xlink\nin exclude-result-prefixes, but uses ids etc from xlink.\n\nThe reason that we end up with so many more xmlns:xlink is just that without\nour customization there ends up being a single\n<div xmlns:xlink=\"http://www.w3.org/1999/xlink\" class=\"index\">\nand then everything below that doesn't need the xmlns:xlink anymore. But\nbecause stylesheet-html-common.xsl emits the div, the xmlns:xlink is emitted\nfor each element that autoidx.xsl has \"control\" over.\n\nWaiting for docbook to fix this seems a bit futile, I eventually found a\nbugreport about this, from 2016: https://sourceforge.net/p/docbook/bugs/1384/\n\nBut we can easily reduce the \"impact\" of the issue, by just adding a single\nxmlns:xlink to <div class=\"index\">, which is sufficient to convince xsltproc\nto not repeat it.\n\n\nBefore:\n-rw-r--r-- 1 andres andres 683139 Feb 13 04:31 html-broken/bookindex.html\nAfter:\n-rw-r--r-- 1 andres andres 442923 Feb 13 12:03 html/bookindex.html\n\nWhile most of the savings are in bookindex, the rest of the files are reduced\nby another ~100kB.\n\n\nWIP patch attached. For now I just adjusted the minimal set of\nxmlns=\"http://www.w3.org/TR/xhtml1/transitional\", but I think we should update\nall.\n\nGreetings,\n\nAndres Freund", "msg_date": "Sun, 13 Feb 2022 12:16:18 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "fixing bookindex.html bloat" }, { "msg_contents": "Hi,\n\nOn 2022-02-13 12:16:18 -0800, Andres Freund wrote:\n> Waiting for docbook to fix this seems a bit futile, I eventually found a\n> bugreport about this, from 2016:\n> https://sourceforge.net/p/docbook/bugs/1384/\n\nNow also reported to the current repo:\nhttps://github.com/docbook/xslt10-stylesheets/issues/239\n\nWhile there's been semi-regular changes / fixes, they've not done a release in\nyears... 
So even if they fix it, it'll likely not trickle down into distro\npackages etc anytime soon, if ever.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 13 Feb 2022 12:42:05 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: fixing bookindex.html bloat" }, { "msg_contents": "On 13.02.22 21:16, Andres Freund wrote:\n> The reason for the two xmlns= are different. The\n> xmlns=\"http://www.w3.org/1999/xhtml\" is afaict caused by confusion on our\n> part.\n> \n> Some of our stylesheets use\n> xmlns=\"http://www.w3.org/TR/xhtml1/transitional\"\n> others use\n> xmlns=\"http://www.w3.org/1999/xhtml\"\n> \n> It's noteworthy that the docbook xsl stylesheets end up with\n> <html xmlns=\"http://www.w3.org/1999/xhtml\">\n> so it's a bit pointless to reference http://www.w3.org/TR/xhtml1/transitional\n> afaict.\n> \n> Adding xmlns=\"http://www.w3.org/1999/xhtml\" to stylesheet-html-common.xsl gets\n> rid of xmlns=\"http://www.w3.org/TR/xhtml1/transitional\" in bookindex specific\n> content.\n> \n> Changing stylesheet.xsl from transitional to http://www.w3.org/1999/xhtml gets\n> rid of xmlns=\"http://www.w3.org/TR/xhtml1/transitional\" in navigation/footer.\n> \n> Of course we should likely change all http://www.w3.org/TR/xhtml1/transitional\n> references, rather than just the one necessary to get rid of the xmlns= spam.\n\nYeah, that is currently clearly wrong. It appears I originally copied \nthe wrong namespace declarations from examples that show how to \ncustomize the DocBook stylesheets, but those examples were apparently \nwrong or outdated in this respect. It seems we also lack some namespace \ndeclarations altogether, as shown by your need to add it to \nstylesheet-html-common.xsl. 
This appears to need some careful cleanup.\n\n> The reason that we end up with so many more xmlns:xlink is just that without\n> our customization there ends up being a single\n> <div xmlns:xlink=\"http://www.w3.org/1999/xlink\" class=\"index\">\n> and then everything below that doesn't need the xmlns:xlink anymore. But\n> because stylesheet-html-common.xsl emits the div, the xmlns:xlink is emitted\n> for each element that autoidx.xsl has \"control\" over.\n> \n> Waiting for docbook to fix this seems a bit futile, I eventually found a\n> bugreport about this, from 2016: https://sourceforge.net/p/docbook/bugs/1384/\n> \n> But we can easily reduce the \"impact\" of the issue, by just adding a single\n> xmlns:xlink to <div class=\"index\">, which is sufficient to convince xsltproc\n> to not repeat it.\n\nI haven't fully wrapped my head around this. I tried adding xlink to \nour own exclude-result-prefixes, but that didn't seem to have the right \neffect.\n\n\n", "msg_date": "Mon, 14 Feb 2022 18:31:25 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing bookindex.html bloat" }, { "msg_contents": "Hi,\n\nOn 2022-02-14 18:31:25 +0100, Peter Eisentraut wrote:\n> > The reason that we end up with so many more xmlns:xlink is just that without\n> > our customization there ends up being a single\n> > <div xmlns:xlink=\"http://www.w3.org/1999/xlink\" class=\"index\">\n> > and then everything below that doesn't need the xmlns:xlink anymore. 
But\n> > because stylesheet-html-common.xsl emits the div, the xmlns:xlink is emitted\n> > for each element that autoidx.xsl has \"control\" over.\n> > \n> > Waiting for docbook to fix this seems a bit futile, I eventually found a\n> > bugreport about this, from 2016: https://sourceforge.net/p/docbook/bugs/1384/\n> > \n> > But we can easily reduce the \"impact\" of the issue, by just adding a single\n> > xmlns:xlink to <div class=\"index\">, which is sufficient to convince xsltproc\n> > to not repeat it.\n> \n> I haven't fully wrapped my head around this. I tried adding xlink to our\n> own exclude-result-prefixes, but that didn't seem to have the right effect.\n\nIt can't, because it's not one of our stylesheets that causes the xlink: stuff\nto be included. It's autoidx.xsl - just adding xlink to autoidx's\nexclude-result-prefixes fixes the problem \"properly\", but we can't really\nmodify it.\n\nThe reason adding xmlns:xlink to our div (or even higher up) helps is that\nthen nodes below it don't need to include it again (when output by autoidx),\nwhich drastically reduces the number of xmlns:xlink. So it's just a somewhat\nugly workaround, but for >100kB it seems better than the alternative.\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Feb 2022 09:51:04 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: fixing bookindex.html bloat" }, { "msg_contents": "On 14.02.22 18:31, Peter Eisentraut wrote:\n> Yeah, that is currently clearly wrong.  It appears I originally copied \n> the wrong namespace declarations from examples that show how to \n> customize the DocBook stylesheets, but those examples were apparently \n> wrong or outdated in this respect.  It seems we also lack some namespace \n> declarations altogether, as shown by your need to add it to \n> stylesheet-html-common.xsl.  
This appears to need some careful cleanup.\n\nThe attached patch cleans up the xhtml namespace declarations properly, \nI think.\n\nFor the xlink business, I don't have a better idea than you, so your \nworkaround proposal seems fine.", "msg_date": "Mon, 14 Feb 2022 23:06:20 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing bookindex.html bloat" }, { "msg_contents": "Hi,\n\nOn 2022-02-14 23:06:20 +0100, Peter Eisentraut wrote:\n> The attached patch cleans up the xhtml namespace declarations properly, I\n> think.\n\nLooks good to me.\n\n\n> For the xlink business, I don't have a better idea than you, so your\n> workaround proposal seems fine.\n\nK. Will you apply your patch, and then I'll push mine ontop?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 14 Feb 2022 15:06:51 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: fixing bookindex.html bloat" }, { "msg_contents": "On 15.02.22 00:06, Andres Freund wrote:\n> On 2022-02-14 23:06:20 +0100, Peter Eisentraut wrote:\n>> The attached patch cleans up the xhtml namespace declarations properly, I\n>> think.\n> \n> Looks good to me.\n> \n> \n>> For the xlink business, I don't have a better idea than you, so your\n>> workaround proposal seems fine.\n> \n> K. Will you apply your patch, and then I'll push mine ontop?\n\ndone\n\n\n", "msg_date": "Tue, 15 Feb 2022 11:16:12 +0100", "msg_from": "Peter Eisentraut <peter.eisentraut@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: fixing bookindex.html bloat" }, { "msg_contents": "On 2022-02-15 11:16:12 +0100, Peter Eisentraut wrote:\n> On 15.02.22 00:06, Andres Freund wrote:\n> > On 2022-02-14 23:06:20 +0100, Peter Eisentraut wrote:\n> > > For the xlink business, I don't have a better idea than you, so your\n> > > workaround proposal seems fine.\n> > \n> > K. 
Will you apply your patch, and then I'll push mine ontop?\n> \n> done\n\ndone as well.\n\n\n", "msg_date": "Tue, 15 Feb 2022 13:55:12 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": true, "msg_subject": "Re: fixing bookindex.html bloat" } ]
[ { "msg_contents": "Hi,\n\nCommit\nhttps://github.com/postgres/postgres/commit/6d554e3fcd6fb8be2dbcbd3521e2947ed7a552cb\n\nhas fixed the bug, but still has room for improvement.\n\nWhy open and lock the table Extended Statistics if it is not the owner.\nCheck and return to avoid this.\n\ntrivial patch attached.\n\nregards,\nRanier Vilela", "msg_date": "Sun, 13 Feb 2022 18:21:38 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "[PATCH] Avoid open and lock the table Extendend Statistics\n (src/backend/commands/statscmds.c)" }, { "msg_contents": "Hi,\n\nOn 2022-02-13 18:21:38 -0300, Ranier Vilela wrote:\n> Why open and lock the table Extended Statistics if it is not the owner.\n> Check and return to avoid this.\n\nI was about to say that this opens up time-to-check-time-to-use type\nissues. But it's the wrong thing to lock to prevent those.\n\nHaving looked briefly at it, I don't understand what the locking scheme is?\nShouldn't there be at least some locking against concurrent ALTERs and between\nALTER and ANALYZE etc?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Sun, 13 Feb 2022 13:43:17 -0800", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Avoid open and lock the table Extendend Statistics\n (src/backend/commands/statscmds.c)" }, { "msg_contents": "Em dom., 13 de fev. de 2022 às 18:43, Andres Freund <andres@anarazel.de>\nescreveu:\n\n> Hi,\n>\n> On 2022-02-13 18:21:38 -0300, Ranier Vilela wrote:\n> > Why open and lock the table Extended Statistics if it is not the owner.\n> > Check and return to avoid this.\n>\n> I was about to say that this opens up time-to-check-time-to-use type\n> issues. 
But it's the wrong thing to lock to prevent those.\n>\n> Having looked briefly at it, I don't understand what the locking scheme is?\n> Shouldn't there be at least some locking against concurrent ALTERs and\n> between\n> ALTER and ANALYZE etc?\n>\nI checked the pg_statistics_object_ownercheck function and I think the\npatch is wrong.\nIt seems to me that when using SearchSysCache1, it is necessary to open and\nlock the table.\n\nBetter remove the patch, sorry for the noise.\n\nregards,\nRanier Vilela\n\n", "msg_date": "Sun, 13 Feb 2022 19:27:59 -0300", "msg_from": "Ranier Vilela <ranier.vf@gmail.com>", "msg_from_op": true, "msg_subject": "Re: [PATCH] Avoid open and lock the table Extendend Statistics\n (src/backend/commands/statscmds.c)" }, { "msg_contents": "On 2/13/22 22:43, Andres Freund wrote:\n> Hi,\n> \n> On 2022-02-13 18:21:38 -0300, Ranier Vilela wrote:\n>> Why open and lock the table Extended Statistics if it is not the owner.\n>> Check and return to avoid this.\n> \n> I was about to say that this opens up time-to-check-time-to-use type\n> issues. 
But it's the wrong thing to lock to prevent those.\n> \n\nI doubt optimizing this is worth any effort - ALTER STATISTICS is rare,\nnot doing it with owner rights is even rarer.\n\n\n> Having looked briefly at it, I don't understand what the locking scheme is?\n> Shouldn't there be at least some locking against concurrent ALTERs and between\n> ALTER and ANALYZE etc?\n> \n\nI don't recall the details of the discussion (if at all), but if you try\nto do concurrent ALTER STATISTICS, that'll end up with:\n\n ERROR: tuple concurrently updated\n\nreported by CatalogTupleUpdate. AFAIK that's what we do for other object\ntypes that don't have a relation that we might lock (e.g. try to do\nCREATE OR REPLACE FUNCTION).\n\nANALYZE reads the statistics from the catalog, so it should see the last\ncommitted stattarget value, I think.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n", "msg_date": "Sun, 13 Feb 2022 23:59:13 +0100", "msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Avoid open and lock the table Extendend Statistics\n (src/backend/commands/statscmds.c)" }, { "msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 2/13/22 22:43, Andres Freund wrote:\n>> Having looked briefly at it, I don't understand what the locking scheme is?\n\n> I don't recall the details of the discussion (if at all), but if you try\n> to do concurrent ALTER STATISTICS, that'll end up with:\n> ERROR: tuple concurrently updated\n> reported by CatalogTupleUpdate. AFAIK that's what we do for other object\n> types that don't have a relation that we might lock (e.g. try to do\n> CREATE OR REPLACE FUNCTION).\n\nYeah, we generally haven't bothered with a separate locking scheme\nfor objects other than relations. 
I don't see any big problem with\nthat for objects that are defined by a single catalog entry, since\n\"tuple concurrently updated\" failures serve fine (although maybe\nthat error message could be nicer). Extended stats don't quite\nfit that definition, but at least for updates that only need to\ntouch the pg_statistic_ext row, it's the same story.\n\nWhen we get around to supporting multi-relation statistics,\nthere might need to be some protocol around when ANALYZE can\nupdate pg_statistic_ext_data rows. But I don't think that\nissue exists yet, since only one ANALYZE can run on a particular\nrelation at a time.\n\n\t\t\tregards, tom lane\n\n\n", "msg_date": "Sun, 13 Feb 2022 18:30:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCH] Avoid open and lock the table Extendend Statistics\n (src/backend/commands/statscmds.c)" } ]
[ { "msg_contents": "Hi!\n\nI'd like to introduce a patch that reworks options processing. \n\nThis patch detaches code for options processing and validating from the code \nthat uses these options. This will allow to reuse same code in any part of \npostgres that may need options. Currently there are three ways of defining \noptions, and this code is scattered along the source tree, that leads to a \nproblems: code is hard to understand, hard to maintain, I saw several options \nadded or changed, and and in one of two cases, there was one or another \nmistake done because of code unclearity.\n\nGeneral idea:\n\nThere is an object OptionSpec that describes how single option should be \nparsed, validated, stored into bytea, etc.\n\nOptionSpecSet is a list of OptionSpecs, that describes all possible options \nfor certain object (e.g. options for nbtree index relation)\n\nAll options related functions that are not depended on context were moved to \nsrc/backend/access/common/options.c. These functions receives OptionSpecSet \n(or OptionSpec) and option data that should be processed. 
And OptionSpecSet \nshould contain all information that is needed for proper processing.\n\nOptions from the perspective of the option engine can exist in four \nrepresentation:\n- defList - the way they came from SQL parser\n- TEXT[] - the way they are stored in pg_class or similar place\n- Bytea - the way they stored in C-structure, for caching and using in\n postgres code that uses these options \n- Value List - is an internal representation that is used while parsing, \n validating and converting between representations \n\nSee code comments for more info.\n\nThere are functions that is used for conversion from one representation to \nanother:\n- optionsDefListToRawValues : defList -> Values\n- optionsTextArrayToDefList :TEXT[] -> defList\n\n- optionsTextArrayToRawValues : TEXT[] -> Values\n- optionsValuesToTextArray: Values -> TEXT[]\n\n- optionsValuesToBytea: Values -> Bytea\n\nThis functions are called from meta-functions that is used when postgres \nreceives an SQL command for creating or updating options:\n\n- optionsDefListToTextArray - when options are created\n- optionsUpdateTexArrayWithDefList - when option are updated.\n\nThey also trigger validation while processing.\n\nThere are also functions for SpecSet processing:\n- allocateOptionsSpecSet\n- optionsSpecSetAddBool\n- optionsSpecSetAddBool\n- optionsSpecSetAddReal\n- optionsSpecSetAddEnum\n- optionsSpecSetAddString\n\nFor index access methods \"amoptions\" member function that preformed option \nprocessing, were replaced with \"amreloptspecset\" member function that provided \nan SpecSet for reloptions for this AM, so caller can trigger option processing \nhimself.\n\nFor other relation types options have been processing by \"fixed\" functions, so \nthese processing were replaced with \"fixed\" SpecSet providers + processing \nusing that SpecSet. Later on these should be moved to access method the same \nway it is done in indexes. 
I plan to do it after this patch is committed.\n\nAs for local options, which are used for opclass options, I kept the entire \ncurrent API, but made this API a wrapper around the new option engine. Local \noptions should be moved to the unified option engine API later on. I hope to \ndo that too.\n\n\nThis patch does not change any postgres behaviour (even if this behaviour \nis not quite right). The only change is the text of the warning for a \nnonexistent option in the toast namespace. But I hope this change is for the \nbetter.\n\n\nThe only problem I am not sure how to solve is obtaining a LockMode.\nTo get a LockMode for an option, you need a SpecSet. For indexes we get \nSpecSets via the AccessMethod. To get the AccessMethod, you need the \nrelation C-structure. To get the relation structure you need to open the \nrelation in NoLock mode. But if I do that, I hit an Assert in relation_open. \n(There was no such Assert when I started this work; it appeared later.)\nAll code related to this issue is marked with FIXME comments. I've commented \nthat Assert out, but I am not quite sure I was right. Here I need the help of \nmore experienced members of the community.\n\n\nThis is quite a big patch. Unfortunately it can't be split into smaller \nparts. I also suggest considering this patch as an implementation of a \ngeneral idea that works well. I have ideas in mind to make it better, but \nthat would be an infinite improvement process that would never lead to a \nfinal commit. So if the \nconcept is good and the implementation is good enough, I suggest committing \nit, and making it better later on by smaller patches on top of it. 
If it is not good enough, \nlet me know, I will try to make it good.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Mon, 14 Feb 2022 00:43:36 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "[PATCH] New [relation] option engine" }, { "msg_contents": "Hi,\n\nOn 2022-02-14 00:43:36 +0300, Nikolay Shaplov wrote:\n> I'd like to introduce a patch that reworks options processing. \n\nThis doesn't apply anymore: http://cfbot.cputube.org/patch_37_3536.log\n\nGiven that this patch has been submitted just to the last CF and that there's\nbeen no action on it, I don't see this going into 15. Therefore I'd like to\nmove it to the next CF?\n\nGreetings,\n\nAndres Freund\n\n\n", "msg_date": "Mon, 21 Mar 2022 17:46:10 -0700", "msg_from": "Andres Freund <andres@anarazel.de>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Tuesday, March 22, 2022 03:46:10 MSK, Andres Freund \nwrote:\n\n> > I'd like to introduce a patch that reworks options processing.\n> This doesn't apply anymore: http://cfbot.cputube.org/patch_37_3536.log\nThank you for sending this notice!\n\nI've updated the patch to include new changes that have been made in postgres \nrecently. \n\n> Given that this patch has been submitted just to the last CF and that\n> there's been no action on it, I don't see this going into 15.\nYes, it is not to be added in 15, for sure.\n\n> Therefore I'd like to move it to the next CF?\nThe only reason to keep it in the current CF is to keep it in front of the \neyes of potential reviewers for one more week. So it is up to you: if you \nconsider it not worth it, you can move it now. No great harm would be \ndone then. 
\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Tue, 22 Mar 2022 19:56:46 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "A minor fix to get rid of maybe-uninitialized warnings. \n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sun, 27 Mar 2022 15:19:41 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Sunday, March 27, 2022 15:19:41 MSK, Nikolay \nShaplov wrote:\n> A minor fix to get rid of maybe-uninitialized warnings.\n\nSame patch, but properly rebased to the current master head.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sat, 16 Apr 2022 17:15:34 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Saturday, April 16, 2022 17:15:34 MSK, Nikolay \nShaplov wrote:\n\nChanged NoLock to AccessShareLock. 
This will remove the last FIXME from the patch.\n\nAs more experienced pg-developers told me, you can call relation_open with \nAccessShareLock, on a table you are going to work with, any time you like, \nwithout being ashamed about it.\n\nBut this lock is visible when you have a prepared transaction, so I had to \nupdate the twophase test of the test_decoding extension.\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Wed, 04 May 2022 18:15:51 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Wednesday, May 4, 2022 18:15:51 MSK, Nikolay Shaplov \nwrote:\n\nRebased patch, so it can be applied to the current master again.\n\nAdded the static keyword to enum option item definitions, as it has been done \nin 09cd33f4\n \n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sun, 15 May 2022 12:41:39 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "I'm sorry if you've already said this elsewhere, but can you please\nstate what is the *intention* of this patchset? If it's a pure\nrefactoring (but I don't think it is), then it's a net loss, because\nafter pgindent it summarizes as:\n\n 58 files changed, 2714 insertions(+), 2368 deletions(-)\n\nso we end up 500+ lines worse than the initial story. However, I\nsuspect that's not the final situation, since I saw a comment somewhere\nthat opclass options have to be rewritten to modify this mechanism, and\nI suspect that will remove a few lines. And you maybe have a more\nambitious goal. But what is it?\n\nPlease pgindent your code for the next submission, making sure to add\nyour new typedef(s) to typedefs.list so that it doesn't generate stupid\nspaces. 
After pgindenting you'll notice the argument lists of some\nfunctions look bad (cf. commit c4f113e8fef9). Please fix that too.\n\nI notice that you kept the commentary about lock levels in the place\nwhere they were previously defined. This is not good. Please move each\nexplanation next to the place where each option is defined.\n\nFor next time, please use \"git format-patch\" for submission, and write a\ntentative commit message. The committer may or may not use your\nproposed commit message, but with it they will know what you're trying\nto achieve.\n\nThe translatability marker for detailmsg for enums is wrong AFAICT. You\nneed gettext_noop() around the strings themselves IIRC. I think you\nneed to get rid of the _() call around the variable that receives that\nvalue and use errdetail() instead of errdetail_internal(), to avoid\ndouble-translating it; but I'm not 100% sure. Please experiment with\n\"make update-po\" until you get the messages in the .po file. \n\nYou don't need braces around single-statement blocks.\n\nThanks\n\n-- \nÁlvaro Herrera PostgreSQL Developer — https://www.EnterpriseDB.com/\n\"World domination is proceeding according to plan\" (Andrew Morton)\n\n\n", "msg_date": "Sun, 15 May 2022 14:25:47 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от воскресенье, 15 мая 2022 г. 15:25:47 MSK пользователь Alvaro \nHerrera написал:\n> I'm sorry if you've already said this elsewhere, but can you please\n> state what is the *intention* of this patchset? If it's a pure\n> refactoring (but I don't think it is), then it's a net loss, because\n> after pgindent it summarizes as:\n> \n> 58 files changed, 2714 insertions(+), 2368 deletions(-)\n> \n> so we end up 500+ lines worse than the initial story. 
However, I\n> suspect that's not the final situation, since I saw a comment somewhere\n> that opclass options have to be rewritten to modify this mechanism, and\n> I suspect that will remove a few lines. And you maybe have a more\n> ambitious goal. But what is it?\n\nMy initial goal was to make the options code reusable for any type of options \n(not only reloptions). While working towards this goal I came to the conclusion that \nI have to create a completely new option engine that can be used anywhere \noptions (name=value) are needed. This will solve the following problems:\n\n- It provides a unified API for option definitions. Currently postgres has core-AM \noptions, contrib-AM options and local options for opclasses, and each has its \nown way to define options. This patch allows one API to be used for them all \n(for opclasses it is still WIP).\n- Currently core-AM options are partly defined in reloptions.c and partly in AM \ncode. This is error-prone. This patch fixes that.\n- For indexes, option definitions are moved into the AM code, where they should be. \nFor heap they should be moved into the AM code later. \n- There is no difference between core-AM index and contrib-AM index options. \nThey use the same API.\n\nI also tried to write a detailed commit message, as you've suggested. My \ngoals are described there in more detail.\n\n> Please pgindent your code for the next submission, making sure to add\n> your new typedef(s) to typedefs.list so that it doesn't generate stupid\n> spaces. After pgindenting you'll notice the argument lists of some\n> functions look bad (cf. commit c4f113e8fef9). Please fix that too.\nI've tried to pgindent. Hope I did it well. Afterwards, I manually edited all code \nlines (not string consts) that were longer than 80 characters. \nHope it was the right decision. \n\n> I notice that you kept the commentary about lock levels in the place\n> where they were previously defined. This is not good. 
Please move each\n> explanation next to the place where each option is defined.\nYou are right. I tried to find a better place for it.\n\nI also noticed that I've missed updating the initial comment for reloptions.c.\nI will update it this week; meanwhile I will send a patch version without changing \nthat comment, in order not to slow anything down.\n\n> For next time, please use \"git format-patch\" for submission, and write a\n> tentative commit message. The committer may or may not use your\n> proposed commit message, but with it they will know what you're trying\n> to achieve.\nDone.\n\n> The translatability marker for detailmsg for enums is wrong AFAICT. You\n> need gettext_noop() around the strings themselves IIRC. I think you\n> need to get rid of the _() call around the variable that receives that\n> value and use errdetail() instead of errdetail_internal(), to avoid\n> double-translating it; but I'm not 100% sure. Please experiment with\n> \"make update-po\" until you get the messages in the .po file.\nThat part of the code was not written by me. It was added when enum options were \ncommitted. Then I just copied it into this patch. I do not quite understand how \nit works. But I can say that update-po works well for enum detailmsg, and \nwe actually have gettext_noop(), but it is used while calling \noptionsSpecSetAddEnum, not when the error message is actually printed. But I guess \nit does the trick.\n\n> You don't need braces around single-statement blocks.\nI tried to remove all I've found.\n\n> Thanks\nThank you for answering. \n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Tue, 17 May 2022 15:09:55 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "forbid_realloc is only tested in an assert. 
There needs to be an \"if\"\ntest for it somewhere (suppose some extension author uses this API and\nonly runs it in assert-disabled environment; they'll never know they\nmade a mistake). But do we really need this option? Why do we need a\nhardcoded limit in the number of options?\n\n\nIn allocateOptionsSpecSet there's a new error message with a typo\n\"grater\" which should be \"greater\". But I think the message is\nconfusingly worded. Maybe a better wording is \"the value of parameter\nXXX may not be greater than YYY\".\n\n-- \nÁlvaro Herrera Breisgau, Deutschland — https://www.EnterpriseDB.com/\n\n\n", "msg_date": "Wed, 18 May 2022 10:10:08 +0200", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Wednesday, 18 May 2022 11:10:08 MSK, Alvaro Herrera \nwrote:\n> forbid_realloc is only tested in an assert. There needs to be an \"if\"\n> test for it somewhere (suppose some extension author uses this API and\n> only runs it in assert-disabled environment; they'll never know they\n> made a mistake). But do we really need this option? Why do we need a\n> hardcoded limit in the number of options?\n\nThe general idea was that it is better to allocate exactly as many option_spec items \nas we are going to use. It is optional, so if you do not want to allocate the exact \nnumber of options, then realloc will be used when we add one more item \nand all allocated items are in use.\n\nBut when you explicitly specify the number of items, it is better not to forget to \n++ it when you add an extra option in the code. That was the purpose of \nforbid_realloc: to remind. 
It worked well: while working on the patch, \nseveral options were added in the upstream, and this assert reminded me that I \nshould also allocate an extra item.\n\nIf it is run in production without asserts, it is also no big deal, we will \njust have another realloc.\n\nBut you are right, the variable name forbid_realloc is misleading. It does not \nreally forbid anything, so I changed it to assert_on_realloc, so the name \ntells what this flag really does.\n\n> In allocateOptionsSpecSet there's a new error message with a typo\n> \"grater\" which should be \"greater\". But I think the message is\n> confusingly worded. Maybe a better wording is \"the value of parameter\n> XXX may not be greater than YYY\".\n\nThis error message really comes from the bloom index. And here I was not as careful \nand watchful as I should have been, because this error message comes from a check that \nwas not there in the original code. And this patch should not change behaviour at \nall. So I removed this check completely, and will submit it later.\n\nMy original patch had a bunch of changes like that. I've removed them all, but \nmissed one in the contrib... :-(\n\nThank you for pointing it out. \n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Wed, 18 May 2022 18:46:55 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On Mon, 2022-02-14 at 00:43 +0300, Nikolay Shaplov wrote:\n> For index access methods, the \"amoptions\" member function that performed\n> option \n> processing was replaced with an \"amreloptspecset\" member function that\n> provided \n> a SpecSet for reloptions for this AM, so the caller can trigger option\n> processing \n> himself.\n\nWhat about table access methods? There have been a couple attempts to\nallow custom reloptions for table AMs. 
Does this patch help that use\ncase?\n\nRegards,\n\tJeff Davis\n\n\n\n\n", "msg_date": "Mon, 11 Jul 2022 13:03:55 -0700", "msg_from": "Jeff Davis <pgsql@j-davis.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Monday, 11 July 2022 23:03:55 MSK, Jeff Davis \nwrote:\n\n> > For index access methods, the \"amoptions\" member function that performed\n> > option\n> > processing was replaced with an \"amreloptspecset\" member function that\n> > provided\n> > a SpecSet for reloptions for this AM, so the caller can trigger option\n> > processing\n> > himself.\n> \n> What about table access methods? There have been a couple attempts to\n> allow custom reloptions for table AMs. Does this patch help that use\n> case?\n\nThis patch does not add custom reloptions for table AMs. It is already huge \nenough, with no extra functionality. But the new option engine will make adding \ncustom options for table AMs an easier task, as the main goal of this patch is to \nsimplify adding options everywhere they are needed. And yes, adding custom table \nAM options is one of my next goals, as soon as this patch is committed.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Tue, 12 Jul 2022 07:30:40 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Tuesday, 12 July 2022 07:30:40 MSK, Nikolay Shaplov \nwrote:\n> > What about table access methods? There have been a couple attempts to\n> > allow custom reloptions for table AMs. Does this patch help that use\n> > case?\n> \n> This patch does not add custom reloptions for table AMs. It is already huge\n> enough, with no extra functionality. 
But new option engine will make adding\n> custom options for table AM more easy task, as main goal of this patch is to\n> simplify adding options everywhere they needed. And yes, adding custom\n> table AM options is one of my next goals, as soon as this patch is commit.\n\nAnd here goes rebased version of the patch, that can be applied to current \nmaster.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Tue, 12 Jul 2022 07:47:22 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "2022年7月12日(火) 13:47 Nikolay Shaplov <dhyan@nataraj.su>:\n>\n> В письме от вторник, 12 июля 2022 г. 07:30:40 MSK пользователь Nikolay Shaplov\n> написал:\n> > > What about table access methods? There have been a couple attempts to\n> > > allow custom reloptions for table AMs. Does this patch help that use\n> > > case?\n> >\n> > This patch does not add custom reloptions for table AM. It is already huge\n> > enough, with no extra functionality. But new option engine will make adding\n> > custom options for table AM more easy task, as main goal of this patch is to\n> > simplify adding options everywhere they needed. And yes, adding custom\n> > table AM options is one of my next goals, as soon as this patch is commit.\n>\n> And here goes rebased version of the patch, that can be applied to current\n> master.\n\nHi\n\ncfbot reports the patch no longer applies. As CommitFest 2022-11 is\ncurrently underway, this would be an excellent time to update the patch.\n\nThanks\n\nIan Barwick\n\n\n", "msg_date": "Fri, 4 Nov 2022 09:47:09 +0900", "msg_from": "Ian Lawrence Barwick <barwick@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от пятница, 4 ноября 2022 г. 
03:47:09 MSK пользователь Ian Lawrence \nBarwick написал:\n\n> cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> currently underway, this would be an excellent time to update the patch.\n\nOups! I should have done it before...\nFixed\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Fri, 04 Nov 2022 21:06:38 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от пятница, 4 ноября 2022 г. 21:06:38 MSK пользователь Nikolay \nShaplov написал:\n> В письме от пятница, 4 ноября 2022 г. 03:47:09 MSK пользователь Ian Lawrence\n> Barwick написал:\n> > cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> > currently underway, this would be an excellent time to update the patch.\n> \n> Oups! I should have done it before...\n> Fixed\n\nTrying to fix meson build\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Fri, 04 Nov 2022 22:30:06 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от пятница, 4 ноября 2022 г. 22:30:06 MSK пользователь Nikolay \nShaplov написал:\n\n> > > cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> > > currently underway, this would be an excellent time to update the patch.\n> > \n> > Oups! 
I should have done it before...\n> > Fixed\n> \n> Trying to fix meson build\nTrying to fix compiler warnings.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sun, 06 Nov 2022 19:22:09 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от воскресенье, 6 ноября 2022 г. 19:22:09 MSK пользователь Nikolay \nShaplov написал:\n\n> > > > cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> > > > currently underway, this would be an excellent time to update the\n> > > > patch.\n> > > \n> > > Oups! I should have done it before...\n> > > Fixed\n> > \n> > Trying to fix meson build\n> \n> Trying to fix compiler warnings.\n\nPatched rebased. Imported changes from 4f981df8 commit (Report a more useful \nerror for reloptions on a partitioned table)\n\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sun, 20 Nov 2022 09:11:46 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On Sun, 20 Nov 2022 at 11:42, Nikolay Shaplov <dhyan@nataraj.su> wrote:\n>\n> В письме от воскресенье, 6 ноября 2022 г. 19:22:09 MSK пользователь Nikolay\n> Shaplov написал:\n>\n> > > > > cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> > > > > currently underway, this would be an excellent time to update the\n> > > > > patch.\n> > > >\n> > > > Oups! I should have done it before...\n> > > > Fixed\n> > >\n> > > Trying to fix meson build\n> >\n> > Trying to fix compiler warnings.\n>\n> Patched rebased. 
Imported changes from 4f981df8 commit (Report a more useful\n> error for reloptions on a partitioned table)\n\nThe patch does not apply on top of HEAD as in [1], please post a rebased patch:\n=== Applying patches on top of PostgreSQL commit ID\n92957ed98c5c565362ce665266132a7f08f6b0c0 ===\n=== applying patch ./new_options_take_two_v03f.patch\npatching file src/include/access/reloptions.h\nHunk #1 FAILED at 1.\n1 out of 4 hunks FAILED -- saving rejects to file\nsrc/include/access/reloptions.h.rej\n\n[1] - http://cfbot.cputube.org/patch_41_3536.log\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 3 Jan 2023 18:38:33 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On Tue, 3 Jan 2023 at 18:38, vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Sun, 20 Nov 2022 at 11:42, Nikolay Shaplov <dhyan@nataraj.su> wrote:\n> >\n> > В письме от воскресенье, 6 ноября 2022 г. 19:22:09 MSK пользователь Nikolay\n> > Shaplov написал:\n> >\n> > > > > > cfbot reports the patch no longer applies. As CommitFest 2022-11 is\n> > > > > > currently underway, this would be an excellent time to update the\n> > > > > > patch.\n> > > > >\n> > > > > Oups! I should have done it before...\n> > > > > Fixed\n> > > >\n> > > > Trying to fix meson build\n> > >\n> > > Trying to fix compiler warnings.\n> >\n> > Patched rebased. 
Imported changes from 4f981df8 commit (Report a more useful\n> > error for reloptions on a partitioned table)\n>\n> The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> === Applying patches on top of PostgreSQL commit ID\n> 92957ed98c5c565362ce665266132a7f08f6b0c0 ===\n> === applying patch ./new_options_take_two_v03f.patch\n> patching file src/include/access/reloptions.h\n> Hunk #1 FAILED at 1.\n> 1 out of 4 hunks FAILED -- saving rejects to file\n> src/include/access/reloptions.h.rej\n\nThere has been no updates on this thread for some time, so this has\nbeen switched as Returned with Feedback. Feel free to open it in the\nnext commitfest if you plan to continue on this.\n\nRegards,\nVignesh\n\n\n", "msg_date": "Tue, 31 Jan 2023 23:18:11 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On 2023-Jan-31, vignesh C wrote:\n\n> On Tue, 3 Jan 2023 at 18:38, vignesh C <vignesh21@gmail.com> wrote:\n\n> > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> > === Applying patches on top of PostgreSQL commit ID\n> > 92957ed98c5c565362ce665266132a7f08f6b0c0 ===\n> > === applying patch ./new_options_take_two_v03f.patch\n> > patching file src/include/access/reloptions.h\n> > Hunk #1 FAILED at 1.\n> > 1 out of 4 hunks FAILED -- saving rejects to file\n> > src/include/access/reloptions.h.rej\n> \n> There has been no updates on this thread for some time, so this has\n> been switched as Returned with Feedback. Feel free to open it in the\n> next commitfest if you plan to continue on this.\n\nWell, no feedback has been given, so I'm not sure this is a great\noutcome. In the interest of keeping it alive, I've rebased it. 
It\nturns out that the only conflict is with the 2022 -> 2023 copyright line\nupdate.\n\nI have not reviewed it.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"El que vive para el futuro es un iluso, y el que vive para el pasado,\nun imbécil\" (Luis Adler, \"Los tripulantes de la noche\")", "msg_date": "Wed, 1 Feb 2023 10:04:26 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от среда, 1 февраля 2023 г. 12:04:26 MSK пользователь Alvaro Herrera \nнаписал:\n> On 2023-Jan-31, vignesh C wrote:\n> > On Tue, 3 Jan 2023 at 18:38, vignesh C <vignesh21@gmail.com> wrote:\n> > > The patch does not apply on top of HEAD as in [1], please post a rebased\n> > > patch: === Applying patches on top of PostgreSQL commit ID\n> > > 92957ed98c5c565362ce665266132a7f08f6b0c0 ===\n> > > === applying patch ./new_options_take_two_v03f.patch\n> > > patching file src/include/access/reloptions.h\n> > > Hunk #1 FAILED at 1.\n> > > 1 out of 4 hunks FAILED -- saving rejects to file\n> > > src/include/access/reloptions.h.rej\n> > \n> > There has been no updates on this thread for some time, so this has\n> > been switched as Returned with Feedback. Feel free to open it in the\n> > next commitfest if you plan to continue on this.\n> \n> Well, no feedback has been given, so I'm not sure this is a great\n> outcome. In the interest of keeping it alive, I've rebased it. It\n> turns out that the only conflict is with the 2022 -> 2023 copyright line\n> update.\n> \n> I have not reviewed it.\nGuys sorry, I am totally unable to do just anything this month... Do not have \nspare resources to switch my attention to anything even this simple...\n\nHope, next month will be better... 
And I will try to do some more reviewing of \nother patches...\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Wed, 01 Feb 2023 12:27:10 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On Wed, 1 Feb 2023 at 14:49, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>\n> On 2023-Jan-31, vignesh C wrote:\n>\n> > On Tue, 3 Jan 2023 at 18:38, vignesh C <vignesh21@gmail.com> wrote:\n>\n> > > The patch does not apply on top of HEAD as in [1], please post a rebased patch:\n> > > === Applying patches on top of PostgreSQL commit ID\n> > > 92957ed98c5c565362ce665266132a7f08f6b0c0 ===\n> > > === applying patch ./new_options_take_two_v03f.patch\n> > > patching file src/include/access/reloptions.h\n> > > Hunk #1 FAILED at 1.\n> > > 1 out of 4 hunks FAILED -- saving rejects to file\n> > > src/include/access/reloptions.h.rej\n> >\n> > There has been no updates on this thread for some time, so this has\n> > been switched as Returned with Feedback. Feel free to open it in the\n> > next commitfest if you plan to continue on this.\n>\n> Well, no feedback has been given, so I'm not sure this is a great\n> outcome. In the interest of keeping it alive, I've rebased it. It\n> turns out that the only conflict is with the 2022 -> 2023 copyright line\n> update.\n>\n> I have not reviewed it.\n\nSince there was no response to rebase the patch, I was not sure if the\nauthor was planning to continue. 
Anyways Nikolay Shaplov plans to work\non this soon, So I have added this to the next commitfest at [1].\n[1] - https://commitfest.postgresql.org/41/3536/\n\nRegards,\nVignesh\n\n\n", "msg_date": "Thu, 2 Feb 2023 08:33:46 +0530", "msg_from": "vignesh C <vignesh21@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On 2023-Feb-02, vignesh C wrote:\n\n> On Wed, 1 Feb 2023 at 14:49, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n\n> > Well, no feedback has been given, so I'm not sure this is a great\n> > outcome. In the interest of keeping it alive, I've rebased it. It\n> > turns out that the only conflict is with the 2022 -> 2023 copyright line\n> > update.\n> \n> Since there was no response to rebase the patch, I was not sure if the\n> author was planning to continue. Anyways Nikolay Shaplov plans to work\n> on this soon, So I have added this to the next commitfest at [1].\n\nThank you!\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"¿Cómo puedes confiar en algo que pagas y que no ves,\ny no confiar en algo que te dan y te lo muestran?\" (Germán Poo)\n\n\n", "msg_date": "Thu, 2 Feb 2023 13:29:42 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On Thu, Feb 2, 2023 at 10:03 AM vignesh C <vignesh21@gmail.com> wrote:\n>\n> On Wed, 1 Feb 2023 at 14:49, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> >\n> > On 2023-Jan-31, vignesh C wrote:\n> >\n> > > On Tue, 3 Jan 2023 at 18:38, vignesh C <vignesh21@gmail.com> wrote:\n\n> > > There has been no updates on this thread for some time, so this has\n> > > been switched as Returned with Feedback. Feel free to open it in the\n> > > next commitfest if you plan to continue on this.\n> >\n> > Well, no feedback has been given, so I'm not sure this is a great\n> > outcome. In the interest of keeping it alive, I've rebased it. 
It\n> > turns out that the only conflict is with the 2022 -> 2023 copyright line\n> > update.\n> >\n> > I have not reviewed it.\n>\n> Since there was no response to rebase the patch, I was not sure if the\n> author was planning to continue. Anyways Nikolay Shaplov plans to work\n> on this soon, So I have added this to the next commitfest at [1].\n> [1] - https://commitfest.postgresql.org/41/3536/\n\nNine months have passed, and there hasn't been an update since last\ntime it was returned. It doesn't seem to have been productive to\nresurrect it to the CF based on \"someone plans to work on it soon\".\nI'm re-returning it with feedback.\n\nSome time ago, there was a proposal to create a new category \"lack of\ninterest\", but that also seems to have gone nowhere because of...lack\nof interest.\n\n--\nJohn Naylor\n\n\n", "msg_date": "Sat, 11 Nov 2023 16:33:11 +0700", "msg_from": "John Naylor <johncnaylorls@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "11.11.2023 12:33, John Naylor пишет:\n> On Thu, Feb 2, 2023 at 10:03 AM vignesh C <vignesh21@gmail.com> wrote:\n>>\n>> On Wed, 1 Feb 2023 at 14:49, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>>>\n>>> On 2023-Jan-31, vignesh C wrote:\n>>>\n>>>> On Tue, 3 Jan 2023 at 18:38, vignesh C <vignesh21@gmail.com> wrote:\n> \n>>>> There has been no updates on this thread for some time, so this has\n>>>> been switched as Returned with Feedback. Feel free to open it in the\n>>>> next commitfest if you plan to continue on this.\n>>>\n>>> Well, no feedback has been given, so I'm not sure this is a great\n>>> outcome. In the interest of keeping it alive, I've rebased it. It\n>>> turns out that the only conflict is with the 2022 -> 2023 copyright line\n>>> update.\n>>>\n>>> I have not reviewed it.\n>>\n>> Since there was no response to rebase the patch, I was not sure if the\n>> author was planning to continue. 
Anyways Nikolay Shaplov plans to work\n>> on this soon, So I have added this to the next commitfest at [1].\n>> [1] - https://commitfest.postgresql.org/41/3536/\n> \n> Nine months have passed, and there hasn't been an update since last\n> time it was returned. It doesn't seem to have been productive to\n> resurrect it to the CF based on \"someone plans to work on it soon\".\n> I'm re-returning it with feedback.\n> \n> Some time ago, there was a proposal to create a new category \"lack of\n> interest\", but that also seems to have gone nowhere because of...lack\n> of interest.\n\nI've rebased patch, so it could be add to commitfest again.\n\n------\nYura Sokolov", "msg_date": "Fri, 8 Dec 2023 08:10:29 +0300", "msg_from": "Yura Sokolov <y.sokolov@postgrespro.ru>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On Fri, Dec 08, 2023 at 08:10:29AM +0300, Yura Sokolov wrote:\n> I've rebased patch, so it could be add to commitfest again.\n\nThis is a 270kB patch with quite a few changes, and a lot of code\nmoved around:\n\n> 47 files changed, 2592 insertions(+), 2326 deletions(-)\n\nCould it be possible to split that into more successive steps to ease\nits review?\n--\nMichael", "msg_date": "Fri, 8 Dec 2023 14:59:41 +0900", "msg_from": "Michael Paquier <michael@paquier.xyz>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от пятница, 8 декабря 2023 г. 
08:59:41 MSK пользователь Michael \nPaquier написал:\n\n> > I've rebased the patch, so it could be added to the commitfest again.\n> \n> This is a 270kB patch with quite a few changes, and a lot of code\n> \n> moved around:\n> > 47 files changed, 2592 insertions(+), 2326 deletions(-)\n> \n> Could it be possible to split that into more successive steps to ease\n> its review?\n\nTheoretically I can create a patch with the full options.c as it is in the patch \nnow, use that code only in index AMs, and keep reloptions.c mostly \nunchanged.\n\nThis will be a total mess with two different options mechanisms working at the \nsame time, but it might be much easier to review. When we are done with \nthe first step, we can change the rest.\nIf this will help to finally include the patch in postgres, I can do it. Will \nthat help you to review?\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Fri, 08 Dec 2023 15:49:31 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On 2023-Dec-08, Nikolay Shaplov wrote:\n\n> Theoretically I can create a patch with the full options.c as it is in the patch \n> now, use that code only in index AMs, and keep reloptions.c mostly \n> unchanged.\n> \n> This will be a total mess with two different options mechanisms working at the \n> same time, but it might be much easier to review. When we are done with \n> the first step, we can change the rest.\n> If this will help to finally include the patch in postgres, I can do it. 
Will \n> that help you to review?\n\nI don't think that's better, because we could create slight\ninconsistencies between the code used for index AMs and the users of\nreloptions.\n\nI'm not seeing any reasonable way to split this patch in smaller ones.\n\n-- \nÁlvaro Herrera 48°01'N 7°57'E — https://www.EnterpriseDB.com/\n\"No necesitamos banderas\n No reconocemos fronteras\" (Jorge González)\n\n\n", "msg_date": "Fri, 8 Dec 2023 13:59:09 +0100", "msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от пятница, 8 декабря 2023 г. 15:59:09 MSK пользователь Alvaro \nHerrera написал:\n\n> > Theoretically I can create patch with full options.c as it is in the patch\n> > now, and use that code only in index AM, and keep reloption.c mostly\n> > unchanged.\n> > \n> > This will be total mess with two different options mechanisms working in\n> > the same time, but this might be much more easy to review. When we are\n> > done with the first step, we can change the rest.\n> > If this will help to finally include patch into postgres, I can do it.\n> > Will\n> > that help you to review?\n> \n> I don't think that's better, because we could create slight\n> inconsistencies between the code used for index AMs and the users of\n> reloptions.\nI've written quite good regression tests for it, there should be no \ninconsistency.\n \n> I'm not seeing any reasonable way to split this patch in smaller ones.\nActually me neither. 
Not a good one anyway.\n\nBut if somebody really needs it to be split, it can be done that way.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Fri, 08 Dec 2023 16:04:05 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "2024-01 Commitfest.\n\nHi, this patch has a CF status of \"Needs Review\" [1], but it seems\nthere were CFbot test failures last time it was run [2]. Please have a\nlook and post an updated version if necessary.\n\n======\n[1] https://commitfest.postgresql.org/46/4688/\n[2] https://cirrus-ci.com/task/5066432363364352\n\nKind Regards,\nPeter Smith.\n\n\n", "msg_date": "Mon, 22 Jan 2024 16:24:13 +1100", "msg_from": "Peter Smith <smithpb2250@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In a message of Monday, 22 January 2024 08:24:13 MSK, Peter \nSmith wrote:\n\nHi!\n\nI've updated the patch, and it should work with the current master branch.\n\n> 2024-01 Commitfest.\n> \n> Hi, this patch has a CF status of \"Needs Review\" [1], but it seems\n> there were CFbot test failures last time it was run [2]. Please have a\n> look and post an updated version if necessary.\n\nAre we still missing a notification facility for CFbot? Maybe somebody has added one \nsince the last time I checked?\n\n> \n> ======\n> [1] https://commitfest.postgresql.org/46/4688/\n> [2] https://cirrus-ci.com/task/5066432363364352\n> \n> Kind Regards,\n> Peter Smith.\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Thu, 01 Feb 2024 09:58:32 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "В письме от четверг, 1 февраля 2024 г. 
09:58:32 MSK пользователь Nikolay \nShaplov написал:\n\nI' ve updated the patch again, since the output of the \nsrc/test/modules/test_oat_hooks test have changed.\nThis test began to register namespace lookups, and since options internals \nhave been changed much, number of lookups have been changed too. So I just \nfixed the expected file.\n\n> Hi!\n> \n> I've updated the patch, and it should work with modern master branch.\n> \n> > 2024-01 Commitfest.\n> > \n> > Hi, This patch has a CF status of \"Needs Review\" [1], but it seems\n> > there were CFbot test failures last time it was run [2]. Please have a\n> > look and post an updated version if necessary.\n> \n> Do we still miss notification facility for CFbot? May be somebody have added\n> it since the time I've checked last time?\n> \n> > ======\n> > [1] https://commitfest.postgresql.org/46/4688/\n> > [2] https://cirrus-ci.com/task/5066432363364352\n> > \n> > Kind Regards,\n> > Peter Smith.\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Wed, 07 Feb 2024 09:44:52 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "On Wed, Feb 7, 2024 at 2:45 PM Nikolay Shaplov <dhyan@nataraj.su> wrote:\n>\n> В письме от четверг, 1 февраля 2024 г. 09:58:32 MSK пользователь Nikolay\n> Shaplov написал:\n>\n> I' ve updated the patch again, since the output of the\n> src/test/modules/test_oat_hooks test have changed.\n> This test began to register namespace lookups, and since options internals\n> have been changed much, number of lookups have been changed too. 
So I just\n> fixed the expected file.\n>\n> > Hi!\n> >\n> > I've updated the patch, and it should work with modern master branch.\n> >\n\nyou've changed the `typedef struct IndexAmRoutine`\nthen we also need to update doc/src/sgml/indexam.sgml.\n\nsome places you use \"Spec Set\", other places you use \"spec set\"\n+typedef struct options_spec_set\n+{\n+ option_spec_basic **definitions;\n+ int num; /* Number of spec_set items in use */\n+ int num_allocated; /* Number of spec_set items allocated */\n+ bool assert_on_realloc; /* If number of items of the spec_set were\n+ * strictly set to certain value assert on\n+ * adding more items */\n+ bool is_local; /* If true specset is in local memory context */\n+ Size struct_size; /* Size of a structure for options in binary\n+ * representation */\n+ postprocess_bytea_options_function postprocess_fun; /* This function is\n+ * called after options\n+ * were converted in\n+ * Bytea representation.\n+ * Can be used for extra\n+ * validation etc. */\n+ char *namspace; /* spec_set is used for options from this\n+ * namespase */\n+} options_spec_set;\nhere the comments, you used \"spec_set\".\nmaybe we can make it more consistent?\n\n\ntypedef enum option_value_status\n{\nOPTION_VALUE_STATUS_EMPTY, /* Option was just initialized */\nOPTION_VALUE_STATUS_RAW, /* Option just came from syntax analyzer in\n* has name, and raw (unparsed) value */\nOPTION_VALUE_STATUS_PARSED, /* Option was parsed and has link to catalog\n* entry and proper value */\nOPTION_VALUE_STATUS_FOR_RESET, /* This option came from ALTER xxx RESET */\n} option_value_status;\n\nI am slightly confused.\nSay we have `create table a(a1 int) with (foo=bar)`.\nFor table a to be created successfully,\nwe need to check whether the option foo is valid or not, and if not, error\nout immediately;\nwe also need to check that the value bar is within the expected bounds.\noverall I am not sure about the usage of option_value_status.\nusing some real example demo different option_value_status
usage would be great.\n\n\n+ /*\n+ * If this option came from RESET statement we should throw error\n+ * it it brings us name=value data, as syntax analyzer do not\n+ * prevent it\n+ */\n+ if (def->arg != NULL)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_SYNTAX_ERROR),\n+ errmsg(\"RESET must not include values for parameters\")));\n\n+ errmsg(\"RESET must not include values for parameters\")));\nthe error message seems not so good?\n\n\nyou declared as:\noptions_spec_set *get_stdrd_relopt_spec_set(bool is_for_toast);\nbut you used as:\noptions_spec_set * get_stdrd_relopt_spec_set(bool is_heap)\n\nmaybe we refactor the usage as\n`options_spec_set * get_stdrd_relopt_spec_set(bool is_for_toast)`\n\n\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"value %s out of bounds for option \\\"%s\\\"\",\n+ value, option->gen->name),\n+ errdetail(\"Valid values are between \\\"%d\\\" and \\\"%d\\\".\",\n+ optint->min, optint->max)));\n\nmaybe\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"value %s out of bounds for option \\\"%s\\\"\",\n+ value, option->gen->name),\n+ errdetail(\"valid values are between %d and %d.\",\n+ optint->min, optint->max)));\n\n+ if (validate && (option->values.real_val < optreal->min ||\n+ option->values.real_val > optreal->max))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"value %s out of bounds for option \\\"%s\\\"\",\n+ value, option->gen->name),\n+ errdetail(\"Valid values are between \\\"%f\\\" and \\\"%f\\\".\",\n+ optreal->min, optreal->max)));\n\nmaybe\n+ if (validate && (option->values.real_val < optreal->min ||\n+ option->values.real_val > optreal->max))\n+ ereport(ERROR,\n+ (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n+ errmsg(\"value %s out of bounds for option \\\"%s\\\"\",\n+ value, option->gen->name),\n+ errdetail(\"valid values are between %f and %f.\",\n+ optreal->min, optreal->max)));\n\nI think the above two, change errdetail to errhint would be more 
appropriate.\n\n\nI've attached v7, 0001 and 0002.\nv7, 0001 is the rebased v6; v7, 0002 is a miscellaneous minor cosmetic fix.", "msg_date": "Mon, 3 Jun 2024 15:52:02 +0800", "msg_from": "jian he <jian.universality@gmail.com>", "msg_from_op": false, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "In the message dated Monday, 3 June 2024 10:52:02 MSK, jian he wrote:\n\nHi!\n\nThank you for paying attention to my patch, and sorry, it took me some time \nto deal with such things.\n\nBy the way, what are your further intentions concerning this patch? Should I \nadd you as a reviewer, or are you just passing by? Or add you as a second \nauthor, if you are planning to suggest big changes?\n\nSo I tried to fix all the issues you've pointed out:\n\n> you've changed the `typedef struct IndexAmRoutine`\n> then we also need to update doc/src/sgml/indexam.sgml.\n\nOh. You are right. I've fixed that. The description I've created is quite \nshort. I do not know what else I can say there without diving deep into \nimplementation details. Do you have any ideas what can be added there?\n\n> some places you use \"Spec Set\", other places you use \"spec set\"\n...\n> here the comments, you used \"spec_set\".\n> maybe we can make it more consistent?\n\nThis sounds reasonable.
I've changed it to Spec Set and Options Spec Set \nwherever I found an alternative spelling.\n\n> typedef enum option_value_status\n> {\n> OPTION_VALUE_STATUS_EMPTY, /* Option was just initialized */\n> OPTION_VALUE_STATUS_RAW, /* Option just came from syntax analyzer in\n> * has name, and raw (unparsed) value */\n> OPTION_VALUE_STATUS_PARSED, /* Option was parsed and has link to catalog\n> * entry and proper value */\n> OPTION_VALUE_STATUS_FOR_RESET, /* This option came from ALTER xxx RESET */\n> } option_value_status;\n\n> overall I am not sure about the usage of option_value_status.\n> using some real example demo different option_value_status usage would be\n> great.\n\nI've added an explanatory comment before the option_value_status enum explaining \nwhat is what. I hope it will make things clearer.\n\nIf not, let me know and we will try to find out what comments should be added.\n\n> \n> + errmsg(\"RESET must not include values for parameters\")));\n> the error message seems not so good?\neh... It does seem to be not good, but I've copied it from the original code:\nhttps://github.com/postgres/postgres/blob/\n00ac25a3c365004821e819653c3307acd3294818/src/backend/access/common/\nreloptions.c#L1231\n\nI would not dare change it. I've already changed too many things here.\n\n> you declared as:\n> options_spec_set *get_stdrd_relopt_spec_set(bool is_for_toast);\n> but you used as:\n> options_spec_set * get_stdrd_relopt_spec_set(bool is_heap)\n> \n> maybe we refactor the usage as\n> `options_spec_set * get_stdrd_relopt_spec_set(bool is_for_toast)`\nMy bad.\nI've changed is_for_toast to is_heap once, and I guess I've missed the function \ndeclaration...
Fixed it to be consistent.\n\n> + ereport(ERROR,\n> + (errcode(ERRCODE_INVALID_PARAMETER_VALUE),\n> + errmsg(\"value %s out of bounds for option \\\"%s\\\"\",\n> + value, option->gen->name),\n> + errdetail(\"Valid values are between \\\"%d\\\" and \\\"%d\\\".\",\n> + optint->min, optint->max)));\n.....\n> I think the above two, change errdetail to errhint would be more\n> appropriate.\n\nSame as above... I've just copied the error messages from the original code. \nFixing them should probably go in a separate patch, I guess...\n\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sun, 16 Jun 2024 17:07:21 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }, { "msg_contents": "While fixing the patch according to jian he's recommendations, I came to the \nconclusion that using a void* pointer to a structure is a really bad idea. I do not \nknow why I ever did it that way from the start.\nSo I fixed it.\nThe changes of this fix can be seen here: \nhttps://gitlab.com/dhyannataraj/postgres/-/commit/be4d9e16\n\nThe whole patch is attached to this mail.\n\n-- \nNikolay Shaplov aka Nataraj\nFuzzing Engineer at Postgres Professional\nMatrix IM: @dhyan:nataraj.su", "msg_date": "Sat, 22 Jun 2024 21:21:08 +0300", "msg_from": "Nikolay Shaplov <dhyan@nataraj.su>", "msg_from_op": true, "msg_subject": "Re: [PATCH] New [relation] option engine" }