[
{
"msg_contents": "The logical replication subscription side does not fire per-column \nupdate triggers when applying an update change. (Per-table update \ntriggers work fine.) This patch fixes that. Should be backpatched to PG10.\n\nA patch along these lines is also necessary to handle triggers involving \ngenerated columns in the apply worker. I'll work on that separately.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 13 Dec 2019 14:25:47 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "logical replication does not fire per-column triggers"
},
{
"msg_contents": "Em sex., 13 de dez. de 2019 às 10:26, Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> escreveu:\n>\n> The logical replication subscription side does not fire per-column\n> update triggers when applying an update change. (Per-table update\n> triggers work fine.) This patch fixes that. Should be backpatched to PG10.\n>\nUsing the regression test example, table tab_fk_ref have columns id\nand bid. If you add a trigger \"BEFORE UPDATE OF bid\" into subscriber\nthat fires on replica, it will always fire even if you are **not**\nchanged bid in publisher. In logical replication protocol all columns\nwere changed unless it is a (unchanged) TOAST column (if a column is\npart of the PK/REPLICA IDENTITY we can compare both values and figure\nout if the value changed, however, we can't ensure that a value\nchanged for the other columns -- those that are not PK/REPLICA\nIDENTITY). It is clear that not firing the trigger is wrong but firing\nit when you say that you won't fire it is also wrong. Whichever\nbehavior we choose, limitation should be documented. I prefer the\nbehavior that ignores \"OF col1\" and always fire the trigger (because\nwe can add a filter inside the function/procedure).\n\n+ /* Populate updatedCols for trigger manager */\nAdd a comment that explains it is not possible to (always) determine\nif a column changed. Hence, \"OF col1\" syntax will be ignored.\n\n+ for (int i = 0; i < remoteslot->tts_tupleDescriptor->natts; i++)\n+ {\n+ RangeTblEntry *target_rte = list_nth(estate->es_range_table, 0);\n+\nIt should be outside the loop.\n\n+ if (newtup.changed)\nIt should be newtup.changed[i].\n\nYou should add a test that exposes \"ignore OF col1\" such as:\n\n$node_publisher->safe_psql('postgres',\n \"UPDATE tab_fk_ref SET id = 6 WHERE id = 1;\");\n\n\n-- \n Euler Taveira Timbira -\nhttp://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento\n\n\n",
"msg_date": "Fri, 13 Dec 2019 23:13:55 -0300",
"msg_from": "Euler Taveira <euler@timbira.com.br>",
"msg_from_op": false,
"msg_subject": "Re: logical replication does not fire per-column triggers"
},
{
"msg_contents": "On 2019-12-14 03:13, Euler Taveira wrote:\n> Using the regression test example, table tab_fk_ref have columns id\n> and bid. If you add a trigger \"BEFORE UPDATE OF bid\" into subscriber\n> that fires on replica, it will always fire even if you are **not**\n> changed bid in publisher. In logical replication protocol all columns\n> were changed unless it is a (unchanged) TOAST column (if a column is\n> part of the PK/REPLICA IDENTITY we can compare both values and figure\n> out if the value changed, however, we can't ensure that a value\n> changed for the other columns -- those that are not PK/REPLICA\n> IDENTITY). It is clear that not firing the trigger is wrong but firing\n> it when you say that you won't fire it is also wrong. Whichever\n> behavior we choose, limitation should be documented. I prefer the\n> behavior that ignores \"OF col1\" and always fire the trigger (because\n> we can add a filter inside the function/procedure).\n\nThere is a small difference: If the subscriber has extra columns not \npresent on the publisher, then a column trigger covering only columns in \npublished column set will not fire.\n\nIn practice, a column trigger is just an optimization. The column it is \ntriggering on might not have actually changed. The opposite is worse, \nnot firing the trigger when the column actually has changed.\n\n> + /* Populate updatedCols for trigger manager */\n> Add a comment that explains it is not possible to (always) determine\n> if a column changed. Hence, \"OF col1\" syntax will be ignored.\n\ndone\n\n> + for (int i = 0; i < remoteslot->tts_tupleDescriptor->natts; i++)\n> + {\n> + RangeTblEntry *target_rte = list_nth(estate->es_range_table, 0);\n> +\n> It should be outside the loop.\n\nfixed\n\n> + if (newtup.changed)\n> It should be newtup.changed[i].\n\nfixed\n\n> You should add a test that exposes \"ignore OF col1\" such as:\n> \n> $node_publisher->safe_psql('postgres',\n> \"UPDATE tab_fk_ref SET id = 6 WHERE id = 1;\");\n\ndone\n\nNew patch attached.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 16 Dec 2019 14:37:00 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: logical replication does not fire per-column triggers"
},
{
"msg_contents": "On 2019-12-16 14:37, Peter Eisentraut wrote:\n> New patch attached.\n\nI have committed this and backpatched it to PG10.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jan 2020 11:40:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: logical replication does not fire per-column triggers"
}
]
[
{
"msg_contents": "If I do something like this:\n\nexplain (analyze) select * from pgbench_accounts \\watch 1\n\nIt behaves as expected. But once I break out of the loop with ctrl-C, then\nif I execute the same thing again it executes the command once, but shows\nno output and doesn't loop. It seems like some flag is getting set with\nctrl-C, but then never gets reset.\n\nIt was broken in this commit:\n\ncommit a4fd3aa719e8f97299dfcf1a8f79b3017e2b8d8b\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: Mon Dec 2 11:18:56 2019 +0900\n\n Refactor query cancellation code into src/fe_utils/\n\n\nI've not dug into code itself, I just bisected it.\n\nCheers,\n\nJeff",
"msg_date": "Fri, 13 Dec 2019 14:43:31 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "psql's \\watch is broken"
},
{
"msg_contents": "\n> explain (analyze) select * from pgbench_accounts \\watch 1\n>\n> It behaves as expected. But once I break out of the loop with ctrl-C, then\n> if I execute the same thing again it executes the command once, but shows\n> no output and doesn't loop. It seems like some flag is getting set with\n> ctrl-C, but then never gets reset.\n>\n> It was broken in this commit:\n>\n> commit a4fd3aa719e8f97299dfcf1a8f79b3017e2b8d8b\n> Author: Michael Paquier <michael@paquier.xyz>\n> Date: Mon Dec 2 11:18:56 2019 +0900\n>\n> Refactor query cancellation code into src/fe_utils/\n>\n> I've not dug into code itself, I just bisected it.\n\nThanks for the report. I'll look into it.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 14 Dec 2019 00:09:51 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: psql's \\watch is broken"
},
{
"msg_contents": "On Sat, Dec 14, 2019 at 12:09:51AM +0100, Fabien COELHO wrote:\n> \n>> explain (analyze) select * from pgbench_accounts \\watch 1\n>> \n>> It behaves as expected. But once I break out of the loop with ctrl-C, then\n>> if I execute the same thing again it executes the command once, but shows\n>> no output and doesn't loop. It seems like some flag is getting set with\n>> ctrl-C, but then never gets reset.\n>> \n>> \n>> I've not dug into code itself, I just bisected it.\n> \n> Thanks for the report. I'll look into it.\n\nLooked at it already. And yes, I can see the difference. This comes\nfrom the switch from cancel_pressed to CancelRequested in psql,\nespecially PSQLexecWatch() in this case. And actually, now that I\nlook at it, I think that we should simply get rid of cancel_pressed in\npsql completely and replace it with CancelRequested. This also\nremoves the need of having cancel_pressed defined in print.c, which\nwas not really wanted originally. Attached is a patch which addresses\nthe issue for me, and cleans up the code while on it. Fabien, Jeff,\ncan you confirm please?\n--\nMichael",
"msg_date": "Sat, 14 Dec 2019 11:44:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql's \\watch is broken"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 9:45 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Sat, Dec 14, 2019 at 12:09:51AM +0100, Fabien COELHO wrote:\n> >\n> >> explain (analyze) select * from pgbench_accounts \\watch 1\n> >>\n> >> It behaves as expected. But once I break out of the loop with ctrl-C,\n> then\n> >> if I execute the same thing again it executes the command once, but\n> shows\n> >> no output and doesn't loop. It seems like some flag is getting set with\n> >> ctrl-C, but then never gets reset.\n> >>\n> >>\n> >> I've not dug into code itself, I just bisected it.\n> >\n> > Thanks for the report. I'll look into it.\n>\n> Looked at it already. And yes, I can see the difference. This comes\n> from the switch from cancel_pressed to CancelRequested in psql,\n> especially PSQLexecWatch() in this case. And actually, now that I\n> look at it, I think that we should simply get rid of cancel_pressed in\n> psql completely and replace it with CancelRequested. This also\n> removes the need of having cancel_pressed defined in print.c, which\n> was not really wanted originally. Attached is a patch which addresses\n> the issue for me, and cleans up the code while on it. Fabien, Jeff,\n> can you confirm please?\n> --\n> Michael\n>\n\nThis works for me.\n\nThanks,\n\nJeff",
"msg_date": "Fri, 13 Dec 2019 22:49:45 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: psql's \\watch is broken"
},
{
"msg_contents": "\n>>> I've not dug into code itself, I just bisected it.\n>>\n>> Thanks for the report. I'll look into it.\n>\n> Looked at it already.\n\nAh, the magic of timezones!\n\n> And yes, I can see the difference. This comes from the switch from \n> cancel_pressed to CancelRequested in psql, especially PSQLexecWatch() in \n> this case. And actually, now that I look at it, I think that we should \n> simply get rid of cancel_pressed in psql completely and replace it with \n> CancelRequested. This also removes the need of having cancel_pressed \n> defined in print.c, which was not really wanted originally. Attached is \n> a patch which addresses the issue for me, and cleans up the code while\n> on it. Fabien, Jeff, can you confirm please?\n\nYep. Patch applies cleanly, compiles, works for me as well.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 14 Dec 2019 13:49:08 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: psql's \\watch is broken"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Looked at it already. And yes, I can see the difference. This comes\n> from the switch from cancel_pressed to CancelRequested in psql,\n> especially PSQLexecWatch() in this case. And actually, now that I\n> look at it, I think that we should simply get rid of cancel_pressed in\n> psql completely and replace it with CancelRequested. This also\n> removes the need of having cancel_pressed defined in print.c, which\n> was not really wanted originally. Attached is a patch which addresses\n> the issue for me, and cleans up the code while on it. Fabien, Jeff,\n> can you confirm please?\n\nGiven the rather small number of existing uses of CancelRequested,\nI wonder if it wouldn't be a better idea to rename it to cancel_pressed?\n\nAlso, perhaps I am missing something, but I do not see anyplace in the\ncurrent code base that ever *clears* CancelRequested. How much has\nthis code been tested? Is it really sane to remove the setting of that\nflag from psql_cancel_callback, as this patch does? Is it sane that\nCancelRequested isn't declared volatile?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 15 Dec 2019 15:09:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql's \\watch is broken"
},
{
"msg_contents": "Hello Tom,\n\nMy 0.02 €:\n\n> Given the rather small number of existing uses of CancelRequested,\n> I wonder if it wouldn't be a better idea to rename it to cancel_pressed?\n\nI prefer the former because it is more functional (a cancellation has been \nrequested, however the mean to do so) while \"pressed\" rather suggest a \nparticular operation.\n\n> Also, perhaps I am missing something, but I do not see anyplace in the\n> current code base that ever *clears* CancelRequested.\n\nThis was already like that in the initial version before the refactoring.\n\n ./src/bin/scripts/common.h:extern bool CancelRequested;\n ./src/bin/scripts/common.c:bool CancelRequested = false;\n ./src/bin/scripts/common.c: CancelRequested = true;\n ./src/bin/scripts/common.c: CancelRequested = true;\n ./src/bin/scripts/common.c: CancelRequested = true;\n ./src/bin/scripts/common.c: CancelRequested = true;\n ./src/bin/scripts/vacuumdb.c: if (CancelRequested)\n ./src/bin/scripts/vacuumdb.c: if (CancelRequested)\n ./src/bin/scripts/vacuumdb.c: if (i < 0 || CancelRequested)\n\nHowever \"cancel_request\" resets are in \"psql/mainloop.c\", and are changed \nto \"CancelRequest = false\" resets by Michaël patch, so all seems fine.\n\n> How much has this code been tested? Is it really sane to remove the \n> setting of that flag from psql_cancel_callback, as this patch does?\n\nISTM that the callback is called from a function which sets CancelRequest?\n\n> Is it sane that CancelRequested isn't declared volatile?\n\nI agree that it would seem appropriate, but the initial version I \nrefactored was not declaring CancelRequested as volatile, so I did not \nchange that. However, \"cancel_pressed\" is volatile, so merging the two\nwould indeed suggest to declare it as volatile.\n\n-- \nFabien.",
"msg_date": "Sun, 15 Dec 2019 22:35:54 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: psql's \\watch is broken"
},
{
"msg_contents": "On Sun, Dec 15, 2019 at 10:35:54PM +0100, Fabien COELHO wrote:\n>> Also, perhaps I am missing something, but I do not see anyplace in the\n>> current code base that ever *clears* CancelRequested.\n\nFor bin/scripts/, that's not really a problem, because all code paths\ntriggering a cancellation, aka in vacuumdb and reindexdb, exit\nimmediately.\n\n>> How much has this code been tested?\n\nSorry about that, not enough visibly :(\n\n>> Is it really sane to remove the setting of that flag from\n>> psql_cancel_callback, as this patch does?\n> \n> ISTM that the callback is called from a function which sets CancelRequest?\n\nHmm, that's not right. Note that there is a subtle difference between\npsql and bin/scripts/. In the case of the scripts, originally\nCancelRequested tracks if a cancellation request has been sent to the\nbackend or not. Hence, if the client has not called SetCancelConn()\nto set up cancelConn, then CancelRequested is switched to true all the\ntime. Now, if cancelConn is set, but a cancellation request has not\ncorrectly completed, then CancelRequested never set to true.\n\nIn the case of psql, the original code sets cancel_pressed all the\ntime, even if a cancellation request has been done and that it failed,\nand did not care if cancelConn was set or not. So, the intention of\npsql is to always track when a cancellation attempt is done, even if\nit has failed to issue it, while for all our other frontends we want\nto make sure that a cancellation attempt is done, and that the\ncancellation has succeeded before looping out and exit.\n\n>> Is it sane that CancelRequested isn't declared volatile?\n> \n> I agree that it would seem appropriate, but the initial version I refactored\n> was not declaring CancelRequested as volatile, so I did not change that.\n> However, \"cancel_pressed\" is volatile, so merging the two\n> would indeed suggest to declare it as volatile.\n\nActually, it seems to me that both suggestions are not completely\nright either about that stuff since the flag has been introduced in\nbin/scripts/ in a1792320, no? The way to handle such variables safely\nin a signal handler it to mark them as volatile and sig_atomic_t. The\nsame can be said about the older cancel_pressed as of 718bb2c in psql.\nSo fixed all that while on it.\n\nAs the concepts behind cancel_pressed and CancelRequested are\ndifferent, we need to keep cancel_pressed and make psql use it. And\nthe callback used for WIN32 also needs to set the flag. I also think\nthat we should do a better effort in documenting CancelRequested\nproperly in cancel.c. All that should be fixed as of the attached,\ntested on Linux and from a Windows console. From a point of view of\nconsistency, this actually brings back the code of psql to the same\nstate as it was before a4fd3aa, except that we still have the\nrefactored pieces.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 16 Dec 2019 11:40:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql's \\watch is broken"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 11:40:07AM +0900, Michael Paquier wrote:\n> As the concepts behind cancel_pressed and CancelRequested are\n> different, we need to keep cancel_pressed and make psql use it. And\n> the callback used for WIN32 also needs to set the flag. I also think\n> that we should do a better effort in documenting CancelRequested\n> properly in cancel.c. All that should be fixed as of the attached,\n> tested on Linux and from a Windows console. From a point of view of\n> consistency, this actually brings back the code of psql to the same\n> state as it was before a4fd3aa, except that we still have the\n> refactored pieces.\n\nMerging both flags can actually prove to be tricky, as we have some\ncode paths involving --single-step where psql visibly assumes that a\ncancellation pressed does not necessarily imply one that succeeds is\nthere is a cancellation object around (ExecQueryTuples, tuple printing\nand \\copy). So I have fixed the issue by making the code of psql\nconsistent with what we had before a4fd3aa. I think that it should be\nactually possible to merge CancelRequested and cancel_pressed while\nkeeping the user-visible changes acceptable, and this requires a very\ncareful lookup.\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 13:40:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: psql's \\watch is broken"
}
]
[
{
"msg_contents": "commit message says it all.\n\n-- \nÁlvaro Herrera Developer, https://www.PostgreSQL.org/",
"msg_date": "Fri, 13 Dec 2019 17:07:51 -0300",
"msg_from": "Alvaro Herrera <alvherre@alvh.no-ip.org>",
"msg_from_op": true,
"msg_subject": "xlog.c variable pointlessly global"
},
{
"msg_contents": "> On 13 Dec 2019, at 21:07, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n> \n> commit message says it all.\n\nI haven't tested it, but reading it makes perfect sense. +1.\n\ncheers ./daniel\n\n\n",
"msg_date": "Sat, 14 Dec 2019 00:32:30 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: xlog.c variable pointlessly global"
},
{
"msg_contents": "On Sat, Dec 14, 2019 at 12:32:30AM +0100, Daniel Gustafsson wrote:\n> On 13 Dec 2019, at 21:07, Alvaro Herrera <alvherre@alvh.no-ip.org> wrote:\n>> \n>> commit message says it all.\n> \n> I haven't tested it, but reading it makes perfect sense. +1.\n\n+1 says it all.\n--\nMichael",
"msg_date": "Sat, 14 Dec 2019 11:47:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: xlog.c variable pointlessly global"
}
]
[
{
"msg_contents": "To whom it may concern,\n\nI'm Utsav Parmar, pursuing my B. Tech in Computer Engineering. I like to\nwork on new technologies and am currently looking for open-source projects\nto contribute to.\n\nAs it may turn out, I've got a college project in my curriculum this\nsemester under “Software Development Practice”, and I'd like to work upon a\nproject and/or a feature in pipeline spanning over 3 months in PostgreSQL\norganization as a part of the same college project. My mentor cum professor\nhas already agreed for the same, given that I get approval from one of the\nmaintainers. So, if possible, will you please allot me something to work\nupon?\n\nThank you for your time. Looking forward to hearing from you.\n\nRegards,\n\nUtsav Parmar",
"msg_date": "Sun, 15 Dec 2019 21:56:32 +0530",
"msg_from": "Utsav Parmar <utsavp0213@gmail.com>",
"msg_from_op": true,
"msg_subject": "Request to be allotted a project or a feature in pipeline"
},
{
"msg_contents": "On Sun, 2019-12-15 at 21:56 +0530, Utsav Parmar wrote:\n> I'm Utsav Parmar, pursuing my B. Tech in Computer Engineering. I like to work on new technologies\n> and am currently looking for open-source projects to contribute to.\n> \n> As it may turn out, I've got a college project in my curriculum this semester under\n> “Software Development Practice”, and I'd like to work upon a project and/or a feature\n> in pipeline spanning over 3 months in PostgreSQL organization as a part of the same\n> college project. My mentor cum professor has already agreed for the same, given that\n> I get approval from one of the maintainers. So, if possible, will you please allot me\n> something to work upon?\n> \n> Thank you for your time. Looking forward to hearing from you.\n\nOne thing that we can never have enough of is reviewers, that is people who examine\npatches in the commitfest and give feedback on them.\nLook here for details: https://wiki.postgresql.org/wiki/Reviewing_a_Patch\n\nCode review is definitely part of software development practice, and\nreading and understanding the code of experienced developers can teach\nyou a lot. Another nice aspect is that this is an activity that can easily\nbe adjusted to span three months; if you embark on a new feature, the\nthree months may pass without your patch getting accepted.\n\nYours,\nLaurenz Albe\n\n\n\n",
"msg_date": "Mon, 16 Dec 2019 15:09:51 +0100",
"msg_from": "Laurenz Albe <laurenz.albe@cybertec.at>",
"msg_from_op": false,
"msg_subject": "Re: Request to be allotted a project or a feature in pipeline"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 03:09:51PM +0100, Laurenz Albe wrote:\n> Code review is definitely part of software development practice, and\n> reading and understanding the code of experienced developers can teach\n> you a lot. Another nice aspect is that this is an activity that can easily\n> be adjusted to span three months; if you embark on a new feature, the\n> three months may pass without your patch getting accepted.\n\nCode review can be very challenging, but that's very fruitful in the\nlong-term as you gain experience reading other's code. You will most\nlikely begin to dig into parts of the code you are not familiar of,\nstill there are a couple of areas which are more simple than others if\nyou want to get used to the Postgres code, like changes involving\nin-core extensions or client binaries. If you begin working on a\nfeature, I would recommend beginning with something small-ish. And\neven such things can sometimes get more complicated depending on the\nreviews you get regarding issues you did not even imagine :)\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 13:53:47 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Request to be allotted a project or a feature in pipeline"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 6:12 AM Utsav Parmar <utsavp0213@gmail.com> wrote:\n> As it may turn out, I've got a college project in my curriculum this semester under “Software Development Practice”, and I'd like to work upon a project and/or a feature in pipeline spanning over 3 months in PostgreSQL organization as a part of the same college project. My mentor cum professor has already agreed for the same, given that I get approval from one of the maintainers. So, if possible, will you please allot me something to work upon?\n\nIt doesn't really work like that. We don't assign tasks to people;\npeople show up and work on topics that they find interesting. It's\nvery difficult to do actual task assignments because we all work for\ndifferent companies, and somebody at company A cannot tell somebody at\ncompany B what to spend time on. Sometimes people are willing to help\nnewcomers with suggested projects and mentoring, but that's fairly\ntime-consuming for the mentor, so to a large extent we rely on people\nto find their own projects.\n\nThis is maybe not great. It would be cool if the PostgreSQL community\nhad the resources to pay experienced developers just to mentor new\ndevelopers. It's not clear how to me how that could be made to work,\nthough.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 10:48:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Request to be allotted a project or a feature in pipeline"
},
{
"msg_contents": "Greetings,\n\n* Utsav Parmar (utsavp0213@gmail.com) wrote:\n> I'm Utsav Parmar, pursuing my B. Tech in Computer Engineering. I like to\n> work on new technologies and am currently looking for open-source projects\n> to contribute to.\n\nNeat!\n\n> As it may turn out, I've got a college project in my curriculum this\n> semester under “Software Development Practice”, and I'd like to work upon a\n> project and/or a feature in pipeline spanning over 3 months in PostgreSQL\n> organization as a part of the same college project. My mentor cum professor\n> has already agreed for the same, given that I get approval from one of the\n> maintainers. So, if possible, will you please allot me something to work\n> upon?\n\nWhat you're gearing up for actually sounds quite similar to what we do\nwith the GSoC each summer. The projects there are intended to be about\n3 months long and you can look at who has been interested in supporting\nthose projects in the past from a mentorship perspective. Here's the\n2019 list:\n\nhttps://wiki.postgresql.org/wiki/GSoC_2019\n\nNote that the individuals listed on that page as being willing to mentor\nwere specifically planning to help with GSoC 2019 over this past summer,\nso they may or may not have time to be able to help you today, but\nyou'll also find that this mailing list and the other channels mentioned\non the high-level GSoC page:\n\nhttps://wiki.postgresql.org/wiki/GSoC\n\nThat might even be a way for you to continue to contribute to PG (and be\npaid by Google to do so) even after you're done with this semester,\nassuming you meet the criteria for GSoC 2020.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 18 Dec 2019 10:56:41 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Request to be allotted a project or a feature in pipeline"
}
]
[
{
"msg_contents": "Hello, this week I decided to pursue an error a bit further than\nusual, even after having fixed it for myself, I found that I could fix\nit for future newcomers, especially those running\ncontainerized distributions.\n\nThe problem was that running the command psql without arguments\nreturned the following\nerror message:\n\npsql: could not connect to server: No such file or directory\n Is the server running locally and accepting\n connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\n\nNow, I eventually found a way around this by specifying the host with\nthe following command 'psql -h localhost -p 5432'.\n\nHowever, the answers I found on google didn't suggest this simple fix\nat all, I found a lot of confused users either exposing the sockets\nfrom their containers, or worse, bashing into their containers and\nrunning psql from inside :*(\nhttps://stackoverflow.com/questions/27673563/how-to-get-into-psql-of-a-running-postgres-container/59296176#59296176\n\nI also found this is a common error in postgres docs:\nhttps://www.postgresql.org/docs/9.1/server-start.html\nhttps://www.postgresql.org/docs/10/tutorial-createdb.html\n\n\nSo I wondered, since psql went through the trouble of guessing my unix\nsocket, it could guess my hostname as well. Indeed I would later find\nthat the tcp defaults were already implemented on non-unix builds,\nadditionally psql already has a mechanism to try multiple connections.\nSo\nmy humble change is for unix builds to try to connect via unix socket,\nand if that fails, to connect via localhost. This would save headaches\nfor newbies trying to connect for the first time.\n\nAttached you will find my patch. 
Below you can find the form required\nfor submitting patches.\n\nProject name: Not sure, psql?\nUniquely identifiable file name, so we can tell the difference between\nyour v1 and v24:\nRunning-psql-without-specifying-host-on-unix-systems.patch\nWhat the patch does in a short paragraph: When psql is not supplied a\nhost or hostname, and connection via default socket fails, psql will\nattempt to connect via default tcp, probably localhost.\nWhether the patch is for discussion or for application: Application,\nbut further testing is required.\nWhich branch the patch is against: master\nWhether it compiles and tests successfully, so we know nothing obvious\nis broken: Compiles and works successfully in my linux machine,\nhowever I can't test whether this works on non-unix machines, I will\nneed some help there. I didn't see any automated tests, hopefully I\ndidn't miss any.\nWhether it contains any platform-specific items and if so, has it been\ntested on other platforms: Yes, connection via socket is only\navailable on unix systems. I need help testing on other platforms.\nConfirm that the patch includes regression tests to check the new\nfeature actually works as described.: make check runs successfully,\nthere seems to be a test called psql_command that confirms that psql\ncan connect without specifying host. But I didn't add a test for\nconnecting via tcp.\nInclude documentation on how to use the new feature, including\nexamples: The docs already describe the correct behaviour in\n/doc/src/sgml/ref/psql-ref.sgml \"If you omit the host name psql will\nconnect via a Unix-domain socket to a server on the local host, or via\nTCP/IP to localhost on machines that don't have Unix-domain sockets.\"\nDescribe the effect your patch has on performance, if any: OS without\nunix socket support still won't try to connect via unix socket so they\nwill be unaffected. 
This change should only affect paths where\nconnection via socket failed and the user would have been shown an\nerror. One could argue that some users might suffer a slight\nperformance hit by not being told that they are connecting via a\nsubpar method, but this is a sub-tenth of a second latency difference\nfor local connections I believe. If this is an issue, a warning could\nbe added.\n\n\nThank you for your time,\nTomas.",
"msg_date": "Sun, 15 Dec 2019 15:11:18 -0300",
"msg_from": "Tomas Zubiri <me@tomaszubiri.com>",
"msg_from_op": true,
"msg_subject": "Improvement to psql's connection defaults"
},
{
"msg_contents": "## Tomas Zubiri (me@tomaszubiri.com):\n\n> The problem was that running the command psql without arguments\n\nThere's an excellent manpage for psql, which can also be found online:\nhttps://www.postgresql.org/docs/current/app-psql.html\n\nIn there you'll find a section \"Connecting to a Database\", with the\nfollowing sentences:\n: In order to connect to a database you need to know the name of your\n: target database, the host name and port number of the server, and what\n: user name you want to connect as. psql can be told about those parameters\n: via command line options, namely -d, -h, -p, and -U respectively.\n\nand\n: If you omit the host name, psql will connect via a Unix-domain socket\n: to a server on the local host, or via TCP/IP to localhost on machines\n: that don't have Unix-domain sockets.\n\nI'm a little confused as to why people don't read the documentation and\nturn to the 'net - that's bound to dig up a lot of people who haven't\nread the docs, too.\n\n> So\n> my humble change is for unix builds to try to connect via unix socket,\n> and if that fails, to connect via localhost. This would save headaches\n> for newbies trying to connect for the first time.\n\nI'd think that opens a can of worms:\n- authentication options for TCP connections, even on localhost, are\n often different from those for Unix-domain sockets (e.g. while\n using peer authentication for administration purposes might make\n a lot of sense, TCP connections need some credential-based authentication\n so \"any rogue process\" cannot simply connect to your database).\n- Do we have any guarantees that these containers always expose the\n PostgreSQL server on what the host thinks is \"localhost:5432\"? I'm\n thinking of network namespaces, dedicated container network interfaces\n and all the other shenanigans. 
And what about the use cases of \"more\n than one container\" and \"database on the host and in a container\"?\n My concern is that adding more \"magic\" into the connection logic\n will result in more confusion instead of less - the distinction\n between the \"default case Unix-domain socket\" and \"TCP\" will be lost.\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Mon, 16 Dec 2019 14:01:00 +0100",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: Improvement to psql's connection defaults"
},
{
"msg_contents": "Tomas Zubiri <me@tomaszubiri.com> writes:\n> The problem was that running the command psql without arguments\n> returned the following\n> error message:\n> psql: could not connect to server: No such file or directory\n> Is the server running locally and accepting\n> connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\n\nThe reason this failed, most likely, is using a semi-broken installation\nin which libpq has a different idea than the server of where the\nunix socket should be. The right fix is one or the other of\n\n(a) don't mix-and-match Postgres packages from different vendors,\n\n(b) adjust the server's unix_socket_directories parameter so that\nit creates a socket where your installed libpq expects to find it.\n\nI realize that this isn't great from a newbie-experience standpoint,\nbut unfortunately we don't have a lot of control over varying\npackager decisions about the socket location --- both the \"/tmp\"\nand the \"/var/run/postgresql\" camps have valid reasons for their\nchoices.\n\nI do not think your proposal would improve matters; it'd just introduce\nyet another variable, ie which transport method did libpq choose.\nAs Christoph noted, that affects authentication behaviors, and there\nare a bunch of other user-visible impacts too (SSL, timeouts, ...).\n\nIf we were going to do something of this sort, what I'd be inclined\nto think about is having an option to probe both of the common socket\ndirectory choices, rather than getting into TCP-land. But that still\nmight be a net negative from the standpoint of confusion vs. number of\ncases it fixes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 09:17:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Improvement to psql's connection defaults"
},
{
"msg_contents": "Hello,\n\nOn 2019-Dec-15, Tomas Zubiri wrote:\n\n> Attached you will find my patch. Below you can find the form required\n> for submitting patches.\n> \n> Project name: Not sure, psql?\n> Uniquely identifiable file name, so we can tell the difference between\n> your v1 and v24:\n> [...]\n\nPlease, where did you find this \"form\"? We don't have a *required* form\nfor submitting patches; I suspect there's an opinionated page somewhere\nthat we should strive to fix. (Those questions you list are appropriate\nto answer, but forcing you to repeat what you had already explained in\nthe first part of your email is pointless bureaucracy.)\n\nThanks\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Dec 2019 11:47:11 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvement to psql's connection defaults"
},
{
"msg_contents": "> On 16 Dec 2019, at 15:47, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n\n> Please, where did you find this \"form\"?\n\nIt seems to be from the wiki:\n\n https://wiki.postgresql.org/wiki/Submitting_a_Patch#Patch_submission\n\ncheers ./daniel\n\n\n",
"msg_date": "Mon, 16 Dec 2019 15:51:39 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Improvement to psql's connection defaults"
},
{
"msg_contents": "Tom, Chris, thank you for your responses.\n\n> There's an excellent manpage for psql, which can also be found online:\n> https://www.postgresql.org/docs/current/app-psql.html\n> I'm a little confused as to why people don't read the documentation and\n> turn to the 'net - that's bound to dig up a lot of people who haven't\n> read the docs, too.\n\nFor many users, Google is our user interface and manual, I can see by\nchecking my browser history that I googled 'postgresql getting\nstarted' and arrived at this page '\nhttps://www.postgresql.org/docs/10/tutorial-accessdb.html ' which\nsuggests to use psql without specifying host.\n20 minutes later I was here\nhttps://www.postgresql.org/docs/12/app-psql.html which probably means\nI found the -h and -p arguments in the manner you suggest.\n\nAn alternative reason why someone would not use man psql would be if\nthey don't know what the client's executable is. Suppose you come from\nmysql where the command for logging into your database was mysql, you\ncan't man psql because that's the command you are looking for, you\nmight google \"postgresql command line client\" which returns the psql\ndoc page.\n\nFinally, you might google the error message that psql returned, which\nis a perfectly reasonable thing to do.\n\n> authentication options for TCP connections, even on localhost, are\n> often different from those for Unix-domain sockets (e.g. while\n> using peer authentication for administration purposes might make\n > a lot of sense, TCP connections need some credential-based authentication\n > so \"any rogue process\" cannot simply connect to your database).\n\nWe already established that a tcp connection was subpar in terms of\nlatency, we shall note then that a tcp connection is subpar in terms\nof security. 
Additionally, it is duly noted that connection via tcp\nmight prompt the user for a password, which would mean that the user\ninterface for psql could change depending on the connection made.\nThese are not desirable qualities, but I must reiterate that these\nwould only happen instead of showing the user an error. I still feel a\nsubpar connection is the lesser of two evils. Additionally, it's\nalready possible to have this subpar connection and differing\ninterface on non-unix platforms.\nAs a side note, the official postgres image doesn't require a password\nfor localhost connections.\n\n> Do we have any guarantees that these containers always expose the\n> PostgreSQL server on what the host thinks is \"localhost:5432\"? I'm\n> thinking of network namespaces, dedicated container network interfaces\n> and all the other shenanigans. And what about the use cases of \"more\n> than one container\" and \"database on the host and in a container\"?\n> My concern is that adding more \"magic\" into the connection logic\n> will result in more confusion instead of less - the distinction\n> between the \"default case Unix-domain socket\" and \"TCP\" will be lost.\n\nThere are answers to these questions, but since Docker containers\ndon't expect programs to be docker-compliant, these are not things\npostgresql should be concerned about. What postgresql should be\nconcerned about is that it was accessible via tcp on localhost at port\n5432, and psql didn't reach it.\n\nRegarding the magic, this is a very valid concern, but I feel it's too\nlate, someone other than us, (Robert Haas according to Git annotate)\nalready implemented this magic, the roots of psql magic can probably\nbe traced back to peer authentication even, that's some magical stuff\nthat I personally appreciate. 
I feel like these arguments are directed\ntowards the initial decision of having psql connect without arguments\nvs psql requiring -h and -p arguments (and possibly -d and -U\nparameters as well), a sailed ship.\n\n>(a) don't mix-and-match Postgres packages from different vendors,\n\nSince there's a client-server architecture here, I'm assuming that\nthere's compatibility between different versions of the software. If I\nwere to connect to an instance provided by an external team, I would\nexpect any psql to work with any postgres server barring specific\nexceptions or wide version discrepancies.\n\n(b) adjust the server's unix_socket_directories parameter so that\nit creates a socket where your installed libpq expects to find it.\nNope, I wanted to connect via tcp, not via socket.\n\n> I do not think your proposal would improve matters; it'd just introduce\n> yet another variable, ie which transport method did libpq choose.\n> As Christoph noted, that affects authentication behaviors, and there\n> are a bunch of other user-visible impacts too (SSL, timeouts, ...)\n\nThis variable already exists, it just depends on the OS. Again, these\nuser-visible impacts would\nonly occur if the user would have received an error instead. Which is\nthe lesser of two evils?\n\n> If we were going to do something of this sort, what I'd be inclined\n> to think about is having an option to probe both of the common socket\n> directory choices, rather than getting into TCP-land. But that still\n> might be a net negative from the standpoint of confusion vs. 
number of\n> cases it fixes.\n\nI think trying both sockets is a great extension of the idea I'm\npresenting, once magic is introduced, the expectation of simplicity\nhas already been broken, so that cost is only paid once, adding\nfurther magic dilutes that cost and makes it worth it.\nGiven the concerns regarding user confusion, consider displaying the\nfailed unix socket connection message, this would mitigate most of the\nconcerns while still providing a better experience than pure failure.\n\nWhen you say confusion, do you mean user confusion or developer\nconfusion? Because I'm interpreting it as developer confusion or\nsource code complexity, I'm fairly confident that these would be a net\ngain for user experience,\nperhaps it's modern software backed by billion dollar wall street\nconglomerates increasing my expectations but, when I received that\nerror, it felt like psql could have known what I meant, and it also\nfelt like it was trying to know what I meant, therefore I tried to\nteach it what I actually meant, I'm sorry for anthropomorphizing psql,\nbut it wanted to learn this. Consider this example, if you are away\nfrom home and you tell Google Maps or Uber that you want to go to your\ncity, does it fail claiming that it doesn't have enough information or\nclaiming that the route it would take given the subpar information you\ngave it would be subpar? Or would it do its best and try to guide you\ntowards the center of the city?\n\nThat said, I understand that this is a classic tradeoff between\nsimplicity of user experience vs simplicity of source code. And since\na simpler user experience necessitates more effort on the backend, I\nunderstand if you would decide not to go for this, you know better\nthan me what the priorities of postgresql are, and it's your time that\nwill be spent maintaining this change, it's understandable for an open\nsource product not to be Google grade. 
But I do want to reaffirm my\nstance that this would be a better experience for users, I offer my\npatch as a token of this conviction.\n\nRegards.\n\nEl lun., 16 de dic. de 2019 a la(s) 11:17, Tom Lane\n(tgl@sss.pgh.pa.us) escribió:\n>\n> Tomas Zubiri <me@tomaszubiri.com> writes:\n> > The problem was that running the command psql without arguments\n> > returned the following\n> > error message:\n> > psql: could not connect to server: No such file or directory\n> > Is the server running locally and accepting\n> > connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\n>\n> The reason this failed, most likely, is using a semi-broken installation\n> in which libpq has a different idea than the server of where the\n> unix socket should be. The right fix is one or the other of\n>\n> (a) don't mix-and-match Postgres packages from different vendors,\n>\n> (b) adjust the server's unix_socket_directories parameter so that\n> it creates a socket where your installed libpq expects to find it.\n>\n> I realize that this isn't great from a newbie-experience standpoint,\n> but unfortunately we don't have a lot of control over varying\n> packager decisions about the socket location --- both the \"/tmp\"\n> and the \"/var/run/postgresql\" camps have valid reasons for their\n> choices.\n>\n> I do not think your proposal would improve matters; it'd just introduce\n> yet another variable, ie which transport method did libpq choose.\n> As Christoph noted, that affects authentication behaviors, and there\n> are a bunch of other user-visible impacts too (SSL, timeouts, ...).\n>\n> If we were going to do something of this sort, what I'd be inclined\n> to think about is having an option to probe both of the common socket\n> directory choices, rather than getting into TCP-land. But that still\n> might be a net negative from the standpoint of confusion vs. number of\n> cases it fixes.\n>\n> regards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 12:54:36 -0300",
"msg_from": "Tomas Zubiri <me@tomaszubiri.com>",
"msg_from_op": true,
"msg_subject": "Re: Improvement to psql's connection defaults"
},
{
"msg_contents": "To summarize possible enhancements to the current patch:\n\na- Don't hide failed attempted connections when defaults are used.\nb- Attempt to connect via other common socket locations \"/tmp\".\nc- New: Display the complete command used once a successful connection\nhas been made, so running plain psql would print\n\"Connecting with psql -h /var/run/postgresql\" in most cases, psql -h\n/tmp in others, psql -h localhost -p 5432 in others.\n\nI could write a patch with them if it were to get implemented.\n\nRegards.\n\nEl lun., 16 de dic. de 2019 a la(s) 12:54, Tomas Zubiri\n(me@tomaszubiri.com) escribió:\n>\n> Tom, Chris, thank you for your responses.\n>\n> > There's an excellent manpage for psql, which can also be found online:\n> > https://www.postgresql.org/docs/current/app-psql.html\n> > I'm a little confused as to why people don't read the documentation and\n> > turn to the 'net - that's bound to dig up a lot of people who haven't\n> > read the docs, too.\n>\n> For many users, Google is our user interface and manual, I can see by\n> checking my browser history that I googled 'postgresql getting\n> started' and arrived at this page '\n> https://www.postgresql.org/docs/10/tutorial-accessdb.html ' which\n> suggests to use psql without specifying host.\n> 20 minutes later I was here\n> https://www.postgresql.org/docs/12/app-psql.html which probably means\n> I found the -h and -p arguments in the manner you suggest.\n>\n> An alternative reason why someone would not use man psql would be if\n> they don't know what the client's executable is. 
Suppose you come from\n> mysql where the command for logging into your database was mysql, you\n> can't man psql because that's the command you are looking for, you\n> might google \"postgresql command line client\" which returns the psql\n> doc page.\n>\n> Finally, you might google the error message that psql returned, which\n> is a perfectly reasonable thing to do.\n>\n> > authentication options for TCP connections, even on localhost, are\n> > often different from those for Unix-domain sockets (e.g. while\n> > using peer authentication for administration purposes might make\n> > a lot of sense, TCP connections need some credential-based authentication\n> > so \"any rogue process\" cannot simply connect to your database).\n>\n> We already established that a tcp connection was subpar in terms of\n> latency, we shall note then that a tcp connection is subpar in terms\n> of security. Additionally, it is duly noted that connection via tcp\n> might prompt the user for a password, which would mean that the user\n> interface for psql could change depending on the connection made.\n> These are not desirable qualities, but I must reiterate that these\n> would only happen instead of showing the user an error. I still feel a\n> subpar connection is the lesser of two evils. Additionally, it's\n> already possible to have this subpar connection and differing\n> interface on non-unix platforms.\n> As a side note,the official postgres image doesn't require a password\n> for localhost connections.\n>\n> > Do we have any guarantees that these containers always expose the\n> > PostgreSQL server on what the host thinks is \"localhost:5432\"? I'm\n> > thinking of network namespaces, dedicated container network interfaces\n> > and all the other shenanigans. 
And what about the use cases of \"more\n> > than one container\" and \"database on the host and in a container\"?\n> > My concern is that adding more \"magic\" into the connection logic\n> > will result in more confusion instead of less - the distinction\n> > between the \"default case Unix-domain socket\" and \"TCP\" will be lost.\n>\n> There are answers to these questions, but since Docker containers\n> don't expect programs to be docker-compliant, these are not things\n> postgresql should be concerned about. What postgresql should be\n> concerned about is that it was accessible via tcp on localhost at port\n> 5432, and psql didn't reach it.\n>\n> Regarding the magic, this is a very valid concern, but I feel it's too\n> late, someone other than us, (Robert Haas according to Git annotate)\n> already implemented this magic, the roots of psql magic can probably\n> be traced back to peer authentication even, that's some magical stuff\n> that I personally appreciate. I feel like these arguments are directed\n> towards the initial decision of having psql connect without arguments\n> vs psql requiring -h and -p arguments (and possibly -d and -U\n> parameters as well), a sailed ship.\n>\n> >(a) don't mix-and-match Postgres packages from different vendors,\n>\n> Since there's a client-server architecture here, I'm assuming that\n> there's compatibility between different versions of the software. 
If I\n> were to connect to an instance provided by an external team, I would\n> expect any psql to work with any postgres server barring specific\n> exceptions or wide version discrepancies.\n>\n> (b) adjust the server's unix_socket_directories parameter so that\n> it creates a socket where your installed libpq expects to find it.\n> Nope, I wanted to connect via tcp, not via socket.\n>\n> > I do not think your proposal would improve matters; it'd just introduce\n> > yet another variable, ie which transport method did libpq choose.\n> > As Christoph noted, that affects authentication behaviors, and there\n> > are a bunch of other user-visible impacts too (SSL, timeouts, ...)\n>\n> This variable already exists, it just depends on the OS. Again, these\n> user-visible impacts would\n> only occur if the user would have received an error instead. Which is\n> the lesser of two evils?\n>\n> > If we were going to do something of this sort, what I'd be inclined\n> > to think about is having an option to probe both of the common socket\n> > directory choices, rather than getting into TCP-land. But that still\n> > might be a net negative from the standpoint of confusion vs. number of\n> > cases it fixes.\n>\n> I think trying both sockets is a great extension of the idea I'm\n> presenting, once magic is introduced, the expectation of simplicity\n> has already been broken, so that cost is only paid once, adding\n> further magic dilutes that cost and makes it worth it.\n> Given the concerns regarding user confusion, consider displaying the\n> failed unix socket connection message, this would mitigate most of the\n> concerns while still providing a better experience than pure failure.\n>\n> When you say confusion, do you mean user confusion or developer\n> confusion? 
Because I'm interpreting it as developer confusion or\n> source code complexity, I'm fairly confident that these would be a net\n> gain for user experience,\n> perhaps it's modern software backed by billion dollar wall street\n> conglomerates increasing my expectations but, when I received that\n> error, it felt like psql could have known what I meant, and it also\n> felt like it was trying to know what I meant, therefore I tried to\n> teach it what I actually meant, I'm sorry for anthropomorphizing psql,\n> but it wanted to learn this. Consider this example, if you are away\n> from home and you tell Google Maps or Uber that you want to go to your\n> city, does it fail claiming that it doesn't have enough information or\n> claiming that the route it would take given the subpar information you\n> gave it would be subpar? Or would it do its best and try to guide you\n> towards the center of the city?\n>\n> That said, I understand that this is a classic tradeoff between\n> simplicity of user experience vs simplicity of source code. And since\n> a simpler user experience necessitates more effort on the backend, I\n> understand if you would decide not to go for this, you know better\n> than me what the priorities of postgresql are, and it's your time that\n> will be spent maintaining this change, it's understandable for an open\n> source product not to be Google grade. 
de 2019 a la(s) 11:17, Tom Lane\n> (tgl@sss.pgh.pa.us) escribió:\n> >\n> > Tomas Zubiri <me@tomaszubiri.com> writes:\n> > > The problem was that running the command psql without arguments\n> > > returned the following\n> > > error message:\n> > > psql: could not connect to server: No such file or directory\n> > > Is the server running locally and accepting\n> > > connections on Unix domain socket \"/var/run/postgresql/.s.PGSQL.5432\"?\n> >\n> > The reason this failed, most likely, is using a semi-broken installation\n> > in which libpq has a different idea than the server of where the\n> > unix socket should be. The right fix is one or the other of\n> >\n> > (a) don't mix-and-match Postgres packages from different vendors,\n> >\n> > (b) adjust the server's unix_socket_directories parameter so that\n> > it creates a socket where your installed libpq expects to find it.\n> >\n> > I realize that this isn't great from a newbie-experience standpoint,\n> > but unfortunately we don't have a lot of control over varying\n> > packager decisions about the socket location --- both the \"/tmp\"\n> > and the \"/var/run/postgresql\" camps have valid reasons for their\n> > choices.\n> >\n> > I do not think your proposal would improve matters; it'd just introduce\n> > yet another variable, ie which transport method did libpq choose.\n> > As Christoph noted, that affects authentication behaviors, and there\n> > are a bunch of other user-visible impacts too (SSL, timeouts, ...).\n> >\n> > If we were going to do something of this sort, what I'd be inclined\n> > to think about is having an option to probe both of the common socket\n> > directory choices, rather than getting into TCP-land. But that still\n> > might be a net negative from the standpoint of confusion vs. number of\n> > cases it fixes.\n> >\n> > regards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 14:14:37 -0300",
"msg_from": "Tomas Zubiri <me@tomaszubiri.com>",
"msg_from_op": true,
"msg_subject": "Re: Improvement to psql's connection defaults"
},
{
"msg_contents": "## Tomas Zubiri (me@tomaszubiri.com):\n\n> We already established that a tcp connection was subpar in terms of\n> latency, we shall note then that a tcp connection is subpar in terms\n> of security.\n\nIt's an entirely different thing, I'd argue. I'm not even convinced\nthat an error message is a bad thing: not specifying connection parameters\ngives you the defaults (which are clearly documented - having the doc\nmaintainers enter into a SEO-contest would be expecting too much); and\nif that fails, there's an error. Adding more guesswork on how to connect\nto your database server will add confusion instead of reducing it.\n(Where does \"localhost\" resolve to? Does it resolve at all? What\nabout IPv4 vs. IPv6? Is IP traffic allowed there? That's all stuff\nwhich has been relevant in one way or the other while looking at existing\nsystems. Real world can deviate quite significantly from what one\nwould expect as \"sane\".) I for one prefer to have clear defaults\nand clear error messages in case that does not work.\n\n> Additionally, it's\n> already possible to have this subpar connection and differing\n> interface on non-unix platforms.\n\nI think there's only one relevant platform without unix sockets\nleft (I'm not sure about vxWorks and other embedded systems, but\ntheir applications rarely include full-blown database servers),\nand that system has gone great lengths to include a linux subsystem -\nthat might tell you something.\n\n> >(a) don't mix-and-match Postgres packages from different vendors,\n> \n> Since there's a client-server architecture here, I'm assuming that\n> there's compatibility between different versions of the software.\n\nGenerally speaking: yes. But compiled-in defaults may not match\n(like the location of the Unix sockets) across different vendor\npackages of the same version. 
And client tools might have a hard\ntime working against a newer major release of the server: the protocol\ndoes not change (at least, it didn't for a long time, except for some\nadditions like SCRAM authentication), but the catalog may have changed\nbetween major versions and the client can't get the information it\nneeds.\n\n> Consider this example, if you are away\n> from home and you tell Google Maps or Uber that you want to go to your\n> city, does it fail claiming that it doesn't have enough information or\n> claiming that the route it would take given the subpar information you\n> gave it would be subpar?\n\nThat's a good example, but not in the way you think: people and vehicles\n(trucks and lorries, even) ending up some hundreds of kilometres from\ntheir intended destination because their navigation system \"tried its\nbest\" with an ambiguously entered name is quite a common occurrence here.\n(For example, there are at least six places called \"Forst\" and some dozens\n\"Neustadt\" - many more if you count boroughs and similar - in Germany).\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Tue, 17 Dec 2019 15:14:56 +0100",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: Improvement to psql's connection defaults"
},
{
"msg_contents": "On 2019-Dec-16, Daniel Gustafsson wrote:\n\n> On 16 Dec 2019, at 15:47, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> \n> > Please, where did you find this \"form\"?\n> \n> It seems to be from the wiki:\n> \n> https://wiki.postgresql.org/wiki/Submitting_a_Patch#Patch_submission\n\nOK, I made a few edits there and in other related pages. I'm sure more\nimprovements can be had, if somebody has the time and inclination.\n\nThanks!\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 16:13:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Improvement to psql's connection defaults"
}
]
[
{
"msg_contents": "Hello,\n\nOn cfbot.cputube.org, we keep seeing random failures on appveyor (the\nCI provider it uses for Windows) in this step:\n\n- appveyor-retry cinst winflexbison\n\n\"cinst\" is the apt/yum/pkg-like tool from the Chocolatey package\nsystem, but unfortunately its repository is frequently unavailable.\n\"appveyor-retry\" was added to cfbot by a pull request from David\nFetter (thanks!) and that reduced the rate of bogus failures quite a\nlot by retrying 3 times, but I'm still seeing a sea of red from time\nto time so I'd like to find another source for those tools.\n\nHere's the full set of software that is already installed on the\nWindows build images:\n\nhttps://www.appveyor.com/docs/windows-images-software/\n\nYou can see MinGW, MSYS and Cygwin there, and I suspect that one of\nthose is the answer, but I'm not familiar with them or what else might\nbe available to install popular F/OSS bits and pieces on that\noperating system, because I really only know how to Unix. Maybe flex\nand bison are already installed somewhere or easily installable with a\nshell command? Would someone who knows about development on Windows\nlike to make a recommendation, or perhaps provide a tweaked version of\nthe attached patch[1]?\n\nThanks,\n\n[1] Instructions: apply to the PG source, push to a public github\nbranch (or gitlab, kiln, ...), log into appveyor.com with your github\n(or ...) account, add the project, watch it build and test.",
"msg_date": "Mon, 16 Dec 2019 11:25:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 11:25:55AM +1300, Thomas Munro wrote:\n> You can see MinGW, MSYS and Cygwin there, and I suspect that one of\n> those is the answer, but I'm not familiar with them or what else might\n> be available to install popular F/OSS bits and pieces on that\n> operating system, because I really only know how to Unix. Maybe flex\n> and bison are already installed somewhere or easily installable with a\n> shell command? Would someone who knows about development on Windows\n> like to make a recommendation, or perhaps provide a tweaked version of\n> the attached patch[1]?\n\nOn my Windows workstations, I use bison and flex bundled in MinGW\nwhich are located under c:\\MinGW\\msys\\1.0\\bin\\. (Then, for a MSVC\nbuild, I just append this path to $ENV{PATH} with a semicolon to do\nthe separation but that's a separate story).\n--\nMichael",
"msg_date": "Mon, 16 Dec 2019 11:46:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 8:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 16, 2019 at 11:25:55AM +1300, Thomas Munro wrote:\n> > You can see MinGW, MSYS and Cygwin there, and I suspect that one of\n> > those is the answer, but I'm not familiar with them or what else might\n> > be available to install popular F/OSS bits and pieces on that\n> > operating system, because I really only know how to Unix. Maybe flex\n> > and bison are already installed somewhere or easily installable with a\n> > shell command? Would someone who knows about development on Windows\n> > like to make a recommendation, or perhaps provide a tweaked version of\n> > the attached patch[1]?\n>\n> On my Windows workstations, I use bison and flex bundled in MinGW\n> which are located under c:\\MinGW\\msys\\1.0\\bin\\. (Then, for a MSVC\n> build, I just append this path to $ENV{PATH} with a semicolon to do\n> the separation but that's a separate story).\n>\n\nI also have the same setup for flex and bison.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 16 Dec 2019 09:00:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> On Mon, Dec 16, 2019 at 8:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > On Mon, Dec 16, 2019 at 11:25:55AM +1300, Thomas Munro wrote:\n> > > You can see MinGW, MSYS and Cygwin there, and I suspect that one of\n> > > those is the answer, but I'm not familiar with them or what else might\n> > > be available to install popular F/OSS bits and pieces on that\n> > > operating system, because I really only know how to Unix. Maybe flex\n> > > and bison are already installed somewhere or easily installable with a\n> > > shell command? Would someone who knows about development on Windows\n> > > like to make a recommendation, or perhaps provide a tweaked version of\n> > > the attached patch[1]?\n> >\n> > On my Windows workstations, I use bison and flex bundled in MinGW\n> > which are located under c:\\MinGW\\msys\\1.0\\bin\\. (Then, for a MSVC\n> > build, I just append this path to $ENV{PATH} with a semicolon to do\n> > the separation but that's a separate story).\n>\n> I also have the same setup for flex and bison.\n\nThanks Michael and Amit. Adding SET\nPATH=%PATH%;C:\\MinGW\\msys\\1.0\\bin\\ did the trick, and allows me to\nremove the Chocolatey dependency. I'll apply that change to cfbot\ntomorrow.\n\n\n",
"msg_date": "Mon, 16 Dec 2019 16:34:07 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 2:04 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n> On Mon, Dec 16, 2019 at 4:31 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > On Mon, Dec 16, 2019 at 8:16 AM Michael Paquier <michael@paquier.xyz> wrote:\n> > > On Mon, Dec 16, 2019 at 11:25:55AM +1300, Thomas Munro wrote:\n> > > > You can see MinGW, MSYS and Cygwin there, and I suspect that one of\n> > > > those is the answer, but I'm not familiar with them or what else might\n> > > > be available to install popular F/OSS bits and pieces on that\n> > > > operating system, because I really only know how to Unix. Maybe flex\n> > > > and bison are already installed somewhere or easily installable with a\n> > > > shell command? Would someone who knows about development on Windows\n> > > > like to make a recommendation, or perhaps provide a tweaked version of\n> > > > the attached patch[1]?\n> > >\n> > > On my Windows workstations, I use bison and flex bundled in MinGW\n> > > which are located under c:\\MinGW\\msys\\1.0\\bin\\. (Then, for a MSVC\n> > > build, I just append this path to $ENV{PATH} with a semicolon to do\n> > > the separation but that's a separate story).\n> >\n> > I also have the same setup for flex and bison.\n>\n> Thanks Michael and Amit. Adding SET\n> PATH=%PATH%;C:\\MinGW\\msys\\1.0\\bin\\ did the trick, and allows me to\n> remove the Chocolatey dependency. I'll apply that change to cfbot\n> tomorrow.\n>\n>\n\n\nThe reason I use chocolatey is to avoid having a dependency on Msys/Msys2 ;-)\n\nIf you're going to link to anything it should probably be to the Msys2\nbinaries, because a) they are likely to be more up to date and b)\nunlike msys1 they are available by default for all four VS toolsets\nAppveyor provides.\n\nFYI I have captured a complete log of what chocolatey does, at\n<https://gist.github.com/adunstan/12e4474c769aa88a584c450548bf2ffa>\nEssentially it just downloads a zip from sourceforge and then sets up\nshims to the binaries.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Dec 2019 14:19:36 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 4:49 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> The reason I use chocolatey is to avoid having a dependency on Msys/Msys2 ;-)\n\nHeh. Well Chocolatey looks like a really nice project, and I see that\nthey package PostgreSQL, and their site shows 85,427 downloads. Neat!\n I just think it's a good idea for this experimental CI to use things\nthat are already on the image if we can. FWIW I've had trouble with\nother stuff fetched over the network by PostgreSQL CI jobs at times\ntoo (Docbook templates and Ubuntu packages).\n\n> If you're going to link to anything it should probably be to the Msys2\n> binaries, because a) they are likely to be more up to date and b)\n> unlike msys1 they are available by default for all four VS toolsets\n> Appveyor provides.\n\nAh, OK that sounds like a good plan then. Any chance you can tell me\nwhat to add to PATH for that? Changing the 1 to a 2 in the path\nmentioned before doesn't work.\n\n\n",
"msg_date": "Mon, 16 Dec 2019 17:20:17 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 2:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > If you're going to link to anything it should probably be to the Msys2\n> > binaries, because a) they are likely to be more up to date and b)\n> > unlike msys1 they are available by default for all four VS toolsets\n> > Appveyor provides.\n>\n> Ah, OK that sounds like a good plan then. Any chance you can tell me\n> what to add to PATH for that? Changing the 1 to a 2 in the path\n> mentioned before doesn't work.\n\nThe Appveyor page says \"MSYS2 (C:\\msys64)\" so I would try adding\n\"C:\\msys64\\bin\" to the PATH.\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Dec 2019 16:51:32 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 7:21 PM Andrew Dunstan\n<andrew.dunstan@2ndquadrant.com> wrote:\n> On Mon, Dec 16, 2019 at 2:50 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > > If you're going to link to anything it should probably be to the Msys2\n> > > binaries, because a) they are likely to be more up to date and b)\n> > > unlike msys1 they are available by default for all four VS toolsets\n> > > Appveyor provides.\n> >\n> > Ah, OK that sounds like a good plan then. Any chance you can tell me\n> > what to add to PATH for that? Changing the 1 to a 2 in the path\n> > mentioned before doesn't work.\n>\n> The Appveyor page says \"MSYS2 (C:\\msys64)\" so I would try adding\n> \"C:\\msys64\\bin\" to the PATH.\n\nThanks. That didn't work but helped me find my way to\nC:\\msys64\\usr\\bin. That version of bison complains about our grammar\nusing deprecated directives, but that's a matter for another day.\n\nhttps://ci.appveyor.com/project/macdice/postgres/builds/29560016\n\n\n",
"msg_date": "Mon, 16 Dec 2019 19:42:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Thanks. That didn't work but helped me find my way to\n> C:\\msys64\\usr\\bin. That version of bison complains about our grammar\n> using deprecated directives, but that's a matter for another day.\n\nOh, that's a known issue on late-model bison. The problem is that the\noption syntax it wants us to use doesn't exist at all on older bison\nversions. So far we haven't been willing to break old platforms just\nto suppress the warning. We'll probably have to find another answer\nonce a decent fraction of PG hackers start using bison versions that\ngive the warning.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 08:55:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the best way to get flex and bison on Windows?"
}
] |
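The fix the thread converged on can be captured as an AppVeyor configuration fragment. This is a sketch only, not the actual cfbot config: the `C:\msys64\usr\bin` path is the one Thomas reports working in the final messages, and is an assumption about the current AppVeyor image layout.

```yaml
# Sketch: use the MSYS2 copies of flex/bison already on the AppVeyor
# image instead of installing winflexbison from Chocolatey.
install:
  - set PATH=%PATH%;C:\msys64\usr\bin
  - bison --version
  - flex --version
```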
[
{
"msg_contents": "Hello pg hackers,\n\nThis is the definition of the function:\n\nSyncRepWaitForLSN(XLogRecPtr lsn, bool commit)\n\n1. In the code, it emits ereport(WARNING) for the\nProcDiePending/QueryCancelPending case like this:\n\n ereport(WARNING,\n (errcode(ERRCODE_ADMIN_SHUTDOWN),\n errmsg(\"canceling the wait for synchronous\nreplication and terminating connection due to administrator command\"),\n errdetail(\"The transaction has already committed\nlocally, but might not have been replicated to the standby.\")));\n\n The message \"The transaction has already committed locally\" is wrong\nfor non-commit waiting e.g. 2PC Prepare or AbortPrepare, right? so maybe we\njust give the errdetail for the commit==true case.\n\n2. I'm curious how the client should proceed for the ProcDiePending corner\ncase in the function (assuming synchronous_commit as remote_write or\nabove). In this scenario, a transaction has been committed locally on\nmaster but we are not sure if the commit is replicated to standby or not if\nProcDiePending happens. The commit is not in a safe status from the\nperspective of HA, for example if further when auto-failover happens, we\nmay or may not lose the transaction commit on the standby and client just\ngets (and even can not get) a warning of unknown commit replication status.\n\nThanks.",
"msg_date": "Mon, 16 Dec 2019 15:37:41 +0800",
"msg_from": "Paul Guo <pguo@pivotal.io>",
"msg_from_op": true,
"msg_subject": "Questions about SyncRepWaitForLSN()"
}
] |
[
{
"msg_contents": "Hi all,\n\nTraditionally, when Bison generates the header file, it starts\nnumbering named tokens at 258, so that numbers below 256 can be used\nfor character literals. Then, during parsing, the external token\nnumber or character literal is mapped to Bison's internal token number\nvia the yytranslate[] array.\n\nThe newly-released Bison 3.5 has the option \"%define api.token.raw\",\nwhich causes Bison to write out the same (\"raw\") token numbers it\nwould use internally, and thus skip building the yytranslate[] array\nas well as the code to access it. To make use of this, there cannot be\nany single character literals in the grammar, otherwise Bison will\nrefuse to build.\n\nAttached is a draft patch to make the core grammar forward-compatible\nwith this option by using FULL_NAMES for all valid single character\ntokens. Benchmarking raw parsing with \"%define api.token.raw\" enabled\nshows ~1.7-1.9% improvement compared with not setting the option. Not\nmuch, but doing one less array access per token reduces cache\npollution and saves a few kB of binary size as well.\n\nIt'll be years before Bison 3.5 is common in the wild, but not doing a\nuseless array access seems like an obvious thing to aspire towards, so\nwe may as well start now.\n\nDespite the simplicity of the basic idea, there are some subtleties to consider:\n\nSince the PL/pgSQL grammar shares the core scanner, all the single\nchar tokens had to change there as well. Likewise, if \"%define\napi.token.raw\" is set in one grammar, it must be set in both. I've\nadded a comment to that effect.\n\nFor ECPG, it would work well enough to change over to named tokens\nonly for the single chars used in the core grammar, leaving just '{'\nand '}' as ECPG-only literals. However, I thought it better to be\nconsistent in that regard, so I made these named tokens as well. This\nadds a wrinkle in that ECPG now needs one {self} pattern to match its\nown single character tokens and also a test for membership in the set\nof {self} chars in the core scanner, needed for the {operator} rule.\nIt seems the simplest thing is to add '{' and '}' to the core's {self}\npattern, making the two cases the same. That would preclude ever using\nthose characters as operators, but I think it's unlikely anyone would\nwant to use them as such. If that's undesirable, that can be worked\naround with additional complexity.\n\nPreviously, {dolqfailed} returned '$', which is not a valid token in\neither the core or PL/pgSQL grammar that I can see. I don't see how\nthis can be anything but a syntax error, so I changed it to throw the\nerror within the scanner with \"unterminated dollar quote\" for the\nmessage, although it doesn't seem all that helpful. I added a test for\nthis to demonstrate, but maybe there's a better place for it.\n\nMaintenance-wise, it would be a bit tricky to enforce the new\nconvention as a \"rule\", since there is currently no easy way to test\nif any new grammar additions add any single character literals unless\nsomeone were to build with \"%define api.token.raw\". Perhaps that's not\na huge deal, but it's something to consider. More concerning is if\nsomeone were to absent-mindedly add a rule to the scanner with \"return\nyytext[0]\", since that would now be interpreted as the wrong token,\ncausing hard-to-diagnose bugs. There's no obvious way to catch that\nindependently from how Bison is configured. Maybe we could have the\nMakefile grep the scanner for likely expressions that return a single\ncharacter.\n\nThe bulk of the patch was created mechanically, so comments referring\nto single char literals have changed as well, leading to some\ninconsistencies. I left it like this, pending bike-shedding.\n\nSpeaking of bike-shedding, I didn't give a whole lot of thought\ntowards the new token names, and just used what came to mind first.\nOCTOTHORPE is probably better known as NUMBER_SIGN, for example.\n\nI will add this to the next commitfest.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 16 Dec 2019 10:04:53 -0500",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "small Bison optimization: remove single character literal tokens"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> Traditionally, when Bison generates the header file, it starts\n> numbering named tokens at 258, so that numbers below 256 can be used\n> for character literals. Then, during parsing, the external token\n> number or character literal is mapped to Bison's internal token number\n> via the yytranslate[] array.\n> The newly-released Bison 3.5 has the option \"%define api.token.raw\",\n> which causes Bison to write out the same (\"raw\") token numbers it\n> would use internally, and thus skip building the yytranslate[] array\n> as well as the code to access it. To make use of this, there cannot be\n> any single character literals in the grammar, otherwise Bison will\n> refuse to build.\n> Attached is a draft patch to make the core grammar forward-compatible\n> with this option by using FULL_NAMES for all valid single character\n> tokens. Benchmarking raw parsing with \"%define api.token.raw\" enabled\n> shows ~1.7-1.9% improvement compared with not setting the option. Not\n> much, but doing one less array access per token reduces cache\n> pollution and saves a few kB of binary size as well.\n\nTBH, I'm having a hard time getting excited about this. It seems like\nyou've just moved the mapping from point A to point B, that is, in\nplace of a lookup in the grammar you have to have the lexer translate\nASCII characters to something else. I'm not sure that's an improvement\nat all. And I'm really unexcited about applying a patch that's this\ninvasive in order to chase a very small improvement ... especially a\nvery small improvement that we can't even have anytime soon.\n\n> It'll be years before Bison 3.5 is common in the wild,\n\nIt'll be *decades* before we'd consider requiring it, really, unless\nthere are truly striking improvements unrelated to this point.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 10:23:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: small Bison optimization: remove single character literal tokens"
}
] |
[
{
"msg_contents": "Hi,\n\nWhen exporting data with psql -c \"...\" >file or select ... \\g file inside\npsql,\npost-creation output errors are silently ignored.\nThe problem can be seen easily by creating a small ramdisk and\nfilling it over capacity:\n\n$ sudo mount -t tmpfs -o rw,size=1M tmpfs /mnt/ramdisk\n\n$ psql -d postgres -At \\\n -c \"select repeat('abc', 1000000)\" > /mnt/ramdisk/file\n\n$ echo $?\n0\n\n$ ls -l /mnt/ramdisk/file\n-rw-r--r-- 1 daniel daniel 1048576 Dec 16 15:57 /mnt/ramdisk/file\n\n$ df -h /mnt/ramdisk/\nFilesystem\tSize Used Avail Use% Mounted on\ntmpfs\t\t1.0M 1.0M 0 100% /mnt/ramdisk\n\nThe output that should be 3M byte long is truncated as expected, but\nwe got no error message and no error code from psql, which\nis obviously not nice.\n\nThe reason is that PrintQuery() and the code below it in\nfe_utils/print.c call puts(), putc(), fprintf(),... without checking their\nreturn values or the result of ferror() on the output stream.\nIf we made them do that and had the printing bail out at the first error,\nthat would be a painful change, since there are a lot of such calls:\n$ egrep -w '(fprintf|fputs|fputc)' fe_utils/print.c | wc -l\n326\nand the call sites are in functions that have no error reporting paths\nanyway.\n\nTo start the discussion, here's a minimal patch that checks ferror() in\nPrintQueryTuples() to raise an error when the printing is done\n(better late than never).\nThe error message is generic as opposed to using errno, as \nI don't know whether errno has been clobbered at this point.\nOTOH, I think that the error indicator on the output stream is not\ncleared by successful writes after some other previous writes have failed.\n\nAre there opinions on whether this should be addressed simply like\nin the attached, or a large refactoring of print.c to bail out at the first\nerror would be better, or other ideas on how to proceed?\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite",
"msg_date": "Mon, 16 Dec 2019 17:05:25 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Making psql error out on output failures"
},
{
"msg_contents": "Hi Daniel,\r\nI agree with you if psql output doesn't indicate any error when the disk is full, then it is obviously not nice. In some situations, people may end up lost data permanently. \r\nHowever, after I quickly applied your idea/patch to \"commit bf65f3c8871bcc95a3b4d5bcb5409d3df05c8273 (HEAD -> REL_12_STABLE, origin/REL_12_STABLE)\", and I found the behaviours/results are different.\r\n\r\nHere is the steps and output, \r\n$ sudo mkdir -p /mnt/ramdisk\r\n$ sudo mount -t tmpfs -o rw,size=1M tmpfs /mnt/ramdisk\r\n\r\nTest-1: delete the \"file\", and run psql command from a terminal directly,\r\n$ rm /mnt/ramdisk/file \r\n$ psql -d postgres -At -c \"select repeat('111', 1000000)\" > /mnt/ramdisk/file\r\nError printing tuples\r\nthen dump the file,\r\n$ rm /mnt/ramdisk/file \r\n$ hexdump -C /mnt/ramdisk/file \r\n00000000 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 |1111111111111111|\r\n*\r\n00100000\r\n\r\nTest-2: delete the \"file\", run the command within psql console,\r\n$ rm /mnt/ramdisk/file \r\n$ psql -d postgres\r\npsql (12.1)\r\nType \"help\" for help.\r\n\r\npostgres=# select repeat('111', 1000000) \\g /mnt/ramdisk/file\r\nError printing tuples\r\npostgres=# \r\nThen dump the file again,\r\n$ hexdump -C /mnt/ramdisk/file \r\n00000000 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 | |\r\n*\r\n00100000\r\n\r\nAs you can see the content are different after applied the patch. \r\n\r\nDavid\n\nThe new status of this patch is: Waiting on Author\n",
"msg_date": "Tue, 14 Jan 2020 02:39:15 +0000",
"msg_from": "David Z <idrawone@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\tDavid Z wrote:\n\n> $ psql -d postgres -At -c \"select repeat('111', 1000000)\" >\n> /mnt/ramdisk/file\n\nThe -A option selects the unaligned output format and -t\nswitches to the \"tuples only\" mode (no header, no footer).\n\n> Test-2: delete the \"file\", run the command within psql console,\n> $ rm /mnt/ramdisk/file \n> $ psql -d postgres\n\nIn this invocation there's no -A and -t, so the beginning of the\noutput is going to be a left padded column name that is not in the\nother output.\nThe patch is not involved in that difference.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 14 Jan 2020 15:37:21 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "Right, the file difference is caused by \"-At\". \r\n\r\nOn the other side, in order to keep the output message more consistent with other tools, I did a litter bit more investigation on pg_dump to see how it handles this situation. Here is my findings.\r\npg_dump using WRITE_ERROR_EXIT to throw the error message when \"(bytes_written != size * nmemb)\", where WRITE_ERROR_EXIT calls fatal(\"could not write to output file: %m\") and then \"pg_log_generic(PG_LOG_ERROR, __VA_ARGS__)\". After ran a quick test in the same situation, I got message like below,\r\n$ pg_dump -h localhost -p 5432 -d postgres -t psql_error -f /mnt/ramdisk/file\r\npg_dump: error: could not write to output file: No space left on device\r\n\r\nIf I change the error log message like below, where \"%m\" is used to pass the value of strerror(errno), \"could not write to output file:\" is copied from function \"WRITE_ERROR_EXIT\". \r\n-\t\t\tpg_log_error(\"Error printing tuples\");\r\n+\t\t\tpg_log_error(\"could not write to output file: %m\");\r\nthen the output message is something like below, which, I believe, is more consistent with pg_dump.\r\n$ psql -d postgres -t -c \"select repeat('111', 1000000)\" -o /mnt/ramdisk/file\r\ncould not write to output file: No space left on device\r\n$ psql -d postgres -t -c \"select repeat('111', 1000000)\" > /mnt/ramdisk/file\r\ncould not write to output file: No space left on device\r\n\r\nHope the information will help.\r\n\r\nDavid\r\n---\r\nHighgo Software Inc. (Canada)\r\nwww.highgo.ca",
"msg_date": "Wed, 15 Jan 2020 18:35:38 +0000",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\tDavid Zhang wrote:\n\n> If I change the error log message like below, where \"%m\" is used to pass the\n> value of strerror(errno), \"could not write to output file:\" is copied from\n> function \"WRITE_ERROR_EXIT\". \n> - pg_log_error(\"Error printing tuples\"); \n> + pg_log_error(\"could not write to output file: %m\"); \n> then the output message is something like below, which, I believe, is more\n> consistent with pg_dump. \n\nThe problem is that errno may not be reliable to tell us what was\nthe problem that leads to ferror(fout) being nonzero, since it isn't\nsaved at the point of the error and the output code may have called\nmany libc functions between the first occurrence of the output error\nand when pg_log_error() is called.\n\nThe linux manpage on errno warns specifically about this:\n<quote from \"man errno\">\nNOTES\n A common mistake is to do\n\n\t if (somecall() == -1) {\n\t printf(\"somecall() failed\\n\");\n\t if (errno == ...) { ... }\n\t }\n\n where errno no longer needs to have the value it had upon return \nfrom somecall()\n (i.e., it may\thave been changed by the printf(3)). If the value of\nerrno should be\n preserved across a library call, it must be saved:\n</quote>\n\nThis other bit from the POSIX spec [1] is relevant:\n\n \"The value of errno shall be defined only after a call to a function\n for which it is explicitly stated to be set and until it is changed\n by the next function call or if the application assigns it a value.\"\n\nTo use errno in a way that complies with the above, the psql code\nshould be refactored. I don't know if having a more precise error\nmessage justifies that refactoring. I've elaborated a bit about that\nupthread with the initial submission. Besides, I'm not even\nsure that errno is necessarily set on non-POSIX platforms when fputc\nor fputs fails.\nThat's why this patch uses the safer approach to emit a generic\nerror message.\n\n[1] https://pubs.opengroup.org/onlinepubs/9699919799/functions/errno.html\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite",
"msg_date": "Thu, 16 Jan 2020 14:20:32 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "On 2020-01-16 5:20 a.m., Daniel Verite wrote:\n> \tDavid Zhang wrote:\n>\n>> If I change the error log message like below, where \"%m\" is used to pass the\n>> value of strerror(errno), \"could not write to output file:\" is copied from\n>> function \"WRITE_ERROR_EXIT\".\n>> - pg_log_error(\"Error printing tuples\");\n>> + pg_log_error(\"could not write to output file: %m\");\n>> then the output message is something like below, which, I believe, is more\n>> consistent with pg_dump.\n> The problem is that errno may not be reliable to tell us what was\n> the problem that leads to ferror(fout) being nonzero, since it isn't\n> saved at the point of the error and the output code may have called\n> many libc functions between the first occurrence of the output error\n> and when pg_log_error() is called.\n>\n> The linux manpage on errno warns specifically about this:\n> <quote from \"man errno\">\n> NOTES\n> A common mistake is to do\n>\n> \t if (somecall() == -1) {\n> \t printf(\"somecall() failed\\n\");\n> \t if (errno == ...) { ... }\n> \t }\n>\n> where errno no longer needs to have the value it had upon return\n> from somecall()\n> (i.e., it may\thave been changed by the printf(3)). If the value of\n> errno should be\n> preserved across a library call, it must be saved:\n> </quote>\n>\n> This other bit from the POSIX spec [1] is relevant:\n>\n> \"The value of errno shall be defined only after a call to a function\n> for which it is explicitly stated to be set and until it is changed\n> by the next function call or if the application assigns it a value.\"\n>\n> To use errno in a way that complies with the above, the psql code\n> should be refactored. I don't know if having a more precise error\n> message justifies that refactoring. I've elaborated a bit about that\n> upthread with the initial submission.\n\nYes, I agree with you. 
For case 2 \"select repeat('111', 1000000) \\g \n/mnt/ramdisk/file\", it can be easily fixed with more accurate error \nmessage similar to pg_dump, one example could be something like below. \nBut for case 1 \"psql -d postgres -At -c \"select repeat('111', 1000000)\" \n > /mnt/ramdisk/file\" , it may require a lot of refactoring work.\n\ndiff --git a/src/port/snprintf.c b/src/port/snprintf.c\nindex 8fd997553e..e6c239fd9f 100644\n--- a/src/port/snprintf.c\n+++ b/src/port/snprintf.c\n@@ -309,8 +309,10 @@ flushbuffer(PrintfTarget *target)\n\n written = fwrite(target->bufstart, 1, nc, target->stream);\n target->nchars += written;\n- if (written != nc)\n+ if (written != nc) {\n target->failed = true;\n+ fprintf(stderr, \"could not write to output file: \n%s\\n\", strerror(errno));\n+ }\n\n> Besides, I'm not even\n> sure that errno is necessarily set on non-POSIX platforms when fputc\n> or fputs fails.\nVerified, fputs does set the errno at least in Ubuntu Linux.\n> That's why this patch uses the safer approach to emit a generic\n> error message.\n>\n> [1] https://pubs.opengroup.org/onlinepubs/9699919799/functions/errno.html\n>\n>\n> Best regards,\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n\n\n",
"msg_date": "Thu, 16 Jan 2020 23:23:43 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\tDavid Zhang wrote:\n\n> Yes, I agree with you. For case 2 \"select repeat('111', 1000000) \\g \n> /mnt/ramdisk/file\", it can be easily fixed with more accurate error \n> message similar to pg_dump, one example could be something like below. \n> But for case 1 \"psql -d postgres -At -c \"select repeat('111', 1000000)\" \n> > /mnt/ramdisk/file\" , it may require a lot of refactoring work.\n\nI don't quite see why you make that distinction? The relevant bits of\ncode are common, it's all the code in src/fe_utils/print.c called\nfrom PrintQuery().\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 20 Jan 2020 11:42:05 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\nOn 2020-01-20 2:42 a.m., Daniel Verite wrote:\n> \tDavid Zhang wrote:\n>\n>> Yes, I agree with you. For case 2 \"select repeat('111', 1000000) \\g\n>> /mnt/ramdisk/file\", it can be easily fixed with more accurate error\n>> message similar to pg_dump, one example could be something like below.\n>> But for case 1 \"psql -d postgres -At -c \"select repeat('111', 1000000)\"\n>>> /mnt/ramdisk/file\" , it may require a lot of refactoring work.\n> I don't quite see why you make that distinction? The relevant bits of\n> code are common, it's all the code in src/fe_utils/print.c called\n> from PrintQuery().\n\nThe reason is that, within PrintQuery() function call, one case goes to \nprint_unaligned_text(), and the other case goes to print_aligned_text(). \nThe error \"No space left on device\" can be logged by fprintf() which is \nredefined as pg_fprintf() when print_aligned_text() is called, however \nthe original c fputs function is used directly when \nprint_unaligned_text() is called, and it is used everywhere.\n\nWill that be a better solution if redefine fputs similar to fprintf and \nlog the exact error when first time discovered? The concern is that if \nwe can't provide a more accurate information to the end user when error \nhappens, sometimes the end user might got even confused.\n\n>\n> Best regards,\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n",
"msg_date": "Mon, 27 Jan 2020 12:41:00 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\tDavid Zhang wrote:\n\n> The error \"No space left on device\" can be logged by fprintf() which is \n> redefined as pg_fprintf() when print_aligned_text() is called\n\nAre you sure? I don't find that redefinition. Besides\nprint_aligned_text() also calls putc and puts.\n\n> Will that be a better solution if redefine fputs similar to fprintf and \n> log the exact error when first time discovered? \n\nI think we can assume it's not acceptable to have pg_fprintf()\nto print anything to stderr, or to set a flag through a global\nvariable. So even if using pg_fprintf() for all the printing, no matter\nhow (through #defines or otherwise), there's still the problem that the\nerror needs to be propagated up the call chain to be processed by psql.\n\n> The concern is that if we can't provide a more accurate\n> information to the end user when error happens, sometimes the\n> end user might got even confused.\n\nIt's better to have a more informative message, but I'm for\nnot having the best be the enemy of the good.\nThe first concern is that at the moment, we have no error\nreport at all in the case when the output can be opened\nbut the error happens later along the writes.\n\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Tue, 28 Jan 2020 13:14:09 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\nOn 2020-01-28 4:14 a.m., Daniel Verite wrote:\n> \tDavid Zhang wrote:\n>\n>> The error \"No space left on device\" can be logged by fprintf() which is\n>> redefined as pg_fprintf() when print_aligned_text() is called\n> Are you sure? I don't find that redefinition. Besides\n> print_aligned_text() also calls putc and puts.\nYes, below is the gdb debug message when psql first time detects the \nerror \"No space left on device\". Test case, \"postgres=# select \nrepeat('111', 1000000) \\g /mnt/ramdisk/file\"\nbt\n#0 flushbuffer (target=0x7ffd6a709ad0) at snprintf.c:313\n#1 0x000055ba90b88b6c in dopr_outchmulti (c=32, slen=447325, \ntarget=0x7ffd6a709ad0) at snprintf.c:1435\n#2 0x000055ba90b88d5e in trailing_pad (padlen=-1499997, \ntarget=0x7ffd6a709ad0) at snprintf.c:1514\n#3 0x000055ba90b87f36 in fmtstr (value=0x55ba90bb4f9a \"\", leftjust=1, \nminlen=1499997, maxwidth=0, pointflag=0, target=0x7ffd6a709ad0) at \nsnprintf.c:994\n#4 0x000055ba90b873c6 in dopr (target=0x7ffd6a709ad0, \nformat=0x55ba90bb5083 \"%s%-*s\", args=0x7ffd6a709f40) at snprintf.c:675\n#5 0x000055ba90b865b5 in pg_vfprintf (stream=0x55ba910cf240, \nfmt=0x55ba90bb507f \"%-*s%s%-*s\", args=0x7ffd6a709f40) at snprintf.c:257\n#6 0x000055ba90b866aa in pg_fprintf (stream=0x55ba910cf240, \nfmt=0x55ba90bb507f \"%-*s%s%-*s\") at snprintf.c:270\n#7 0x000055ba90b75d22 in print_aligned_text (cont=0x7ffd6a70a210, \nfout=0x55ba910cf240, is_pager=false) at print.c:937\n#8 0x000055ba90b7ba19 in printTable (cont=0x7ffd6a70a210, \nfout=0x55ba910cf240, is_pager=false, flog=0x0) at print.c:3378\n#9 0x000055ba90b7bedc in printQuery (result=0x55ba910f9860, \nopt=0x7ffd6a70a2c0, fout=0x55ba910cf240, is_pager=false, flog=0x0) at \nprint.c:3496\n#10 0x000055ba90b39560 in PrintQueryTuples (results=0x55ba910f9860) at \ncommon.c:874\n#11 0x000055ba90b39d55 in PrintQueryResults (results=0x55ba910f9860) at \ncommon.c:1262\n#12 0x000055ba90b3a343 in SendQuery (query=0x55ba910f2590 \"select \nrepeat('111', 
1000000) \") at common.c:1446\n#13 0x000055ba90b51f36 in MainLoop (source=0x7f1623a9ea00 \n<_IO_2_1_stdin_>) at mainloop.c:505\n#14 0x000055ba90b5c4da in main (argc=3, argv=0x7ffd6a70a738) at \nstartup.c:445\n(gdb) l\n308 size_t written;\n309\n310 written = fwrite(target->bufstart, 1, nc, \ntarget->stream);\n311 target->nchars += written;\n312 if (written != nc)\n313 target->failed = true;\n314 }\n315 target->bufptr = target->bufstart;\n316 }\n317\n(gdb) p written\n$2 = 1023\n(gdb) p nc\n$3 = 1024\n(gdb) p strerror(errno)\n$4 = 0x7f16238672c9 \"No space left on device\"\n(gdb)\n>\n>> Will that be a better solution if redefine fputs similar to fprintf and\n>> log the exact error when first time discovered?\n> I think we can assume it's not acceptable to have pg_fprintf()\n> to print anything to stderr, or to set a flag through a global\n> variable. So even if using pg_fprintf() for all the printing, no matter\n> how (through #defines or otherwise), there's still the problem that the\n> error needs to be propagated up the call chain to be processed by psql.\n>\n>> The concern is that if we can't provide a more accurate\n>> information to the end user when error happens, sometimes the\n>> end user might got even confused.\n> It's better to have a more informative message, but I'm for\n> not having the best be the enemy of the good.\n> The first concern is that at the moment, we have no error\n> report at all in the case when the output can be opened\n> but the error happens later along the writes.\n>\n>\n> Best regards,\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 12:59:30 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\tDavid Zhang wrote:\n\n> > Are you sure? I don't find that redefinition. Besides\n> > print_aligned_text() also calls putc and puts.\n> Yes, below is the gdb debug message when psql first time detects the \n> error \"No space left on device\". Test case, \"postgres=# select \n> repeat('111', 1000000) \\g /mnt/ramdisk/file\"\n> bt\n> #0 flushbuffer (target=0x7ffd6a709ad0) at snprintf.c:313\n\nIndeed. For some reason gdb won't let me step into these fprintf()\ncalls, but you're right they're redefined (through include/port.h):\n\n#define vsnprintf\t\tpg_vsnprintf\n#define snprintf\t\tpg_snprintf\n#define vsprintf\t\tpg_vsprintf\n#define sprintf \t\tpg_sprintf\n#define vfprintf\t\tpg_vfprintf\n#define fprintf \t\tpg_fprintf\n#define vprintf \t\tpg_vprintf\n#define printf(...)\t\tpg_printf(__VA_ARGS__)\n\nAnyway, I don't see it leading to an actionable way to reliably keep\nerrno, as discussed upthread.\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Wed, 29 Jan 2020 10:51:27 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "hi,\n\nI did some further research on this bug. Here is the summary:\n\n1. Tried to wrap \"fputs\" similar to \"fprintf\" redefined by \"pg_fprintf\", \nbut it ended up with too much error due to \"fputs\" is called everywhere \nin \"print_unaligned_text\". If add an extra static variable to track the \noutput status, then it will be an overkill and potentially more bugs may \nbe introduced.\n\n2. Investigated some other libraries such as libexplain \n(http://libexplain.sourceforge.net/), but it definitely will introduce \nthe complexity.\n\n3. I think a better way to resolve this issue will still be the solution \nwith an extra %m, which can make the error message much more informative \nto the end users. The reason is that,\n\n3.1 Providing the reasons for errors is required by PostgreSQL document, \nhttps://www.postgresql.org/docs/12/error-style-guide.html\n \"Reasons for Errors\n Messages should always state the reason why an error occurred. For \nexample:\n\n BAD: could not open file %s\n BETTER: could not open file %s (I/O failure)\n If no reason is known you better fix the code.\"\n\n3.2 Adding \"%m\" can provide a common and easy to understand reasons \ncrossing many operating systems. The \"%m\" fix has been tested on \nplatforms: CentOS 7, RedHat 7, Ubuntu 18.04, Windows 7/10, and macOS \nMojave 10.14, and it works.\n\n3.3 If multiple errors happened after the root cause \"No space left on \ndevice\", such as \"No such file or directory\" and \"Permission denied\", \nthen it make sense to report the latest one. The end users suppose to \nknow the latest error and solve it first. 
Eventually, \"No space left on \ndevice\" error will be showing up.\n\n3.4 Test log on different operating systems.\n\n### CentOS 7\n[postgres@localhost ~]$ uname -a\nLinux localhost.localdomain 3.10.0-1062.9.1.el7.x86_64 #1 SMP Fri Dec 6 \n15:49:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\n\n[postgres@localhost ~]$ sudo mount -t tmpfs -o rw,size=1M tmpfs /mnt/ramdisk\n[postgres@localhost ~]$ df -h\ntmpfs 1.0M 0 1.0M 0% /mnt/ramdisk\n\n[postgres@localhost ~]$ psql\npsql (12.2)\nType \"help\" for help.\n\npostgres=# select repeat('111', 1000000) \\g /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\n[postgres@localhost ~]$ psql -d postgres -At -c \"select repeat('111', \n1000000)\" -o /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\n[postgres@localhost ~]$ psql -d postgres -At -c \"select repeat('111', \n1000000)\" > /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\n\n### RedHat 7\n[root@rh7 postgres]# uname -a\nLinux rh7 3.10.0-1062.el7.x86_64 #1 SMP Thu Jul 18 20:25:13 UTC 2019 \nx86_64 x86_64 x86_64 GNU/Linux\n[postgres@rh7 ~]$ sudo mkdir -p /mnt/ramdisk\n\nWe trust you have received the usual lecture from the local System\nAdministrator. 
It usually boils down to these three things:\n\n #1) Respect the privacy of others.\n #2) Think before you type.\n #3) With great power comes great responsibility.\n\n[sudo] password for postgres:\n[postgres@rh7 ~]$ sudo mount -t tmpfs -o rw,size=1M tmpfs /mnt/ramdisk\n\n[postgres@rh7 ~]$ psql -d postgres\npsql (12.2)\nType \"help\" for help.\n\npostgres=# select repeat('111', 1000000) \\g /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\n[postgres@rh7 ~]$ psql -d postgres -At -c \"select repeat('111', \n1000000)\" > /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\n[postgres@rh7 ~]$ psql -d postgres -At -c \"select repeat('111', \n1000000)\" -o /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\n\n### Ubuntu 18.04\npostgres=# select repeat('111', 10000000) \\g /mnt/ramdisk/file\nError printing tuples (No space left on device)\npostgres=# \\q\n\ndavid@david-VM:~$ psql -d postgres -At -c \"select repeat('111', \n1000000)\" > /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\ndavid@david-VM:~$ psql -d postgres -At -c \"select repeat('111', \n1000000)\" -o /mnt/ramdisk/file\nError printing tuples (No space left on device)\n\n### Windows 7\npostgres=# select repeat('111', 1000000) \\g E:/file\nError printing tuples (No space left on device)\npostgres=# \\q\n\nC:\\pg12.1\\bin>psql -d postgres -U postgres -h 172.20.14.29 -At -c \n\"select repeat\n('111', 100000)\" >> E:/file\nError printing tuples (No space left on device)\n\nC:\\pg12.1\\bin>psql -d postgres -U postgres -h 172.20.14.29 -At -c \n\"select repeat\n('111', 100000)\" -o E:/file\nError printing tuples (No space left on device)\n\n### Windows 10\npostgres=# select repeat('111', 1000000) \\g E:/file\nError printing tuples (No space left on device)\npostgres=# \\q\n\nC:\\>psql -d postgres -U postgres -h 192.168.0.19 -At -c \"select \nrepeat('111', 10000000)\" -o E:/file\nError printing tuples (No space left on device)\n\nC:\\>psql -d 
postgres -U postgres -h 192.168.0.19 -At -c \"select \nrepeat('111', 2000000)\" >> E:/file\nError printing tuples (No space left on device)\n\n### macOS Mojave 10.14\npostgres=# select repeat('111', 1000000) \\g /Volumes/sdcard/file\nError printing tuples (No space left on device)\npostgres=# \\q\n\nMacBP:bin david$ psql -d postgres -h 192.168.0.10 -At -c \"select \nrepeat('111', 3000000)\" > /Volumes/sdcard/file\nError printing tuples (No space left on device)\n\nMacBP:bin david$ psql -d postgres -h 192.168.0.10 -At -c \"select \nrepeat('111', 3000000)\" -o /Volumes/sdcard/file\nError printing tuples (No space left on device)\n\n\nOn 2020-01-29 1:51 a.m., Daniel Verite wrote:\n> \tDavid Zhang wrote:\n>\n>>> Are you sure? I don't find that redefinition. Besides\n>>> print_aligned_text() also calls putc and puts.\n>> Yes, below is the gdb debug message when psql first time detects the\n>> error \"No space left on device\". Test case, \"postgres=# select\n>> repeat('111', 1000000) \\g /mnt/ramdisk/file\"\n>> bt\n>> #0 flushbuffer (target=0x7ffd6a709ad0) at snprintf.c:313\n> Indeed. For some reason gdb won't let me step into these fprintf()\n> calls, but you're right they're redefined (through include/port.h):\n>\n> #define vsnprintf\t\tpg_vsnprintf\n> #define snprintf\t\tpg_snprintf\n> #define vsprintf\t\tpg_vsprintf\n> #define sprintf \t\tpg_sprintf\n> #define vfprintf\t\tpg_vfprintf\n> #define fprintf \t\tpg_fprintf\n> #define vprintf \t\tpg_vprintf\n> #define printf(...)\t\tpg_printf(__VA_ARGS__)\n>\n> Anyway, I don't see it leading to an actionable way to reliably keep\n> errno, as discussed upthread.\n>\n> Best regards,\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Tue, 18 Feb 2020 10:28:26 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "On 2020-Feb-18, David Zhang wrote:\n\n> 3. I think a better way to resolve this issue will still be the solution\n> with an extra %m, which can make the error message much more informative to\n> the end users.\n\nYes, agreed. However, we use a style like this:\n\n\t\tpg_log_error(\"could not print tuples: %m\");\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 28 Feb 2020 13:03:59 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "Hi Alvaro,\n\nThanks for your review, now the new patch with the error message in PG \nstyle is attached.\n\nOn 2020-02-28 8:03 a.m., Alvaro Herrera wrote:\n> On 2020-Feb-18, David Zhang wrote:\n>\n>> 3. I think a better way to resolve this issue will still be the solution\n>> with an extra %m, which can make the error message much more informative to\n>> the end users.\n> Yes, agreed. However, we use a style like this:\n>\n> \t\tpg_log_error(\"could not print tuples: %m\");\n>\n-- \nDavid\n\nSoftware Engineer\nHighgo Software Inc. (Canada)\nwww.highgo.ca",
"msg_date": "Fri, 28 Feb 2020 21:23:49 -0800",
"msg_from": "David Zhang <david.zhang@highgo.ca>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\tDavid Zhang wrote:\n\n> Thanks for your review, now the new patch with the error message in PG \n> style is attached.\n\nThe current status of the patch is \"Needs review\" at \nhttps://commitfest.postgresql.org/27/2400/\n\nIf there's no more review to do, would you consider moving it to\nReady for Committer?\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Fri, 06 Mar 2020 10:36:51 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "On 2020-03-06 10:36, Daniel Verite wrote:\n> \tDavid Zhang wrote:\n> \n>> Thanks for your review, now the new patch with the error message in PG\n>> style is attached.\n> \n> The current status of the patch is \"Needs review\" at\n> https://commitfest.postgresql.org/27/2400/\n> \n> If there's no more review to do, would you consider moving it to\n> Ready for Committer?\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Mar 2020 16:14:16 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Making psql error out on output failures"
},
{
"msg_contents": "\tPeter Eisentraut wrote:\n\n> > If there's no more review to do, would you consider moving it to\n> > Ready for Committer?\n> \n> committed\n\nThanks!\n\nBest regards,\n-- \nDaniel Vérité\nPostgreSQL-powered mailer: http://www.manitou-mail.org\nTwitter: @DanielVerite\n\n\n",
"msg_date": "Mon, 23 Mar 2020 19:51:57 +0100",
"msg_from": "\"Daniel Verite\" <daniel@manitou-mail.org>",
"msg_from_op": true,
"msg_subject": "Re: Making psql error out on output failures"
}
] |
[
{
"msg_contents": "I started to think a little harder about the rough ideas I sketched\nyesterday in [1] about making the planner deal with outer joins in\na less ad-hoc manner. One thing that emerged very quickly is that\nI was misremembering how the parser creates join alias Vars.\nConsider for example\n\ncreate table t1(a int, b int);\ncreate table t2(x int, y int);\n\nselect a, t1.a, x, t2.x from t1 left join t2 on b = y;\n\nThe Vars that the parser will produce in the SELECT's targetlist have,\nrespectively,\n\n\t :varno 3 \n\t :varattno 1 \n\n\t :varno 1 \n\t :varattno 1 \n\n\t :varno 3 \n\t :varattno 3 \n\n\t :varno 2 \n\t :varattno 1 \n\n(where \"3\" is the rangetable index of the unnamed join relation).\nSo as far as the parser is concerned, a \"join alias\" var is just\none that you named by referencing the join output column; it's\nnot tracking whether the value is one that's affected by the join\nsemantics.\n\nWhat I'd like, in order to make progress with the planner rewrite,\nis that all four Vars in the tlist have varno 3, showing that\nthey are (potentially) semantically distinct from the Vars in\nthe JOIN ON clause (which'd have varnos 1 and 2 in this example).\n\nThis is a pretty small change as far as most of the system is\nconcerned; there should be noplace that fails to cope with a\njoin alias Var, since it'd have been legal to write a join\nalias Var in anyplace that would change.\n\nHowever, it's a bit sticky for ruleutils.c, which needs to be\nable to regurgitate these Vars in their original spellings.\n(This is \"needs\", not \"wants\", because there are various\nconditions under which we don't have the option of spelling\nit either way. 
For instance, if both tables expose columns\nnamed \"z\", then you must write \"t1.z\" or \"t2.z\"; the columns\nwon't have unique names at the join level.)\n\nWhat I'd like to do about that is redefine the existing\nvarnoold/varoattno fields as being the \"syntactic\" identifier\nof the Var, versus the \"semantic\" identifier that varno/varattno\nwould be, and have ruleutils.c always use varnoold/varoattno\nwhen trying to print a Var.\n\nI think that this approach would greatly clarify what those fields\nmean and how they should be manipulated --- for example, it makes\nit clear that _equalVar() should ignore varnoold/varoattno, since\nVars with the same semantic meaning should be considered equal\neven if they were spelled differently.\n\nWhile at it, I'd be inclined to rename those fields, since the\nexisting names aren't even consistently spelled, much less meaningful.\nPerhaps \"varsno/varsattno\" or \"varnosyn/varattnosyn\".\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/7771.1576452845%40sss.pgh.pa.us\n\n\n",
"msg_date": "Mon, 16 Dec 2019 12:00:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What I'd like, in order to make progress with the planner rewrite,\n> is that all four Vars in the tlist have varno 3, showing that\n> they are (potentially) semantically distinct from the Vars in\n> the JOIN ON clause (which'd have varnos 1 and 2 in this example).\n>\n> This is a pretty small change as far as most of the system is\n> concerned; there should be noplace that fails to cope with a\n> join alias Var, since it'd have been legal to write a join\n> alias Var in anyplace that would change.\n\nI don't have an opinion about the merits of this change, but I'm\ncurious how this manages to work. It seems like there would be a fair\nnumber of places that needed to map the join alias var back to some\nbaserel that can supply it. And it seems like there could be multiple\nlevels of join alias vars as well, since you could have joins nested\ninside of other joins, possibly with subqueries involved.\n\nAt some point I had the idea that it might make sense to have\nequivalence classes that had both a list of full members (which are\nexactly equivalent) and nullable members (which are either equivalent\nor null). I'm not sure whether that idea is of any practical use,\nthough. It does seems strange to me that the representation you are\nproposing gets at the question only indirectly. The nullable version\nof the Var has got a different varno and varattno than the\nnon-nullable version of the Var, but other than that there's no\nconnection between them. How do you go about matching those together?\nI guess varnoold/varoattno can do the trick, but if that's only being\nused by ruleutils.c then there must be some other mechanism.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 16 Dec 2019 15:55:26 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Dec 16, 2019 at 12:00 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'd like, in order to make progress with the planner rewrite,\n>> is that all four Vars in the tlist have varno 3, showing that\n>> they are (potentially) semantically distinct from the Vars in\n>> the JOIN ON clause (which'd have varnos 1 and 2 in this example).\n\n> I don't have an opinion about the merits of this change, but I'm\n> curious how this manages to work. It seems like there would be a fair\n> number of places that needed to map the join alias var back to some\n> baserel that can supply it. And it seems like there could be multiple\n> levels of join alias vars as well, since you could have joins nested\n> inside of other joins, possibly with subqueries involved.\n\nSure. Right now, we smash join aliases down to the ultimately-referenced\nbase vars early in planning (see flatten_join_alias_vars). After the\npatch that I'm proposing right now, that would continue to be the case,\nso there'd be little change in most of the planner from this. However,\nthe later changes that I speculated about in the other thread would\ninvolve delaying that smashing in cases where the join output value is\npossibly different from the input value, so that we would have a clear\nrepresentational distinction between those things, something we lack\ntoday.\n\n> At some point I had the idea that it might make sense to have\n> equivalence classes that had both a list of full members (which are\n> exactly equivalent) and nullable members (which are either equivalent\n> or null).\n\nYeah, this is another way that you might get at the problem, but it\nseems to me it's not really addressing the fundamental squishiness.\nIf the \"nullable members\" might be null, then what semantics are\nyou promising exactly? 
You certainly haven't got anything that\ndefines a sort order for them.\n\n> I'm not sure whether that idea is of any practical use,\n> though. It does seems strange to me that the representation you are\n> proposing gets at the question only indirectly. The nullable version\n> of the Var has got a different varno and varattno than the\n> non-nullable version of the Var, but other than that there's no\n> connection between them. How do you go about matching those together?\n\nYou'd have to look into the join's joinaliasvars list (or more likely,\nsome new planner data structure derived from that) to discover that\nthere's any connection. That seems fine to me, because AFAICS\nrelatively few places would need to do that. It's certainly better\nthan using a representation that suggests that two values are the same\nwhen they're not. (TBH, I've spent the last dozen years waiting for\nsomeone to come up with an example that completely breaks equivalence\nclasses, if not our entire approach to outer joins. So far we've been\nable to work around every case, but we've sometimes had to give up on\noptimizations that would be nice to have.)\n\nA related example that is bugging me is that the grouping-sets patch\nbroke the meaning of Vars that represent post-grouping values ---\nthere again, the value might have gone to null as a result of grouping,\nbut you can't tell it apart from values that haven't. I think this is\nless critical because such Vars can't appear in FROM/WHERE so they're\nof little interest to most of the planner, but we've still had to put\nin kluges like 90947674f because of that. We might be well advised\nto invent some join-alias-like mechanism for those. 
(I have a vague\nmemory now that Gierth wanted to do something like that and I\ndiscouraged it because it was unlike the way we did outer joins ...\nso he was right, but what we should have done was fix outer joins not\ndouble down on the kluge.)\n\n> I guess varnoold/varoattno can do the trick, but if that's only being\n> used by ruleutils.c then there must be some other mechanism.\n\nActually, they're nothing but debug support currently --- ruleutils\ndoesn't use them either. It's possible that this change would allow\nruleutils to save cycles in a lot of cases by not having to drill down\nthrough subplans to identify the ultimate referent of upper-plan Vars.\nBut I haven't investigated that yet.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 16:40:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "I wrote:\n> I started to think a little harder about the rough ideas I sketched\n> yesterday in [1] about making the planner deal with outer joins in\n> a less ad-hoc manner.\n> [1] https://www.postgresql.org/message-id/7771.1576452845%40sss.pgh.pa.us\n\nAfter further study, the idea of using join alias vars to track\nouter-join semantics basically doesn't work at all. Join alias vars in\ntheir current form (ie, references to the output columns of a JOIN RTE)\naren't suitable for the purpose of representing possibly-nulled inputs\nto that same RTE. There are two big stumbling blocks:\n\n* In the presence of JOIN USING, we don't necessarily have a JOIN output\ncolumn that is equivalent to either input column. The output is\ndefinitely not equal to the nullable side of an OJ, since it won't go to\nNULL; and it might not be equivalent to the non-nullable side either,\nbecause JOIN USING might've coerced it to some common datatype.\n\n* We also don't have any output column that could represent a whole-row\nreference to either input table. I thought about representing that with\na RowExpr of join output Vars, but that fails to preserve the existing\nsemantics: a whole-row reference to the nullable side goes to NULL, not\nto a row of NULLs, when we're null-extending the join.\n\nSo that kind of crashed and burned. We could maybe fake it by inventing\nsome new conventions about magic attnums of the join RTE that correspond\nto the values we want, but that seems really messy and error-prone.\n\nThe alternatives that seem plausible at this point are\n\n(1) Create some sort of wrapper node indicating \"the contents of this\nexpression might be replaced by NULL\". This is basically what the\nplanner's PlaceHolderVars do, so maybe we'd just be talking about\nintroducing those at some earlier stage.\n\n(2) Explicitly mark Vars as being nullable by some outer join. 
I think\nwe could probably get this down to one additional integer field in\nstruct Var, so it wouldn't be too much of a penalty.\n\nThe wrapper approach is more general since you can wrap something\nthat's not necessarily a plain Var; but it's also bulkier and so\nprobably a bit less efficient. I'm not sure which idea I like better.\n\nWith either approach, we could either make parse analysis inject the\nnullability markings, or wait to do it in the planner. On a purely\nabstract system structural level, I like the former better: it is\nexactly the province of parse analysis to decide what are the semantics\nof what the user typed, and surely what a Var means is part of that.\nOTOH, if we do it that way, the planner potentially has to rearrange the\nmarkings after it does join strength reduction; so maybe it's best to\njust wait till after that planning phase to address this at all.\n\nAny thoughts about that?\n\nAnyway, I had started to work on getting parse analysis to label\nouter-join-nullable Vars properly, and soon decided that no matter how\nwe do it, there's not enough information available at the point where\nparse analysis makes a Var. The referenced RTE is not, in itself,\nenough info, and I don't think we want to decorate RTEs with more info\nthat's only needed during parse analysis. What would be saner is to add\nany extra info to the ParseNamespaceItem structs. But that requires\nsome refactoring to allow the ParseNamespaceItems, not just the\nreferenced RTEs, to be passed down through Var lookup/construction.\nSo attached is a patch that refactors things that way. As proof of\nconcept, I added the rangetable index to ParseNamespaceItem, and used\nthat to get rid of the RTERangeTablePosn() searches that we formerly had\nin a bunch of places. 
Now, RTERangeTablePosn() isn't likely to be all\nthat expensive, but still this should be a little faster and cleaner.\nAlso, I was able to confine the fuzzy-lookup heuristic stuff to within\nparse_relation.c instead of letting it bleed out to the rest of the\nparser.\n\nThis seems to me to be good cleanup regardless of whether we ever\nask parse analysis to label outer-join-nullable Vars. So, barring\nobjection, I'd like to push it soon.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 20 Dec 2019 11:12:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The alternatives that seem plausible at this point are\n>\n> (1) Create some sort of wrapper node indicating \"the contents of this\n> expression might be replaced by NULL\". This is basically what the\n> planner's PlaceHolderVars do, so maybe we'd just be talking about\n> introducing those at some earlier stage.\n>\n> (2) Explicitly mark Vars as being nullable by some outer join. I think\n> we could probably get this down to one additional integer field in\n> struct Var, so it wouldn't be too much of a penalty.\n>\n> The wrapper approach is more general since you can wrap something\n> that's not necessarily a plain Var; but it's also bulkier and so\n> probably a bit less efficient. I'm not sure which idea I like better.\n\nI'm not sure which is better, either, although I would like to note in\npassing that the name PlaceHolderVar seems to me to be confusing and\nterrible. It took me years to understand it, and I've never been\ntotally sure that I actually do. Why is it not called\nMightBeNullWrapper or something?\n\nIf you chose to track it in the Var, maybe you could do better than to\ntrack whether it might have gone to NULL. For example, perhaps you\ncould track the set of baserels that are syntactically below the Var\nlocation and have the Var on the nullable side of a join, rather than\njust have a Boolean that indicates whether there are any. I don't know\nwhether the additional effort would be worth the cost of maintaining\nthe information, but it seems like it might be.\n\n> With either approach, we could either make parse analysis inject the\n> nullability markings, or wait to do it in the planner. 
On a purely\n> abstract system structural level, I like the former better: it is\n> exactly the province of parse analysis to decide what are the semantics\n> of what the user typed, and surely what a Var means is part of that.\n> OTOH, if we do it that way, the planner potentially has to rearrange the\n> markings after it does join strength reduction; so maybe it's best to\n> just wait till after that planning phase to address this at all.\n>\n> Any thoughts about that?\n\nGenerally, I like the idea of driving this off the parse tree, because\nit seems to me that, ultimately, whether a Var is *potentially*\nnullable or not depends on the query as provided by the user. And, if\nwe replan the query, these determinations don't change, at least as\nlong as they are only driven by the query syntax and not, say,\nattisnull or opclass details. It would be nice not to redo the work\nunnecessarily. However, that seems to require some way of segregating\nthe information we derive as a preliminary and syntactical judgement\nfrom subsequent inferences made during query planning, because the\nlatter CAN change during replanning.\n\nIt might be useful to track 'Relids' with each Var rather than just 'bool'. In\nother words, based on where the reference to the Var is in the\noriginal query text, figure out the set of joins where (1) the Var is\nsyntactically above the join and (2) on the nullable side, and then\nput the relations on the other sides of those joins into the Relids.\nThen if you later determine that A LEFT JOIN B actually can't make\nanything go to null, you can just ignore the presence of A in this set\nfor the rest of planning. I feel like this kind of idea might have\nother applications too, although I admit that it also has a cost.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Dec 2019 12:11:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Dec 20, 2019 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The alternatives that seem plausible at this point are\n>> ...\n>> (2) Explicitly mark Vars as being nullable by some outer join. I think\n>> we could probably get this down to one additional integer field in\n>> struct Var, so it wouldn't be too much of a penalty.\n\n> It might be useful to track 'Relids' with each Var rather than just 'bool'. In\n> other words, based on where the reference to the Var is in the\n> original query text, figure out the set of joins where (1) the Var is\n> syntactically above the join and (2) on the nullable side, and then\n> put the relations on the other sides of those joins into the Relids.\n> Then if you later determine that A LEFT JOIN B actually can't make\n> anything go to null, you can just ignore the presence of A in this set\n> for the rest of planning. I feel like this kind of idea might have\n> other applications too, although I admit that it also has a cost.\n\nYeah, a bitmapset might be a better idea than just recording the topmost\nrelevant join's relid. But it's also more expensive, and I think we ought\nto keep the representation of Vars as cheap as possible. (On the third\nhand, an empty BMS is cheap, while if the alternative to a non-empty BMS\nis to put a separate wrapper node around the Var, that's hardly better.)\n\nThe key advantage of a BMS, really, is that it dodges the issue of needing\nto re-mark Vars when you re-order two outer joins using the outer join\nidentities. You really don't want that to result in having to consider\nVars above the two joins to be different depending on the order you chose\nfor the OJs, because that'd enormously complicate considering both sorts\nof Paths at the same time. The rough idea I'd had about coping with that\nissue with just a single relid is that maybe it doesn't matter --- maybe\nwe can always mark Vars according to the *syntactically* highest nulling\nOJ, regardless of the actual join order. But I'm not totally sure that\ncan work.\n\nIn any case, what the planner likes to work with is sets of baserel\nrelids covered by a join, not the relid(s) of the join RTEs themselves.\nSo maybe there's going to be a conversion step required anyhow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 13:16:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-20 11:12:53 -0500, Tom Lane wrote:\n> (2) Explicitly mark Vars as being nullable by some outer join. I think\n> we could probably get this down to one additional integer field in\n> struct Var, so it wouldn't be too much of a penalty.\n\nFor a while I've wished that we could infer nullability of columns above\nthe scan level, e.g. so execution can use faster tuple deforming code.\nRight now there's no realistic way ExecTypeFromTL() can figure\nthat out, for upper query nodes. If we were to introduce something like\nthe field you suggest, it'd be darn near trivial.\n\nOTOH, I'd really at some point like to start moving TupleDesc\ncomputations to the planner - they're quite expensive, and we do them\nover and over again. And that would not necessarily need a convenient\nexecution time representation anymore. But I don't think moving\ntupledesc computation into the planner is a small rearrangement...\n\n\nWould we want to have only boolean state, or more information (e.g. not\nnull, maybe null, is null)?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:19:10 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "I wrote:\n> Anyway, I had started to work on getting parse analysis to label\n> outer-join-nullable Vars properly, and soon decided that no matter how\n> we do it, there's not enough information available at the point where\n> parse analysis makes a Var. The referenced RTE is not, in itself,\n> enough info, and I don't think we want to decorate RTEs with more info\n> that's only needed during parse analysis. What would be saner is to add\n> any extra info to the ParseNamespaceItem structs.\n\nHere is a further step on this journey. It's still just parser\nrefactoring, and doesn't (AFAICT) result in any change in generated\nparse trees, but it seems worth posting and committing separately.\n\nThe two key ideas here are:\n\n1. Integrate ParseNamespaceItems a bit further into the parser's\nrelevant APIs. In particular, the addRangeTableEntryXXX functions\nno longer return just a bare RTE, but a ParseNamespaceItem wrapper\nfor it. This gets rid of a number of kluges we had for finding out\nthe RT index of the new RTE, since that's now carried along in the\nnsitem --- we no longer need fragile assumptions about how the new\nRTE is still the last one in the rangetable, at some point rather\ndistant from where it was actually appended to the list.\n\nMost of the callers of addRangeTableEntryXXX functions just turn\naround and pass the result to addRTEtoQuery, which I've renamed\nto addNSItemtoQuery; it doesn't gin up a new nsitem anymore but\njust installs the one it's passed. It's perhaps a bit inconsistent\nthat I renamed that function but not addRangeTableEntryXXX.\nI considered making those addNamespaceItemXXX, but desisted on the\nperhaps-thin grounds that they don't link the new nsitem into the\nparse state, only the RTE. This could be argued of course.\n\n2. Add per-column information to the ParseNamespaceItems. 
As of\nthis patch, the useful part of that is column type/typmod/collation\ninfo which can be used to generate Vars referencing this RTE.\nI envision that the next step will involve generating the Vars'\nidentity (varno/varattno columns) from that as well, and this\npatch includes logic to set up some associated per-column fields.\nBut those are not actually getting fed into the Vars quite yet.\n(The step after that will be to add outer-join-nullability info.)\n\nBut independently of those future improvements, this patch is\na win because it allows carrying forward column-type info that's\nknown at the time we do addRangeTableEntryXXX, and using that\nwhen we make a Var, instead of having to do the rather expensive\ncomputations involved in expandRTE() or get_rte_attribute_type().\nget_rte_attribute_type() is indeed gone altogether, and while\nexpandRTE() is still needed, it's not used in any performance-critical\nparse analysis code paths.\n\nOn a complex-query test case that I've used before [1], microbenchmarking\njust raw parsing plus parse analysis shows a full 20% speedup over HEAD,\nwhich I think can mostly be attributed to getting rid of the syscache\nlookups that get_rte_attribute_type() did for Vars referencing base\nrelations. The total impact over a complete query execution cycle\nis a lot less of course. Still, it's pretty clearly a performance win,\nand to my mind the code is also cleaner --- this is paying down some\ntechnical debt from when we bolted JOIN syntax onto pre-existing\nparsing code.\n\nBarring objections, I plan to commit this fairly soon and get onto the\nnext step, which will start to have ramifications outside the parser.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/6970.1545327857%40sss.pgh.pa.us",
"msg_date": "Mon, 30 Dec 2019 22:50:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "I wrote:\n> Here is a further step on this journey. It's still just parser\n> refactoring, and doesn't (AFAICT) result in any change in generated\n> parse trees, but it seems worth posting and committing separately.\n\nPushed at 5815696bc.\n\n> 2. Add per-column information to the ParseNamespaceItems. As of\n> this patch, the useful part of that is column type/typmod/collation\n> info which can be used to generate Vars referencing this RTE.\n> I envision that the next step will involve generating the Vars'\n> identity (varno/varattno columns) from that as well, and this\n> patch includes logic to set up some associated per-column fields.\n> But those are not actually getting fed into the Vars quite yet.\n\nHere's a further step that does that.\n\nThe core idea of this patch is to make the parser generate join alias\nVars (that is, ones with varno pointing to a JOIN RTE) only when the\nalias Var is actually different from any raw join input, that is a type\ncoercion and/or COALESCE is necessary to generate the join output value.\nOtherwise just generate varno/varattno pointing to the relevant join\ninput column.\n\nIn effect, this means that the planner's flatten_join_alias_vars()\ntransformation is already done in the parser, for all cases except\n(a) columns that are merged by JOIN USING and are transformed in the\nprocess, and (b) whole-row join Vars. In principle that would allow\nus to skip doing flatten_join_alias_vars() in many more queries than\nwe do now, but we don't have quite enough infrastructure to know that\nwe can do so --- in particular there's no cheap way to know whether\nthere are any whole-row join Vars. I'm not sure if it's worth the\ntrouble to add a Query-level flag for that, and in any case it seems\nlike fit material for a separate patch. 
But even without skipping the\nwork entirely, this should make flatten_join_alias_vars() faster,\nparticularly where there are nested joins that it had to flatten\nrecursively.\n\nAn essential part of this change is to replace Var nodes'\nvarnoold/varoattno fields with varnosyn/varattnosyn, which have\nconsiderably more tightly-defined meanings than the old fields: when\nthey differ from varno/varattno, they identify the Var's position in\nan aliased JOIN RTE, and the join alias is what ruleutils.c should\nprint for the Var. This is necessary because the varno change\ndestroyed ruleutils.c's ability to find the JOIN RTE from the Var's\nvarno.\n\nAnother way in which this change broke ruleutils.c is that it's no\nlonger feasible to determine, from a JOIN RTE's joinaliasvars list,\nwhich join columns correspond to which columns of the join's immediate\ninput relations. (If those are sub-joins, the joinaliasvars entries\nmay point to columns of their base relations, not the sub-joins.)\nBut that was a horrid mess requiring a lot of fragile assumptions\nalready, so let's just bite the bullet and add some more JOIN RTE\nfields to make it more straightforward to figure that out. I added\ntwo integer-List fields containing the relevant column numbers from\nthe left and right input rels, plus a count of how many merged columns\nthere are.\n\nThis patch depends on the ParseNamespaceColumn infrastructure that\nI added in commit 5815696bc. The biggest bit of code change is\nrestructuring transformFromClauseItem's handling of JOINs so that\nthe ParseNamespaceColumn data is propagated upward correctly.\n\nOther than that and the ruleutils fixes, everything pretty much\njust works, though some processing is now inessential. I grabbed\ntwo pieces of low-hanging fruit in that line:\n\n1. In find_expr_references, we don't need to recurse into join alias\nVars anymore. 
There aren't any except for references to merged USING\ncolumns, which are more properly handled when we scan the join's RTE.\nThis change actually fixes an edge-case issue: we will now record a\ndependency on any type-coercion function present in a USING column's\njoinaliasvar, even if that join column has no references in the query\ntext. The odds of the missing dependency causing a problem seem quite\nsmall: you'd have to posit somebody dropping an implicit cast between\ntwo data types, without removing the types themselves, and then having\na stored rule containing a whole-row Var for a join whose USING merge\ndepends on that cast. So I don't feel a great need to change this in\nthe back branches. But in theory this way is more correct.\n\n2. markRTEForSelectPriv and markTargetListOrigin don't need to recurse\ninto join alias Vars either, because the cases they care about don't\napply to alias Vars for USING columns that are semantically distinct\nfrom the underlying columns. This removes the only case in which\nmarkVarForSelectPriv could be called with NULL for the RTE, so adjust\nthe comments to describe that hack as being strictly internal to\nmarkRTEForSelectPriv.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 02 Jan 2020 12:37:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Fri, Dec 20, 2019 at 11:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The alternatives that seem plausible at this point are\n>> (1) Create some sort of wrapper node indicating \"the contents of this\n>> expression might be replaced by NULL\". This is basically what the\n>> planner's PlaceHolderVars do, so maybe we'd just be talking about\n>> introducing those at some earlier stage.\n>> ...\n\n> I'm not sure which is better, either, although I would like to note in\n> passing that the name PlaceHolderVar seems to me to be confusing and\n> terrible. It took me years to understand it, and I've never been\n> totally sure that I actually do. Why is it not called\n> MightBeNullWrapper or something?\n\nHere's a data dump about my further thoughts in this area. I've concluded\nthat the \"wrapper\" approach is the right way to proceed, and that rather\nthan having the planner introduce the wrappers as happens now, we should\nindeed have the parser introduce the wrappers from the get-go. There are\na few arguments for that:\n\n* Arguably, this is a question of decorating the parse tree with\ninformation about query semantics. I've always held that parse analysis\nis what should introduce knowledge of semantics; the planner ought not be\nreverse-engineering that.\n\n* AFAICS, we would need an additional pass over the query tree in order to\ndo this in the planner. There is no existing recursive tree-modification\npass that happens at an appropriate time.\n\n* We can use the same type of wrapper node to solve the problems with\ngrouping-set expressions that were discussed in\nhttps://www.postgresql.org/message-id/flat/7dbdcf5c-b5a6-ef89-4958-da212fe10176%40iki.fi\nalthough I'd leave that for a follow-on patch rather than try to fix\nit immediately. 
Here again, it'd be better to introduce the wrappers\nat parse time --- check_ungrouped_columns() is already detecting the\npresence of grouping-expression references, so we could make it inject\nwrappers around them at relatively little extra cost.\n\nPer Robert's complaint above, these wrappers need better documentation,\nand they should be called something other than PlaceHolderVar, even though\nthey're basically that (and hopefully will replace those completely).\nI'm tentatively thinking of calling them \"NullableVar\", but am open to\nbetter ideas. And here is a proposed addition to optimizer/README to\nexplain why they exist. I'm not quite satisfied with the explanation yet\n--- in particular, if we don't need them at runtime, why do we need them\nat parse time? Any thoughts about how to explain this more solidly are\nwelcome.\n\n----------\n\nTo simplify handling of some issues with outer joins, we use NullableVars,\nwhich are produced by the parser and used by the planner, but do not\nappear in finished plans. A NullableVar is a wrapper around another\nexpression, decorated with a set of outer-join relids, and notionally\nhaving the semantics\n\n\tCASE WHEN any-of-these-outer-joins-produced-a-null-extended-row\n\tTHEN NULL\n\tELSE contained-expression\n\tEND\n\nIt's only notional, because no such calculation is ever done explicitly.\nIn a finished plan, the NullableVar construct is replaced by a plain Var\nreferencing an output column of the topmost mentioned outer join, while\nthe \"contained expression\" is the corresponding input to the bottommost\njoin. Any forcing to null happens in the course of calculating the\nouter join results. 
(Because we don't ever have to do the calculation\nexplicitly, it's not necessary to distinguish which side of an outer join\ngot null-extended, which'd otherwise be essential information for FULL\nJOIN cases.)\n\nA NullableVar wrapper is placed around a Var referencing a column of the\nnullable side of an outer join when that reference appears syntactically\nabove (outside) the outer join, but not when the reference is below the\nouter join, such as within its ON clause. References to the non-nullable\nside of an outer join are never wrapped. NullableVars mentioning multiple\njoin nodes arise from cases with nested outer joins.\n\nIt might seem that the NullableVar construct is unnecessary (and indeed,\nwe got by without it for many years). In a join row that's null-extended\nfor lack of a matching nullable-side row, the only reasonable value to\nimpute to a Var of that side is NULL, no matter where you look in the\nparse tree. However there are pressing reasons to use NullableVars\nanyway:\n\n* NullableVars simplify reasoning about where to evaluate qual clauses.\nConsider\n\tSELECT * FROM t1 LEFT JOIN t2 ON (t1.x = t2.y) WHERE foo(t2.z)\n(Assume foo() is not strict, so that we can't reduce the left join to\na plain join.) A naive implementation might try to push the foo(t2.z)\ncall down to the scan of t2, but that is not correct because (a) what\nfoo() should actually see for a null-extended join row is NULL, and\n(b) if foo() returns false, we should suppress the t1 row from the join\naltogether, not emit it with a null-extended t2 row. On the other hand,\nit *would* be correct (and desirable) to push the call down if the query\nwere\n\tSELECT * FROM t1 LEFT JOIN t2 ON (t1.x = t2.y AND foo(t2.z))\nIf the upper WHERE clause is represented as foo(NullableVar(t2.z)), then\nwe can recognize that the NullableVar construct must be evaluated above\nthe join, since it references the join's relid. 
Meanwhile, a t2.z\nreference within the ON clause receives no such decoration, so in the\nsecond case foo(t2.z) can be seen to be safe to push down to the scan\nlevel. Thus we can solve the qual-placement problem in a simple and\ngeneral fashion.\n\n* NullableVars simplify reasoning around EquivalenceClasses. Given say\n\tSELECT * FROM t1 LEFT JOIN t2 ON (t1.x = t2.y) WHERE t1.x = 42\nwe would like to put t1.x and t2.y and 42 into the same EquivalenceClass\nand then derive \"t2.y = 42\" to use as a restriction clause for the scan\nof t2. However, it'd be wrong to conclude that t2.y will always have\nthe value 42, or that it's equal to t1.x in every joined row. The use\nof NullableVar wrappers sidesteps this problem: we can put t2.y in the\nEquivalenceClass, and we can derive all the equalities we want about it,\nbut they will not lead to conclusions that NullableVar(t2.y) is equal to\nanything.\n\n* NullableVars are necessary to avoid wrong results when flattening\nsub-selects. If t2 in the above example is a sub-select or view in which\nthe y output column is a constant, and we want to pull up that sub-select,\nwe cannot simply substitute that constant for every use of t2.y in the\nouter query: a Const node will not produce \"NULL\" when that's needed.\nBut it does work if the t2.y Vars are wrapped in NullableVars. 
The\nNullableVar shows that the contained value might be replaced by a NULL,\nand it carries enough information so that we can generate a plan tree in\nwhich that replacement does happen when necessary (by evaluating the\nConst below the outer join and making upper references to it be Vars).\nMoreover, when pulling up the constant into portions of the parse tree\nthat are below the outer join, the right things also happen: those\nreferences can validly become plain Consts.\n\nIn essence, these examples show that it's useful to treat references to\na column of the nullable side of an outer join as being semantically\ndistinct depending on whether they are \"above\" or \"below\" the outer join,\neven though no distinction exists once the calculation of a particular\njoin output row is complete.\n\n----------\n\nAs you might gather from that, I'm thinking of changing the planner\nso that (at least for outer joins) the relid set for a join includes\nthe RTE number of the join node itself. I haven't decided yet if\nthat should happen across-the-board or just in the areas where we\nuse relid sets to decide which qual expressions get evaluated where.\n\nSome other exciting things that will happen:\n\n* RestrictInfo.is_pushed_down will go away; as sketched above, the\npresence of the outer join's relid in the qual's required_relids\n(due to NullableVars' outer join relid sets getting added into that\nby pull_varnos) will tell us whether the qual must be treated as\na join or filter qual for the current join level.\n\n* I think a lot of hackery in distribute_qual_to_rels can go away,\nsuch as the below_outer_join flag, and maybe check_outerjoin_delay.\nAll of that is basically trying to reverse-engineer the qual\nplacement semantics that the wrappers will make explicit.\n\n* As sketched above, equivalence classes will no longer need to\ntreat outer-join equalities with suspicion, and I think the\nreconsider_outer_join_clauses stuff goes away too.\n\n* There's probably other 
hackery that can be simplified; I've not\ngone looking in detail yet.\n\nI've not written any actual code, but am close to being ready to.\nOne thing I'm still struggling with is how to continue to support\nouter join \"identity 3\":\n\n 3. (A leftjoin B on (Pab)) leftjoin C on (Pbc)\n = A leftjoin (B leftjoin C on (Pbc)) on (Pab)\n\n Identity 3 only holds if predicate Pbc must fail for all-null B\n rows (that is, Pbc is strict for at least one column of B).\n\nPer this sketch, if the query is initially written the first way, Pbc's\nreferences to B Vars would have NullableVars indicating a dependence on\nthe A/B join, seemingly preventing Pbc from being pushed into the RHS of\nthat join per the identity. But if the query is initially written the\nsecond way, there will be no NullableVar wrappers in either predicate.\nMaybe it's sufficient to strip the NullableVar wrappers once we've\ndetected the applicability of the identity. (We'll need code for that\nanyway, since outer-join strength reduction will create cases where\nNullableVar wrappers need to go away.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Feb 2020 15:24:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "On 5/2/2020 01:24, Tom Lane wrote:\n> I've not written any actual code, but am close to being ready to.\nThis thread gives us hope to get started on solving some of the basic \nplanner problems.\nBut there has been no activity for a long time, as far as I can see. Have you tried to \nimplement this idea? Is it still relevant?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Wed, 22 Dec 2021 18:17:35 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n> On 5/2/2020 01:24, Tom Lane wrote:\n>> I've not written any actual code, but am close to being ready to.\n\n> This thread gives us hope to get started on solving some of the basic \n> planner problems.\n> But there has been no activity for a long time, as far as I can see. Have you tried to \n> implement this idea? Is it still relevant?\n\nIt's been on the back burner for awhile :-(. I've not forgotten\nabout it though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 22 Dec 2021 10:42:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
},
{
"msg_contents": "On 22/12/2021 20:42, Tom Lane wrote:\n> Andrey Lepikhov <a.lepikhov@postgrespro.ru> writes:\n>> On 5/2/2020 01:24, Tom Lane wrote:\n>>> I've not written any actual code, but am close to being ready to.\n> \n>> This thread gives us hope to get started on solving some of the basic\n>> planner problems.\n>> But there has been no activity for a long time, as far as I can see. Have you tried to\n>> implement this idea? Is it still relevant?\n> \n> It's been on the back burner for awhile :-(. I've not forgotten\n> about it though.\nI would like to try to develop this feature. The idea is clear to me, but \nthe definition of the NullableVars structure is not obvious. Could you \nprepare a sketch of this struct, or do you already have some draft code?\n\n-- \nregards,\nAndrey Lepikhov\nPostgres Professional\n\n\n",
"msg_date": "Wed, 22 Dec 2021 22:01:22 +0500",
"msg_from": "Andrey Lepikhov <a.lepikhov@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Clarifying/rationalizing Vars' varno/varattno/varnoold/varoattno"
}
] |
[
{
"msg_contents": "Hi,\n\nAccording to the Microsoft documentation at:\nhttps://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptgenrandom\nthe function CryptGenRandom is deprecated and may be removed in a future release.\nConsidering that postgres only supports windows versions that have the new API, it would be good to make the replace.\n\nBCryptGenRandom apparently works without having to set up an environment before calling it, allowing a simplification in the file that makes the call.\nThe drawback is that the change requires linking to bcrypt.lib.\n\nIn exec.c, there are two memory leaks and a possible access beyond heap bounds; the patch tries to fix them.\nAccording to the documentation at:\nhttps://en.cppreference.com/w/c/experimental/dynamic/strdup\n\"The returned pointer must be passed to free to avoid a memory leak. \"\n\n* memory leak fix to src/common/exec.c\n* CryptGenRandom change by BCryptGenRandom to src/port/pg_strong_random.c\n* link bcrypt.lib to src/tools/msvc/Mkvcbuild.pm\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 16 Dec 2019 17:34:44 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Windows port minor fixes"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 6:34 PM Ranier Vilela <ranier_gyn@hotmail.com>\nwrote:\n\n>\n> Considering that postgres only supports windows versions that have the new\n> API, it would be good to make the replace.\n>\n>\nThat is not actually the case. If you check the _WIN32_WINNT logic\nin src/include/port/win32.h you can see that depending on your building\ntools you can get a version lower than that, for example if using MinGW.\n\n\n>\n> * memory leak fix to src/common/exec.c\n> * CryptGenRandom change by BCryptGenRandom to src/port/pg_strong_random.c\n> * link bcrypt.lib to src/tools/msvc/Mkvcbuild.pm\n>\n>\nIf you want to address 2 unrelated issues, it makes little sense to use a\nsingle thread and 3 patches.\n\nRegards,\n\nJuan José Santamaría Flecha\n\n\n",
"msg_date": "Mon, 16 Dec 2019 19:57:10 +0100",
"msg_from": "Juan José Santamaría Flecha <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows port minor fixes"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 07:57:10PM +0100, Juan José Santamaría Flecha wrote:\n> If you want to address 2 unrelated issues, it makes little sense to use a\n> single thread and 3 patches.\n\nAnd if you actually group things together so that any individual looking\nat your patches does not have to figure out which piece applies to\nwhat, that's also better. Anyway, the patch for putenv() is wrong in\nthe way the memory is freed, but this has been mentioned on another\nthread. We rely on MAXPGPATH heavily, so your patch trying to change\nthe buffer length does not bring much, and the windows-crypt call is\nalso wrong for the version handling, as discussed on another\nthread.\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 13:45:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Windows port minor fixes"
}
] |
[
{
"msg_contents": "A customer's report query hit this error.\nERROR: could not resize shared memory segment \"/PostgreSQL.2011322019\" to 134217728 bytes: No space left on device\n\nI found:\nhttps://www.postgresql.org/message-id/flat/CAEepm%3D2D_JGb8X%3DLa-0PX9C8dBX9%3Dj9wY%2By1-zDWkcJu0%3DBQbA%40mail.gmail.com\n\nwork_mem | 128MB\ndynamic_shared_memory_type | posix\nversion | PostgreSQL 12.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit\nRunning centos 6.9 / linux 2.6.32-696.23.1.el6.x86_64\n\n$ free -m\n total used free shared buffers cached\nMem: 7871 7223 648 1531 5 1988\n-/+ buffers/cache: 5229 2642\nSwap: 4095 2088 2007\n\n$ mount | grep /dev/shm\ntmpfs on /dev/shm type tmpfs (rw)\n\n$ du -hs /dev/shm\n0 /dev/shm\n\n$ df /dev/shm\nFilesystem 1K-blocks Used Available Use% Mounted on\ntmpfs 4030272 24 4030248 1% /dev/shm\n\nLater, I see:\n$ df -h /dev/shm\nFilesystem Size Used Avail Use% Mounted on\ntmpfs 3.9G 3.3G 601M 85% /dev/shm\n\nI can reproduce the error running a single instance of the query.\n\nThe query plan is 1300 lines long, and involves 482 \"Scan\" nodes on a table\nwhich currently has 93 partitions, and for which current partitions are\n\"daily\". 
I believe I repartitioned its history earlier this year to \"monthly\",\nprobably to avoid \"OOM with many sorts\", as reported here:\nhttps://www.postgresql.org/message-id/20190708164401.GA22387%40telsasoft.com\n\n$ grep Scan tmp/sql-`date +%F`.1.ex |sed 's/^ *//; s/ on .*//' |sort |uniq -c |sort -nr \n 227 -> Parallel Bitmap Heap Scan\n 227 -> Bitmap Index Scan\n 14 -> Parallel Seq Scan\n 9 -> Seq Scan\n 2 -> Subquery Scan\n 2 -> Index Scan using sites_pkey\n 1 Subquery Scan\n\nThere are total of 10 \"Workers Planned\":\ngrep -o 'Worker.*' tmp/sql-`date +%F`.1.ex\nWorkers Planned: 2\nWorkers Planned: 2\nWorkers Planned: 2\nWorkers Planned: 2\nWorkers Planned: 2\n\nI will plan to repartition again to month granularity unless someone wants to\ncollect further information or suggest a better solution.\n\n(gdb) bt\n#0 pg_re_throw () at elog.c:1717\n#1 0x0000000000886194 in errfinish (dummy=<value optimized out>) at elog.c:464\n#2 0x0000000000749453 in dsm_impl_posix (op=<value optimized out>, \n handle=<value optimized out>, request_size=<value optimized out>, \n impl_private=<value optimized out>, mapped_address=<value optimized out>, \n mapped_size=<value optimized out>, elevel=20) at dsm_impl.c:283\n#3 dsm_impl_op (op=<value optimized out>, handle=<value optimized out>, \n request_size=<value optimized out>, impl_private=<value optimized out>, \n mapped_address=<value optimized out>, mapped_size=<value optimized out>, \n elevel=20) at dsm_impl.c:170\n#4 0x000000000074a7c8 in dsm_create (size=100868096, flags=0) at dsm.c:459\n#5 0x00000000008a94a6 in make_new_segment (area=0x1d70208, \n requested_pages=<value optimized out>) at dsa.c:2156\n#6 0x00000000008aa47a in dsa_allocate_extended (area=0x1d70208, \n size=100663304, flags=5) at dsa.c:712\n#7 0x0000000000670b3f in pagetable_allocate (pagetable=<value optimized out>, \n size=<value optimized out>) at tidbitmap.c:1511\n#8 0x000000000067200c in pagetable_grow (tbm=0x7f82274da8e8, pageno=906296)\n at 
../../../src/include/lib/simplehash.h:405\n#9 pagetable_insert (tbm=0x7f82274da8e8, pageno=906296)\n at ../../../src/include/lib/simplehash.h:530\n#10 tbm_get_pageentry (tbm=0x7f82274da8e8, pageno=906296) at tidbitmap.c:1225\n#11 0x00000000006724a0 in tbm_add_tuples (tbm=0x7f82274da8e8, \n tids=<value optimized out>, ntids=1, recheck=false) at tidbitmap.c:405\n#12 0x00000000004d7f1f in btgetbitmap (scan=0x1d7d948, tbm=0x7f82274da8e8)\n at nbtree.c:334\n#13 0x00000000004d103a in index_getbitmap (scan=0x1d7d948, \n bitmap=<value optimized out>) at indexam.c:665\n#14 0x00000000006323d8 in MultiExecBitmapIndexScan (node=0x1dcbdb8)\n at nodeBitmapIndexscan.c:105\n#15 0x00000000006317f4 in BitmapHeapNext (node=0x1d8a030)\n at nodeBitmapHeapscan.c:141\n#16 0x000000000062405c in ExecScanFetch (node=0x1d8a030, \n accessMtd=0x6316d0 <BitmapHeapNext>, \n recheckMtd=0x631440 <BitmapHeapRecheck>) at execScan.c:133\n#17 ExecScan (node=0x1d8a030, accessMtd=0x6316d0 <BitmapHeapNext>, \n recheckMtd=0x631440 <BitmapHeapRecheck>) at execScan.c:200\n#18 0x0000000000622900 in ExecProcNodeInstr (node=0x1d8a030)\n at execProcnode.c:461\n#19 0x000000000062c66f in ExecProcNode (pstate=0x1d7ad70)\n at ../../../src/include/executor/executor.h:239\n#20 ExecAppend (pstate=0x1d7ad70) at nodeAppend.c:292\n#21 0x0000000000622900 in ExecProcNodeInstr (node=0x1d7ad70)\n at execProcnode.c:461\n#22 0x0000000000637da2 in ExecProcNode (pstate=0x1d7a630)\n at ../../../src/include/executor/executor.h:239\n#23 ExecHashJoinOuterGetTuple (pstate=0x1d7a630) at nodeHashjoin.c:833\n#24 ExecHashJoinImpl (pstate=0x1d7a630) at nodeHashjoin.c:356\n#25 ExecHashJoin (pstate=0x1d7a630) at nodeHashjoin.c:572\n#26 0x0000000000622900 in ExecProcNodeInstr (node=0x1d7a630)\n at execProcnode.c:461\n#27 0x0000000000637da2 in ExecProcNode (pstate=0x1d7bff0)\n at ../../../src/include/executor/executor.h:239\n#28 ExecHashJoinOuterGetTuple (pstate=0x1d7bff0) at nodeHashjoin.c:833\n#29 ExecHashJoinImpl (pstate=0x1d7bff0) at 
nodeHashjoin.c:356\n#30 ExecHashJoin (pstate=0x1d7bff0) at nodeHashjoin.c:572\n#31 0x0000000000622900 in ExecProcNodeInstr (node=0x1d7bff0)\n at execProcnode.c:461\n#32 0x000000000061eac7 in ExecProcNode (queryDesc=0x7f8228b72198, \n direction=<value optimized out>, count=0, execute_once=240)\n at ../../../src/include/executor/executor.h:239\n#33 ExecutePlan (queryDesc=0x7f8228b72198, direction=<value optimized out>, \n count=0, execute_once=240) at execMain.c:1646\n#34 standard_ExecutorRun (queryDesc=0x7f8228b72198, \n direction=<value optimized out>, count=0, execute_once=240)\n at execMain.c:364\n#35 0x00007f8229aa7878 in pgss_ExecutorRun (queryDesc=0x7f8228b72198, \n direction=ForwardScanDirection, count=0, execute_once=true)\n at pg_stat_statements.c:893\n#36 0x00007f8228f8d9ad in explain_ExecutorRun (queryDesc=0x7f8228b72198, \n direction=ForwardScanDirection, count=0, execute_once=true)\n at auto_explain.c:320\n#37 0x000000000061f0ce in ParallelQueryMain (seg=0x1c8d3b8, toc=0x7f82291f1000)\n at execParallel.c:1399\n#38 0x00000000004f7daf in ParallelWorkerMain (main_arg=<value optimized out>)\n at parallel.c:1431\n#39 0x00000000006eb2e0 in StartBackgroundWorker () at bgworker.c:834\n#40 0x00000000006f52ac in do_start_bgworker () at postmaster.c:5770\n#41 maybe_start_bgworkers () at postmaster.c:5996\n#42 0x00000000006f867d in sigusr1_handler (\n postgres_signal_arg=<value optimized out>) at postmaster.c:5167\n#43 <signal handler called>\n#44 0x0000003049ae1603 in __select_nocancel () from /lib64/libc.so.6\n#45 0x00000000006f9d43 in ServerLoop (argc=<value optimized out>, \n argv=<value optimized out>) at postmaster.c:1668\n#46 PostmasterMain (argc=<value optimized out>, argv=<value optimized out>)\n at postmaster.c:1377\n#47 0x000000000066a6b0 in main (argc=3, argv=0x1c5b950) at main.c:228\n\nbt f:\n#2 0x0000000000749453 in dsm_impl_posix (op=<value optimized out>, handle=<value optimized out>, request_size=<value optimized out>, impl_private=<value optimized 
out>, mapped_address=<value optimized out>, mapped_size=<value optimized out>, elevel=20) at dsm_impl.c:283\n save_errno = <value optimized out>\n st = {st_dev = 26, st_ino = 0, st_nlink = 33554432, st_mode = 4096, st_uid = 0, st_gid = 65536, __pad0 = 0, st_rdev = 8975118, st_size = 496105863, st_blksize = 8974012, st_blocks = 140199374652304, st_atim = {tv_sec = 7593264, tv_nsec = 20}, st_mtim = {tv_sec = 85899345920, tv_nsec = 140198902917632}, \n st_ctim = {tv_sec = 44417024, tv_nsec = 289}, __unused = {7593264, 0, 140724603453440}}\n flags = <value optimized out>\n fd = <value optimized out>\n name = \"/PostgreSQL.1648263397\\000\\000\\024\\242m\\371\\000\\000\\000\\000\\000\\061w3\\202\\177\\000\\000\\000\\000\\000\\001\\000\\000\\000\\000pe\\363\\001\\000\\000\\000\\000\\360\\271\\305\\001\\000\\000\\000\"\n address = <value optimized out>\n\nThanks,\nJustin\n\n\n",
"msg_date": "Mon, 16 Dec 2019 12:49:06 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "ERROR: could not resize shared memory segment...No space left on\n device"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 12:49:06PM -0600, Justin Pryzby wrote:\n>A customer's report query hit this error.\n>ERROR: could not resize shared memory segment \"/PostgreSQL.2011322019\" to 134217728 bytes: No space left on device\n>\n>I found:\n>https://www.postgresql.org/message-id/flat/CAEepm%3D2D_JGb8X%3DLa-0PX9C8dBX9%3Dj9wY%2By1-zDWkcJu0%3DBQbA%40mail.gmail.com\n>\n>work_mem | 128MB\n>dynamic_shared_memory_type | posix\n>version | PostgreSQL 12.1 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-23), 64-bit\n>Running centos 6.9 / linux 2.6.32-696.23.1.el6.x86_64\n>\n>$ free -m\n> total used free shared buffers cached\n>Mem: 7871 7223 648 1531 5 1988\n>-/+ buffers/cache: 5229 2642\n>Swap: 4095 2088 2007\n>\n>$ mount | grep /dev/shm\n>tmpfs on /dev/shm type tmpfs (rw)\n>\n>$ du -hs /dev/shm\n>0 /dev/shm\n>\n>$ df /dev/shm\n>Filesystem 1K-blocks Used Available Use% Mounted on\n>tmpfs 4030272 24 4030248 1% /dev/shm\n>\n>Later, I see:\n>$ df -h /dev/shm\n>Filesystem Size Used Avail Use% Mounted on\n>tmpfs 3.9G 3.3G 601M 85% /dev/shm\n>\n>I can reproduce the error running a single instance of the query.\n>\n>The query plan is 1300 lines long, and involves 482 \"Scan\" nodes on a table\n>which currently has 93 partitions, and for which current partitions are\n>\"daily\". I believe I repartitioned its history earlier this year to \"monthly\",\n>probably to avoid \"OOM with many sorts\", as reported here:\n>https://www.postgresql.org/message-id/20190708164401.GA22387%40telsasoft.com\n>\n\nInterestingly enough, I ran into the same ERROR (not sure if the same\nroot cause) while investigating bug #16104 [1], i.e. 
on a much simpler\nquery (single join).\n\nThis particular machine is a bit smaller (only 8GB of RAM and less\ndisk space) so I created a smaller table with \"just\" 1.5B rows:\n\n create table test as select generate_series(1, 1500000000)::bigint i;\n set work_mem = '150MB';\n set max_parallel_workers_per_gather = 8;\n analyze test;\n\n explain select count(*) from test t1 join test t2 using (i);\n\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------\n Finalize Aggregate (cost=67527436.36..67527436.37 rows=1 width=8)\n -> Gather (cost=67527435.53..67527436.34 rows=8 width=8)\n Workers Planned: 8\n -> Partial Aggregate (cost=67526435.53..67526435.54 rows=1 width=8)\n -> Parallel Hash Join (cost=11586911.03..67057685.47 rows=187500024 width=0)\n Hash Cond: (t1.i = t2.i)\n -> Parallel Seq Scan on test t1 (cost=0.00..8512169.24 rows=187500024 width=8)\n -> Parallel Hash (cost=8512169.24..8512169.24 rows=187500024 width=8)\n -> Parallel Seq Scan on test t2 (cost=0.00..8512169.24 rows=187500024 width=8)\n(9 rows)\n\n explain analyze select count(*) from test t1 join test t2 using (i);\n \n ERROR: could not resize shared memory segment \"/PostgreSQL.1743102822\" to 536870912 bytes: No space left on device\n\nNow, work_mem = 150MB might be a bit too high considering the machine\nonly has 8GB of RAM (1GB of which is shared_buffers). But that's still\njust 1.2GB of RAM and this is not an OOM. 
This actually fills the\n/dev/shm mount, which is limited to 4GB on this box\n\n bench ~ # df | grep shm\n shm 3994752 16 3994736 1% /dev/shm\n\nSo somewhere in the parallel hash join, we allocate 4GB of shared segments ...\n\nThe filesystem usage from the moment of the query execution to the\nfailure looks about like this:\n\n Time fs 1K-blocks Used Available Use% Mounted on\n --------------------------------------------------------------\n 10:13:34 shm 3994752 34744 3960008 1% /dev/shm\n 10:13:35 shm 3994752 35768 3958984 1% /dev/shm\n 10:13:36 shm 3994752 37816 3956936 1% /dev/shm\n 10:13:39 shm 3994752 39864 3954888 1% /dev/shm\n 10:13:42 shm 3994752 41912 3952840 2% /dev/shm\n 10:13:46 shm 3994752 43960 3950792 2% /dev/shm\n 10:13:49 shm 3994752 48056 3946696 2% /dev/shm\n 10:13:56 shm 3994752 52152 3942600 2% /dev/shm\n 10:14:02 shm 3994752 56248 3938504 2% /dev/shm\n 10:14:09 shm 3994752 60344 3934408 2% /dev/shm\n 10:14:16 shm 3994752 68536 3926216 2% /dev/shm\n 10:14:30 shm 3994752 76728 3918024 2% /dev/shm\n 10:14:43 shm 3994752 84920 3909832 3% /dev/shm\n 10:14:43 shm 3994752 84920 3909832 3% /dev/shm\n 10:14:57 shm 3994752 93112 3901640 3% /dev/shm\n 10:15:11 shm 3994752 109496 3885256 3% /dev/shm\n 10:15:38 shm 3994752 125880 3868872 4% /dev/shm\n 10:16:06 shm 3994752 142264 3852488 4% /dev/shm\n 10:19:57 shm 3994752 683208 3311544 18% /dev/shm\n 10:19:58 shm 3994752 1338568 2656184 34% /dev/shm\n 10:20:02 shm 3994752 1600712 2394040 41% /dev/shm\n 10:20:03 shm 3994752 2125000 1869752 54% /dev/shm\n 10:20:04 shm 3994752 2649288 1345464 67% /dev/shm\n 10:20:08 shm 3994752 2518216 1476536 64% /dev/shm\n 10:20:10 shm 3994752 3173576 821176 80% /dev/shm\n 10:20:14 shm 3994752 3697864 296888 93% /dev/shm\n 10:20:15 shm 3994752 3417288 577464 86% /dev/shm\n 10:20:16 shm 3994752 3697864 296888 93% /dev/shm\n 10:20:20 shm 3994752 3828936 165816 96% /dev/shm\n\nAnd at the end, the contents of /dev/shm looks like this:\n\n-rw------- 1 postgres postgres 33624064 
Dec 16 22:19 PostgreSQL.1005341478\n-rw------- 1 postgres postgres 1048576 Dec 16 22:20 PostgreSQL.1011142277\n-rw------- 1 postgres postgres 1048576 Dec 16 22:20 PostgreSQL.1047241463\n-rw------- 1 postgres postgres 16777216 Dec 16 22:16 PostgreSQL.1094702083\n-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.1143288540\n-rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.1180709918\n-rw------- 1 postgres postgres 7408 Dec 14 15:43 PostgreSQL.1239805533\n-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1292496162\n-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.138443773\n-rw------- 1 postgres postgres 4194304 Dec 16 22:15 PostgreSQL.1442035225\n-rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.147930162\n-rw------- 1 postgres postgres 16777216 Dec 16 22:20 PostgreSQL.1525896026\n-rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.1541133044\n-rw------- 1 postgres postgres 33624064 Dec 16 22:14 PostgreSQL.1736434498\n-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1845631548\n-rw------- 1 postgres postgres 33624064 Dec 16 22:19 PostgreSQL.1952212453\n-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1965950370\n-rw------- 1 postgres postgres 8388608 Dec 16 22:15 PostgreSQL.1983158004\n-rw------- 1 postgres postgres 33624064 Dec 16 22:19 PostgreSQL.1997631477\n-rw------- 1 postgres postgres 16777216 Dec 16 22:20 PostgreSQL.2071391455\n-rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.210551357\n-rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.2125755117\n-rw------- 1 postgres postgres 8388608 Dec 16 22:14 PostgreSQL.2133152910\n-rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.255342242\n-rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.306663870\n-rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.420982703\n-rw------- 1 postgres postgres 134217728 Dec 16 22:20 
PostgreSQL.443494372\n-rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.457417415\n-rw------- 1 postgres postgres 4194304 Dec 16 22:20 PostgreSQL.462376479\n-rw------- 1 postgres postgres 16777216 Dec 16 22:16 PostgreSQL.512403457\n-rw------- 1 postgres postgres 8388608 Dec 16 22:14 PostgreSQL.546049346\n-rw------- 1 postgres postgres 196864 Dec 16 22:13 PostgreSQL.554918510\n-rw------- 1 postgres postgres 687584 Dec 16 22:13 PostgreSQL.585813590\n-rw------- 1 postgres postgres 4194304 Dec 16 22:15 PostgreSQL.612034010\n-rw------- 1 postgres postgres 33624064 Dec 16 22:19 PostgreSQL.635077233\n-rw------- 1 postgres postgres 7408 Dec 15 17:28 PostgreSQL.69856210\n-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.785623413\n-rw------- 1 postgres postgres 4194304 Dec 16 22:14 PostgreSQL.802559608\n-rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.825442833\n-rw------- 1 postgres postgres 8388608 Dec 16 22:15 PostgreSQL.827813234\n-rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.942923396\n-rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.948192559\n-rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.968081079\n\nThat's a lot of shared segments, considering there are only ~8 workers\nfor the parallel hash join. And some of the segments are 512MB, so not\nexactly tiny/abiding to the work_mem limit :-(\n\nI'm not very familiar with the PHJ internals, but this seems a bit\nexcessive. I mean, how am I supposed to limit memory usage in these\nqueries? Why shouldn't this be subject to work_mem?\n\n\nregards\n\n[1] https://www.postgresql.org/message-id/flat/16104-dc11ed911f1ab9df%40postgresql.org\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Mon, 16 Dec 2019 22:53:14 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: could not resize shared memory segment...No space left\n on device"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 10:53 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Interestingly enough, I ran into the same ERROR (not sure if the same\n> root cause) while investigating bug #16104 [1], i.e. on a much simpler\n> query (single join).\n>\n> This This particular machine is a bit smaller (only 8GB of RAM and less\n> disk space) so I created a smaller table with \"just\" 1.5B rows:\n>\n> create table test as select generate_series(1, 1500000000)::bigint i;\n> set work_mem = '150MB';\n> set max_parallel_workers_per_gather = 8;\n> analyze test;\n>\n> explain select count(*) from test t1 join test t2 using (i);\n>\n> QUERY PLAN\n> ------------------------------------------------------------------------------------------------------------\n> Finalize Aggregate (cost=67527436.36..67527436.37 rows=1 width=8)\n> -> Gather (cost=67527435.53..67527436.34 rows=8 width=8)\n> Workers Planned: 8\n> -> Partial Aggregate (cost=67526435.53..67526435.54 rows=1 width=8)\n> -> Parallel Hash Join (cost=11586911.03..67057685.47 rows=187500024 width=0)\n> Hash Cond: (t1.i = t2.i)\n> -> Parallel Seq Scan on test t1 (cost=0.00..8512169.24 rows=187500024 width=8)\n> -> Parallel Hash (cost=8512169.24..8512169.24 rows=187500024 width=8)\n> -> Parallel Seq Scan on test t2 (cost=0.00..8512169.24 rows=187500024 width=8)\n> (9 rows)\n>\n> explain analyze select count(*) from test t1 join test t2 using (i);\n>\n> ERROR: could not resize shared memory segment \"/PostgreSQL.1743102822\" to 536870912 bytes: No space left on device\n>\n> Now, work_mem = 150MB might be a bit too high considering the machine\n> only has 8GB of RAM (1GB of which is shared_buffers). But that's still\n> just 1.2GB of RAM and this is not an OOM. 
This actually fills the\n> /dev/shm mount, which is limited to 4GB on this box\n>\n> bench ~ # df | grep shm\n> shm 3994752 16 3994736 1% /dev/shm\n>\n> So somewhere in the parallel hash join, we allocate 4GB of shared segments ...\n>\n> The filesystem usage from the moment of the query execution to the\n> failure looks about like this:\n>\n> Time fs 1K-blocks Used Available Use% Mounted on\n> --------------------------------------------------------------\n> 10:13:34 shm 3994752 34744 3960008 1% /dev/shm\n> 10:13:35 shm 3994752 35768 3958984 1% /dev/shm\n> 10:13:36 shm 3994752 37816 3956936 1% /dev/shm\n> 10:13:39 shm 3994752 39864 3954888 1% /dev/shm\n> 10:13:42 shm 3994752 41912 3952840 2% /dev/shm\n> 10:13:46 shm 3994752 43960 3950792 2% /dev/shm\n> 10:13:49 shm 3994752 48056 3946696 2% /dev/shm\n> 10:13:56 shm 3994752 52152 3942600 2% /dev/shm\n> 10:14:02 shm 3994752 56248 3938504 2% /dev/shm\n> 10:14:09 shm 3994752 60344 3934408 2% /dev/shm\n> 10:14:16 shm 3994752 68536 3926216 2% /dev/shm\n> 10:14:30 shm 3994752 76728 3918024 2% /dev/shm\n> 10:14:43 shm 3994752 84920 3909832 3% /dev/shm\n> 10:14:43 shm 3994752 84920 3909832 3% /dev/shm\n> 10:14:57 shm 3994752 93112 3901640 3% /dev/shm\n> 10:15:11 shm 3994752 109496 3885256 3% /dev/shm\n> 10:15:38 shm 3994752 125880 3868872 4% /dev/shm\n> 10:16:06 shm 3994752 142264 3852488 4% /dev/shm\n> 10:19:57 shm 3994752 683208 3311544 18% /dev/shm\n> 10:19:58 shm 3994752 1338568 2656184 34% /dev/shm\n> 10:20:02 shm 3994752 1600712 2394040 41% /dev/shm\n> 10:20:03 shm 3994752 2125000 1869752 54% /dev/shm\n> 10:20:04 shm 3994752 2649288 1345464 67% /dev/shm\n> 10:20:08 shm 3994752 2518216 1476536 64% /dev/shm\n> 10:20:10 shm 3994752 3173576 821176 80% /dev/shm\n> 10:20:14 shm 3994752 3697864 296888 93% /dev/shm\n> 10:20:15 shm 3994752 3417288 577464 86% /dev/shm\n> 10:20:16 shm 3994752 3697864 296888 93% /dev/shm\n> 10:20:20 shm 3994752 3828936 165816 96% /dev/shm\n>\n> And at the end, the contents of /dev/shm looks like 
this:\n>\n> -rw------- 1 postgres postgres 33624064 Dec 16 22:19 PostgreSQL.1005341478\n> -rw------- 1 postgres postgres 1048576 Dec 16 22:20 PostgreSQL.1011142277\n> -rw------- 1 postgres postgres 1048576 Dec 16 22:20 PostgreSQL.1047241463\n> -rw------- 1 postgres postgres 16777216 Dec 16 22:16 PostgreSQL.1094702083\n> -rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.1143288540\n> -rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.1180709918\n> -rw------- 1 postgres postgres 7408 Dec 14 15:43 PostgreSQL.1239805533\n> -rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1292496162\n> -rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.138443773\n> -rw------- 1 postgres postgres 4194304 Dec 16 22:15 PostgreSQL.1442035225\n> -rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.147930162\n> -rw------- 1 postgres postgres 16777216 Dec 16 22:20 PostgreSQL.1525896026\n> -rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.1541133044\n> -rw------- 1 postgres postgres 33624064 Dec 16 22:14 PostgreSQL.1736434498\n> -rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1845631548\n> -rw------- 1 postgres postgres 33624064 Dec 16 22:19 PostgreSQL.1952212453\n> -rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.1965950370\n> -rw------- 1 postgres postgres 8388608 Dec 16 22:15 PostgreSQL.1983158004\n> -rw------- 1 postgres postgres 33624064 Dec 16 22:19 PostgreSQL.1997631477\n> -rw------- 1 postgres postgres 16777216 Dec 16 22:20 PostgreSQL.2071391455\n> -rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.210551357\n> -rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.2125755117\n> -rw------- 1 postgres postgres 8388608 Dec 16 22:14 PostgreSQL.2133152910\n> -rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.255342242\n> -rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.306663870\n> -rw------- 1 postgres postgres 536870912 Dec 16 
22:20 PostgreSQL.420982703\n> -rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.443494372\n> -rw------- 1 postgres postgres 134217728 Dec 16 22:20 PostgreSQL.457417415\n> -rw------- 1 postgres postgres 4194304 Dec 16 22:20 PostgreSQL.462376479\n> -rw------- 1 postgres postgres 16777216 Dec 16 22:16 PostgreSQL.512403457\n> -rw------- 1 postgres postgres 8388608 Dec 16 22:14 PostgreSQL.546049346\n> -rw------- 1 postgres postgres 196864 Dec 16 22:13 PostgreSQL.554918510\n> -rw------- 1 postgres postgres 687584 Dec 16 22:13 PostgreSQL.585813590\n> -rw------- 1 postgres postgres 4194304 Dec 16 22:15 PostgreSQL.612034010\n> -rw------- 1 postgres postgres 33624064 Dec 16 22:19 PostgreSQL.635077233\n> -rw------- 1 postgres postgres 7408 Dec 15 17:28 PostgreSQL.69856210\n> -rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.785623413\n> -rw------- 1 postgres postgres 4194304 Dec 16 22:14 PostgreSQL.802559608\n> -rw------- 1 postgres postgres 67108864 Dec 16 22:20 PostgreSQL.825442833\n> -rw------- 1 postgres postgres 8388608 Dec 16 22:15 PostgreSQL.827813234\n> -rw------- 1 postgres postgres 268435456 Dec 16 22:20 PostgreSQL.942923396\n> -rw------- 1 postgres postgres 536870912 Dec 16 22:20 PostgreSQL.948192559\n> -rw------- 1 postgres postgres 2097152 Dec 16 22:20 PostgreSQL.968081079\n>\n> That's a lot of shared segments, considering there are only ~8 workers\n> for the parallel hash join. And some of the segments are 512MB, so not\n> exactly tiny/abiding to the work_mem limit :-(\n>\n> I'm not very familiar with the PHJ internals, but this seems a bit\n> excessive. I mean, how am I supposed to limit memory usage in these\n> queries? Why shouldn't this be subject to work_mem?\n\nIt's subject to work_mem per process (leader + workers). 
So it would\nlike to use 150M * 9 = 1350M, but then there are things that we don't\nmeasure at all, including the per batch data as you were complaining\nabout in that other thread, and here that's quite extreme because the\nbug in question is one that reaches large partition counts. You're\nalso looking at the raw shared memory segments, but there is a level\non top of that which is the DSA allocator. It suffers from\nfragmentation like any other general purpose allocator (so maybe it is\nbacked by something like 2x the memory the client allocated at worst),\nthough unfortunately, unlike typical allocators, we have to force the\nOS to really allocate the memory pages on Linux only or it fails with\nSIGBUS later when the kernel can't extend a tmpfs file.\n\n\n",
"msg_date": "Tue, 17 Dec 2019 12:17:08 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: ERROR: could not resize shared memory segment...No space left on\n device"
}
] |
[
{
"msg_contents": "I want to address the issue that calling a record-returning function \nalways requires specifying a result column list, even though there are \ncases where the function could be self-aware enough to know the result \ncolumn list of a particular call. For example, most of the functions in \ncontrib/tablefunc are like that.\n\nSQL:2016 has a feature called polymorphic table functions (PTF) that \naddresses this. The full PTF feature is much larger, so I just carved \nout this particular piece of functionality. Here is a link to some \nrelated information: \nhttps://modern-sql.com/blog/2018-11/whats-new-in-oracle-database-18c#ptf\n\nThe idea is that you attach a helper function to the main function. The \nhelper function is called at parse time with the constant arguments of \nthe main function call and can compute a result row description (a \nTupleDesc in our case).\n\nExample from the patch:\n\nCREATE FUNCTION connectby_describe(internal)\nRETURNS internal\nAS 'MODULE_PATHNAME', 'connectby_describe'\nLANGUAGE C;\n\nCREATE FUNCTION connectby(text,text,text,text,int,text)\nRETURNS setof record\nDESCRIBE WITH connectby_describe\nAS 'MODULE_PATHNAME','connectby_text'\nLANGUAGE C STABLE STRICT;\n\n(The general idea is very similar to Pavel's patch \"parse time support \nfunction\"[0] but addressing a disjoint problem.)\n\nThe original SQL:2016 syntax is a bit different: There, you'd first \ncreate two separate functions: a \"describe\" and a \"fulfill\" and then \ncreate the callable PTF referencing those two (similar to how an \naggregate is composed of several component functions). I think \ndeviating from this makes some sense because we can then more easily \n\"upgrade\" existing record-returning functions with this functionality.\n\nAnother difference is that AFAICT, the standard specifies that if the \ndescribe function cannot resolve the call, the call fails. 
Again, in \norder to be able to upgrade existing functions (instead of having to \ncreate a second set of functions with a different name), I have made it \nso that you can still specify an explicit column list if the describe \nfunction does not succeed.\n\nIn this prototype patch, I have written the C interface and several \nexamples using existing functions in the source tree. Eventually, I'd \nlike to also add PL-level support for this.\n\nThoughts so far?\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/CAFj8pRARh+r4=HNwQ+hws-D6msus01Dw_6zjNYur6tPk1+W0rA@mail.gmail.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 16 Dec 2019 19:53:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "polymorphic table functions light"
},
{
"msg_contents": "Hi\n\npo 16. 12. 2019 v 19:53 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> I want to address the issue that calling a record-returning function\n> always requires specifying a result column list, even though there are\n> cases where the function could be self-aware enough to know the result\n> column list of a particular call. For example, most of the functions in\n> contrib/tablefunc are like that.\n>\n> SQL:2016 has a feature called polymorphic table functions (PTF) that\n> addresses this. The full PTF feature is much larger, so I just carved\n> out this particular piece of functionality. Here is a link to some\n> related information:\n> https://modern-sql.com/blog/2018-11/whats-new-in-oracle-database-18c#ptf\n>\n> The idea is that you attach a helper function to the main function. The\n> helper function is called at parse time with the constant arguments of\n> the main function call and can compute a result row description (a\n> TupleDesc in our case).\n>\n> Example from the patch:\n>\n> CREATE FUNCTION connectby_describe(internal)\n> RETURNS internal\n> AS 'MODULE_PATHNAME', 'connectby_describe'\n> LANGUAGE C;\n>\n> CREATE FUNCTION connectby(text,text,text,text,int,text)\n> RETURNS setof record\n> DESCRIBE WITH connectby_describe\n> AS 'MODULE_PATHNAME','connectby_text'\n> LANGUAGE C STABLE STRICT;\n>\n> (The general idea is very similar to Pavel's patch \"parse time support\n> function\"[0] but addressing a disjoint problem.)\n>\n> The original SQL:2016 syntax is a bit different: There, you'd first\n> create two separate functions: a \"describe\" and a \"fulfill\" and then\n> create the callable PTF referencing those two (similar to how an\n> aggregate is composed of several component functions). 
I think\n> deviating from this makes some sense because we can then more easily\n> \"upgrade\" existing record-returning functions with this functionality.\n>\n> Another difference is that AFAICT, the standard specifies that if the\n> describe function cannot resolve the call, the call fails. Again, in\n> order to be able to upgrade existing functions (instead of having to\n> create a second set of functions with a different name), I have made it\n> so that you can still specify an explicit column list if the describe\n> function does not succeed.\n>\n> In this prototype patch, I have written the C interface and several\n> examples using existing functions in the source tree. Eventually, I'd\n> like to also add PL-level support for this.\n>\n> Thoughts so far?\n>\n\nWhat I read about it - it can be very interesting feature. It add lot of\ndynamic to top queries - it can be used very easy for cross tables on\nserver side.\n\nSure - it can be used very badly - but it is nothing new for stored\nprocedures.\n\nPersonally I like this feature. The difference from standard syntax\nprobably is not problem a) there are little bit syntax already, b) I cannot\nto imagine wide using of this feature. But it can be interesting for\nextensions.\n\nBetter to use some special pseudotype for describe function instead\n\"internal\" - later it can interesting for PL support\n\nRegards\n\nPavel\n\n\n\n\n>\n>\n> [0]:\n>\n> https://www.postgresql.org/message-id/flat/CAFj8pRARh+r4=HNwQ+hws-D6msus01Dw_6zjNYur6tPk1+W0rA@mail.gmail.com\n>\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n",
"msg_date": "Mon, 16 Dec 2019 20:11:48 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: polymorphic table functions light"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> I want to address the issue that calling a record-returning function \n> always requires specifying a result column list, even though there are \n> cases where the function could be self-aware enough to know the result \n> column list of a particular call. For example, most of the functions in \n> contrib/tablefunc are like that.\n\nSeems like a reasonable goal.\n\n> SQL:2016 has a feature called polymorphic table functions (PTF) that \n> addresses this. The full PTF feature is much larger, so I just carved \n> out this particular piece of functionality. Here is a link to some \n> related information: \n> https://modern-sql.com/blog/2018-11/whats-new-in-oracle-database-18c#ptf\n\n> The idea is that you attach a helper function to the main function. The \n> helper function is called at parse time with the constant arguments of \n> the main function call and can compute a result row description (a \n> TupleDesc in our case).\n\n> Example from the patch:\n\n> CREATE FUNCTION connectby_describe(internal)\n> RETURNS internal\n> AS 'MODULE_PATHNAME', 'connectby_describe'\n> LANGUAGE C;\n\n> CREATE FUNCTION connectby(text,text,text,text,int,text)\n> RETURNS setof record\n> DESCRIBE WITH connectby_describe\n> AS 'MODULE_PATHNAME','connectby_text'\n> LANGUAGE C STABLE STRICT;\n\n> (The general idea is very similar to Pavel's patch \"parse time support \n> function\"[0] but addressing a disjoint problem.)\n\nHm. Given that this involves a function-taking-and-returning-internal,\nI think it's fairly silly to claim that it is implementing a SQL-standard\nfeature, or even a subset or related feature. Nor do I see a pathway\nwhereby this might end in a feature you could use without writing C code.\n\nThat being the case, I'm not in favor of using up SQL syntax space for it\nif we don't have to. 
Moreover, this approach requires a whole lot of\nduplicative-seeming new infrastructure, such as a new pg_proc column.\nAnd you're not even done yet --- where's the pg_dump support?\n\nI think we'd be better off to address this by extending the existing\n\"support function\" infrastructure by inventing a new support request type,\nmuch as Pavel's patch did. I've not gotten around to reviewing the latest\nversion of his patch, so I'm not sure if it provides enough flexibility to\nsolve this particular problem, or if we'd need a different request type\nthan he proposes. But I'd rather go down that path than this one.\nIt should provide the same amount of functionality with a whole lot less\noverhead code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 16:13:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: polymorphic table functions light"
},
{
"msg_contents": "On 16/12/2019 22:13, Tom Lane wrote:\n> That being the case, I'm not in favor of using up SQL syntax space for it\n> if we don't have to.\n\n\nDo I understand correctly that you are advocating *against* using\nstandard SQL syntax for a feature that is defined by the SQL Standard\nand that we have no similar implementation for?\n\n\nIf so, I would like to stand up to it. We are known as (at least one\nof) the most conforming implementations and I hope we will continue to\nbe so. I would rather we remove from rather than add to this page:\nhttps://wiki.postgresql.org/wiki/PostgreSQL_vs_SQL_Standard\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Fri, 20 Dec 2019 01:30:00 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: polymorphic table functions light"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 16/12/2019 22:13, Tom Lane wrote:\n>> That being the case, I'm not in favor of using up SQL syntax space for it\n>> if we don't have to.\n\n> Do I understand correctly that you are advocating *against* using\n> standard SQL syntax for a feature that is defined by the SQL Standard\n> and that we have no similar implementation for?\n\nMy point is that what Peter is proposing is exactly *not* the standard's\nfeature. We generally avoid using up standard syntax for not-standard\nsemantics, especially if there's any chance that somebody might come along\nand build a more-conformant version later. (Having said that, I had the\nimpression that what he was proposing wasn't the standard's syntax either,\nbut just a homegrown CREATE FUNCTION addition. I don't really see the\npoint of doing it like that when we can do it below the level of SQL.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Dec 2019 22:55:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: polymorphic table functions light"
},
{
"msg_contents": "On 2019-12-16 19:53, Peter Eisentraut wrote:\n> SQL:2016 has a feature called polymorphic table functions (PTF) that\n> addresses this. The full PTF feature is much larger, so I just carved\n> out this particular piece of functionality. Here is a link to some\n> related information:\n> https://modern-sql.com/blog/2018-11/whats-new-in-oracle-database-18c#ptf\n> \n> The idea is that you attach a helper function to the main function. The\n> helper function is called at parse time with the constant arguments of\n> the main function call and can compute a result row description (a\n> TupleDesc in our case).\n\nHere is an updated patch for the record, since the previous patch had \naccumulated some significant merge conflicts.\n\nI will reply to the discussions elsewhere in the thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 24 Jan 2020 09:11:04 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: polymorphic table functions light"
},
{
"msg_contents": "On 2019-12-16 22:13, Tom Lane wrote:\n> Hm. Given that this involves a function-taking-and-returning-internal,\n> I think it's fairly silly to claim that it is implementing a SQL-standard\n> feature, or even a subset or related feature. Nor do I see a pathway\n> whereby this might end in a feature you could use without writing C code.\n\n> I think we'd be better off to address this by extending the existing\n> \"support function\" infrastructure by inventing a new support request type,\n\nI definitely want to make it work in a way that does not require writing \nC code. My idea was to create a new type, perhaps called \"descriptor\", \nthat represents essentially a tuple descriptor. (It could be exactly a \nTupleDesc, as this patch does, or something similar.) For the sake of \ndiscussion, we could use JSON as the text representation of this. Then \na PL/pgSQL function or something else high level could easily be written \nto assemble this. Interesting use cases are for example in the area of \nusing PL/Perl or PL/Python for unpacking some serialization format using \nexisting modules in those languages.\n\nThe SQL standard has the option of leaving the call signatures of the \nPTF support functions implementation defined, so this approach would \nappear to be within the spirit of the specification.\n\nObviously, there is a lot of leg work to be done between here and there, \nbut it seems doable. The purpose of this initial patch submission was \nto get some opinions on the basic idea of \"determine result tuple \nstructure by calling helper function at parse time\", and so far no one \nhas fallen off their chair from that, so I'm encouraged. ;-)\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 09:27:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: polymorphic table functions light"
},
{
"msg_contents": "On 2019-12-20 01:30, Vik Fearing wrote:\n> On 16/12/2019 22:13, Tom Lane wrote:\n>> That being the case, I'm not in favor of using up SQL syntax space for it\n>> if we don't have to.\n> \n> Do I understand correctly that you are advocating *against* using\n> standard SQL syntax for a feature that is defined by the SQL Standard\n> and that we have no similar implementation for?\n\nOn the question of using SQL syntax or not for this, there are a couple \nof arguments I'm considering.\n\nFirst, the SQL standard explicitly permits not implementing the exact \nsignatures of the PTF component procedures; see feature code B208. \nWhile this does not literally permit diverging on the CREATE FUNCTION \nsyntax, it's clear that they expect that the creation side of this will \nhave some incompatibilities. The existing practices of other vendors \nsupport this observation. What's more interesting in practice is making \nthe invocation side compatible.\n\nSecond, set-returning functions in PostgreSQL already exist and in my \nmind it would make sense to make this feature work with existing \nfunctions or allow easy \"upgrades\" rather than introducing another \ncompletely new syntax to do something very similar to what already \nexists. This wouldn't be a good user experience. And the full standard \nsyntax is also complicated and different enough that it wouldn't be \ntrivial to add.\n\nBut I'm open to other ideas.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 24 Jan 2020 09:42:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: polymorphic table functions light"
},
{
"msg_contents": "> On 24 Jan 2020, at 08:27, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> \n> I definitely want to make it work in a way that does not require writing C code. My idea was to create a new type, perhaps called \"descriptor\", that represents essentially a tuple descriptor. (It could be exactly a TupleDesc, as this patch does, or something similar.) For the sake of discussion, we could use JSON as the text representation of this. Then a PL/pgSQL function or something else high level could easily be written to assemble this. Interesting use cases are for example in the area of using PL/Perl or PL/Python for unpacking some serialization format using existing modules in those languages.\n\nI do think it’s very desirable to make it usable outside of C code.\n\n> Obviously, there is a lot of leg work to be done between here and there, but it seems doable. The purpose of this initial patch submission was to get some opinions on the basic idea of \"determine result tuple structure by calling helper function at parse time\", and so far no one has fallen off their chair from that, so I'm encouraged. ;-)\n\nI’m interested in this development, as it makes RECORD-returning SRFs in the SELECT list a viable proposition, and that in turn allows a ValuePerCall SRF to get meaningful benefit from pipelining. (They could always pipeline, but there is no way to extract information from the RECORD that’s returned, with the sole exception of row_to_json.)\n\nI couldn’t check out that it would work though because I couldn’t apply the v2 (or v1) patch against either 12.0 or 530609a (which I think was sometime around 25th Jan). Against 12.0, I got a few rejections (prepjointree.c and clauses.c). I figured they might be inconsequential, but no: initdb then fails at CREATE VIEW pg_policies. Different rejections against 530609a, but still initdb fails.\n\nBut I’m definitely very much encouraged.\n\ndenty.\n\n",
"msg_date": "Sat, 1 Feb 2020 09:55:28 +0000",
"msg_from": "Dent John <denty@QQdd.eu>",
"msg_from_op": false,
"msg_subject": "Re: polymorphic table functions light"
}
] |
[
{
"msg_contents": "Hi,\nAccording to the Microsoft documentation at:\nhttps://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptgenrandom\nthe function CryptGenRandom is deprecated and may be removed in a future release.\nThis patch adds support for using BCryptGenRandom.\n\nBCryptGenRandom apparently works without having to set up an environment before calling.\nThe drawback is that this change requires linking to bcrypt.lib.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 16 Dec 2019 21:18:10 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Windows port add support to BCryptGenRandom"
},
{
"msg_contents": "Forget Mkvcbuild_v1.patch\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 16 Dec 2019 21:24:12 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Windows port add support to BCryptGenRandom"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 09:18:10PM +0000, Ranier Vilela wrote:\n> According to microsoft documentation at:\n> https://docs.microsoft.com/en-us/windows/win32/api/wincrypt/nf-wincrypt-cryptgenrandom\n> The function CryptGenRandom is deprecated, and may can be removed in future release.\n> This patch add support to use BCryptGenRandom.\n\n+#if defined(_MSC_VER) && _MSC_VER >= 1900 \\\n+ && defined(MIN_WINNT) && MIN_WINNT >= 0x0600\n+#define USE_WIN32_BCRYPTGENRANDOM\n[...]\n+ $postgres->AddLibrary('bcrypt.lib') if ($vsVersion > '12.00');\n \nAnd looking at this page, it is said that the minimum version\nsupported by this function is Windows 2008:\nhttps://docs.microsoft.com/en-us/windows/win32/api/bcrypt/nf-bcrypt-bcryptgenrandom\n\nNow, your changes in MkvcBuild.pm and the backend code assume that\nwe need to include bcrypt.lib since MSVC 2015 (at least version\n14.00 or _MSC_VER >= 1900. Do you have a reference about when this\nhas been introduced in VS? The MS docs don't seem to hold a hint\nabout that..\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 12:43:42 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port add support to BCryptGenRandom"
}
] |
[
{
"msg_contents": "Hi,\nIn exec.c there are two memory leaks and a possible access beyond heap bounds; the patch tries to fix them.\nAccording to the documentation at:\nhttps://en.cppreference.com/w/c/experimental/dynamic/strdup\n\"The returned pointer must be passed to free to avoid a memory leak. \"\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 16 Dec 2019 21:22:14 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Memory leak, at src/common/exec.c"
},
{
"msg_contents": "\n\nOn 12/16/19 1:22 PM, Ranier Vilela wrote:\n> Hi,\n> On exec.c, have two memory leaks, and a possible access beyond heap bounds, the patch tries to fix them.\n> According to documentation at:\n> https://en.cppreference.com/w/c/experimental/dynamic/strdup\n> \"The returned pointer must be passed to free to avoid a memory leak.\"\n\nPlease see the man page for putenv. Are you certain it is safe to\nfree the string passed to putenv after putenv returns? I think this\nmay be implemented differently on various platforms.\n\nTaken from `man putenv`:\n\n\"NOTES\n The putenv() function is not required to be reentrant, and the \none in glibc 2.0 is not, but the glibc 2.1 version is.\n\n Since version 2.1.2, the glibc implementation conforms to \nSUSv2: the pointer string given to putenv() is used. In particular, \nthis string becomes part of the environment; changing it later will\n change the environment. (Thus, it is an error to call \nputenv() with an automatic variable as the argument, then return from \nthe calling function while string is still part of the environment.)\n However, glibc versions 2.0 to 2.1.1 differ: a copy of the \nstring is used. On the one hand this causes a memory leak, and on the \nother hand it violates SUSv2.\n\n The 4.4BSD version, like glibc 2.0, uses a copy.\n\n SUSv2 removes the const from the prototype, and so does glibc 2.1.3.\n\"\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Mon, 16 Dec 2019 13:34:40 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Memory leak, at src/common/exec.c"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n> Please see the man page for putenv. Are you certain it is safe to\n> free the string passed to putenv after putenv returns? I think this\n> may be implemented differently on various platforms.\n\nPOSIX requires the behavior the glibc man page describes:\n\n The putenv() function shall use the string argument to set environment\n variable values. The string argument should point to a string of the\n form \"name=value\". The putenv() function shall make the value of\n the environment variable name equal to value by altering an existing\n variable or creating a new one. In either case, the string pointed to\n by string shall become part of the environment, so altering the string\n shall change the environment.\n\nSo yeah, that patch is completely wrong. It might've survived light\ntesting with non-debug versions of malloc/free, but under any sort\nof load the environment variable would become corrupted. The reason\nfor the strdup in our code is exactly to make a long-lived string\nthat can safely be given to putenv.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 17:33:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Memory leak, at src/common/exec.c"
},
{
"msg_contents": "According to the documentation at:\nhttps://wiki.sei.cmu.edu/confluence/display/c/POS34-C.+Do+not+call+putenv%28%29+with+a+pointer+to+an+automatic+variable+as+the+argument\n\"Using setenv() is easier and consequently less error prone than using putenv().\"\nputenv is problematic and error prone; better to replace it with setenv.\n\nAs a result, set_pglocale_pgservice is much simpler and more readable.\n\nregards,\nRanier Vilela",
"msg_date": "Mon, 16 Dec 2019 22:44:21 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Memory leak, at src/common/exec.c"
},
{
"msg_contents": "Ranier Vilela <ranier_gyn@hotmail.com> writes:\n> According to the documentation at:\n> https://wiki.sei.cmu.edu/confluence/display/c/POS34-C.+Do+not+call+putenv%28%29+with+a+pointer+to+an+automatic+variable+as+the+argument\n> \"Using setenv() is easier and consequently less error prone than using putenv().\"\n> putenv is problematic and error prone, better replace by setenv.\n\nsetenv is also less portable: it does not appear in SUSv2, which is still\nour baseline spec for Unix platforms. We've avoided its use since 2001,\ncf. ec7ddc158.\n\nIt's also fair to wonder how well this change would fly on Windows,\nwhere we have to implement putenv for ourselves to get things to work\nright (cf. src/port/win32env.c, which does not offer support for\nsetenv).\n\nPlease stop inventing reasons to change code that's worked fine for\ndecades. We have better things to spend our time on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 18:09:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Memory leak, at src/common/exec.c"
},
{
"msg_contents": "According to [1], Windows does not support setenv, so for the patch to work [3] we would need to add it.\nWith the possibility of setenv being adopted going forward [2], I am submitting in this thread the patch to add setenv support on the Windows side, avoiding starting a new thread.\nIt is based on pre-existing functions and seeks to correctly emulate the behavior of POSIX setenv, but it has not yet been tested.\n\nIf this work is not acceptable, then it ends here, with two memory leaks and a possible access beyond heap bounds reported but not fixed.\n\nregards,\nRanier Vilela\n\n[1] https://www.postgresql.org/message-id/29478.1576537771%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/30119.1576538578%40sss.pgh.pa.us\n[3] https://www.postgresql.org/message-id/SN2PR05MB264066382E2CC75E734492C8E3510%40SN2PR05MB2640.namprd05.prod.outlook.com",
"msg_date": "Tue, 17 Dec 2019 03:30:01 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Memory leak, at src/common/exec.c"
}
] |
[
{
"msg_contents": "I noticed while investigating [1] that we have one single solitary\nuse of setenv(3) in our code base, in secure_open_gssapi().\n\nIt's been project policy since 2001 to avoid setenv(), and I notice\nthat src/port/win32env.c lacks support for setenv(), making it\npretty doubtful that the call has the semantics one would wish\non Windows.\n\nNow, versions of the POSIX spec released in this century do have setenv(),\nand even seem to regard it as \"more standard\" than putenv(). So maybe\nthere's a case for moving our goalposts and deciding to allow use of\nsetenv(). But then it seems like we'd better twiddle win32env.c to\nsupport it; and I'm not sure back-patching such a change would be wise.\n\nAlternatively, we could change secure_open_gssapi() to use putenv(),\nat the cost of a couple more lines of code.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/SN2PR05MB264066382E2CC75E734492C8E3510%40SN2PR05MB2640.namprd05.prod.outlook.com\n\n\n",
"msg_date": "Mon, 16 Dec 2019 18:22:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Unportable(?) use of setenv() in secure_open_gssapi()"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> I noticed while investigating [1] that we have one single solitary\n> use of setenv(3) in our code base, in secure_open_gssapi().\n> \n> It's been project policy since 2001 to avoid setenv(), and I notice\n> that src/port/win32env.c lacks support for setenv(), making it\n> pretty doubtful that the call has the semantics one would wish\n> on Windows.\n\nYeah, that doesn't seem good, though you'd have to be building with MIT\nKerberos for Windows to end up with GSSAPI on a Windows build in the\nfirst place (much more common on Windows is to build with Microsoft SSPI\nsupport instead). Still, it looks like someone went to the trouble of\nsetting that up on a buildfarm animal- looks like hamerkop has it.\n\n> Now, versions of the POSIX spec released in this century do have setenv(),\n> and even seem to regard it as \"more standard\" than putenv(). So maybe\n> there's a case for moving our goalposts and deciding to allow use of\n> setenv(). But then it seems like we'd better twiddle win32env.c to\n> support it; and I'm not sure back-patching such a change would be wise.\n> \n> Alternatively, we could change secure_open_gssapi() to use putenv(),\n> at the cost of a couple more lines of code.\n> \n> Thoughts?\n\nSo, auth.c already does the song-and-dance for putenv for this exact\nvariable, but it happens too late if you want to use GSSAPI for an\nencrypted connection. Looking at this now, it seems like we should\nreally just move up where that's happening instead of having it done\nonce in be-secure-gssapi.c and then again in auth.c. Maybe we could do\nit in BackendInitialize..?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 16 Dec 2019 19:46:32 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Unportable(?) use of setenv() in secure_open_gssapi()"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> It's been project policy since 2001 to avoid setenv(), and I notice\n>> that src/port/win32env.c lacks support for setenv(), making it\n>> pretty doubtful that the call has the semantics one would wish\n>> on Windows.\n\n> Yeah, that doesn't seem good, though you'd have to be building with MIT\n> Kerberos for Windows to end up with GSSAPI on a Windows build in the\n> first place (much more common on Windows is to build with Microsoft SSPI\n> support instead). Still, it looks like someone went to the trouble of\n> setting that up on a buildfarm animal- looks like hamerkop has it.\n\nIt looks like it'd only matter if Kerberos were using a different CRT\nversion than PG proper, which is probably even less likely. Still,\nthat could happen.\n\n> So, auth.c already does the song-and-dance for putenv for this exact\n> variable, but it happens too late if you want to use GSSAPI for an\n> encrypted connection. Looking at this now, it seems like we should\n> really just move up where that's happening instead of having it done\n> once in be-secure-gssapi.c and then again in auth.c. Maybe we could do\n> it in BackendInitialize..?\n\nHm, yeah, and it's also pretty darn inconsistent that one of them does\noverwrite = 1 while the other emulates overwrite = 0. I'd be for\nunifying that code. It'd also lead to a more safely back-patchable\nfix than the other solution.\n\nGoing forward, adding support for setenv() wouldn't be an unreasonable\nthing to do, I think. It's certainly something that people find\nattractive to use, and the portability issues we had with it back at\nthe turn of the century should be pretty much gone. I do note that my\nold dinosaur gaur, which is the last surviving buildfarm member without\nunsetenv(), lacks setenv() as well --- but I'd be willing to add support\nfor that as a src/port module. 
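A minimal sketch of what such a src/port fallback might look like (illustrative only, not PostgreSQL's actual code; the name pg_setenv_sketch is made up here, and the malloc'd string is deliberately never freed because putenv() keeps pointing into it under SUSv2 semantics):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Illustrative sketch only: setenv() semantics emulated on top of
 * putenv().  The malloc'd "name=value" string intentionally leaks,
 * since SUSv2 putenv() makes the string itself part of the
 * environment for as long as the variable stays set.
 */
int
pg_setenv_sketch(const char *name, const char *value, int overwrite)
{
	char	   *envstr;

	if (!overwrite && getenv(name) != NULL)
		return 0;				/* variable exists; leave it alone */

	envstr = malloc(strlen(name) + strlen(value) + 2);
	if (envstr == NULL)
		return -1;
	sprintf(envstr, "%s=%s", name, value);
	return putenv(envstr);
}
```

Note the overwrite flag: with overwrite = 0 an existing value is preserved, matching POSIX setenv(); a Windows build would additionally need the win32env.c treatment so the CRT copies stay in sync.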
We'd also have to fix win32env.c, but\nthat's not much new code either.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 20:44:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Unportable(?) use of setenv() in secure_open_gssapi()"
}
] |
[
{
"msg_contents": "Hi,\n\nI was responding to a question about postgres' per-backend memory usage,\nmaking me look at the various contexts below CacheMemoryContext. There\nis pretty much always a significant number of contexts below, one for\neach index:\n\n CacheMemoryContext: 524288 total in 7 blocks; 8680 free (0 chunks); 515608 used\n index info: 2048 total in 2 blocks; 568 free (1 chunks); 1480 used: pg_class_tblspc_relfilenode_index\n index info: 2048 total in 2 blocks; 960 free (0 chunks); 1088 used: pg_statistic_ext_relid_index\n index info: 2048 total in 2 blocks; 976 free (0 chunks); 1072 used: blarg_pkey\n index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: pg_index_indrelid_index\n index info: 2048 total in 2 blocks; 600 free (1 chunks); 1448 used: pg_attrdef_adrelid_adnum_index\n index info: 2048 total in 2 blocks; 656 free (2 chunks); 1392 used: pg_db_role_setting_databaseid_rol_index\n index info: 2048 total in 2 blocks; 544 free (2 chunks); 1504 used: pg_opclass_am_name_nsp_index\n index info: 2048 total in 2 blocks; 928 free (2 chunks); 1120 used: pg_foreign_data_wrapper_name_index\n index info: 2048 total in 2 blocks; 960 free (2 chunks); 1088 used: pg_enum_oid_index\n index info: 2048 total in 2 blocks; 600 free (1 chunks); 1448 used: pg_class_relname_nsp_index\n index info: 2048 total in 2 blocks; 960 free (2 chunks); 1088 used: pg_foreign_server_oid_index\n index info: 2048 total in 2 blocks; 960 free (2 chunks); 1088 used: pg_publication_pubname_index\n...\n index info: 3072 total in 2 blocks; 1144 free (2 chunks); 1928 used: pg_conversion_default_index\n...\n\nwhile I also think we could pretty easily reduce the amount of memory\nused for each index, I want to focus on something else here:\n\nWe waste a lot of space due to all these small contexts. 
Even leaving\naside the overhead of the context and its blocks - not insignificant -\nthey are mostly between ~1/4 and ~1/2 empty.\n\nAt the same time we probably don't want to inline all of them into\nCacheMemoryContext - too likely to introduce bugs, and too hard to\nmaintain leak free.\n\n\nBut what if we had a new type of memory context that did not itself\nmanage memory underlying allocations, but instead did so via the parent?\nIf such a context tracked all the live allocations in some form of list,\nit could then free them from the parent at reset time. In other words,\nit'd proxy all memory management via the parent, only adding a separate\nname, and tracking of all live chunks.\n\nObviously such a context would be less efficient to reset than a plain\naset.c one - but I don't think that'd matter much for these types of\nuse-cases. The big advantage in this case would be that we wouldn't\nhave two separate \"blocks\" for each index cache entry, but\ninstead allocations could all be done within CacheMemoryContext.\n\nDoes that sound like a sensible idea?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Dec 2019 15:35:12 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "reducing memory usage by using \"proxy\" memory contexts?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I was responding to a question about postgres' per-backend memory usage,\n> making me look at the various contexts below CacheMemoryContext. There\n> is pretty much always a significant number of contexts below, one for\n> each index:\n> index info: 2048 total in 2 blocks; 568 free (1 chunks); 1480 used: pg_class_tblspc_relfilenode_index\n\nYup.\n\n> But what if we had a new type of memory context that did not itself\n> manage memory underlying allocations, but instead did so via the parent?\n> If such a context tracked all the live allocations in some form of list,\n> it could then free them from the parent at reset time. In other words,\n> it'd proxy all memory management via the parent, only adding a separate\n> name, and tracking of all live chunks.\n\nI dunno, that seems like a *lot* of added overhead, and opportunity for\nbugs. Maybe it'd be all right for contexts in which alloc/dealloc is\nvery infrequent. But why not just address this problem by reducing the\nallocset blocksize parameter (some more) for these index contexts?\n\nI'd even go a bit further, and suggest that the right way to exploit\nour knowledge that these contexts' contents don't change much is to\ngo the other way, and reduce not increase their per-chunk overhead.\nI've wanted for some time to build a context type that doesn't support\npfree() but just makes it a no-op, and doesn't round request sizes up\nfurther than the next maxalign boundary. Without pfree we don't need\na normal chunk header; the minimum requirement of a context pointer\nis enough. And since we aren't going to be recycling any chunks, there's\nno need to try to standardize their sizes. This seems like it'd be ideal\nfor cases like the index cache contexts.\n\n(For testing purposes, the generation.c context type might be close\nenough for this, and it'd be easier to shove in.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 18:58:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: reducing memory usage by using \"proxy\" memory contexts?"
},
{
"msg_contents": "On Mon, Dec 16, 2019 at 03:35:12PM -0800, Andres Freund wrote:\n>Hi,\n>\n>I was responding to a question about postgres' per-backend memory usage,\n>making me look at the various contexts below CacheMemoryContext. There\n>is pretty much always a significant number of contexts below, one for\n>each index:\n>\n> CacheMemoryContext: 524288 total in 7 blocks; 8680 free (0 chunks); 515608 used\n> index info: 2048 total in 2 blocks; 568 free (1 chunks); 1480 used: pg_class_tblspc_relfilenode_index\n> index info: 2048 total in 2 blocks; 960 free (0 chunks); 1088 used: pg_statistic_ext_relid_index\n> index info: 2048 total in 2 blocks; 976 free (0 chunks); 1072 used: blarg_pkey\n> index info: 2048 total in 2 blocks; 872 free (0 chunks); 1176 used: pg_index_indrelid_index\n> index info: 2048 total in 2 blocks; 600 free (1 chunks); 1448 used: pg_attrdef_adrelid_adnum_index\n> index info: 2048 total in 2 blocks; 656 free (2 chunks); 1392 used: pg_db_role_setting_databaseid_rol_index\n> index info: 2048 total in 2 blocks; 544 free (2 chunks); 1504 used: pg_opclass_am_name_nsp_index\n> index info: 2048 total in 2 blocks; 928 free (2 chunks); 1120 used: pg_foreign_data_wrapper_name_index\n> index info: 2048 total in 2 blocks; 960 free (2 chunks); 1088 used: pg_enum_oid_index\n> index info: 2048 total in 2 blocks; 600 free (1 chunks); 1448 used: pg_class_relname_nsp_index\n> index info: 2048 total in 2 blocks; 960 free (2 chunks); 1088 used: pg_foreign_server_oid_index\n> index info: 2048 total in 2 blocks; 960 free (2 chunks); 1088 used: pg_publication_pubname_index\n>...\n> index info: 3072 total in 2 blocks; 1144 free (2 chunks); 1928 used: pg_conversion_default_index\n>...\n>\n>while I also think we could pretty easily reduce the amount of memory\n>used for each index, I want to focus on something else here:\n>\n>We waste a lot of space due to all these small contexts. 
Even leaving\n>aside the overhead of the context and its blocks - not insignificant -\n>they are mostly between ~1/2 a ~1/4 empty.\n>\n>At the same time we probably don't want to inline all of them into\n>CacheMemoryContext - too likely to introduce bugs, and too hard to\n>maintain leak free.\n>\n>\n>But what if we had a new type of memory context that did not itself\n>manage memory underlying allocations, but instead did so via the parent?\n>If such a context tracked all the live allocations in some form of list,\n>it could then free them from the parent at reset time. In other words,\n>it'd proxy all memory management via the parent, only adding a separate\n>name, and tracking of all live chunks.\n>\n>Obviously such a context would be less efficient to reset than a plain\n>aset.c one - but I don't think that'd matter much for these types of\n>use-cases. The big advantage in this case would be that we wouldn't\n>have separate two separate \"blocks\" for each index cache entry, but\n>instead allocations could all be done within CacheMemoryContext.\n>\n>Does that sound like a sensible idea?\n>\n\nI do think it's an interesting idea, worth exploring.\n\nI agree it's probably OK if the proxy contexts are a bit less efficient,\nbut I think we can restrict their use to places where that's not an\nissue (i.e. low frequency of resets, small number of allocated chunks\netc.). And if needed we can probably find ways to improve the efficiency\ne.g. by replacing the linked list with a small hash table or something\n(to speed-up pfree etc.). Or something.\n\nI think the big question is what this would mean for the parent context.\nBecause suddenly it's a mix of chunks with different life spans, which\nwould originally be segregared in different malloc-ed blocks. And now\nthat would not be true, so e.g. 
after deleting the child context the\nmemory would not be freed but just moved to the freelist.\n\nIt would also confuse MemoryContextStats, which would suddenly not\nrealize some of the chunks are actually \"owned\" by the child context.\nMaybe this could be improved, but only partially (unless we'd want to\nhave a per-chunk flag if it's owned by the context or by a proxy).\n\nNot sure if this would impact accounting (e.g. what if someone creates a\ncustom aggregate, creating a separate proxy context per group?). Would\nthat work or not?\n\nAlso, would this need to support nested proxy contexts? That might\ncomplicate things quite a bit, I'm afraid.\n\nFWIW I don't know answers to these questions.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Tue, 17 Dec 2019 01:12:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: reducing memory usage by using \"proxy\" memory contexts?"
},
{
"msg_contents": "From: Andres Freund <andres@anarazel.de>\n> We waste a lot of space due to all these small contexts. Even leaving\n> aside the overhead of the context and its blocks - not insignificant -\n> they are mostly between ~1/2 a ~1/4 empty.\n> \n> \n> But what if we had a new type of memory context that did not itself\n> manage memory underlying allocations, but instead did so via the parent?\n> If such a context tracked all the live allocations in some form of list,\n> it could then free them from the parent at reset time. In other words,\n> it'd proxy all memory management via the parent, only adding a separate\n> name, and tracking of all live chunks.\n\nIt sounds like that it will alleviate the memory bloat caused by SAVEPOINT and RELEASE, which leave CurTransactionContext for each subtransaction. The memory overuse got Linux down when our customer's batch application ran millions of SQL statements in a transaction with psqlODBC. psqlODBC uses savepoints by default to enable statement rollback.\n\n(I guess this issue of one memory context per subtransaction caused the crash of Amazon Aurora on the Prime Day last year.)\n\n\nRegards\nTakayuki Tsunakawa\n\n\n\n",
"msg_date": "Tue, 17 Dec 2019 00:46:15 +0000",
"msg_from": "\"tsunakawa.takay@fujitsu.com\" <tsunakawa.takay@fujitsu.com>",
"msg_from_op": false,
"msg_subject": "RE: reducing memory usage by using \"proxy\" memory contexts?"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-16 18:58:36 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > But what if we had a new type of memory context that did not itself\n> > manage memory underlying allocations, but instead did so via the parent?\n> > If such a context tracked all the live allocations in some form of list,\n> > it could then free them from the parent at reset time. In other words,\n> > it'd proxy all memory management via the parent, only adding a separate\n> > name, and tracking of all live chunks.\n> \n> I dunno, that seems like a *lot* of added overhead, and opportunity for\n> bugs.\n\nWhat kind of bugs are you thinking of?\n\n\n> Maybe it'd be all right for contexts in which alloc/dealloc is\n> very infrequent.\n\nI don't think the overhead would be enough to matter even for moderaly\ncommon cases. Sure, another 16bytes of overhead isn't free, nor is the\nindirection upon allocation/free, but it's also not that bad. I'd be\nsurprised if it didn't turn out to be cheaper in a lot of cases,\nactually, due to not needing a separate init block etc. Obviously it'd\nmake no sense to use such a context for cases with very frequent\nallocations (say parsing, copying a node tree), or where bulk\ndeallocations of a lot of small allocations is important - but there's\nplenty other types of cases.\n\n\n> But why not just address this problem by reducing the allocset\n> blocksize parameter (some more) for these index contexts?\n\nWell, but what would we set it to? The total allocated memory sizes for\ndifferent indexes varies between ~1kb and 4kb. And we'll have to cope\nwith that without creating waste again. 
We could allow much lower\ninitial and max block sizes for aset, I guess, so anything large gets to\nbe its own malloc() block.\n\n\n> I'd even go a bit further, and suggest that the right way to exploit\n> our knowledge that these contexts' contents don't change much is to\n> go the other way, and reduce not increase their per-chunk overhead.\n\nYea, I was wondering about that too. However, while there's also a good\nnumber of small allocations, a large fraction of the used space is\nactually larger allocations. And using a \"alloc only\" context doesn't\nreally address the fact that the underlying memory blocks are quite\nwasteful - especially given that this data essentially lives forever.\n\nFor the specific case of RelationInitIndexAccessInfo(), allocations that\ncommonly live for the rest of the backend's life and are frequent enough\nof them to matter, it might be worth micro-optimizing the\nallocations. E.g. not doing ~7 separate allocations within a few\nlines... Not primarily because of the per-allocation overheads, but\nmore because that'd allow to size things right directly.\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Dec 2019 18:18:23 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: reducing memory usage by using \"proxy\" memory contexts?"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-17 01:12:43 +0100, Tomas Vondra wrote:\n> On Mon, Dec 16, 2019 at 03:35:12PM -0800, Andres Freund wrote:\n> > But what if we had a new type of memory context that did not itself\n> > manage memory underlying allocations, but instead did so via the parent?\n> > If such a context tracked all the live allocations in some form of list,\n> > it could then free them from the parent at reset time. In other words,\n> > it'd proxy all memory management via the parent, only adding a separate\n> > name, and tracking of all live chunks.\n> > \n> > Obviously such a context would be less efficient to reset than a plain\n> > aset.c one - but I don't think that'd matter much for these types of\n> > use-cases. The big advantage in this case would be that we wouldn't\n> > have separate two separate \"blocks\" for each index cache entry, but\n> > instead allocations could all be done within CacheMemoryContext.\n> > \n> > Does that sound like a sensible idea?\n> > \n> \n> I do think it's an interesting idea, worth exploring.\n> \n> I agree it's probably OK if the proxy contexts are a bit less efficient,\n> but I think we can restrict their use to places where that's not an\n> issue (i.e. low frequency of resets, small number of allocated chunks\n> etc.). And if needed we can probably find ways to improve the efficiency\n> e.g. by replacing the linked list with a small hash table or something\n> (to speed-up pfree etc.). Or something.\n\nI don't think you'd need a hash table for efficiency - I was thinking of\njust using a doubly linked list. That allows O(1) unlinking.\n\n\n> I think the big question is what this would mean for the parent context.\n> Because suddenly it's a mix of chunks with different life spans, which\n> would originally be segregared in different malloc-ed blocks. And now\n> that would not be true, so e.g. 
after deleting the child context the\n> memory would not be freed but just moved to the freelist.\n\nI think in the case of CacheMemoryContext it'd not really be a large\nchange - we already have vastly different lifetimes there, e.g. for the\nrelcache entries themselves. I could also see using something like this\nfor some of the executor sub-contexts - they commonly have only very few\nallocations, but need to be resettable individually.\n\n\n> It would also confuse MemoryContextStats, which would suddenly not\n> realize some of the chunks are actually \"owned\" by the child context.\n> Maybe this could be improved, but only partially (unless we'd want to\n> have a per-chunk flag if it's owned by the context or by a proxy).\n\nI'm not sure it'd really be worth fixing this fully, tbh. Maybe just\nreporting at MemoryContextStats time whether a sub-context is included\nin the parent's total or not.\n\n\n> Not sure if this would impact accounting (e.g. what if someone creates a\n> custom aggregate, creating a separate proxy context per group?). Would\n> that work or not?\n\nI'm not sure what problem you're thinking of?\n\n\n> Also, would this need to support nested proxy contexts? That might\n> complicate things quite a bit, I'm afraid.\n\nI mean, it'd probably not be a great idea to do so much - due to\nincreased overhead - but I don't see why it wouldn't work. If it\nactually is something that we'd want to make work efficiently at some\npoint, it shouldn't be too hard to have code to walk up the chain of\nparent contexts at creation time to the next context that's not a proxy.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 16 Dec 2019 18:26:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": true,
"msg_subject": "Re: reducing memory usage by using \"proxy\" memory contexts?"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> For the specific case of RelationInitIndexAccessInfo(), allocations that\n> commonly live for the rest of the backend's life and are frequent enough\n> of them to matter, it might be worth micro-optimizing the\n> allocations. E.g. not doing ~7 separate allocations within a few\n> lines... Not primarily because of the per-allocation overheads, but\n> more because that'd allow to size things right directly.\n\nHmm ... that would be worth trying, for sure, since it's so easy ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 16 Dec 2019 23:20:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: reducing memory usage by using \"proxy\" memory contexts?"
}
] |
[
{
"msg_contents": "De: Michael Paquier\nEnviadas: Terça-feira, 17 de Dezembro de 2019 00:36\n\n>Hmm. In the case of this logic, we are referring to the current end\n>of WAL with endptr, and what you are calling the startptr is really\n>the redo LSN of the last checkpoint in all the routines which are now\n>confused with RedoRecPtr: RemoveOldXlogFile, RemoveXlogFile and\n>XLOGfileslop. Using lower-case for all the characters of the variable\n>name sounds like a good improvement as well, so taking a combination\n>of all that I would just use \"lastredoptr\" in those three code paths\n>(note that we used to have PriorRedoPtr before). As that's a\n>confusion I introduced with d9fadbf, I would like to fix that and\n>backpatch this change down to 11. (Ranier gets the authorship\n>per se as that's extracted from a larger patch).\nHey Michael, thank you so much for considering correct at least part of an extensive work.\n\nBest regards,\nRanier Vilela\n\n\n\n\n\n\n\nDe: Michael Paquier\nEnviadas: Terça-feira, 17 de Dezembro de 2019 00:36\n\n\n\n>Hmm. In the case of this logic, we are referring to the current end\n>of WAL with endptr, and what you are calling the startptr is really\n>the redo LSN of the last checkpoint in all the routines which are now\n>confused with RedoRecPtr: RemoveOldXlogFile, RemoveXlogFile and\n>XLOGfileslop. Using lower-case for all the characters of the variable\n>name sounds like a good improvement as well, so taking a combination\n>of all that I would just use \"lastredoptr\" in those three code paths\n>(note that we used to have PriorRedoPtr before). As that's a\n>confusion I introduced with d9fadbf, I would like to fix that and\n>backpatch this change down to 11. (Ranier gets the authorship\n>per se as that's extracted from a larger patch).\nHey Michael, thank you so much for considering correct at least part of an extensive work.\n\n\nBest regards,\nRanier Vilela",
"msg_date": "Tue, 17 Dec 2019 03:37:02 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
}
] |
[
{
"msg_contents": "De: Michael Paquier\nEnviadas: Terça-feira, 17 de Dezembro de 2019 03:43\nPara: Ranier Vilela\nCc: pgsql-hackers@lists.postgresql.org\nAssunto: Re: [PATCH] Windows port add support to BCryptGenRandom\n\n>And looking at this page, it is said that the minimum version\n>supported by this function is Windows 2008:\n>https://docs.microsoft.com/en-us/windows/win32/api/bcrypt/nf-bcrypt-bcryptgenrandom<https://docs.microsoft.com/en-us/windows/win32/api/bcrypt/nf-bcrypt-bcryptgenrandom>\n\n>Now, your changes in MkvcBuild.pm and the backend code assume that\n>we need to include bcrypt.lib since MSVC 2015 (at least version\n>14.00 or _MSC_VER >= 1900. Do you have a reference about when this\n>has been introduced in VS? The MS docs don't seem to hold a hint\n>about that..\nSorry Perl I understand a little bit.\nWindows Vista I believe.\nhttps://github.com/openssl/openssl/blob/master/crypto/rand/rand_win.c\nis the primary font and have more information.\n\nBest regards,\nRanier Vilela\n[https://avatars0.githubusercontent.com/u/3279138?s=400&v=4]<https://github.com/openssl/openssl/blob/master/crypto/rand/rand_win.c>\nopenssl/rand_win.c at master · openssl/openssl · GitHub<https://github.com/openssl/openssl/blob/master/crypto/rand/rand_win.c>\nTLS/SSL and crypto library. Contribute to openssl/openssl development by creating an account on GitHub.\ngithub.com\n\n\n\n\n\n\n\n\nDe: Michael Paquier\n\nEnviadas: Terça-feira, 17 de Dezembro de 2019 03:43\nPara: Ranier Vilela\nCc: pgsql-hackers@lists.postgresql.org\nAssunto: Re: [PATCH] Windows port add support to BCryptGenRandom\n\n\n\n\n>And looking at this page, it is said that the minimum version\n>supported by this function is Windows 2008:\n>https://docs.microsoft.com/en-us/windows/win32/api/bcrypt/nf-bcrypt-bcryptgenrandom\n\n>Now, your changes in MkvcBuild.pm and the backend code assume that\n>we need to include bcrypt.lib since MSVC 2015 (at least version\n>14.00 or _MSC_VER >= 1900. 
Do you have a reference about when this\n>has been introduced in VS? The MS docs don't seem to hold a hint\n>about that..\nSorry Perl I understand a little bit.\n\nWindows Vista I believe.\nhttps://github.com/openssl/openssl/blob/master/crypto/rand/rand_win.c\nis the primary font and have more information.\n\n\nBest regards,\nRanier Vilela\n\n\n\n\n\n\n\n\n\n\n\nopenssl/rand_win.c at master · openssl/openssl · GitHub\n\nTLS/SSL and crypto library. Contribute to openssl/openssl development by creating an account on GitHub.\n\ngithub.com",
"msg_date": "Tue, 17 Dec 2019 03:57:56 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Windows port add support to BCryptGenRandom"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 03:57:56AM +0000, Ranier Vilela wrote:\n> Windows Vista I believe.\n> https://github.com/openssl/openssl/blob/master/crypto/rand/rand_win.c\n> is the primary font and have more information.\n\nSo, this basically matches with what the MS documents tell us, and my\nimpression: this API is available down to at least MSVC 2008, which is\nmuch more than what we support on HEAD where one can use MSVC 2013 and\nnewer versions. Note that for the minimal platforms supported our\ndocumentation cite Windows Server 2008 R2 SP1 and Windows 7, implying\n_WIN32_WINNT >= 0x0600.\n\nIn short, this means two things:\n- Your patch, as presented, is wrong.\n- There is no need to make conditional the use of BCryptGenRandom.\n--\nMichael",
"msg_date": "Tue, 17 Dec 2019 13:34:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port add support to BCryptGenRandom"
}
] |
[
{
"msg_contents": "Hi,\n\nI wonder if it's worthwhile to fix the following not-so-friendly error message:\n\ncreate index on foo ((row(a)));\nERROR: column \"\" has pseudo-type record\n\nFor example, the attached patch makes it this:\n\ncreate index on foo ((row(a)));\nERROR: column \"row\" has pseudo-type record\n\nNote that \"row\" as column name has been automatically chosen by the caller.\n\nThanks,\nAmit",
"msg_date": "Tue, 17 Dec 2019 15:47:07 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "empty column name in error message"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> I wonder if it's worthwhile to fix the following not-so-friendly error message:\n\n> create index on foo ((row(a)));\n> ERROR: column \"\" has pseudo-type record\n\nUgh. That used to work more nicely:\n\nregression=# create index fooi on foo ((row(a)));\nERROR: column \"pg_expression_1\" has pseudo-type record\n\nBut that was back in 8.4 :-( ... 9.0 and up behave as you show.\nI'm guessing we broke it when we rearranged the rules for naming\nindex expression columns.\n\n> For example, the attached patch makes it this:\n\n> create index on foo ((row(a)));\n> ERROR: column \"row\" has pseudo-type record\n\nHaven't read the patch in any detail yet, but that seems like\nan improvement. And I guess we need a test case, or we'll\nbreak it again :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Dec 2019 11:21:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: empty column name in error message"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 1:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Haven't read the patch in any detail yet, but that seems like\n> an improvement. And I guess we need a test case, or we'll\n> break it again :-(\n\nThanks for adding the test case.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 18 Dec 2019 11:23:17 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: empty column name in error message"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 11:23:17AM +0900, Amit Langote wrote:\n> Thanks for adding the test case.\n\nFor the archives: this has been applied as of 2acab05.\n--\nMichael",
"msg_date": "Wed, 18 Dec 2019 16:30:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: empty column name in error message"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems to me that we currently allow expressions that are anonymous\nand self-referencing composite type records as partition key, but\nshouldn't. Allowing them leads to this:\n\ncreate table foo (a int) partition by list ((row(a, b)));\ncreate table foo1 partition of foo for values in ('(1)'::foo);\ncreate table foo2 partition of foo for values in ('(2)'::foo);\nexplain select * from foo where row(a) = '(1)'::foo;\nERROR: stack depth limit exceeded\n\nStack trace is this:\n\n#0 errfinish (dummy=0) at elog.c:442\n#1 0x0000000000911a51 in check_stack_depth () at postgres.c:3288\n#2 0x00000000007970e6 in expression_tree_mutator (node=0x31890a0,\nmutator=0x82095f <eval_const_expressions_mutator>,\ncontext=0x7fff0578ef60) at nodeFuncs.c:2526\n#3 0x000000000082340b in eval_const_expressions_mutator\n(node=0x31890a0, context=0x7fff0578ef60) at clauses.c:3605\n#4 0x000000000079875c in expression_tree_mutator (node=0x31890f8,\nmutator=0x82095f <eval_const_expressions_mutator>,\ncontext=0x7fff0578ef60) at nodeFuncs.c:2996\n#5 0x000000000082340b in eval_const_expressions_mutator\n(node=0x31890f8, context=0x7fff0578ef60) at clauses.c:3605\n#6 0x000000000079810c in expression_tree_mutator (node=0x3188cc8,\nmutator=0x82095f <eval_const_expressions_mutator>,\ncontext=0x7fff0578ef60) at nodeFuncs.c:2863\n#7 0x000000000082225d in eval_const_expressions_mutator\n(node=0x3188cc8, context=0x7fff0578ef60) at clauses.c:3154\n#8 0x000000000079875c in expression_tree_mutator (node=0x3189240,\nmutator=0x82095f <eval_const_expressions_mutator>,\ncontext=0x7fff0578ef60) at nodeFuncs.c:2996\n#9 0x000000000082340b in eval_const_expressions_mutator\n(node=0x3189240, context=0x7fff0578ef60) at clauses.c:3605\n#10 0x000000000082090c in eval_const_expressions (root=0x0,\nnode=0x3189240) at clauses.c:2265\n#11 0x0000000000a75169 in RelationBuildPartitionKey\n(relation=0x7f5ca3e479a8) at partcache.c:139\n#12 0x0000000000a7aa5e in RelationBuildDesc 
(targetRelId=17178,\ninsertIt=true) at relcache.c:1171\n#13 0x0000000000a7c975 in RelationIdGetRelation (relationId=17178) at\nrelcache.c:2035\n#14 0x000000000048e0c0 in relation_open (relationId=17178, lockmode=1)\nat relation.c:59\n#15 0x0000000000a8a4f7 in load_typcache_tupdesc (typentry=0x1c16bc0)\nat typcache.c:793\n#16 0x0000000000a8a3bb in lookup_type_cache (type_id=17180, flags=256)\nat typcache.c:748\n#17 0x0000000000a8bba4 in lookup_rowtype_tupdesc_internal\n(type_id=17180, typmod=-1, noError=false) at typcache.c:1570\n#18 0x0000000000a8be43 in lookup_rowtype_tupdesc (type_id=17180,\ntypmod=-1) at typcache.c:1656\n#19 0x0000000000a0713f in record_cmp (fcinfo=0x7fff0578f4d0) at rowtypes.c:815\n#20 0x0000000000a083e2 in btrecordcmp (fcinfo=0x7fff0578f4d0) at rowtypes.c:1276\n#21 0x0000000000a97bd9 in FunctionCall2Coll (flinfo=0x2bb4a98,\ncollation=0, arg1=51939144, arg2=51940000) at fmgr.c:1162\n#22 0x00000000008443f6 in qsort_partition_list_value_cmp (a=0x3188c50,\nb=0x3188c58, arg=0x2bb46c0) at partbounds.c:1769\n#23 0x0000000000af9dc6 in qsort_arg (a=0x3188c50, n=2, es=8,\ncmp=0x84439a <qsort_partition_list_value_cmp>, arg=0x2bb46c0) at\nqsort_arg.c:132\n#24 0x000000000084186a in create_list_bounds (boundspecs=0x3188650,\nnparts=2, key=0x2bb46c0, mapping=0x7fff0578f7d8) at partbounds.c:396\n#25 0x00000000008410ec in partition_bounds_create\n(boundspecs=0x3188650, nparts=2, key=0x2bb46c0,\nmapping=0x7fff0578f7d8) at partbounds.c:206\n#26 0x0000000000847622 in RelationBuildPartitionDesc\n(rel=0x7f5ca3e47560) at partdesc.c:205\n#27 0x0000000000a7aa6a in RelationBuildDesc (targetRelId=17178,\ninsertIt=true) at relcache.c:1172\n\nAlso:\n\ncreate table foo (a int) partition by list ((row(a)));\ncreate table foo1 partition of foo for values in (row(1));\ncreate table foo2 partition of foo for values in (row(2));\n\nexplain select * from foo where row(a) = '(1)'::foo;\n QUERY PLAN\n----------------------------------------------------------\n Seq Scan on foo1 foo 
(cost=0.00..41.88 rows=13 width=4)\n Filter: (ROW(a) = '(1)'::foo)\n(2 rows)\n\nexplain select * from foo where row(a) = '(2)'::foo;\n QUERY PLAN\n----------------------------------------------------------\n Seq Scan on foo2 foo (cost=0.00..41.88 rows=13 width=4)\n Filter: (ROW(a) = '(2)'::foo)\n(2 rows)\n\n-- another session\nexplain select * from foo where row(a) = '(1)'::foo;\nERROR: record type has not been registered\nLINE 1: explain select * from foo where row(a) = '(1)'::foo;\n\nAttached a patch to fix that.\n\nThanks,\nAmit",
"msg_date": "Tue, 17 Dec 2019 18:03:42 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> It seems to me that we currently allow expressions that are anonymous\n> and self-referencing composite type records as partition key, but\n> shouldn't. Allowing them leads to this:\n\nHm. Seems like the restrictions here ought to be just about the same\nas on index columns, no? That is, it should be roughly a test like\n\"no pseudo-types\". The check you're proposing seems awfully specific,\nand I doubt that the equivalent check in CREATE INDEX looks the same.\n(But I didn't go look ... I might be wrong.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Dec 2019 12:12:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 2:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > It seems to me that we currently allow expressions that are anonymous\n> > and self-referencing composite type records as partition key, but\n> > shouldn't. Allowing them leads to this:\n>\n> Hm. Seems like the restrictions here ought to be just about the same\n> as on index columns, no? That is, it should be roughly a test like\n> \"no pseudo-types\". The check you're proposing seems awfully specific,\n> and I doubt that the equivalent check in CREATE INDEX looks the same.\n> (But I didn't go look ... I might be wrong.)\n\nWe also need to disallow self-referencing composite type in the case\nof partitioning, because otherwise it leads to infinite recursion\nshown in my first email.\n\nThe timing of building PartitionDesc is what causes it, because the\nconstruction of PartitionBoundInfo in turn requires opening the parent\nrelation if the partition partition key is of self-referencing\ncomposite type, because we need the TupleDesc when sorting the\npartition bounds. Maybe we'll need to rearrange that someday so that\nPartitionDesc is built outside RelationBuildDesc path, so this\ninfinite recursion doesn't occur, but maybe allowing this case isn't\nthat useful to begin with?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 18 Dec 2019 15:13:48 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Dec 18, 2019 at 2:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Hm. Seems like the restrictions here ought to be just about the same\n>> as on index columns, no?\n\n> We also need to disallow self-referencing composite type in the case\n> of partitioning, because otherwise it leads to infinite recursion\n> shown in my first email.\n\nMy point is basically that CheckAttributeType already covers that\nissue, as well as a lot of others. So why isn't the partitioning\ncode using it?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 08:38:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 10:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Wed, Dec 18, 2019 at 2:12 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> Hm. Seems like the restrictions here ought to be just about the same\n> >> as on index columns, no?\n>\n> > We also need to disallow self-referencing composite type in the case\n> > of partitioning, because otherwise it leads to infinite recursion\n> > shown in my first email.\n>\n> My point is basically that CheckAttributeType already covers that\n> issue, as well as a lot of others. So why isn't the partitioning\n> code using it?\n\nMy reason to not use it was that the error message that are produced\nare not quite helpful in this case; compare what my patch produces vs.\nwhat one gets with CheckAttributeType(\"expr\", ...):\n\n a int,\n b int\n ) PARTITION BY RANGE (((a, b)));\n-ERROR: partition key cannot be of anonymous or self-referencing composite type\n-LINE 4: ) PARTITION BY RANGE (((a, b)));\n- ^\n+ERROR: column \"expr\" has pseudo-type record\n\n CREATE TABLE partitioned (\n a int,\n b int\n ) PARTITION BY RANGE ((row(a, b)));\n-ERROR: partition key cannot be of anonymous or self-referencing composite type\n-LINE 4: ) PARTITION BY RANGE ((row(a, b)));\n- ^\n+ERROR: column \"expr\" has pseudo-type record\n\n CREATE TABLE partitioned (\n a int,\n b int\n ) PARTITION BY RANGE ((row(a, b)::partitioned));\n-ERROR: partition key cannot be of anonymous or self-referencing composite type\n-LINE 4: ) PARTITION BY RANGE ((row(a, b)::partitioned));\n- ^\n+ERROR: composite type partitioned cannot be made a member of itself\n\nThanks,\nAmit\n\n\n",
"msg_date": "Thu, 19 Dec 2019 14:15:34 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Wed, Dec 18, 2019 at 10:38 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> My point is basically that CheckAttributeType already covers that\n>> issue, as well as a lot of others. So why isn't the partitioning\n>> code using it?\n\n> My reason to not use it was that the error message that are produced\n> are not quite helpful in this case;\n\nI can't get terribly excited about that; but in any case, if we think\nthe errors aren't nice enough, the answer is to improve them, not\nre-implement the function badly.\n\nAfter further thought, it seems to me that we are dealing with two\nnearly independent issues:\n\n1. We must not accept partition bounds values that are of underdetermined\ntypes, else (a) we are likely to get failures like \"record type has not\nbeen registered\" while loading them back from disk, and (b) polymorphic\nbtree support functions are likely to complain that they can't identify\nthe type they're supposed to work on. This is exactly the same issue that\nexpression indexes face, so we should be applying the same checks, that\nis CheckAttributeType(). I do not believe that checking for RECORD is\nadequate to close this hole. At the very least, RECORD[] is equally\ndangerous, and in general I think any pseudotype would be risky.\n\n2. If the partitioning expression contains a reference to the partitioned\ntable's rowtype, we get infinite recursion while trying to load the\nrelcache entry. The patch proposes to prevent that by checking whether\nthe expression's final result type is that type, but that's not nearly\nadequate because a reference anywhere inside the expression is just as\nbad. In general, considering possibly-inlined SQL functions, I'm doubtful\nthat any precheck is going to be able to prevent this scenario.\n\nNow as far as point 1 goes, I think it's not really that awful to use\nCheckAttributeType() with a dummy attribute name. 
The attached\nincomplete patch uses \"partition key\" which causes it to emit errors\nlike\n\nregression=# create table fool (a int, b int) partition by list ((row(a, b))); \nERROR: column \"partition key\" has pseudo-type record\n\nI don't think that that's unacceptable. But if we wanted to improve it,\nwe could imagine adding another flag, say CHKATYPE_IS_PARTITION_KEY,\nthat doesn't affect CheckAttributeType's semantics, just the wording of\nthe error messages it throws.\n\nAs far as point 2 goes, I think this is another outgrowth of the\nfundamental design error that we load a partitioned rel's partitioning\ninfo immediately when the relcache entry is created, rather than later\non-demand. If we weren't doing that then it wouldn't be problematic\nto inspect the rel's rowtype while constructing the partitioning info.\nI've bitched about this before, if memory serves, but couldn't light\na fire under anyone about fixing it. Now I think we have no choice.\nIt was never a great idea that minimal construction of a relcache\nentry could result in running arbitrary user-defined code.\n\nNote that the end result of this would be to allow, not prohibit,\ncases like your example. I wonder whether we couldn't also lift\nthe restriction against whole-row Vars in partition expressions.\nDoesn't seem like there is much difference between such a Var and\na row(...)::table_rowtype expression.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 21 Dec 2019 12:02:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "I wrote:\n> As far as point 2 goes, I think this is another outgrowth of the\n> fundamental design error that we load a partitioned rel's partitioning\n> info immediately when the relcache entry is created, rather than later\n> on-demand. If we weren't doing that then it wouldn't be problematic\n> to inspect the rel's rowtype while constructing the partitioning info.\n> I've bitched about this before, if memory serves, but couldn't light\n> a fire under anyone about fixing it. Now I think we have no choice.\n> It was never a great idea that minimal construction of a relcache\n> entry could result in running arbitrary user-defined code.\n\nHere's a draft patch for that. There are a couple of secondary issues\nI didn't do anything about yet:\n\n* When rebuilding an open relcache entry for a partitioned table, this\ncoding now always quasi-leaks the old rd_pdcxt, where before that happened\nonly if the partdesc actually changed. (Even if I'd kept the\nequalPartitionDescs call, it would always fail.) I complained about the\nquasi-leak behavior before, but this probably pushes it up to the level of\n\"must fix\". What I'm inclined to do is to hack\nRelationDecrementReferenceCount so that, when the refcount goes to zero,\nwe delete any child contexts of rd_pdcxt. That's pretty annoying but in\nthe big scheme of things it's unlikely to matter.\n\n* It'd be better to declare RelationGetPartitionKey and\nRelationGetPartitionDesc in relcache.h and get their callers out of the\nbusiness of including rel.h, where possible.\n\n* equalPartitionDescs is now dead code, should we remove it?\n\n> Note that the end result of this would be to allow, not prohibit,\n> cases like your example. I wonder whether we couldn't also lift\n> the restriction against whole-row Vars in partition expressions.\n> Doesn't seem like there is much difference between such a Var and\n> a row(...)::table_rowtype expression.\n\nI didn't look into that either. 
I wouldn't propose back-patching that,\nbut it'd be interesting to try to fix it in HEAD.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 21 Dec 2019 16:13:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "I wrote:\n> Now as far as point 1 goes, I think it's not really that awful to use\n> CheckAttributeType() with a dummy attribute name. The attached\n> incomplete patch uses \"partition key\" which causes it to emit errors\n> like\n> regression=# create table fool (a int, b int) partition by list ((row(a, b))); \n> ERROR: column \"partition key\" has pseudo-type record\n> I don't think that that's unacceptable. But if we wanted to improve it,\n> we could imagine adding another flag, say CHKATYPE_IS_PARTITION_KEY,\n> that doesn't affect CheckAttributeType's semantics, just the wording of\n> the error messages it throws.\n\nHere's a fleshed-out patch that does it like that.\n\nWhile poking at this, I also started to wonder why CheckAttributeType\nwasn't recursing into ranges, since those are our other kind of\ncontainer type. And the answer is that it must, because we allow\ncreation of ranges over composite types:\n\nregression=# create table foo (f1 int, f2 int);\nCREATE TABLE\nregression=# create type foorange as range (subtype = foo);\nCREATE TYPE\nregression=# alter table foo add column r foorange;\nALTER TABLE\n\nSimple things still work on table foo, but surely this is exactly\nwhat CheckAttributeType is supposed to be preventing. With the\nsecond attached patch you get\n\nregression=# alter table foo add column r foorange;\nERROR: composite type foo cannot be made a member of itself\n\nThe second patch needs to go back all the way, the first one\nonly as far as we have partitions.\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 22 Dec 2019 16:51:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 6:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > As far as point 2 goes, I think this is another outgrowth of the\n> > fundamental design error that we load a partitioned rel's partitioning\n> > info immediately when the relcache entry is created, rather than later\n> > on-demand. If we weren't doing that then it wouldn't be problematic\n> > to inspect the rel's rowtype while constructing the partitioning info.\n> > I've bitched about this before, if memory serves, but couldn't light\n> > a fire under anyone about fixing it. Now I think we have no choice.\n> > It was never a great idea that minimal construction of a relcache\n> > entry could result in running arbitrary user-defined code.\n>\n> Here's a draft patch for that.\n\nThanks for writing the patch. This also came up recently on another thread [1].\n\n> There are a couple of secondary issues\n> I didn't do anything about yet:\n>\n> * When rebuilding an open relcache entry for a partitioned table, this\n> coding now always quasi-leaks the old rd_pdcxt, where before that happened\n> only if the partdesc actually changed. (Even if I'd kept the\n> equalPartitionDescs call, it would always fail.) I complained about the\n> quasi-leak behavior before, but this probably pushes it up to the level of\n> \"must fix\". What I'm inclined to do is to hack\n> RelationDecrementReferenceCount so that, when the refcount goes to zero,\n> we delete any child contexts of rd_pdcxt. 
That's pretty annoying but in\n> the big scheme of things it's unlikely to matter.\n\nHacking RelationDecrementReferenceCount() like that sounds OK.\n\n- else if (rebuild && newrel->rd_pdcxt != NULL)\n+ if (rebuild && newrel->rd_pdcxt != NULL)\n\nChecking rebuild seems unnecessary in this block, although that's true\neven without the patch.\n\n+ * To ensure that it's not leaked completely, re-attach it to the\n+ * new reldesc, or make it a child of the new reldesc's rd_pdcxt\n+ * in the unlikely event that there is one already. (See hack in\n+ * RelationBuildPartitionDesc.)\n...\n+ if (relation->rd_pdcxt != NULL) /* probably never happens */\n+ MemoryContextSetParent(newrel->rd_pdcxt, relation->rd_pdcxt);\n+ else\n+ relation->rd_pdcxt = newrel->rd_pdcxt;\n\nWhile I can imagine that RelationBuildPartitionDesc() might encounter\nan old partition descriptor making the re-parenting hack necessary\nthere, I don't see why it's needed here, because a freshly built\nrelation descriptor would not contain the partition descriptor after\nthis patch.\n\n> * It'd be better to declare RelationGetPartitionKey and\n> RelationGetPartitionDesc in relcache.h and get their callers out of the\n> business of including rel.h, where possible.\n\nAlthough I agree to declare them in relcache.h, that doesn't reduce\nthe need to include rel.h in their callers much, because anyplace that\nuses RelationGetPartitionDesc() is also very likely to use\nRelationGetRelid() which is in rel.h.\n\n> * equalPartitionDescs is now dead code, should we remove it?\n\nDon't see any problem with doing so.\n\n> > Note that the end result of this would be to allow, not prohibit,\n> > cases like your example. I wonder whether we couldn't also lift\n> > the restriction against whole-row Vars in partition expressions.\n> > Doesn't seem like there is much difference between such a Var and\n> > a row(...)::table_rowtype expression.\n>\n> I didn't look into that either. 
I wouldn't propose back-patching that,\n> but it'd be interesting to try to fix it in HEAD.\n\nAgreed.\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/CA%2BHiwqFucUh7hYkfZ6x1MVcs_R24eUfNVuRwdE_FwuwK8XpSZg%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 23 Dec 2019 18:42:36 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Sun, Dec 22, 2019 at 6:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> + * To ensure that it's not leaked completely, re-attach it to the\n> + * new reldesc, or make it a child of the new reldesc's rd_pdcxt\n> + * in the unlikely event that there is one already. (See hack in\n> + * RelationBuildPartitionDesc.)\n\n> While I can imagine that RelationBuildPartitionDesc() might encounter\n> an old partition descriptor making the re-parenting hack necessary\n> there, I don't see why it's needed here, because a freshly built\n> relation descriptor would not contain the partition descriptor after\n> this patch.\n\nWell, as the comment says, that's probably unreachable today. But\nI could see it happening in the future, particularly if we ever allow\npartitioned system catalogs. There are a lot of paths through this\ncode that are not obvious to the naked eye, and some of them can cause\nrelcache entries to get populated behind-your-back. Most of relcache.c\nis careful about this; I do not see an excuse for the partition-data\ncode to be less so, even if we think it can't happen today.\n\n(I notice that RelationBuildPartitionKey is making a similar assumption\nthat the partkey couldn't magically appear while it's working, and I\ndon't like it much there either.)\n\n>> * It'd be better to declare RelationGetPartitionKey and\n>> RelationGetPartitionDesc in relcache.h and get their callers out of the\n>> business of including rel.h, where possible.\n\n> Although I agree to declare them in relcache.h, that doesn't reduce\n> the need to include rel.h in their callers much, because anyplace that\n> uses RelationGetPartitionDesc() is also very likely to use\n> RelationGetRelid() which is in rel.h.\n\nHm. 
Well, we can try anyway.\n\n> [1] https://www.postgresql.org/message-id/CA%2BHiwqFucUh7hYkfZ6x1MVcs_R24eUfNVuRwdE_FwuwK8XpSZg%40mail.gmail.com\n\nOh, interesting --- I hadn't been paying much attention to that thread.\nI'll compare your PoC there to what I did.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Dec 2019 09:49:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "BTW, I forgot to mention: while I think the patch to forbid pseudotypes\nby using CheckAttributeType() can be back-patched, I'm leaning towards\nnot back-patching the other patch. The situation where we get into\ninfinite recursion seems not very likely in practice, and it's not\ngoing to cause any crash or data loss, so I think we can just say\n\"sorry that's not supported before v13\". The patch as I'm proposing\nit seems rather invasive for a back-branch fix. Also, changing\nRelationGetPartitionKey/Desc from macros to functions is at least a\nweak ABI break. If there are extensions calling either, they might\nstill work without a recompile --- but if they have code paths that\nare the first thing to touch either field since a relcache flush,\nthey'd crash.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Dec 2019 10:00:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 23:49 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > [1]\n> https://www.postgresql.org/message-id/CA%2BHiwqFucUh7hYkfZ6x1MVcs_R24eUfNVuRwdE_FwuwK8XpSZg%40mail.gmail.com\n>\n> Oh, interesting --- I hadn't been paying much attention to that thread.\n> I'll compare your PoC there to what I did.\n\n\nActually, I should’ve said that your patch is much better attempt at\ngetting this in order, so there’s not much to see in my patch really. :)\n\nThanks,\nAmit",
"msg_date": "Tue, 24 Dec 2019 01:09:54 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Mon, Dec 23, 2019 at 23:49 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Oh, interesting --- I hadn't been paying much attention to that thread.\n>> I'll compare your PoC there to what I did.\n\n> Actually, I should’ve said that your patch is much better attempt at\n> getting this in order, so there’s not much to see in my patch really. :)\n\nOne thing I see is that you chose to relocate RelationGetPartitionDesc's\ndeclaration to partdesc.h, whereupon RelationBuildPartitionDesc doesn't\nhave to be exported at all anymore. Perhaps that's a better factorization\nthan what I did. It supposes that any caller of RelationGetPartitionDesc\nis going to need partdesc.h, but that seems reasonable. We could likewise\nmove RelationGetPartitionKey to partcache.h.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Dec 2019 13:57:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "I wrote:\n> One thing I see is that you chose to relocate RelationGetPartitionDesc's\n> declaration to partdesc.h, whereupon RelationBuildPartitionDesc doesn't\n> have to be exported at all anymore. Perhaps that's a better factorization\n> than what I did. It supposes that any caller of RelationGetPartitionDesc\n> is going to need partdesc.h, but that seems reasonable. We could likewise\n> move RelationGetPartitionKey to partcache.h.\n\nI concluded that that is indeed a better solution; it does allow removing\nsome rel.h inclusions (though possibly those were just duplicative?), and\nit also means that relcache.c itself doesn't need any partitioning\ninclusions at all.\n\nHere's a cleaned-up patch that does it like that and also fixes the\nmemory leakage issue.\n\nI noticed along the way that with partkeys only being loaded on demand,\nwe no longer need the incredibly-unsafe hack in RelationBuildPartitionKey\nwhereby it just silently ignores failure to read the pg_partitioned_table\nentry. I also rearranged RelationBuildPartitionDesc so that it uses the\nsame context-reparenting trick as RelationBuildPartitionKey. That doesn't\nsave us anything, but it makes the code considerably more robust, I think;\nwe don't need to assume as much about what partition_bounds_copy does.\n\nOne other thing worth noting is that I used unlikely() to try to\ndiscourage the compiler from inlining RelationBuildPartitionDesc\ninto RelationGetPartitionDesc (and likewise for the Key functions).\nNot sure how effective that is, but it can't hurt.\n\nI haven't removed equalPartitionDescs here; that seems like material\nfor a separate patch (to make it easier to put it back if we need it).\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 23 Dec 2019 16:33:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 12:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, I forgot to mention: while I think the patch to forbid pseudotypes\n> by using CheckAttributeType() can be back-patched, I'm leaning towards\n> not back-patching the other patch. The situation where we get into\n> infinite recursion seems not very likely in practice, and it's not\n> going to cause any crash or data loss, so I think we can just say\n> \"sorry that's not supported before v13\". The patch as I'm proposing\n> it seems rather invasive for a back-branch fix.\n\nIt is indeed.\n\nJust to be sure, by going with \"unsupported before v13\", which one do you mean:\n\n* documenting it as so\n* giving an error in such cases, like the patch in the first email on\nthis thread did\n* doing nothing really\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 24 Dec 2019 10:20:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 6:33 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > One thing I see is that you chose to relocate RelationGetPartitionDesc's\n> > declaration to partdesc.h, whereupon RelationBuildPartitionDesc doesn't\n> > have to be exported at all anymore. Perhaps that's a better factorization\n> > than what I did. It supposes that any caller of RelationGetPartitionDesc\n> > is going to need partdesc.h, but that seems reasonable. We could likewise\n> > move RelationGetPartitionKey to partcache.h.\n>\n> I concluded that that is indeed a better solution; it does allow removing\n> some rel.h inclusions (though possibly those were just duplicative?), and\n> it also means that relcache.c itself doesn't need any partitioning\n> inclusions at all.\n>\n> Here's a cleaned-up patch that does it like that and also fixes the\n> memory leakage issue.\n\nThanks for the updated patch. I didn't find anything to complain about.\n\n> I haven't removed equalPartitionDescs here; that seems like material\n> for a separate patch (to make it easier to put it back if we need it).\n\nSeems like a good idea.\n\nBtw, does the memory leakage fix in this patch address any of the\npending concerns that were discussed on the \"hyrax vs.\nRelationBuildPartitionDesc\" thread earlier this year[1]?\n\nThanks,\nAmit\n\n[1] https://www.postgresql.org/message-id/flat/3800.1560366716%40sss.pgh.pa.us#092b6b4f6bf75d2f3f90ef6a3b3eab5b\n\n\n",
"msg_date": "Tue, 24 Dec 2019 10:59:45 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 10:59 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> Btw, does the memory leakage fix in this patch address any of the\n> pending concerns that were discussed on the \"hyrax vs.\n> RelationBuildPartitionDesc\" thread earlier this year[1]?\n>\n> [1] https://www.postgresql.org/message-id/flat/3800.1560366716%40sss.pgh.pa.us#092b6b4f6bf75d2f3f90ef6a3b3eab5b\n\nI thought about this a little and I think it *does* address the main\ncomplaint in the above thread.\n\nThe main complaint as I understand it is that receiving repeated\ninvalidations due to partitions being concurrently added while a\nPartitionDirectory is holding a pointer to PartitionDesc causes many\ncopies of PartitionDesc to pile up due to the parent table being\nrebuilt upon processing of each invalidation.\n\nNow because we don't build the PartitionDesc in the\nRelationClearRelation path, that can't happen. It still seems\npossible for the piling up to occur if RelationBuildPartitionDesc is\nrun repeatedly via RelationGetPartitionDesc while partitions are being\nconcurrently added, but I couldn't find anything in the partitioning\ncode that does that.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 24 Dec 2019 16:55:14 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 6:42 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> On Sun, Dec 22, 2019 at 6:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > > I wonder whether we couldn't also lift\n> > > the restriction against whole-row Vars in partition expressions.\n> > > Doesn't seem like there is much difference between such a Var and\n> > > a row(...)::table_rowtype expression.\n> >\n> > I didn't look into that either. I wouldn't propose back-patching that,\n> > but it'd be interesting to try to fix it in HEAD.\n>\n> Agreed.\n\nI gave that a try and ended up with attached that applies on top of\nyour delay-loading-relcache-partition-data-2.patch.\n\nThanks,\nAmit",
"msg_date": "Tue, 24 Dec 2019 18:08:48 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Tue, Dec 24, 2019 at 12:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, I forgot to mention: while I think the patch to forbid pseudotypes\n>> by using CheckAttributeType() can be back-patched, I'm leaning towards\n>> not back-patching the other patch. The situation where we get into\n>> infinite recursion seems not very likely in practice, and it's not\n>> going to cause any crash or data loss, so I think we can just say\n>> \"sorry that's not supported before v13\". The patch as I'm proposing\n>> it seems rather invasive for a back-branch fix.\n\n> It is indeed.\n\n> Just to be sure, by going with \"unsupported before v13\", which one do you mean:\n\n> * documenting it as so\n> * giving an error in such cases, like the patch in the first email on\n> this thread did\n> * doing nothing really\n\nI was thinking \"do nothing in the back branches\". I don't believe we\ncan detect such cases reliably (at least not without complicated logic,\nwhich'd defeat the point), so I don't think giving an error is actually\nfeasible, and I doubt that documenting it would be useful. If we get\nsome field complaints about this, it'd be time enough to reconsider.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 24 Dec 2019 12:42:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 2:42 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> > On Tue, Dec 24, 2019 at 12:00 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> BTW, I forgot to mention: while I think the patch to forbid pseudotypes\n> >> by using CheckAttributeType() can be back-patched, I'm leaning towards\n> >> not back-patching the other patch. The situation where we get into\n> >> infinite recursion seems not very likely in practice, and it's not\n> >> going to cause any crash or data loss, so I think we can just say\n> >> \"sorry that's not supported before v13\". The patch as I'm proposing\n> >> it seems rather invasive for a back-branch fix.\n>\n> > It is indeed.\n>\n> > Just to be sure, by going with \"unsupported before v13\", which one do you mean:\n>\n> > * documenting it as so\n> > * giving an error in such cases, like the patch in the first email on\n> > this thread did\n> > * doing nothing really\n>\n> I was thinking \"do nothing in the back branches\". I don't believe we\n> can detect such cases reliably (at least not without complicated logic,\n> which'd defeat the point), so I don't think giving an error is actually\n> feasible, and I doubt that documenting it would be useful. If we get\n> some field complaints about this, it'd be time enough to reconsider.\n\nSure, thanks for the reply.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Wed, 25 Dec 2019 09:33:28 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> On Tue, Dec 24, 2019 at 10:59 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>> Btw, does the memory leakage fix in this patch address any of the\n>> pending concerns that were discussed on the \"hyrax vs.\n>> RelationBuildPartitionDesc\" thread earlier this year[1]?\n>> [1] https://www.postgresql.org/message-id/flat/3800.1560366716%40sss.pgh.pa.us#092b6b4f6bf75d2f3f90ef6a3b3eab5b\n\n> I thought about this a little and I think it *does* address the main\n> complaint in the above thread.\n\nI experimented with the test shown in [1]. This patch does prevent that\ncase from accumulating copies of the partition descriptor.\n\n(The performance of that test case is still awful, more or less O(N^2)\nin the number of repetitions. But I think what's going on is that it\nrepeatedly creates and deletes the same catalog entries, and we're not\nsmart enough to recognize that the dead row versions are fully dead,\nso lots of time is wasted by unique-index checks. It's not clear\nthat that's of any interest for real-world cases.)\n\nI remain of the opinion that this is a pretty crummy, ad-hoc way to manage\nthe partition descriptor caching. It's less bad than before, but I'm\nstill concerned that holding a relcache entry open for any long period\ncould result in bloat if the cache entry is rebuilt many times meanwhile\n--- and there's no strong reason to think that can't happen. Still,\nmaybe we can wait to solve that until there's some evidence that it\ndoes happen in useful cases.\n\nI also poked at the test case mentioned in the other thread about foreign\nkeys across lots of partitions [2]. 
Again, this patch gets rid of the\nmemory bloat, though the performance is still pretty awful with lots of\npartitions for other reasons.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/10797.1552679128%40sss.pgh.pa.us\n[2] https://www.postgresql.org/message-id/OSAPR01MB374809E8DE169C8BF2B82CBD9F6B0%40OSAPR01MB3748.jpnprd01.prod.outlook.com\n\n\n",
"msg_date": "Wed, 25 Dec 2019 00:31:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "I wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n>> On Tue, Dec 24, 2019 at 10:59 AM Amit Langote <amitlangote09@gmail.com> wrote:\n>>> Btw, does the memory leakage fix in this patch address any of the\n>>> pending concerns that were discussed on the \"hyrax vs.\n>>> RelationBuildPartitionDesc\" thread earlier this year[1]?\n>>> [1] https://www.postgresql.org/message-id/flat/3800.1560366716%40sss.pgh.pa.us#092b6b4f6bf75d2f3f90ef6a3b3eab5b\n\n>> I thought about this a little and I think it *does* address the main\n>> complaint in the above thread.\n\nIt occurred to me to also recheck the original complaint in that thread,\nwhich was poor behavior in CLOBBER_CACHE_ALWAYS builds. I didn't have\nthe patience to run a full CCA test, but I did run update.sql, which\nwe previously established was sufficient to show the problem. There's\nno apparent memory bloat, either with HEAD or with the patch. I also\nsee the runtime (for update.sql on its own) dropping from about \n474 sec in HEAD to 457 sec with the patch. So that indicates that we're\nactually saving a noticeable amount of work, not just postponing it,\nat least under CCA scenarios where relcache entries get flushed a lot.\n\nI also tried to measure update.sql's runtime in a regular debug build\n(not CCA). I get pretty repeatable results of 279ms on HEAD vs 273ms\nwith patch, or about a 2% overall savings. That's at the very limit of\nwhat I'd consider a reproducible difference, but still it seems to be\nreal. So that seems like evidence that forcing the partition data to be\nloaded immediately rather than on-demand is a loser from a performance\nstandpoint as well as the recursion concerns that prompted this patch.\n\nWhich naturally leads one to wonder whether forcing other relcache\nsubstructures (triggers, rules, etc) to be loaded immediately isn't\na loser as well. 
I'm still feeling like we're overdue to redesign how\nall of this works and come up with a more uniform, less fragile/ad-hoc\napproach. But I don't have the time or interest to do that right now.\n\nAnyway, I've run out of reasons not to commit this patch, so I'll\ngo do that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Dec 2019 13:21:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n>> On Sun, Dec 22, 2019 at 6:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>> I wonder whether we couldn't also lift\n>>> the restriction against whole-row Vars in partition expressions.\n>>> Doesn't seem like there is much difference between such a Var and\n>>> a row(...)::table_rowtype expression.\n\n> I gave that a try and ended up with attached that applies on top of\n> your delay-loading-relcache-partition-data-2.patch.\n\nPushed with minor fixes.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 25 Dec 2019 15:45:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 5:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Amit Langote <amitlangote09@gmail.com> writes:\n> >> On Sun, Dec 22, 2019 at 6:13 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >>> I wonder whether we couldn't also lift\n> >>> the restriction against whole-row Vars in partition expressions.\n> >>> Doesn't seem like there is much difference between such a Var and\n> >>> a row(...)::table_rowtype expression.\n>\n> > I gave that a try and ended up with attached that applies on top of\n> > your delay-loading-relcache-partition-data-2.patch.\n>\n> Pushed with minor fixes.\n\nThank you.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Thu, 26 Dec 2019 10:41:08 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 3:21 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I wrote:\n> > Amit Langote <amitlangote09@gmail.com> writes:\n> >> On Tue, Dec 24, 2019 at 10:59 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> >>> Btw, does the memory leakage fix in this patch address any of the\n> >>> pending concerns that were discussed on the \"hyrax vs.\n> >>> RelationBuildPartitionDesc\" thread earlier this year[1]?\n> >>> [1] https://www.postgresql.org/message-id/flat/3800.1560366716%40sss.pgh.pa.us#092b6b4f6bf75d2f3f90ef6a3b3eab5b\n>\n> >> I thought about this a little and I think it *does* address the main\n> >> complaint in the above thread.\n>\n> It occurred to me to also recheck the original complaint in that thread,\n> which was poor behavior in CLOBBER_CACHE_ALWAYS builds.\n\nThanks for taking the time to do that.\n\n> I didn't have\n> the patience to run a full CCA test, but I did run update.sql, which\n> we previously established was sufficient to show the problem. There's\n> no apparent memory bloat, either with HEAD or with the patch. I also\n> see the runtime (for update.sql on its own) dropping from about\n> 474 sec in HEAD to 457 sec with the patch. So that indicates that we're\n> actually saving a noticeable amount of work, not just postponing it,\n> at least under CCA scenarios where relcache entries get flushed a lot.\n\nYeah, as long as nothing in between those flushes needs to look at the\npartition descriptor.\n\n> I also tried to measure update.sql's runtime in a regular debug build\n> (not CCA). I get pretty repeatable results of 279ms on HEAD vs 273ms\n> with patch, or about a 2% overall savings. That's at the very limit of\n> what I'd consider a reproducible difference, but still it seems to be\n> real. 
So that seems like evidence that forcing the partition data to be\n> loaded immediately rather than on-demand is a loser from a performance\n> standpoint as well as the recursion concerns that prompted this patch.\n\nAgreed.\n\n> Which naturally leads one to wonder whether forcing other relcache\n> substructures (triggers, rules, etc) to be loaded immediately isn't\n> a loser as well. I'm still feeling like we're overdue to redesign how\n> all of this works and come up with a more uniform, less fragile/ad-hoc\n> approach. But I don't have the time or interest to do that right now.\n\nI suppose if on-demand loading of partition descriptors can result in\nup to 2% savings, we can perhaps expect slightly more by doing the\nsame for other substructures. Also, the more different substructures\nare accessed similarly the better.\n\n> Anyway, I've run out of reasons not to commit this patch, so I'll\n> go do that.\n\nThank you. I noticed that there are comments suggesting that certain\nRelationData members are to be accessed using their RelationGet*\nfunctions, but partitioning members do not have such comments. How\nabout the attached?\n\nRegards,\nAmit",
"msg_date": "Thu, 26 Dec 2019 14:47:21 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Amit Langote <amitlangote09@gmail.com> writes:\n> Thank you. I noticed that there are comments suggesting that certain\n> RelationData members are to be accessed using their RelationGet*\n> functions, but partitioning members do not have such comments. How\n> about the attached?\n\nGood idea, done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Dec 2019 11:20:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 10:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Now as far as point 1 goes, I think it's not really that awful to use\n> > CheckAttributeType() with a dummy attribute name. The attached\n> > incomplete patch uses \"partition key\" which causes it to emit errors\n> > like\n> > regression=# create table fool (a int, b int) partition by list ((row(a, b)));\n> > ERROR: column \"partition key\" has pseudo-type record\n> > I don't think that that's unacceptable. But if we wanted to improve it,\n> > we could imagine adding another flag, say CHKATYPE_IS_PARTITION_KEY,\n> > that doesn't affect CheckAttributeType's semantics, just the wording of\n> > the error messages it throws.\n>\n> Here's a fleshed-out patch that does it like that.\n>\n> While poking at this, I also started to wonder why CheckAttributeType\n> wasn't recursing into ranges, since those are our other kind of\n> container type. And the answer is that it must, because we allow\n> creation of ranges over composite types:\n>\n> regression=# create table foo (f1 int, f2 int);\n> CREATE TABLE\n> regression=# create type foorange as range (subtype = foo);\n> CREATE TYPE\n> regression=# alter table foo add column r foorange;\n> ALTER TABLE\n>\n> Simple things still work on table foo, but surely this is exactly\n> what CheckAttributeType is supposed to be preventing. With the\n> second attached patch you get\n>\n> regression=# alter table foo add column r foorange;\n> ERROR: composite type foo cannot be made a member of itself\n>\n> The second patch needs to go back all the way, the first one\n> only as far as we have partitions.\n\nWhile working on regression tests for index collation versioning [1],\nI noticed that the 2nd patch apparently broke the ability to create a\ntable using a range over collatable datatype attribute, which we\napparently don't test anywhere. 
Simple example to reproduce:\n\nCREATE TYPE myrange_text AS range (subtype = text);\nCREATE TABLE test_text(\n meh myrange_text\n);\nERROR: 42P16: no collation was derived for column \"meh\" with\ncollatable type text\nHINT: Use the COLLATE clause to set the collation explicitly.\n\nAFAICT, this is only a thinko in CheckAttributeType(), where the range\ncollation should be provided rather than the original tuple desc one,\nas per attached. I also added a create/drop table in an existing\nregression test that was already creating range over collatable type.\n\n[1] https://www.postgresql.org/message-id/CAEepm%3D0uEQCpfq_%2BLYFBdArCe4Ot98t1aR4eYiYTe%3DyavQygiQ%40mail.gmail.com",
"msg_date": "Fri, 31 Jan 2020 11:25:07 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Sun, Dec 22, 2019 at 10:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> While poking at this, I also started to wonder why CheckAttributeType\n>> wasn't recursing into ranges, since those are our other kind of\n>> container type. And the answer is that it must, because we allow\n>> creation of ranges over composite types:\n\n> While working on regression tests for index collation versioning [1],\n> I noticed that the 2nd patch apparently broke the ability to create a\n> table using a range over collatable datatype attribute, which we\n> apparently don't test anywhere.\n\nUgh.\n\n> AFAICT, this is only a thinko in CheckAttributeType(), where the range\n> collation should be provided rather than the original tuple desc one,\n> as per attached. I also added a create/drop table in an existing\n> regression test that was already creating range over collatable type.\n\nLooks good, although I think maybe we'd better test the case a little\nharder than this. Will tweak that and push -- thanks!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 Jan 2020 16:20:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 04:20:36PM -0500, Tom Lane wrote:\n> Julien Rouhaud <rjuju123@gmail.com> writes:\n> > On Sun, Dec 22, 2019 at 10:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> While poking at this, I also started to wonder why CheckAttributeType\n> >> wasn't recursing into ranges, since those are our other kind of\n> >> container type. And the answer is that it must, because we allow\n> >> creation of ranges over composite types:\n> \n> > While working on regression tests for index collation versioning [1],\n> > I noticed that the 2nd patch apparently broke the ability to create a\n> > table using a range over collatable datatype attribute, which we\n> > apparently don't test anywhere.\n> \n> Ugh.\n> \n> > AFAICT, this is only a thinko in CheckAttributeType(), where the range\n> > collation should be provided rather than the original tuple desc one,\n> > as per attached. I also added a create/drop table in an existing\n> > regression test that was already creating range over collatable type.\n> \n> Looks good, although I think maybe we'd better test the case a little\n> harder than this. Will tweak that and push -- thanks!\n\nAh, I wasn't sure that additional tests on a table would be worthwhile enough.\nThanks for tweaking and pushing!\n\n\n",
"msg_date": "Sat, 1 Feb 2020 08:46:25 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "Is there a way out if someone accidentally executes the same test case\nagainst PG12?\n\ntestdb=# create table partitioned (a int, b int)\ntestdb-# partition by list ((row(a, b)::partitioned));\nCREATE TABLE\ntestdb=# DROP TABLE partitioned;\nERROR: cache lookup failed for type 18269\n\n\n>\n> Ah, I wasn't sure that additional tests on a table would be worthwhile\n> enough.\n> Thanks for tweaking and pushing!\n>\n>\n",
"msg_date": "Wed, 9 Sep 2020 19:47:22 +0530",
"msg_from": "Jobin Augustine <jobinau@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
},
{
"msg_contents": "On Wed, Sep 9, 2020 at 4:17 PM Jobin Augustine <jobinau@gmail.com> wrote:\n>\n> Is there a way out if someone accidentally executes the same test case against PG12?\n>\n> testdb=# create table partitioned (a int, b int)\n> testdb-# partition by list ((row(a, b)::partitioned));\n> CREATE TABLE\n> testdb=# DROP TABLE partitioned;\n> ERROR: cache lookup failed for type 18269\n\nAFAICT this is only a side effect of that particular use case if you\ntry to drop it without having a relcache entry. Do any access before\ndropping it and it should be fine, for instance:\n\nrjuju=# create table partitioned (a int, b int)\nrjuju-# partition by list ((row(a, b)::partitioned));\nCREATE TABLE\nrjuju=# DROP TABLE partitioned;\nERROR: cache lookup failed for type 144845\nrjuju=# \\d partitioned\n Partitioned table \"public.partitioned\"\n Column | Type | Collation | Nullable | Default\n--------+---------+-----------+----------+---------\n a | integer | | |\n b | integer | | |\nPartition key: LIST ((ROW(a, b)::partitioned))\nNumber of partitions: 0\n\nrjuju=# DROP TABLE partitioned;\nDROP TABLE\n\n\n",
"msg_date": "Wed, 9 Sep 2020 17:01:39 +0200",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: unsupportable composite type partition keys"
}
] |
[
{
"msg_contents": "While following an old link to\nhttps://www.postgresql.org/docs/10/auth-methods.html\n\nI see a list of links to authentication methods. However:\n\nWhen I hit the current version\nhttps://www.postgresql.org/docs/current/auth-methods.html\n\nThere are absolutely no links...\n\nDave Cramer",
"msg_date": "Tue, 17 Dec 2019 06:42:53 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "client auth docs seem to have devolved"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 12:43 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n> While following an old link to\n> https://www.postgresql.org/docs/10/auth-methods.html\n>\n> I see a list of links to authentication methods. However:\n>\n> When I hit the current version\n> https://www.postgresql.org/docs/current/auth-methods.html\n>\n> There are absolutely no links...\n>\n>\nThat's because the structure of the docs changed. You need to hit \"up\",\nwhich will take you to\nhttps://www.postgresql.org/docs/current/client-authentication.html, which\nnow has the list of links. Note how the different methods used to be\n20.3.x, and are now directly listed as 20.y.\n\nI'm unsure if that was intentional in the upstream docs, but that's what\nmakes the website behave like it does.\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 17 Dec 2019 12:53:15 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "On Tue, 17 Dec 2019 at 06:53, Magnus Hagander <magnus@hagander.net> wrote:\n\n> On Tue, Dec 17, 2019 at 12:43 PM Dave Cramer <davecramer@gmail.com> wrote:\n>\n>> While following an old link to\n>> https://www.postgresql.org/docs/10/auth-methods.html\n>>\n>> I see a list of links to authentication methods. However:\n>>\n>> When I hit the current version\n>> https://www.postgresql.org/docs/current/auth-methods.html\n>>\n>> There are absolutely no links...\n>>\n>>\n> That's because the structure of the docs changed. You need to hit \"up\",\n> which will take you to\n> https://www.postgresql.org/docs/current/client-authentication.html, which\n> now has the list of links. Note how the different methods used to be\n> 20.3.x, and are now directly listed as 20.y.\n>\n> I'm unsure if that was intentional in the upstream docs, but that's what\n> makes the website behave like it does.\n>\n\nFair enough but\n\n20.3. Authentication Methods\nThe following sections describe the authentication methods in more detail.\n\ncertainly is misleading.\n\nThanks,\n\nDave\n\n>",
"msg_date": "Tue, 17 Dec 2019 07:02:05 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 1:02 PM Dave Cramer <davecramer@gmail.com> wrote:\n\n> On Tue, 17 Dec 2019 at 06:53, Magnus Hagander <magnus@hagander.net> wrote:\n>\n>> On Tue, Dec 17, 2019 at 12:43 PM Dave Cramer <davecramer@gmail.com>\n>> wrote:\n>>\n>>> While following an old link to\n>>> https://www.postgresql.org/docs/10/auth-methods.html\n>>>\n>>> I see a list of links to authentication methods. However:\n>>>\n>>> When I hit the current version\n>>> https://www.postgresql.org/docs/current/auth-methods.html\n>>>\n>>> There are absolutely no links...\n>>>\n>>>\n>> That's because the structure of the docs changed. You need to hit \"up\",\n>> which will take you to\n>> https://www.postgresql.org/docs/current/client-authentication.html,\n>> which now has the list of links. Note how the different methods used to be\n>> 20.3.x, and are now directly listed as 20.y.\n>>\n>> I'm unsure if that was intentional in the upstream docs, but that's what\n>> makes the website behave like it does.\n>>\n>\n> Fair enough but\n>\n> 20.3. Authentication Methods\n> The following sections describe the authentication methods in more detail.\n>\n> certainly is misleading.\n>\n>\nThis was changed by Peter in\ncommit 56811e57323faa453947eb82f007e323a952e1a1 along with the\nrestructuring. It used to say \"the following subsections\". So techically I\nthink that change is correct, but that doesn't necessarily make it helpful.\n\nBut based on how it actually renders, since that section doesn't contain\nany actual useful info, we should perhaps just remove section 20.3\ncompletely. Peter, thoughts?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 17 Dec 2019 13:06:57 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> This was changed by Peter in\n> commit 56811e57323faa453947eb82f007e323a952e1a1 along with the\n> restructuring. It used to say \"the following subsections\". So techically I\n> think that change is correct, but that doesn't necessarily make it helpful.\n\n> But based on how it actually renders, since that section doesn't contain\n> any actual useful info, we should perhaps just remove section 20.3\n> completely. Peter, thoughts?\n\nThen, URLs pointing to that page (such as Dave evidently has bookmarked)\nwould break entirely, which doesn't seem like an improvement.\n\nI suggest changing the sect1's contents to be a list of available auth\nmethods, linked to their subsections. That would provide approximately\nthe same quality-of-use as the subsection TOC that used to be there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Dec 2019 11:01:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "> Then, URLs pointing to that page (such as Dave evidently has bookmarked)\n> would break entirely, which doesn't seem like an improvement.\n>\n\nit was linked to in a bug report.\n\nDave Cramer",
"msg_date": "Tue, 17 Dec 2019 11:38:29 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 5:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Magnus Hagander <magnus@hagander.net> writes:\n> > This was changed by Peter in\n> > commit 56811e57323faa453947eb82f007e323a952e1a1 along with the\n> > restructuring. It used to say \"the following subsections\". So techically\n> I\n> > think that change is correct, but that doesn't necessarily make it\n> helpful.\n>\n> > But based on how it actually renders, since that section doesn't contain\n> > any actual useful info, we should perhaps just remove section 20.3\n> > completely. Peter, thoughts?\n>\n> Then, URLs pointing to that page (such as Dave evidently has bookmarked)\n> would break entirely, which doesn't seem like an improvement.\n>\n\nUgh, that's a good point of course. Didn't think of that.\n\n\nI suggest changing the sect1's contents to be a list of available auth\n> methods, linked to their subsections. That would provide approximately\n> the same quality-of-use as the subsection TOC that used to be there.\n>\n\nYeah, that sounds better. Is there some docbook magic that can do that for\nus?\n\n-- \n Magnus Hagander\n Me: https://www.hagander.net/ <http://www.hagander.net/>\n Work: https://www.redpill-linpro.com/ <http://www.redpill-linpro.com/>",
"msg_date": "Tue, 17 Dec 2019 20:58:46 +0100",
"msg_from": "Magnus Hagander <magnus@hagander.net>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "Magnus Hagander <magnus@hagander.net> writes:\n> On Tue, Dec 17, 2019 at 5:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I suggest changing the sect1's contents to be a list of available auth\n>> methods, linked to their subsections. That would provide approximately\n>> the same quality-of-use as the subsection TOC that used to be there.\n\n> Yeah, that sounds better. Is there some docbook magic that can do that for\n> us?\n\nI was just intending to do it the hard way, since even if such magic\nexists, it'd probably only regurgitate the section titles. It seems\nmore useful to allow for some descriptive text along with that.\n(Not a lot, but maybe a full sentence for each one.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Dec 2019 15:42:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "I wrote:\n> Magnus Hagander <magnus@hagander.net> writes:\n>> This was changed by Peter in\n>> commit 56811e57323faa453947eb82f007e323a952e1a1 along with the\n>> restructuring. It used to say \"the following subsections\". So techically I\n>> think that change is correct, but that doesn't necessarily make it helpful.\n>> But based on how it actually renders, since that section doesn't contain\n>> any actual useful info, we should perhaps just remove section 20.3\n>> completely. Peter, thoughts?\n\n> Then, URLs pointing to that page (such as Dave evidently has bookmarked)\n> would break entirely, which doesn't seem like an improvement.\n\nAlso, our docs' own internal links to that section would break --- there\nare built-in assumptions that there's one pointable-to place that explains\nall the auth methods.\n\n> I suggest changing the sect1's contents to be a list of available auth\n> methods, linked to their subsections. That would provide approximately\n> the same quality-of-use as the subsection TOC that used to be there.\n\nConcretely, I propose the attached. Anybody want to editorialize on\nmy short descriptions of the auth methods?\n\n\t\t\tregards, tom lane",
"msg_date": "Wed, 18 Dec 2019 13:07:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "I wrote:\n> Concretely, I propose the attached. Anybody want to editorialize on\n> my short descriptions of the auth methods?\n\nPushed after a bit more fiddling with the wording.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Dec 2019 09:44:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
},
{
"msg_contents": "On 2019-Dec-19, Tom Lane wrote:\n\n> I wrote:\n> > Concretely, I propose the attached. Anybody want to editorialize on\n> > my short descriptions of the auth methods?\n> \n> Pushed after a bit more fiddling with the wording.\n\nLooks good, thanks.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Dec 2019 12:26:53 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: client auth docs seem to have devolved"
}
] |
[
{
"msg_contents": "De: Michael Paquier\nEnviadas: Terça-feira, 17 de Dezembro de 2019 04:45\n>And if you actually group things together so as any individual looking\n>at your patches does not have to figure out which piece applies to\n>what, that's also better.\nI'm still trying to find the best way.\n\n>Anyway, the patch for putenv() is wrong in the way the memory is freed, but this >has been mentioned on another thread.\nOh yes, putenv depending on glibc version, copy and not others, the pointer.\nAt Windows side, the Dr.Memory, reported two memory leaks, with strdup.\nThe v2 is better, because, simplifies the function.\nSubmitted a proposal for setenv support for Windows, in other thread.\n\n>We rely on MAXPGPATH heavily so your patch trying to change\n>the buffer length does not bring much,\nI am a little confused about which path you are talking about.\nIf it about var path at function validate_exec, I believe that there is a mistake.\n\nchar path_exe[MAXPGPATH + sizeof(\".exe\") - 1];\nThe -1, its suspicious and can be removed.\n\nOnce there, I tried to improve the code by simplifying and removing the excessive number of functions.\n\nAt Windows side, the code paths, is less tested.\nThe Dr.Memory, reported 3794 potential unaddressable access at WIN32 block, pipe_read_line function, wich call validade_exec.\n\nBest regards,\nRanier Vilela",
"msg_date": "Tue, 17 Dec 2019 14:06:31 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Windows port minor fixes"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 9:06 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> De: Michael Paquier\n> Enviadas: Terça-feira, 17 de Dezembro de 2019 04:45\n> >And if you actually group things together so as any individual looking\n> >at your patches does not have to figure out which piece applies to\n> >what, that's also better.\n> I'm still trying to find the best way.\n\nA lot of your emails, like this one, seem to be replies to other\nemails, but at least in my mail reader (gmail) something you're doing\nis causing the threading to get broken, so it's very hard to know what\nthis is replying to.\n\nAlso, the way the quoted material is presented in your emails is quite\nodd-looking.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 10:44:02 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows port minor fixes"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Tue, Dec 17, 2019 at 9:06 AM Ranier Vilela <ranier_gyn@hotmail.com> wrote:\n> > De: Michael Paquier\n> > Enviadas: Terça-feira, 17 de Dezembro de 2019 04:45\n> > >And if you actually group things together so as any individual looking\n> > >at your patches does not have to figure out which piece applies to\n> > >what, that's also better.\n> > I'm still trying to find the best way.\n> \n> A lot of your emails, like this one, seem to be replies to other\n> emails, but at least in my mail reader (gmail) something you're doing\n> is causing the threading to get broken, so it's very hard to know what\n> this is replying to.\n\nI'm reasonably confident (though I can't be sure about gmail, but I see\nthe same thing in mutt) the issue here is that there's no References or\nIn-Reply-To headers in the emails. There's some 'Thread-Subject' and\n'Thread-Index' headers but it seems that gmail and mutt can't sort out\nwhat those are or how to use them to do proper threading (if it's even\npossible with those headers.. I'm not really sure how you'd use the\n'Thread-Index' value since it seems to just be a hex code..).\n\n> Also, the way the quoted material is presented in your emails is quite\n> odd-looking.\n\nYeah, agree with this too, though that bothers me somewhat less than the\nthreading issue.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 Dec 2019 10:59:20 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Windows port minor fixes"
},
{
"msg_contents": "De: Robert Haas <robertmhaas@gmail.com>\nEnviado: quarta-feira, 18 de dezembro de 2019 15:44\n\n>A lot of your emails, like this one, seem to be replies to other\n>emails, but at least in my mail reader (gmail) something you're doing\n>is causing the threading to get broken, so it's very hard to know what\n>this is replying to.\n\nI can't tell if it's me doing something wrong or if live outlook can't organize it the right way.\nAnyway, I will switch to gmail,\nranier.vf@gmail.com, to see if it looks better.\n\nregards,\nRanier Vilela\n\n",
"msg_date": "Wed, 18 Dec 2019 17:36:18 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: Windows port minor fixes"
},
{
"msg_contents": "Greetings,\n\n* Ranier Vilela (ranier_gyn@hotmail.com) wrote:\n> De: Robert Haas <robertmhaas@gmail.com>\n> Enviado: quarta-feira, 18 de dezembro de 2019 15:44\n> \n> >A lot of your emails, like this one, seem to be replies to other\n> >emails, but at least in my mail reader (gmail) something you're doing\n> >is causing the threading to get broken, so it's very hard to know what\n> >this is replying to.\n> \n> I can't tell if it's me doing something wrong or if live outlook can't organize it the right way.\n\nAlright, well, oddly enough, *this* email included the other headers and\nappears threaded properly (in mutt, at least).\n\nDid you do something different when replying to this email vs. the other\nemails you've been replying to?\n\nAlso- it's custom here to \"reply-all\" and not to just reply to the list.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 Dec 2019 13:59:21 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Windows port minor fixes"
},
{
"msg_contents": "Em qua., 18 de dez. de 2019 às 15:59, Stephen Frost <sfrost@snowman.net>\nescreveu:\n\n> >Alright, well, oddly enough, *this* email included the other headers and\n> >appears threaded properly (in mutt, at least).\n>\n> >Did you do something different when replying to this email vs. the other\n> >emails you've been replying to?\n>\n> Well, live outlook, is kinda dumb at that point, when responds, he add\nonly person email, not list email.\nSo the answer goes only to the person.\n\nAlso- it's custom here to \"reply-all\" and not to just reply to the list.\n>\nOk, adjusted.\n\nregards,\nRanier Vilela\n\nEm qua., 18 de dez. de 2019 às 15:59, Stephen Frost <sfrost@snowman.net> escreveu:>Alright, well, oddly enough, *this* email included the other headers and\n>appears threaded properly (in mutt, at least).\n\n>Did you do something different when replying to this email vs. the other\n>emails you've been replying to?\nWell, live outlook, is kinda dumb at that point, when responds, he add only person email, not list email.So the answer goes only to the person.\nAlso- it's custom here to \"reply-all\" and not to just reply to the list.Ok, adjusted.regards,Ranier Vilela",
"msg_date": "Wed, 18 Dec 2019 17:19:08 -0300",
"msg_from": "Ranier Vf <ranier.vf@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Windows port minor fixes"
}
] |
[
{
"msg_contents": "De: Michael Paquier\nEnviadas: Terça-feira, 17 de Dezembro de 2019 04:34\n>So, this basically matches with what the MS documents tell us, and my\n>impression: this API is available down to at least MSVC 2008, which is\n>much more than what we support on HEAD where one can use MSVC 2013 and\n>newer versions. Note that for the minimal platforms supported our\n>documentation cite Windows Server 2008 R2 SP1 and Windows 7, implying\n>_WIN32_WINNT >= 0x0600.\nAs concern [1], at src/include/port/win32.h, the comments still references Windows XP and claims about possible MingW break.\n\n>In short, this means two things:\n>- Your patch, as presented, is wrong.\nWell, I try correct him to target MSVC 2013.\n\n>There is no need to make conditional the use of BCryptGenRandom.\nIf legacy Windows Crypto API still remain, and the patch can broken MingW, I believe as necessary conditional use of BCryptGenRandom.\n\nBest regards,\nRanier Vilela\n\n\n\n\n\n\n\nDe: Michael Paquier\nEnviadas: Terça-feira, 17 de Dezembro de 2019 04:34\n>So, this basically matches with what the MS documents tell us, and my\n>impression: this API is available down to at least MSVC 2008, which is\n>much more than what we support on HEAD where one can use MSVC 2013 and\n>newer versions. Note that for the minimal platforms supported our\n>documentation cite Windows Server 2008 R2 SP1 and Windows 7, implying\n>_WIN32_WINNT >= 0x0600.\nAs concern [1], at src/include/port/win32.h, the comments still references Windows XP and claims about possible MingW break.\n\n>In short, this means two things:\n>- Your patch, as presented, is wrong.\nWell, I try correct him to target MSVC 2013.\n\n>There is no need to make conditional the use of BCryptGenRandom.\nIf legacy Windows Crypto API still remain, and the patch can broken MingW, I believe as necessary conditional use of BCryptGenRandom.\n\nBest regards,\nRanier Vilela",
"msg_date": "Tue, 17 Dec 2019 14:20:20 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Windows port add support to BCryptGenRandom"
},
{
"msg_contents": "On Tue, Dec 17, 2019 at 02:20:20PM +0000, Ranier Vilela wrote:\n> As concern [1], at src/include/port/win32.h, the comments still\n> references Windows XP and claims about possible MingW break.\n\nThis looks like a leftover of d9dd406, which has made the code to\nrequire C99. As we don't support compilation with Windows XP and\nrequire Windows 7, we should be able to remove all the dance around\nMIN_WINNT in win32.h, don't you think?\n--\nMichael",
"msg_date": "Wed, 18 Dec 2019 11:19:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port add support to BCryptGenRandom"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 3:20 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Dec 17, 2019 at 02:20:20PM +0000, Ranier Vilela wrote:\n> > As concern [1], at src/include/port/win32.h, the comments still\n> > references Windows XP and claims about possible MingW break.\n>\n> This looks like a leftover of d9dd406, which has made the code to\n> require C99. As we don't support compilation with Windows XP and\n> require Windows 7, we should be able to remove all the dance around\n> MIN_WINNT in win32.h, don't you think?\n>\n>\n+1, there is a reference in [1] about that is possible to build PostgreSQL\nusing the GNU compiler tools for older versions of Windows, that should be\nalso updated.\n\n[1] https://www.postgresql.org/docs/current/install-windows.html\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Wed, Dec 18, 2019 at 3:20 AM Michael Paquier <michael@paquier.xyz> wrote:On Tue, Dec 17, 2019 at 02:20:20PM +0000, Ranier Vilela wrote:\n> As concern [1], at src/include/port/win32.h, the comments still\n> references Windows XP and claims about possible MingW break.\n\nThis looks like a leftover of d9dd406, which has made the code to\nrequire C99. As we don't support compilation with Windows XP and\nrequire Windows 7, we should be able to remove all the dance around\nMIN_WINNT in win32.h, don't you think?+1, there is a reference in [1] about that is possible to build PostgreSQL using the GNU compiler tools for older versions of Windows, that should be also updated.[1] https://www.postgresql.org/docs/current/install-windows.htmlRegards,Juan José Santamaría Flecha",
"msg_date": "Wed, 18 Dec 2019 09:52:07 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port add support to BCryptGenRandom"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 09:52:07AM +0100, Juan José Santamaría Flecha wrote:\n> +1, there is a reference in [1] about that is possible to build PostgreSQL\n> using the GNU compiler tools for older versions of Windows, that should be\n> also updated.\n\nThere is actually a little bit more which could be cleaned up. I am\ngoing to begin a new thread on that after finishing looking.\n\n> [1] https://www.postgresql.org/docs/current/install-windows.html\n\nAre you referring to the part about cygwin? We could remove all the\nparagraph..\n--\nMichael",
"msg_date": "Thu, 19 Dec 2019 10:24:43 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Windows port add support to BCryptGenRandom"
}
] |
[
{
"msg_contents": "Hi,\n\nIt seems that d986d4e87f6 forgot to update a comment upon renaming a variable.\n\nAttached fixes it.\n\nThanks,\nAmit",
"msg_date": "Wed, 18 Dec 2019 15:49:31 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": true,
"msg_subject": "obsolete comment in ExecBRUpdateTriggers()"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 1:49 AM Amit Langote <amitlangote09@gmail.com> wrote:\n> It seems that d986d4e87f6 forgot to update a comment upon renaming a variable.\n>\n> Attached fixes it.\n\nCommitted and back-patched to v12.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Dec 2019 09:32:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: obsolete comment in ExecBRUpdateTriggers()"
}
] |
[
{
"msg_contents": "As a curious omission, DROP RULE does not check allow_system_table_mods. \n Creating and renaming a rule does, and also creating, renaming, and \ndropping a trigger does. The impact of this is probably nil in \npractice, but for consistency we should probably add that. The patch is \npretty simple.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 18 Dec 2019 09:56:42 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "allow_system_table_mods and DROP RULE"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 3:56 AM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n> As a curious omission, DROP RULE does not check allow_system_table_mods.\n> Creating and renaming a rule does, and also creating, renaming, and\n> dropping a trigger does. The impact of this is probably nil in\n> practice, but for consistency we should probably add that. The patch is\n> pretty simple.\n\n+1. LGTM.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 10:53:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: allow_system_table_mods and DROP RULE"
},
{
"msg_contents": "On 2019-12-18 16:53, Robert Haas wrote:\n> On Wed, Dec 18, 2019 at 3:56 AM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> As a curious omission, DROP RULE does not check allow_system_table_mods.\n>> Creating and renaming a rule does, and also creating, renaming, and\n>> dropping a trigger does. The impact of this is probably nil in\n>> practice, but for consistency we should probably add that. The patch is\n>> pretty simple.\n> \n> +1. LGTM.\n\ncommitted\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 20 Dec 2019 08:33:21 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: allow_system_table_mods and DROP RULE"
}
] |
[
{
"msg_contents": "I present a patch to allow READ UNCOMMITTED that is simple, useful and\nefficient. This was previously thought to have no useful definition within\nPostgreSQL, though I have identified a use case for diagnostics and\nrecovery that merits adding a short patch to implement it.\n\nMy docs for this are copied here:\n\n In <productname>PostgreSQL</productname>'s <acronym>MVCC</acronym>\n architecture, readers are not blocked by writers, so in general\n you should have no need for this transaction isolation level.\n\n In general, read uncommitted will return inconsistent results and\n wrong answers. If you look at the changes made by a transaction\n while it continues to make changes then you may get partial results\n from queries, or you may miss index entries that haven't yet been\n written. However, if you are reading transactions that are paused\n at the end of their execution for whatever reason then you can\n see a consistent result.\n\n The main use case for this transaction isolation level is for\n investigating or recovering data. Examples of this would be when\n inspecting the writes made by a locked or hanging transaction, when\n you are running queries on a standby node that is currently paused,\n such as when a standby node has halted at a recovery target with\n <literal>recovery_target_inclusive = false</literal> or when you\n need to inspect changes made by an in-doubt prepared transaction to\n decide whether to commit or abort that transaction.\n\n In <productname>PostgreSQL</productname> read uncommitted mode gives\n a consistent snapshot of the currently running transactions at the\n time the snapshot was taken. Transactions starting after that time\n will not be visible, even though they are not yet committed.\n\nThis is a new and surprising thought, so please review the attached patch.\n\nPlease notice that almost all of the infrastructure already exists to\nsupport this, so this patch does very little. 
It avoids additional locking\non main execution paths and as far as I am aware, does not break anything.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 18 Dec 2019 10:01:34 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Read Uncommitted"
},
{
"msg_contents": "On 18.12.2019 13:01, Simon Riggs wrote:\n> I present a patch to allow READ UNCOMMITTED that is simple, useful and \n> efficient. This was previously thought to have no useful definition \n> within PostgreSQL, though I have identified a use case for diagnostics \n> and recovery that merits adding a short patch to implement it.\n>\n> My docs for this are copied here:\n>\n> In <productname>PostgreSQL</productname>'s \n> <acronym>MVCC</acronym>./configure \n> --prefix=/home/knizhnik/postgresql/dist --enable-debug \n> --enable-cassert CFLAGS=-O0\n>\n> architecture, readers are not blocked by writers, so in general\n> you should have no need for this transaction isolation level.\n>\n> In general, read uncommitted will return inconsistent results and\n> wrong answers. If you look at the changes made by a transaction\n> while it continues to make changes then you may get partial results\n> from queries, or you may miss index entries that haven't yet been\n> written. However, if you are reading transactions that are paused\n> at the end of their execution for whatever reason then you can\n> see a consistent result.\n>\n> The main use case for this transaction isolation level is for\n> investigating or recovering data. Examples of this would be when\n> inspecting the writes made by a locked or hanging transaction, when\n> you are running queries on a standby node that is currently paused,\n> such as when a standby node has halted at a recovery target with\n> <literal>recovery_target_inclusive = false</literal> or when you\n> need to inspect changes made by an in-doubt prepared transaction to\n> decide whether to commit or abort that transaction.\n>\n> In <productname>PostgreSQL</productname> read uncommitted mode gives\n> a consistent snapshot of the currently running transactions at the\n> time the snapshot was taken. 
Transactions starting after that time\n> will not be visible, even though they are not yet committed.\n>\n> This is a new and surprising thought, so please review the attached patch.\n>\n> Please notice that almost all of the infrastructure already exists to \n> support this, so this patch does very little. It avoids additional \n> locking on main execution paths and as far as I am aware, does not \n> break anything.\n>\n> -- \n> Simon Riggshttp://www.2ndQuadrant.com/ <http://www.2ndquadrant.com/>\n> PostgreSQL Solutions for the Enterprise\n\nAs far as I understand with \"read uncommitted\" policy we can see two \nversions of the same tuple if it was updated by two transactions both of \nwhich were started before us and committed during table traversal by \ntransaction with \"read uncommitted\" policy. Certainly \"read uncommitted\" \nmeans that we are ready to get inconsistent results, but is it really \nacceptable to multiple versions of the same tuple?\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company\n\n\n\n\n\n\n\n\n\nOn 18.12.2019 13:01, Simon Riggs wrote:\n\n\n\n\n\nI present a patch to allow READ UNCOMMITTED\n that is simple, useful and efficient. This was previously\n thought to have no useful definition within PostgreSQL,\n though I have identified a use case for diagnostics and\n recovery that merits adding a short patch to implement it.\n\n\nMy docs for this are copied here:\n \n\n In\n <productname>PostgreSQL</productname>'s\n <acronym>MVCC</acronym>./configure\n --prefix=/home/knizhnik/postgresql/dist --enable-debug\n --enable-cassert CFLAGS=-O0\n\n architecture, readers are not blocked by writers, so\n in general\n you should have no need for this transaction isolation\n level.\n\n In general, read uncommitted will return inconsistent\n results and\n wrong answers. 
If you look at the changes made by a\n transaction\n while it continues to make changes then you may get\n partial results\n from queries, or you may miss index entries that\n haven't yet been \n written. However, if you are reading transactions that\n are paused\n at the end of their execution for whatever reason\n then you can\n see a consistent result.\n\n The main use case for this transaction isolation level\n is for\n investigating or recovering data. Examples of this\n would be when \n inspecting the writes made by a locked or hanging\n transaction, when \n you are running queries on a standby node that is\n currently paused,\n such as when a standby node has halted at a recovery\n target with \n <literal>recovery_target_inclusive =\n false</literal> or when you\n need to inspect changes made by an in-doubt prepared\n transaction to\n decide whether to commit or abort that transaction.\n\n\n In\n <productname>PostgreSQL</productname> read\n uncommitted mode gives\n a consistent snapshot of the currently running\n transactions at the\n time the snapshot was taken. Transactions starting\n after that time \n will not be visible, even though they are not yet\n committed.\n\n This is a new and surprising thought, so please review the\n attached patch.\n\n\nPlease notice that almost all of the infrastructure\n already exists to support this, so this patch does very\n little. It avoids additional locking on main execution\n paths and as far as I am aware, does not break anything.\n\n -- \n\n\n\n\n\nSimon Riggs \n http://www.2ndQuadrant.com/\n PostgreSQL Solutions for the Enterprise\n\n\n\n\n\n\n\n\n\n\n\n\n As far as I understand with \"read uncommitted\" policy we can see two\n versions of the same tuple if it was updated by two transactions\n both of which were started before us and committed during table\n traversal by transaction with \"read uncommitted\" policy. 
Certainly \n \"read uncommitted\" means that we are ready to get inconsistent\n results, but is it really acceptable to multiple versions of the\n same tuple?\n\n\n\n-- \nKonstantin Knizhnik\nPostgres Professional: http://www.postgrespro.com\nThe Russian Postgres Company",
"msg_date": "Wed, 18 Dec 2019 15:11:37 +0300",
"msg_from": "Konstantin Knizhnik <k.knizhnik@postgrespro.ru>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n> I present a patch to allow READ UNCOMMITTED that is simple, useful and\n> efficient.\n\nWon't this break entirely the moment you try to read a tuple containing\ntoasted-out-of-line values? There's no guarantee that the toast-table\nentries haven't been vacuumed away.\n\nI suspect it can also be broken by cases involving, eg, dropped columns.\nThere are a lot of assumptions in the system that no one will ever try\nto read dead tuples.\n\nThe fact that you can construct a use-case in which it's good for\nsomething doesn't make it safe in general :-(\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 09:06:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, 18 Dec 2019 at 12:11, Konstantin Knizhnik <k.knizhnik@postgrespro.ru>\nwrote:\n\nAs far as I understand with \"read uncommitted\" policy we can see two\n> versions of the same tuple if it was updated by two transactions both of\n> which were started before us and committed during table traversal by\n> transaction with \"read uncommitted\" policy. Certainly \"read uncommitted\"\n> means that we are ready to get inconsistent results, but is it really\n> acceptable to multiple versions of the same tuple?\n>\n\n \"In general, read uncommitted will return inconsistent results and\n wrong answers. If you look at the changes made by a transaction\n while it continues to make changes then you may get partial results\n from queries, or you may miss index entries that haven't yet been\n written. However, if you are reading transactions that are paused\n at the end of their execution for whatever reason then you can\n see a consistent result.\"\n\nI think I already covered your concerns in my suggested docs for this\nfeature.\n\nI'm not suggesting it for general use.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Wed, 18 Dec 2019 at 12:11, Konstantin Knizhnik <k.knizhnik@postgrespro.ru> wrote:\nAs far as I understand with \"read uncommitted\" policy we can see two\n versions of the same tuple if it was updated by two transactions\n both of which were started before us and committed during table\n traversal by transaction with \"read uncommitted\" policy. Certainly \n \"read uncommitted\" means that we are ready to get inconsistent\n results, but is it really acceptable to multiple versions of the\n same tuple? \"In general, read uncommitted will return inconsistent results and wrong answers. 
If you look at the changes made by a transaction while it continues to make changes then you may get partial results from queries, or you may miss index entries that haven't yet been written. However, if you are reading transactions that are paused at the end of their execution for whatever reason then you can see a consistent result.\"I think I already covered your concerns in my suggested docs for this feature.I'm not suggesting it for general use.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 18 Dec 2019 15:14:20 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, 18 Dec 2019 at 14:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Simon Riggs <simon@2ndquadrant.com> writes:\n> > I present a patch to allow READ UNCOMMITTED that is simple, useful and\n> > efficient.\n>\n> Won't this break entirely the moment you try to read a tuple containing\n> toasted-out-of-line values? There's no guarantee that the toast-table\n> entries haven't been vacuumed away.\n>\n> I suspect it can also be broken by cases involving, eg, dropped columns.\n> There are a lot of assumptions in the system that no one will ever try\n> to read dead tuples.\n>\n\nThis was my first concern when I thought about it, but I realised that by\ntaking a snapshot and then calculating xmin normally, this problem would go\naway.\n\nSo this won't happen with the proposed patch.\n\n\n> The fact that you can construct a use-case in which it's good for\n> something doesn't make it safe in general :-(\n>\n\nI agree that safety is a concern, but I don't see any safety issues in the\npatch as proposed.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Wed, 18 Dec 2019 at 14:06, Tom Lane <tgl@sss.pgh.pa.us> wrote:Simon Riggs <simon@2ndquadrant.com> writes:\n> I present a patch to allow READ UNCOMMITTED that is simple, useful and\n> efficient.\n\nWon't this break entirely the moment you try to read a tuple containing\ntoasted-out-of-line values? There's no guarantee that the toast-table\nentries haven't been vacuumed away.\n\nI suspect it can also be broken by cases involving, eg, dropped columns.\nThere are a lot of assumptions in the system that no one will ever try\nto read dead tuples.This was my first concern when I thought about it, but I realised that by taking a snapshot and then calculating xmin normally, this problem would go away.So this won't happen with the proposed patch. 
\nThe fact that you can construct a use-case in which it's good for\nsomething doesn't make it safe in general :-(I agree that safety is a concern, but I don't see any safety issues in the patch as proposed.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 18 Dec 2019 15:17:48 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> This was my first concern when I thought about it, but I realised that by taking a snapshot and then calculating xmin normally, this problem would go away.\n\nWhy? As soon as a transaction aborts, the TOAST rows can be vacuumed\naway, but the READ UNCOMMITTED transaction might've already seen the\nmain tuple. This is not even a particularly tight race, necessarily,\nsince for example the table might be scanned, feeding tuples into a\ntuplesort, and then the detoating might happen further up in the query\ntree after the sort has completed.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 12:35:28 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com>\n> wrote:\n> > This was my first concern when I thought about it, but I realised that\n> by taking a snapshot and then calculating xmin normally, this problem would\n> go away.\n>\n> Why? As soon as a transaction aborts...\n>\n\nSo this is the same discussion as elsewhere about potentially aborted\ntransactions...\nAFAIK, the worst that happens in that case is that the reading transaction\nwill end with an ERROR, similar to a serializable error.\n\nAnd that won't happen in the use cases I've explicitly described this as\nbeing useful for, which is where the writing transactions have completed\nexecuting.\n\nI'm not claiming any useful behavior outside of those specific use cases;\nthis is not some magic discovery that goes faster - the user has absolutely\nno reason to run this whatsoever unless they want to see uncommitted data.\nThere is a very explicit warning advising against using it for anything\nelse.\n\nJust consider this part of the recovery toolkit.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com> wrote:On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> This was my first concern when I thought about it, but I realised that by taking a snapshot and then calculating xmin normally, this problem would go away.\n\nWhy? 
As soon as a transaction aborts...So this is the same discussion as elsewhere about potentially aborted transactions...AFAIK, the worst that happens in that case is that the reading transaction will end with an ERROR, similar to a serializable error.And that won't happen in the use cases I've explicitly described this as being useful for, which is where the writing transactions have completed executing.I'm not claiming any useful behavior outside of those specific use cases; this is not some magic discovery that goes faster - the user has absolutely no reason to run this whatsoever unless they want to see uncommitted data. There is a very explicit warning advising against using it for anything else.Just consider this part of the recovery toolkit.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 18 Dec 2019 18:06:21 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 1:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> So this is the same discussion as elsewhere about potentially aborted transactions...\n\nYep.\n\n> AFAIK, the worst that happens in that case is that the reading transaction will end with an ERROR, similar to a serializable error.\n\nI'm not convinced of that. There's a big difference between a\nserializable error, which is an error that is expected to be\nuser-facing and was designed with that in mind, and just failing a\nbunch of random sanity checks all over the backend. If those sanity\nchecks happen to be less than comprehensive, which I suspect is\nlikely, there will probably be scenarios where you can crash a backend\nand cause a system-wide restart. And you can probably also return just\nplain wrong answers to queries in some scenarios.\n\n> Just consider this part of the recovery toolkit.\n\nI agree that it would be useful to have a recovery toolkit for reading\nuncommitted data, but I think a lot more thought needs to be given to\nhow such a thing should be designed. If you just add something called\nREAD UNCOMMITTED, people are going to expect it to have *way* saner\nsemantics than this will. They'll use it routinely, not just as a\nlast-ditch mechanism to recover otherwise-lost data. And I'm\nreasonably confident that will not work out well.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 13:26:31 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n> So this is the same discussion as elsewhere about potentially aborted\n> transactions...\n> AFAIK, the worst that happens in that case is that the reading transaction\n> will end with an ERROR, similar to a serializable error.\n\nNo, the worst case is transactions trying to read invalid data, resulting\nin either crashes or exploitable security breaches (in the usual vein of\nwhat can go wrong if you can get the C code to follow an invalid pointer).\n\nThis seems possible, for example, if you can get a transaction to read\nuncommitted data that was written according to some other rowtype than\nwhat the reading transaction thinks the table rowtype is. Casting my eyes\nthrough AlterTableGetLockLevel(), it looks like all the easy ways to break\nit like that are safe (for now) because they require AccessExclusiveLock.\nBut I am quite afraid that we'd introduce security holes by future\nreductions of required lock levels --- or else that this feature would be\nthe sole reason why we couldn't reduce the lock level for some DDL\noperation. I'm doubtful that its use-case is worth that.\n\n> And that won't happen in the use cases I've explicitly described this as\n> being useful for, which is where the writing transactions have completed\n> executing.\n\nMy concerns, at least, are not about whether this has any interesting\nuse-cases. They're about whether the feature can be abused to cause\nsecurity problems. I think the odds are fair that that'd be true\neven today, and higher that it'd become true sometime in the future.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 13:37:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "\n\nOn 12/18/19 10:06 AM, Simon Riggs wrote:\n> On Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com \n> <mailto:robertmhaas@gmail.com>> wrote:\n> \n> On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com \n> <mailto:simon@2ndquadrant.com>> wrote:\n>> This was my first concern when I thought about it, but I realised\n> that by taking a snapshot and then calculating xmin normally, this \n> problem would go away.\n> \n> Why? As soon as a transaction aborts...\n> \n> \n> So this is the same discussion as elsewhere about potentially aborted\n> transactions... AFAIK, the worst that happens in that case is that\n> the reading transaction will end with an ERROR, similar to a\n> serializable error.\n> \n> And that won't happen in the use cases I've explicitly described this\n> as being useful for, which is where the writing transactions have\n> completed executing.\n> \n> I'm not claiming any useful behavior outside of those specific use \n> cases; this is not some magic discovery that goes faster - the user\n> has absolutely no reason to run this whatsoever unless they want to\n> see uncommitted data. There is a very explicit warning advising\n> against using it for anything else.\n> \n> Just consider this part of the recovery toolkit.\n\nIn that case, don't call it \"read uncommitted\". Call it some other\nthing entirely. Users coming from other databases may request\n\"read uncommitted\" isolation expecting something that works.\nCurrently, that gets promoted to \"read committed\" and works. After\nyour change, that simply breaks and gives them an error.\n\nI was about to write something about security and stability problems,\nbut Robert and Tom did elsewhere, so I'll just echo their concerns.\n\nLooking at the regression tests, I'm surprised read uncommitted gets\nso little test coverage. 
There's a test in src/test/isolation but\nnothing at all in src/test/regression covering this isolation level.\n\nThe one in src/test/isolation doesn't look very comprehensive. I'd\nat least expect a test that verifies you don't get a syntax error\nwhen you request READ UNCOMMITTED isolation from SQL.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Wed, 18 Dec 2019 10:46:55 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, 18 Dec 2019 at 18:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Simon Riggs <simon@2ndquadrant.com> writes:\n> > So this is the same discussion as elsewhere about potentially aborted\n> > transactions...\n> > AFAIK, the worst that happens in that case is that the reading\n> transaction\n> > will end with an ERROR, similar to a serializable error.\n>\n> No, the worst case is transactions trying to read invalid data, resulting\n> in either crashes or exploitable security breaches (in the usual vein of\n> what can go wrong if you can get the C code to follow an invalid pointer).\n>\n\nYes, but we're not following any pointers as a result of this. The output\nis just rows.\n\n\n> This seems possible, for example, if you can get a transaction to read\n> uncommitted data that was written according to some other rowtype than\n> what the reading transaction thinks the table rowtype is. Casting my eyes\n> through AlterTableGetLockLevel(), it looks like all the easy ways to break\n> it like that are safe (for now) because they require AccessExclusiveLock.\n> But I am quite afraid that we'd introduce security holes by future\n> reductions of required lock levels --- or else that this feature would be\n> the sole reason why we couldn't reduce the lock level for some DDL\n> operation. I'm doubtful that its use-case is worth that.\n>\n\nI think we can limit it to Read Only transactions without any limitation on\nthe proposed use cases.\n\nBut I'll think some more about that, just in case.\n\n\n> > And that won't happen in the use cases I've explicitly described this as\n> > being useful for, which is where the writing transactions have completed\n> > executing.\n>\n> My concerns, at least, are not about whether this has any interesting\n> use-cases. They're about whether the feature can be abused to cause\n> security problems. 
I think the odds are fair\n> that that'd be true even today, and higher that it'd become true\n> sometime in the future.\n>\n\nI share your concerns. We have no need or reason to make a quick decision\non this patch.\n\nI submit this patch only as a useful tool for recovering data.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 18 Dec 2019 18:50:22 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On 18/12/2019 20:46, Mark Dilger wrote:\n> On 12/18/19 10:06 AM, Simon Riggs wrote:\n>> Just consider this part of the recovery toolkit.\n> \n> In that case, don't call it \"read uncommitted\". Call it some other\n> thing entirely. Users coming from other databases may request\n> \"read uncommitted\" isolation expecting something that works.\n> Currently, that gets promoted to \"read committed\" and works. After\n> your change, that simply breaks and gives them an error.\n\nI agree that if we have a user-exposed READ UNCOMMITTED isolation level, \nit shouldn't be just a recovery tool. For a recovery tool, I think a \nset-returning function as part of contrib/pageinspect, for example, \nwould be more appropriate. Then it could also try to be more defensive \nagainst corrupt pages, and be superuser-only.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 18 Dec 2019 21:29:22 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Wed, Dec 18, 2019 at 1:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > Just consider this part of the recovery toolkit.\n> \n> I agree that it would be useful to have a recovery toolkit for reading\n> uncommitted data, but I think a lot more thought needs to be given to\n> how such a thing should be designed. If you just add something called\n> READ UNCOMMITTED, people are going to expect it to have *way* saner\n> semantics than this will. They'll use it routinely, not just as a\n> last-ditch mechanism to recover otherwise-lost data. And I'm\n> reasonably confident that will not work out well.\n\n+1.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 Dec 2019 14:34:33 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Many will want to use it to do aggregation, e.g. a much more efficient COUNT(*), because they want performance and don't care very much about transaction consistency. E.g. they want to compute SUM(sales) by salesperson, region for the past 5 years, and don't care very much if some concurrent transaction aborted in the middle of computing this result.\n\nOn 12/18/19, 2:35 PM, \"Stephen Frost\" <sfrost@snowman.net> wrote:\n\n    Greetings,\n\n    * Robert Haas (robertmhaas@gmail.com) wrote:\n    > On Wed, Dec 18, 2019 at 1:06 PM Simon Riggs <simon@2ndquadrant.com> wrote:\n    > > Just consider this part of the recovery toolkit.\n    >\n    > I agree that it would be useful to have a recovery toolkit for reading\n    > uncommitted data, but I think a lot more thought needs to be given to\n    > how such a thing should be designed. If you just add something called\n    > READ UNCOMMITTED, people are going to expect it to have *way* saner\n    > semantics than this will. They'll use it routinely, not just as a\n    > last-ditch mechanism to recover otherwise-lost data. And I'm\n    > reasonably confident that will not work out well.\n\n    +1.\n\n    Thanks,\n\n    Stephen\n",
"msg_date": "Wed, 18 Dec 2019 20:21:58 +0000",
"msg_from": "\"Finnerty, Jim\" <jfinnert@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "\"Finnerty, Jim\" <jfinnert@amazon.com> writes:\n> Many will want to use it to do aggregation, e.g. a much more efficient COUNT(*), because they want performance and don't care very much about transaction consistency. E.g. they want to compute SUM(sales) by salesperson, region for the past 5 years, and don't care very much if some concurrent transaction aborted in the middle of computing this result.\n\nIt's fairly questionable whether there's any real advantage to be gained\nby READ UNCOMMITTED in that sort of scenario --- almost all the tuples\nyou'd be looking at would be hinted as committed-good, ordinarily, so that\nbypassing the relevant checks isn't going to save much. But I take your\npoint that people would *think* that READ UNCOMMITTED could be used that\nway, if they come from some other DBMS. So this reinforces Mark's point\nthat if we provide something like this, it shouldn't be called READ\nUNCOMMITTED. That should be reserved for something that has reasonably\nconsistent, standards-compliant behavior.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 15:35:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 2:29 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> I agree that if we have a user-exposed READ UNCOMMITTED isolation level,\n> it shouldn't be just a recovery tool. For a recovery tool, I think a\n> set-returning function as part of contrib/pageinspect, for example,\n> would be more appropriate. Then it could also try to be more defensive\n> against corrupt pages, and be superuser-only.\n\n+1.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 18 Dec 2019 16:24:50 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Over in [1], I became concerned that, although postgres supports\nRead Uncommitted transaction isolation (by way of Read Committed\nmode), there was very little test coverage for it:\n\nOn 12/18/19 10:46 AM, Mark Dilger wrote:\n> Looking at the regression tests, I'm surprised read uncommitted gets\n> so little test coverage. There's a test in src/test/isolation but\n> nothing at all in src/test/regression covering this isolation level.\n> \n> The one in src/test/isolation doesn't look very comprehensive. I'd\n> at least expect a test that verifies you don't get a syntax error\n> when you request READ UNCOMMITTED isolation from SQL.\n\nThe attached patch set adds a modicum of test coverage for this.\nDo others feel these tests are worth the small run time overhead\nthey add?\n\n-- \nMark Dilger\n\n[1] \nhttps://www.postgresql.org/message-id/CANP8%2Bj%2BmgWfcX9cTPsk7t%2B1kQCxgyGqHTR5R7suht7mCm_x_hA%40mail.gmail.com",
"msg_date": "Wed, 18 Dec 2019 14:05:11 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Read Uncommitted regression test coverage"
},
{
"msg_contents": "Mark Dilger <hornschnorter@gmail.com> writes:\n>> The one in src/test/isolation doesn't look very comprehensive. I'd\n>> at least expect a test that verifies you don't get a syntax error\n>> when you request READ UNCOMMITTED isolation from SQL.\n\n> The attached patch set adds a modicum of test coverage for this.\n> Do others feel these tests are worth the small run time overhead\n> they add?\n\nNo. As you pointed out yourself, READ UNCOMMITTED is the same as READ\nCOMMITTED, so there's hardly any point in testing its semantic behavior.\nOne or two tests that check that it is accepted by the grammar seem\nlike plenty (and even there, what's there to break? If bison starts\nfailing us to that extent, we've got bigger problems.)\n\nObviously, if we made it behave differently from READ COMMITTED, then\nit would need testing ... but the nature and extent of such testing\nwould depend a lot on what we did to it, so I'm not eager to try to\npredict the need in advance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 17:17:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted regression test coverage"
},
{
"msg_contents": "On 12/18/19 2:29 PM, Heikki Linnakangas wrote:\n> On 18/12/2019 20:46, Mark Dilger wrote:\n>> On 12/18/19 10:06 AM, Simon Riggs wrote:\n>>> Just consider this part of the recovery toolkit.\n>>\n>> In that case, don't call it \"read uncommitted\". Call it some other\n>> thing entirely. Users coming from other databases may request\n>> \"read uncommitted\" isolation expecting something that works.\n>> Currently, that gets promoted to \"read committed\" and works. After\n>> your change, that simply breaks and gives them an error.\n> \n> I agree that if we have a user-exposed READ UNCOMMITTED isolation level, \n> it shouldn't be just a recovery tool. For a recovery tool, I think a \n> set-returning function as part of contrib/pageinspect, for example, \n> would be more appropriate. Then it could also try to be more defensive \n> against corrupt pages, and be superuser-only.\n\n+1.\n\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 18 Dec 2019 18:46:46 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, 18 Dec 2019 at 20:36, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> \"Finnerty, Jim\" <jfinnert@amazon.com> writes:\n> > Many will want to use it to do aggregation, e.g. a much more efficient\n> COUNT(*), because they want performance and don't care very much about\n> transaction consistency. E.g. they want to compute SUM(sales) by\n> salesperson, region for the past 5 years, and don't care very much if some\n> concurrent transaction aborted in the middle of computing this result.\n>\n> It's fairly questionable whether there's any real advantage to be gained\n> by READ UNCOMMITTED in that sort of scenario --- almost all the tuples\n> you'd be looking at would be hinted as committed-good, ordinarily, so that\n> bypassing the relevant checks isn't going to save much.\n\n\nAgreed; this was not intended to give any kind of backdoor benefit and I\ndon't see any, just tears.\n\n\n> But I take your\n> point that people would *think* that READ UNCOMMITTED could be used that\n> way, if they come from some other DBMS. So this reinforces Mark's point\n> that if we provide something like this, it shouldn't be called READ\n> UNCOMMITTED.\n\n\nSeems like general agreement on that point from others on this thread.\n\n\n> That should be reserved for something that has reasonably\n> consistent, standards-compliant behavior.\n>\n\nSince we're discussing it, exactly what standard are we talking about here?\nI'm not saying I care about that, just to complete the discussion.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 19 Dec 2019 00:07:16 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Wed, 18 Dec 2019 at 19:29, Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n\n> On 18/12/2019 20:46, Mark Dilger wrote:\n> > On 12/18/19 10:06 AM, Simon Riggs wrote:\n> >> Just consider this part of the recovery toolkit.\n> >\n> > In that case, don't call it \"read uncommitted\". Call it some other\n> > thing entirely. Users coming from other databases may request\n> > \"read uncommitted\" isolation expecting something that works.\n> > Currently, that gets promoted to \"read committed\" and works. After\n> > your change, that simply breaks and gives them an error.\n>\n> I agree that if we have a user-exposed READ UNCOMMITTED isolation level,\n> it shouldn't be just a recovery tool. For a recovery tool, I think a\n> set-returning function as part of contrib/pageinspect, for example,\n> would be more appropriate. Then it could also try to be more defensive\n> against corrupt pages, and be superuser-only.\n>\n\nSo the consensus is for a more-specifically named facility.\n\nI was aiming for something that would allow general SELECTs to run with a\nsnapshot that can see uncommitted xacts, so making it a SRF wouldn't really\nallow that.\n\nNot really sure where to go with the UI for this.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 19 Dec 2019 00:13:55 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-18 18:06:21 +0000, Simon Riggs wrote:\n> On Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> > On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com>\n> > wrote:\n> > > This was my first concern when I thought about it, but I realised that\n> > by taking a snapshot and then calculating xmin normally, this problem would\n> > go away.\n> >\n> > Why? As soon as a transaction aborts...\n> >\n> \n> So this is the same discussion as elsewhere about potentially aborted\n> transactions...\n> AFAIK, the worst that happens in that case is that the reading transaction\n> will end with an ERROR, similar to a serializable error.\n\nI don't think that's all that can happen. E.g. the toast identifier\nmight have been reused, and there might be a different datum in\nthere. Which then means we'll end up calling operators on data that's\npotentially for a different datatype - it's trivial to cause crashes\nthat way. And, albeit harder, possible to do more than that.\n\nI think there's plenty other problems too, not just toast. There's\ne.g. some parts of the system that access catalogs using a normal\nsnapshot - which might not actually be consistent, because we have\nvarious places where we have to increment the command counter multiple\ntimes as part of a larger catalog manipulation.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 18 Dec 2019 18:22:25 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On 12/18/19 2:17 PM, Tom Lane wrote:\n> Mark Dilger <hornschnorter@gmail.com> writes:\n>>> The one in src/test/isolation doesn't look very comprehensive. I'd\n>>> at least expect a test that verifies you don't get a syntax error\n>>> when you request READ UNCOMMITTED isolation from SQL.\n> \n>> The attached patch set adds a modicum of test coverage for this.\n>> Do others feel these tests are worth the small run time overhead\n>> they add?\n> \n> No. As you pointed out yourself, READ UNCOMMITTED is the same as READ\n> COMMITTED, so there's hardly any point in testing its semantic behavior.\n> One or two tests that check that it is accepted by the grammar seem\n> like plenty (and even there, what's there to break? If bison starts\n> failing us to that extent, we've got bigger problems.)\n\nThe lack of testing in the current system is so complete that if you\ngo into gram.y and remove READ UNCOMMITTED from the grammar, not one\ntest in check-world fails.\n\nSomebody doing something like what Simon is suggesting might refactor\nthe code in a way that unintentionally breaks this isolation level, and\nwe'd not know about it until users started complaining.\n\nThe attached patch is pretty cheap. Perhaps you'll like it better?\n\n-- \nMark Dilger",
"msg_date": "Wed, 18 Dec 2019 19:19:20 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted regression test coverage"
},
{
"msg_contents": "On Thu, 19 Dec 2019 at 02:22, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> On 2019-12-18 18:06:21 +0000, Simon Riggs wrote:\n> > On Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com> wrote:\n> >\n> > > On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com>\n> > > wrote:\n> > > > This was my first concern when I thought about it, but I realised\n> that\n> > > by taking a snapshot and then calculating xmin normally, this problem\n> would\n> > > go away.\n> > >\n> > > Why? As soon as a transaction aborts...\n> > >\n> >\n> > So this is the same discussion as elsewhere about potentially aborted\n> > transactions...\n> > AFAIK, the worst that happens in that case is that the reading\n> transaction\n> > will end with an ERROR, similar to a serializable error.\n>\n> I don't think that's all that can happen. E.g. the toast identifier\n> might have been reused, and there might be a different datum in\n> there. Which then means we'll end up calling operators on data that's\n> potentially for a different datatype - it's trivial to cause crashes\n> that way. And, albeit harder, possible to do more than that.\n>\n\nOn the patch as proposed this wouldn't be possible because a toast row\ncan't be vacuumed and then reused while holding back xmin, at least as I\nunderstand it.\n\n\n> I think there's plenty other problems too, not just toast. There's\n> e.g. some parts of the system that access catalogs using a normal\n> snapshot - which might not actually be consistent, because we have\n> various places where we have to increment the command counter multiple\n> times as part of a larger catalog manipulation.\n>\n\nIt seems possible that catalog access would be the thing that makes this\ndifficult. 
Cache invalidations wouldn't yet have been fired, so that would\nlead to rather weird errors, and as you say, potential issues from data\ntype changes which would not be acceptable in a facility available to\nnon-superusers.\n\nWe could limit that to xacts that don't do DDL, which is a very small % of\nxacts, but then those xacts are more likely to be ones you'd want to\nrecover or investigate.\n\nSo I now withdraw this patch as submitted and won't be resubmitting.\n\nThanks everyone for your input.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 19 Dec 2019 09:50:44 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On 2019-12-18 16:14, Simon Riggs wrote:\n> On Wed, 18 Dec 2019 at 12:11, Konstantin Knizhnik \n> <k.knizhnik@postgrespro.ru <mailto:k.knizhnik@postgrespro.ru>> wrote:\n> \n> As far as I understand with \"read uncommitted\" policy we can see two\n> versions of the same tuple if it was updated by two transactions\n> both of which were started before us and committed during table\n> traversal by transaction with \"read uncommitted\" policy. Certainly\n> \"read uncommitted\" means that we are ready to get inconsistent\n> results, but is it really acceptable to multiple versions of the\n> same tuple?\n> \n> \n> \"In general, read uncommitted will return inconsistent results and\n> wrong answers. If you look at the changes made by a transaction\n> while it continues to make changes then you may get partial results\n> from queries, or you may miss index entries that haven't yet been\n> written. However, if you are reading transactions that are paused\n> at the end of their execution for whatever reason then you can\n> see a consistent result.\"\n> \n> I think I already covered your concerns in my suggested docs for this \n> feature.\n\nIndependent of the technical concerns, I don't think the SQL standard \nallows the READ UNCOMMITTED level to behave in a way that violates the \nlogical requirements of the defined database schema. So if we wanted to \nadd this, we should probably name it something else.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Dec 2019 13:38:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Thursday, 19 Dec 2019 at 00:13 +0000, Simon Riggs wrote:\n> So the consensus is for a more-specifically named facility.\n>\n> I was aiming for something that would allow general SELECTs to run\n> with a\n> snapshot that can see uncommitted xacts, so making it a SRF wouldn't\n> really\n> allow that.\n\npg_dirtyread() [1] has been around for a while, implementing an SRF\nfor debugging access to data that has disappeared under normal\ncircumstances. It's nice not to have to worry about anything when\nfaced with that kind of problem, but for such use cases I think an\nSRF serves quite well.\n\n[1] https://github.com/df7cb/pg_dirtyread\n\n\tBernd\n",
"msg_date": "Thu, 19 Dec 2019 13:42:19 +0100",
"msg_from": "Bernd Helmle <mailings@oopsware.de>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Thu, 19 Dec 2019 at 12:42, Bernd Helmle <mailings@oopsware.de> wrote:\n\n> Am Donnerstag, den 19.12.2019, 00:13 +0000 schrieb Simon Riggs:\n> > So the consensus is for a more-specifically named facility.\n> >\n> > I was aiming for something that would allow general SELECTs to run\n> > with a\n> > snapshot that can see uncommitted xacts, so making it a SRF wouldn't\n> > really\n> > allow that.\n>\n> There's pg_dirtyread() [1] around for some while, implementing a SRF\n> for debugging usage on in normal circumstances disappeared data. Its\n> nice to not have worrying about anything when you faced with such kind\n> of problems, but for such use cases i think a SRF serves quite well.\n>\n> [1] https://github.com/df7cb/pg_dirtyread\n\n\nAs an example of an SRF for debugging purposes, sure, but then we already\nhad the example of pageinspect, which I wrote BTW, so wasn't unfamiliar\nwith the thought.\n\nNote that pg_dirtyread has got nothing to do with the use cases I\ndescribed.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 19 Dec 2019 12:56:43 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "\n\nOn 12/19/19 1:50 AM, Simon Riggs wrote:\n> It seems possible that catalog access would be the thing that makes this \n> difficult. Cache invalidations wouldn't yet have been fired, so that \n> would lead to rather weird errors, and as you say, potential issues from \n> data type changes which would not be acceptable in a facility available \n> to non-superusers.\n> \n> We could limit that to xacts that don't do DDL, which is a very small % \n> of xacts, but then those xacts are more likely to be ones you'd want to \n> recover or investigate.\n> \n> So I now withdraw this patch as submitted and won't be resubmitting.\n\nOh, I'm sorry to hear that. I thought this feature sounded useful, and\nwe were working out what its limitations were. What I gathered from\nthe discussion so far was:\n\n - It should be called something other than READ UNCOMMITTED\n - It should only be available to superusers, at least for the initial\n implementation\n - Extra care might be required to lock catalogs to avoid unsafe\n operations that could lead to backends crashing or security\n vulnerabilities\n - Toast tables need to be handled with care\n\nFor the record, in case we revisit this idea in the future, which were\nthe obstacles that killed this patch?\n\nTom's point on that third item:\n\n > But I am quite afraid that we'd introduce security holes by future\n > reductions of required lock levels --- or else that this feature would be\n > the sole reason why we couldn't reduce the lock level for some DDL\n > operation. I'm doubtful that its use-case is worth that.\"\n\nAnybody running SET TRANSACTION ISOLATION LEVEL RECOVERY might\nhave to get ExclusiveLock on most of the catalog tables. But that\nwould only be done if somebody starts a transaction using this\nisolation level, which is not normal, so it shouldn't be a problem\nunder normal situations. 
If the lock level reduction that Tom\nmentions was implemented, there would be no problem, as long as the\nlock level you reduce to still blocks against ExclusiveLock, which\nsurely it must. If the transaction running in RECOVERY level isolation\ncan't get the locks, then it blocks and doesn't help the administrator\nwho is trying to use this feature, but that is no worse than the\npresent situation where the feature is entirely absent. When no\ncatalog changes are in flight, the administrator gets the locks and\ncan continue inspecting the in-process changes of other transactions.\n\nRobert's point on that fourth item:\n\n > As soon as a transaction aborts, the TOAST rows can be vacuumed\n > away, but the READ UNCOMMITTED transaction might've already seen the\n > main tuple. This is not even a particularly tight race, necessarily,\n > since for example the table might be scanned, feeding tuples into a\n > tuplesort, and then the detoating might happen further up in the query\n > tree after the sort has completed.\n\nI don't know if this could be fixed without adding overhead to toast\nprocessing for non-RECOVERY transactions, but perhaps it doesn't need\nto be fixed at all. Perhaps you just accept that in RECOVERY mode you\ncan't see toast data, and instead get NULLs for all such rows. Now,\nthat could have security implications if somebody defines a policy\nwhere NULL in a toast column means \"allow\" rather than \"deny\" for\nsome issue, but if this RECOVERY mode is limited to superusers, that\nisn't such a big objection.\n\nThere may be a number of other gotchas still to be resolved, but\nabandoning the patch at this stage strikes me as premature.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 19 Dec 2019 07:08:06 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "\n\nOn 12/19/19 7:08 AM, Mark Dilger wrote:\n> and instead get NULLs for all such rows\n\nTo clarify, I mean the toasted column is null for rows\nwhere the value was stored in the toast table rather\nthan stored inline. I'd prefer some special value\nthat means \"this datum unavailable\" so that it could\nbe distinguished from an actual null, but no such value\ncomes to mind.\n\n-- \nMark Dilger\n\n\n",
"msg_date": "Thu, 19 Dec 2019 07:22:52 -0800",
"msg_from": "Mark Dilger <hornschnorter@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-19 09:50:44 +0000, Simon Riggs wrote:\n> On Thu, 19 Dec 2019 at 02:22, Andres Freund <andres@anarazel.de> wrote:\n> \n> > Hi,\n> >\n> > On 2019-12-18 18:06:21 +0000, Simon Riggs wrote:\n> > > On Wed, 18 Dec 2019 at 17:35, Robert Haas <robertmhaas@gmail.com> wrote:\n> > >\n> > > > On Wed, Dec 18, 2019 at 10:18 AM Simon Riggs <simon@2ndquadrant.com>\n> > > > wrote:\n> > > > > This was my first concern when I thought about it, but I realised\n> > that\n> > > > by taking a snapshot and then calculating xmin normally, this problem\n> > would\n> > > > go away.\n> > > >\n> > > > Why? As soon as a transaction aborts...\n> > > >\n> > >\n> > > So this is the same discussion as elsewhere about potentially aborted\n> > > transactions...\n> > > AFAIK, the worst that happens in that case is that the reading\n> > transaction\n> > > will end with an ERROR, similar to a serializable error.\n> >\n> > I don't think that's all that can happen. E.g. the toast identifier\n> > might have been reused, and there might be a different datum in\n> > there. Which then means we'll end up calling operators on data that's\n> > potentially for a different datatype - it's trivial to cause crashes\n> > that way. And, albeit harder, possible to do more than that.\n> >\n> \n> On the patch as proposed this wouldn't be possible because a toast row\n> can't be vacuumed and then reused while holding back xmin, at least as I\n> understand it.\n\nVacuum and pruning remove rows where xmin didn't commit, without testing\nagainst the horizon. Which makes sense, because normally there's so far\nno snapshot including them. Unless we were to weaken that logic -\nwhich'd have bloat impacts - a snapshot wouldn't guarantee anything\nabout the non-removal of such tuples, unless I am missing something.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Dec 2019 07:36:35 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-19 07:08:06 -0800, Mark Dilger wrote:\n> > As soon as a transaction aborts, the TOAST rows can be vacuumed\n> > away, but the READ UNCOMMITTED transaction might've already seen the\n> > main tuple. This is not even a particularly tight race, necessarily,\n> > since for example the table might be scanned, feeding tuples into a\n> > tuplesort, and then the detoating might happen further up in the query\n> > tree after the sort has completed.\n> \n> I don't know if this could be fixed without adding overhead to toast\n> processing for non-RECOVERY transactions, but perhaps it doesn't need\n> to be fixed at all. Perhaps you just accept that in RECOVERY mode you\n> can't see toast data, and instead get NULLs for all such rows. Now,\n> that could have security implications if somebody defines a policy\n> where NULL in a toast column means \"allow\" rather than \"deny\" for\n> some issue, but if this RECOVERY mode is limited to superusers, that\n> isn't such a big objection.\n\nI mean, that's just a small part of the issue. You can get *different*\ndata back for toast columns - incompatible with the datatype, leading to\ncrashes. You can get *different* data back for the same query, running\nit twice, because data that was just inserted can get pruned away if the\ninserting transaction aborted.\n\n\n> There may be a number of other gotchas still to be resolved, but\n> abandoning the patch at this stage strikes me as premature.\n\nI think iff we'd want this feature, you'd have to actually use a much\nlarger hammer, and change the snapshot logic to include information\nabout which aborted transactions are visible, and whose rows cannot be\nremoved. And then vacuuming/hot pruning need to be changed to respect\nthat. And note that'll affect *all* sessions, not just the one wanting\nto use READ UNCOMMITTED.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 19 Dec 2019 08:02:48 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Thu, 19 Dec 2019 at 23:36, Andres Freund <andres@anarazel.de> wrote:\n\n> Hi,\n>\n> > On the patch as proposed this wouldn't be possible because a toast row\n> > can't be vacuumed and then reused while holding back xmin, at least as I\n> > understand it.\n>\n> Vacuum and pruning remove rows where xmin didn't commit, without testing\n> against the horizon. Which makes sense, because normally there's so far\n> no snapshot including them. Unless we were to weaken that logic -\n> which'd have bloat impacts - a snapshot wouldn't guarantee anything\n> about the non-removal of such tuples, unless I am missing something.\n>\n\nMy understanding from reading the above is that Simon didn't propose to\nmake aborted txns visible, only in-progress uncommitted txns.\n\nVacuum only removes such rows if the xact is (a) explicitly aborted in clog\nor (b) provably not still running. It checks RecentXmin and the running\nxids arrays to handle xacts that went away after a crash. Per\nTransactionIdIsInProgress() as used by HeapTupleSatisfiesVacuum(). I see\nthat it's not *quite* as simple as using the RecentXmin threhold, as xacts\nnewer than RecentXmin may also be seen as not in-progress if they're absent\nin the shmem xact arrays and there's no overflow.\n\nBut that's OK so long as the only xacts that some sort of read-uncommitted\nfeature allows to become visible are ones that\nsatisfy TransactionIdIsInProgress(); they cannot have been vacuumed.\n\nThe bigger issue, and the one that IMO makes it impractical to spell this\nas \"READ UNCOMMITTED\", is that an uncommitted txn might've changed the\ncatalogs so there is no one snapshot that is valid for interpreting all\npossible tuples. 
It'd have to see only txns that have no catalog changes,\nor be narrowed to see just *one specific txn* that had catalog changes.\nThat makes it iffy to spell it as \"READ UNCOMMITTED\" since we can't\nactually make all uncommitted txns visible at once.\n\nI think the suggestions for a SRF based approach might make sense. Perhaps\na few functions:\n\n* a function to list all in-progress xids\n\n* a function to list in-progress xids with/without catalog changes (if\npossible, unsure if we know that until the commit record is written)\n\n* a function (or procedure?) to execute a read-only SELECT or WITH query\nwithin a faked-up snapshot for some in-progress xact and return a SETOF\nRECORD with results. If the txn has catalog changes this would probably\nhave to coalesce each result field with non-builtin data types to text, or\ndo some fancy validation to compare the definition in the txn snapshot with\nthe latest normal snapshot used by the calling session. Ideally this\nfunction could take an array of xids and would query with them all visible\nunless there were catalog changes in any of them, then it'd ERROR.\n\n* a function to generate the SQL text for an alias clause that maps the\nRECORD returned by the above function, so you can semi-conveniently query\nit. (I don't think we have a way for a C callable function to supply a\ndynamic resultset type at plan-time to avoid the need for this, do we?\nPerhaps if we use a procedure not a function?)\n\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Thu, 19 Dec 2019 at 23:36, Andres Freund <andres@anarazel.de> wrote:Hi,\n> On the patch as proposed this wouldn't be possible because a toast row\n> can't be vacuumed and then reused while holding back xmin, at least as I\n> understand it.\n\nVacuum and pruning remove rows where xmin didn't commit, without testing\nagainst the horizon. 
Which makes sense, because normally there's so far\nno snapshot including them. Unless we were to weaken that logic -\nwhich'd have bloat impacts - a snapshot wouldn't guarantee anything\nabout the non-removal of such tuples, unless I am missing something.My understanding from reading the above is that Simon didn't propose to make aborted txns visible, only in-progress uncommitted txns. Vacuum only removes such rows if the xact is (a) explicitly aborted in clog or (b) provably not still running. It checks RecentXmin and the running xids arrays to handle xacts that went away after a crash. Per TransactionIdIsInProgress() as used by HeapTupleSatisfiesVacuum(). I see that it's not *quite* as simple as using the RecentXmin threhold, as xacts newer than RecentXmin may also be seen as not in-progress if they're absent in the shmem xact arrays and there's no overflow.But that's OK so long as the only xacts that some sort of read-uncommitted feature allows to become visible are ones that satisfy TransactionIdIsInProgress(); they cannot have been vacuumed.The bigger issue, and the one that IMO makes it impractical to spell this as \"READ UNCOMMITTED\", is that an uncommitted txn might've changed the catalogs so there is no one snapshot that is valid for interpreting all possible tuples. It'd have to see only txns that have no catalog changes, or be narrowed to see just *one specific txn* that had catalog changes. That makes it iffy to spell it as \"READ UNCOMMITTED\" since we can't actually make all uncommitted txns visible at once.I think the suggestions for a SRF based approach might make sense. Perhaps a few functions:* a function to list all in-progress xids* a function to list in-progress xids with/without catalog changes (if possible, unsure if we know that until the commit record is written)* a function (or procedure?) to execute a read-only SELECT or WITH query within a faked-up snapshot for some in-progress xact and return a SETOF RECORD with results. 
If the txn has catalog changes this would probably have to coalesce each result field with non-builtin data types to text, or do some fancy validation to compare the definition in the txn snapshot with the latest normal snapshot used by the calling session. Ideally this function could take an array of xids and would query with them all visible unless there were catalog changes in any of them, then it'd ERROR.* a function to generate the SQL text for an alias clause that maps the RECORD returned by the above function, so you can semi-conveniently query it. (I don't think we have a way for a C callable function to supply a dynamic resultset type at plan-time to avoid the need for this, do we? Perhaps if we use a procedure not a function?)-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 20 Dec 2019 10:11:50 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "Craig Ringer <craig@2ndquadrant.com> writes:\n> My understanding from reading the above is that Simon didn't propose to\n> make aborted txns visible, only in-progress uncommitted txns.\n\nYeah, but an \"in-progress uncommitted txn\" can become an \"aborted txn\"\nat any moment, and there's no interlock that would prevent its generated\ndata from being removed out from under you at any moment after that.\nSo there's a race condition, and as Robert observed, the window isn't\nnecessarily small.\n\n> The bigger issue, and the one that IMO makes it impractical to spell this\n> as \"READ UNCOMMITTED\", is that an uncommitted txn might've changed the\n> catalogs so there is no one snapshot that is valid for interpreting all\n> possible tuples.\n\nIn theory that should be okay, because any such tuples would be in\ntables you can't access due to the in-progress txn having taken\nAccessExclusiveLock on tables it changes the rowtype of. But we keep\nlooking for ways to reduce the locking requirements for ALTER TABLE\nactions --- and as I said upthread, it feels like this feature might\nsomeday be the sole reason why we can't safely reduce lock strength\nfor some form of ALTER. I can't make a concrete argument for that\nthough ... maybe it's not really any worse than the situation just\nafter failure of any DDL-performing txn. But that would bear closer\nstudy than I think it's had.\n\n> I think the suggestions for a SRF based approach might make sense.\n\nYeah, I'd rather do it that way than via a transaction isolation\nlevel. The isolation-level approach seems like people will expect\nstronger semantics than we could actually provide.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 19 Dec 2019 23:18:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
},
{
"msg_contents": "On Fri, 20 Dec 2019 at 12:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Craig Ringer <craig@2ndquadrant.com> writes:\n> > My understanding from reading the above is that Simon didn't propose to\n> > make aborted txns visible, only in-progress uncommitted txns.\n>\n> Yeah, but an \"in-progress uncommitted txn\" can become an \"aborted txn\"\n> at any moment, and there's no interlock that would prevent its generated\n> data from being removed out from under you at any moment after that.\n> So there's a race condition, and as Robert observed, the window isn't\n> necessarily small.\n>\n\nAbsolutely. Many of the same issues arise in the work on logical decoding\nof in-progress xacts for optimistic logical decoding.\n\nUnless such an interlock is added (with all the problems that entails,\nagain per the in-progress logical decoding thread) that limits this to:\n\n* running in recovery when stopped at a recovery target; or\n* peeking at the contents of individual prepared xacts that we can prevent\nsomeone else concurrently aborting/committing\n\nThat'd actually cover the only things I'd personally actually want a\nfeature like this for anyway.\n\nIn any case, Simon's yanked the proposal. I'd like to have some way to peek\nat the contents of individual uncommited xacts, but it's clearly not going\nto be anything called READ UNCOMMITTED that applies to all uncommitted\nxacts at once...\n\n\n> > I think the suggestions for a SRF based approach might make sense.\n>\n> Yeah, I'd rather do it that way than via a transaction isolation\n> level. The isolation-level approach seems like people will expect\n> stronger semantics than we could actually provide.\n>\n\nYeah. Definitely not an isolation level.\n\nI'll be interesting to see if this sparks any more narrowly scoped and\ntargeted ideas, anyway. 
Thanks for taking the time to think about it.\n\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\nOn Fri, 20 Dec 2019 at 12:18, Tom Lane <tgl@sss.pgh.pa.us> wrote:Craig Ringer <craig@2ndquadrant.com> writes:\n> My understanding from reading the above is that Simon didn't propose to\n> make aborted txns visible, only in-progress uncommitted txns.\n\nYeah, but an \"in-progress uncommitted txn\" can become an \"aborted txn\"\nat any moment, and there's no interlock that would prevent its generated\ndata from being removed out from under you at any moment after that.\nSo there's a race condition, and as Robert observed, the window isn't\nnecessarily small.Absolutely. Many of the same issues arise in the work on logical decoding of in-progress xacts for optimistic logical decoding.Unless such an interlock is added (with all the problems that entails, again per the in-progress logical decoding thread) that limits this to:* running in recovery when stopped at a recovery target; or* peeking at the contents of individual prepared xacts that we can prevent someone else concurrently aborting/committingThat'd actually cover the only things I'd personally actually want a feature like this for anyway.In any case, Simon's yanked the proposal. I'd like to have some way to peek at the contents of individual uncommited xacts, but it's clearly not going to be anything called READ UNCOMMITTED that applies to all uncommitted xacts at once... \n> I think the suggestions for a SRF based approach might make sense.\n\nYeah, I'd rather do it that way than via a transaction isolation\nlevel. The isolation-level approach seems like people will expect\nstronger semantics than we could actually provide.Yeah. Definitely not an isolation level.I'll be interesting to see if this sparks any more narrowly scoped and targeted ideas, anyway. 
Thanks for taking the time to think about it.-- Craig Ringer http://www.2ndQuadrant.com/ 2ndQuadrant - PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 20 Dec 2019 12:33:02 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Read Uncommitted"
}
] |
[
{
"msg_contents": "TransactionIdIsCurrentTransactionId() doesn't seem to be well optimized for\nthe case when an xid has not yet been assigned, so for read only\ntransactions.\n\nA patch for this is attached.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Wed, 18 Dec 2019 10:07:07 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 5:07 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> TransactionIdIsCurrentTransactionId() doesn't seem to be well optimized for the case when an xid has not yet been assigned, so for read only transactions.\n>\n> A patch for this is attached.\n\nIt might be an idea to first call TransactionIdIsNormal(xid), then\nGetTopTransactionIdIfAny(), then TransactionIdIsNormal(topxid), so\nthat we don't bother with GetTopTransactionIdIfAny() when\n!TransactionIdIsNormal(xid).\n\nBut it's also not clear to me whether this is actually a win. You're\ndong an extra TransactionIdIsNormal() test to sometimes avoid a\nGetTopTransactionIdIfAny() test. TransactionIdIsNormal() is pretty\ncheap, but GetTopTransactionIdIfAny() isn't all that expensive either,\nand adding more branches costs something.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 19 Dec 2019 14:27:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 02:27:01PM -0500, Robert Haas wrote:\n>On Wed, Dec 18, 2019 at 5:07 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n>> TransactionIdIsCurrentTransactionId() doesn't seem to be well optimized for the case when an xid has not yet been assigned, so for read only transactions.\n>>\n>> A patch for this is attached.\n>\n>It might be an idea to first call TransactionIdIsNormal(xid), then\n>GetTopTransactionIdIfAny(), then TransactionIdIsNormal(topxid), so\n>that we don't bother with GetTopTransactionIdIfAny() when\n>!TransactionIdIsNormal(xid).\n>\n>But it's also not clear to me whether this is actually a win. You're\n>dong an extra TransactionIdIsNormal() test to sometimes avoid a\n>GetTopTransactionIdIfAny() test. TransactionIdIsNormal() is pretty\n>cheap, but GetTopTransactionIdIfAny() isn't all that expensive either,\n>and adding more branches costs something.\n>\n\nI think \"optimization\" patches should generally come with some sort of\nquantification of the gains - e.g. a benchmark with somewhat realistic\nworkload (but even synthetic is better than nothing). Or at least some\nexplanation *why* it's going to be an improvement.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 20 Dec 2019 01:26:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "On Thu, 19 Dec 2019 at 19:27, Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Wed, Dec 18, 2019 at 5:07 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> > TransactionIdIsCurrentTransactionId() doesn't seem to be well optimized\n> for the case when an xid has not yet been assigned, so for read only\n> transactions.\n> >\n> > A patch for this is attached.\n>\n> It might be an idea to first call TransactionIdIsNormal(xid), then\n> GetTopTransactionIdIfAny(), then TransactionIdIsNormal(topxid), so\n> that we don't bother with GetTopTransactionIdIfAny() when\n> !TransactionIdIsNormal(xid).\n>\n> But it's also not clear to me whether this is actually a win. You're\n> dong an extra TransactionIdIsNormal() test to sometimes avoid a\n> GetTopTransactionIdIfAny() test.\n\n\nThat's not the point of the patch.\n\nIf the TopTransactionId is not assigned, we can leave the whole function\nmore quickly, not just avoid a test.\n\nRead only transactions should have a fast path thru this function since\nthey frequently read more data than write transactions.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Thu, 19 Dec 2019 at 19:27, Robert Haas <robertmhaas@gmail.com> wrote:On Wed, Dec 18, 2019 at 5:07 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> TransactionIdIsCurrentTransactionId() doesn't seem to be well optimized for the case when an xid has not yet been assigned, so for read only transactions.\n>\n> A patch for this is attached.\n\nIt might be an idea to first call TransactionIdIsNormal(xid), then\nGetTopTransactionIdIfAny(), then TransactionIdIsNormal(topxid), so\nthat we don't bother with GetTopTransactionIdIfAny() when\n!TransactionIdIsNormal(xid).\n\nBut it's also not clear to me whether this is actually a win. You're\ndong an extra TransactionIdIsNormal() test to sometimes avoid a\nGetTopTransactionIdIfAny() test. 
That's not the point of the patch.If the TopTransactionId is not assigned, we can leave the whole function more quickly, not just avoid a test.Read only transactions should have a fast path thru this function since they frequently read more data than write transactions.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 20 Dec 2019 05:46:43 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 12:46 AM Simon Riggs <simon@2ndquadrant.com> wrote:\n> If the TopTransactionId is not assigned, we can leave the whole function more quickly, not just avoid a test.\n\nThose things are not really any different from each other. You leave\nthe function when you've done all the necessary tests....\n\n> Read only transactions should have a fast path thru this function since they frequently read more data than write transactions.\n\nWith regard to this point, I second Tomas's comments.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 20 Dec 2019 08:07:00 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "On Fri, 20 Dec 2019 at 13:07, Robert Haas <robertmhaas@gmail.com> wrote:\n\n>\n> > Read only transactions should have a fast path thru this function since\n> they frequently read more data than write transactions.\n>\n> With regard to this point, I second Tomas's comments.\n>\n\nI also agree with Tomas' comments. I am explaining *why* it will be an\nimprovement, expanding on my earlier notes.\n\nThis function is called extremely frequently in query processing and is\nfairly efficient. I'm pointing out cases where making it even quicker makes\nsense.\n\nThe TopXid is assigned in very few calls. Write transactions perform\nsearching before the xid is assigned, so UPDATE and DELETE transactions\nwill call this with TopXid unassigned in many small transactions, e.g.\nsimple pgbench. In almost all read-only cases and especially on standby\nnodes there will be no TopXid assigned, so I estimate that 90-99% of calls\nwill be made with TopXid invalid. In this case it makes a great deal of\nsense to have a fastpath out of this function, by testing\nTransactionIdIsNormal(topxid).\n\nI also now notice that on entry the xid provided is hardly ever\nInvalidTransactionId. Once, it might have been called repeatedly with\nFrozenTransactionId, but that is no longer the case since we no longer\nreset the xid on freezing. 
So the test for TransactionIdIsNormal(xid)\nappears to need rethinking since it is now mostly redundant.\n\nSo if adding a test is considered heavy, I would swap the test for\nTransactionIdIsNormal(xid) and replace with a test for\nTransactionIdIsNormal(topxid).\n\nSuch a frequently used function is worth discussing, just as we previously\noptimised TransactionIdIsInProgress() and MVCC visibility routines, where\nwe discussed what the most common routes through the functions were before\ndeciding how to optimize them.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise\n\nOn Fri, 20 Dec 2019 at 13:07, Robert Haas <robertmhaas@gmail.com> wrote:\n> Read only transactions should have a fast path thru this function since they frequently read more data than write transactions.\n\nWith regard to this point, I second Tomas's comments.I also agree with Tomas' comments. I am explaining *why* it will be an improvement, expanding on my earlier notes.This function is called extremely frequently in query processing and is fairly efficient. I'm pointing out cases where making it even quicker makes sense.The TopXid is assigned in very few calls. Write transactions perform searching before the xid is assigned, so UPDATE and DELETE transactions will call this with TopXid unassigned in many small transactions, e.g. simple pgbench. In almost all read-only cases and especially on standby nodes there will be no TopXid assigned, so I estimate that 90-99% of calls will be made with TopXid invalid. In this case it makes a great deal of sense to have a fastpath out of this function, by testing TransactionIdIsNormal(topxid).I also now notice that on entry the xid provided is hardly ever InvalidTransactionId. Once, it might have been called repeatedly with FrozenTransactionId, but that is no longer the case since we no longer reset the xid on freezing. 
So the test for TransactionIdIsNormal(xid) appears to need rethinking since it is now mostly redundant.So if adding a test is considered heavy, I would swap the test for TransactionIdIsNormal(xid) and replace with a test for TransactionIdIsNormal(topxid).Such a frequently used function is worth discussing, just as we previously optimised TransactionIdIsInProgress() and MVCC visibility routines, where we discussed what the most common routes through the functions were before deciding how to optimize them.-- Simon Riggs http://www.2ndQuadrant.com/PostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 20 Dec 2019 17:35:36 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n> On Fri, 20 Dec 2019 at 13:07, Robert Haas <robertmhaas@gmail.com> wrote:\n>> With regard to this point, I second Tomas's comments.\n\n> I also agree with Tomas' comments. I am explaining *why* it will be an\n> improvement, expanding on my earlier notes.\n> This function is called extremely frequently in query processing and is\n> fairly efficient. I'm pointing out cases where making it even quicker makes\n> sense.\n\nI think the point is that you haven't demonstrated that this particular\npatch makes it quicker.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 12:46:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "On Fri, 20 Dec 2019 at 17:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Simon Riggs <simon@2ndquadrant.com> writes:\n> > On Fri, 20 Dec 2019 at 13:07, Robert Haas <robertmhaas@gmail.com> wrote:\n> >> With regard to this point, I second Tomas's comments.\n>\n> > I also agree with Tomas' comments. I am explaining *why* it will be an\n> > improvement, expanding on my earlier notes.\n> > This function is called extremely frequently in query processing and is\n> > fairly efficient. I'm pointing out cases where making it even quicker\n> makes\n> > sense.\n>\n> I think the point is that you haven't demonstrated that this particular\n> patch makes it quicker.\n>\n\nNot yet, but I was trying to agree what an appropriate test would be before\nrunning it.\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Fri, 20 Dec 2019 17:57:55 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 05:57:55PM +0000, Simon Riggs wrote:\n>On Fri, 20 Dec 2019 at 17:46, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Simon Riggs <simon@2ndquadrant.com> writes:\n>> > On Fri, 20 Dec 2019 at 13:07, Robert Haas <robertmhaas@gmail.com> wrote:\n>> >> With regard to this point, I second Tomas's comments.\n>>\n>> > I also agree with Tomas' comments. I am explaining *why* it will be an\n>> > improvement, expanding on my earlier notes.\n>> > This function is called extremely frequently in query processing and is\n>> > fairly efficient. I'm pointing out cases where making it even quicker\n>> makes\n>> > sense.\n>>\n>> I think the point is that you haven't demonstrated that this particular\n>> patch makes it quicker.\n>>\n>\n>Not yet, but I was trying to agree what an appropriate test would be before\n>running it.\n>\n\nIsn't that a bit backwards? I mean, we usually identify opportunities\nfor optimizations by observing poor performance with a workload, which\nmeans that workload can serve as a test. Of course, it's possible to\nnotice an opprtunity by eye-balling the code, but you've already said\nthis is supposed to improve read-only transactions.\n\nI've actually tried to measure if/how this affects performance using a\nsimple read-only pgbench\n\n pgbench -S -M prepared -T 60 test\n\nI did this with a long-running transaction to prevent hint bits from\ngetting set. But I've not measured any difference in performane. So\neither this improves a different workload, or maybe I'm doing something\nsilly that makes the patch irrelevant.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 21 Dec 2019 16:13:43 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimizing TransactionIdIsCurrentTransactionId()"
}
] |
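The thread above argues that TransactionIdIsCurrentTransactionId() should test the (usually unassigned) top transaction id before anything else. A small Python model can illustrate the proposed check ordering; this is a conceptual sketch, not PostgreSQL's C implementation — the constants mirror transam.h, and the subxact search is reduced to a simple equality check for illustration.

```python
# Conceptual model of the proposed check ordering in
# TransactionIdIsCurrentTransactionId(). Not real PostgreSQL code.
INVALID_XID = 0
FROZEN_XID = 2
FIRST_NORMAL_XID = 3


def is_normal(xid):
    # TransactionIdIsNormal(): true for ordinary, assigned xids only.
    return xid >= FIRST_NORMAL_XID


def is_current_xid(xid, top_xid, checks_done):
    # Proposed fast path: in read-only transactions (and on standbys)
    # no top xid is assigned, so this first comparison short-circuits
    # the vast majority of calls.
    checks_done.append("topxid")
    if not is_normal(top_xid):
        return False
    # Only now inspect the argument; with the fast path above, the old
    # leading TransactionIdIsNormal(xid) test is mostly redundant.
    checks_done.append("xid")
    if not is_normal(xid):
        return False
    return xid == top_xid  # stand-in for the current/subxact search


# Read-only caller: one comparison, immediate False.
ro_checks = []
assert is_current_xid(100, INVALID_XID, ro_checks) is False
assert ro_checks == ["topxid"]

# Writer asking about its own xid: full path, True.
rw_checks = []
assert is_current_xid(100, 100, rw_checks) is True
```

The point of contention in the thread remains, of course, whether the saved comparisons are measurable under a real workload such as `pgbench -S`.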
[
{
"msg_contents": "From: Michael Paquier\nSent: Wednesday, 18 December 2019 01:18\n>Committed that part.\nThanks.\n\n>Let's take one example. The changes in pg_dump/ like\n>/progname/prog_name/ have just been done in haste, without actual\n>thoughts about how the problem ought to be fixed. And in this case,\n>something which could be more adapted is to remove the argument from\n>usage() because progname is a global variable, initialized from the\n>beginning in pg_restore.\nYeah, this is a good hint about how to improve the patch.\n\nBest regards,\nRanier Vilela",
"msg_date": "Wed, 18 Dec 2019 10:16:49 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [Proposal] Level4 Warnings show many shadow vars"
}
] |
[
{
"msg_contents": "From: Michael Paquier\nSent: Wednesday, 18 December 2019 02:19\n>This looks like a leftover of d9dd406, which has made the code to\n>require C99. As we don't support compilation with Windows XP and\n>require Windows 7, we should be able to remove all the dance around\n>MIN_WINNT in win32.h, don't you think?\nIt would be a good thing since there is no support for these old systems.\nAnd whenever there is a patch that touches Windows, someone could complain that it would be breaking something.\n\nCan you help improve the support of BCryptGenRandom?\nI still have doubts about:\n\n 1. Does this break MinGW?\n 2. Does the legacy API have to stay?\n 3. Perl support, pgbench specifically.\n\nIf the legacy API has to stay, I have no doubt that it needs to be guarded by conditionals.\n\nBest regards,\nRanier Vilela",
"msg_date": "Wed, 18 Dec 2019 10:28:21 +0000",
"msg_from": "Ranier Vilela <ranier_gyn@hotmail.com>",
"msg_from_op": true,
"msg_subject": "RE: [PATCH] Windows port add support to BCryptGenRandom"
}
] |
[
{
"msg_contents": "The following bug has been logged on the website:\n\nBug reference: 16171\nLogged by: Mahadevan Ramachandran\nEmail address: mahadevan@rapidloop.com\nPostgreSQL version: 12.1\nOperating system: any\nDescription: \n\nRefer src/backend/commands/explain.c, version 12.1.\r\n\r\nWhen a plan node has children, the function ExplainNode starts a JSON array\nwith the key \"Plans\" (line 1955), like so:\r\n\r\n \"Plans\": [ \r\n\r\nwith the intention of creating an array of \"Plan\" objects, one for each\nchild:\r\n\r\n \"Plans\": [\r\n { .. a child plan goes here ..},\r\n { .. a child plan goes here ..}\r\n ]\r\n\r\nHowever, if the node (the current, parent one) is of a certain type (see\nswitch at line 1975), then ExplainMemberNodes is called, which does this\n(lines 3335-6):\r\n\r\n\tif (nsubnodes < nplans)\r\n\t\tExplainPropertyInteger(\"Subplans Removed\", NULL, nplans - nsubnodes,\nes);\r\n\r\nThis can potentially cause a malformed JSON output like this:\r\n\r\n \"Plans\": [\r\n { .. a child plan goes here ..},\r\n \"Subplans Removed\": 5,\r\n { .. a child plan goes here ..}\r\n ]\r\n\r\nI don't have a sample explain output that exhibits this error, this was\nfound while reviewing the code.",
"msg_date": "Wed, 18 Dec 2019 10:28:43 +0000",
"msg_from": "PG Bug reporting form <noreply@postgresql.org>",
"msg_from_op": true,
"msg_subject": "BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "> On 18 Dec 2019, at 11:28, PG Bug reporting form <noreply@postgresql.org> wrote:\n> \n> The following bug has been logged on the website:\n> \n> Bug reference: 16171\n> Logged by: Mahadevan Ramachandran\n> Email address: mahadevan@rapidloop.com\n> PostgreSQL version: 12.1\n> Operating system: any\n> Description: \n> \n> Refer src/backend/commands/explain.c, version 12.1.\n> \n> When a plan node has children, the function ExplainNode starts a JSON array\n> with the key \"Plans\" (line 1955), like so:\n> \n> \"Plans\": [ \n> \n> with the intention of creating an array of \"Plan\" objects, one for each\n> child:\n> \n> \"Plans\": [\n> { .. a child plan goes here ..},\n> { .. a child plan goes here ..}\n> ]\n> \n> However, if the node (the current, parent one) is of a certain type (see\n> switch at line 1975), then ExplainMemberNodes is called, which does this\n> (lines 3335-6):\n> \n> \tif (nsubnodes < nplans)\n> \t\tExplainPropertyInteger(\"Subplans Removed\", NULL, nplans - nsubnodes,\n> es);\n> \n> This can potentially cause a malformed JSON output like this:\n> \n> \"Plans\": [\n> { .. a child plan goes here ..},\n> \"Subplans Removed\": 5,\n> { .. a child plan goes here ..}\n> ]\n\nNice catch! That seems like a correct analysis to me. The same error is\npresent in YAML output as well AFAICT.\n\n> I don't have a sample explain output that exhibits this error, this was\n> found while reviewing the code.\n\nA tip for when you're struggling to get the output you want for testing\nsomething: grep for it in src/test/regress. Chances are there is already a\ntest covering the precise output you're interested in. 
For the example at\nhand, the partition_prune.sql suite contains quite a few such queries.\n\nLooking at the output from one of them, in text as well as JSON exemplifies the\nbug clearly:\n\n QUERY PLAN\n--------------------------------------------------------\n Append (actual rows=1 loops=1)\n InitPlan 1 (returns $0)\n -> Result (actual rows=1 loops=1)\n Subplans Removed: 2\n -> Seq Scan on mc3p1 mc3p_1 (actual rows=1 loops=1)\n Filter: ((a = $1) AND (abs(b) < $0))\n(6 rows)\n\n QUERY PLAN\n------------------------------------------------------\n [ +\n { +\n \"Plan\": { +\n \"Node Type\": \"Append\", +\n \"Parallel Aware\": false, +\n \"Actual Rows\": 2, +\n \"Actual Loops\": 1, +\n \"Plans\": [ +\n { +\n \"Node Type\": \"Result\", +\n \"Parent Relationship\": \"InitPlan\", +\n \"Subplan Name\": \"InitPlan 1 (returns $0)\",+\n \"Parallel Aware\": false, +\n \"Actual Rows\": 1, +\n \"Actual Loops\": 1 +\n }, +\n \"Subplans Removed\": 1, +\n { +\n \"Node Type\": \"Seq Scan\", +\n \"Parent Relationship\": \"Member\", +\n \"Parallel Aware\": false, +\n \"Relation Name\": \"mc3p0\", +\n \"Alias\": \"mc3p_1\", +\n \"Actual Rows\": 1, +\n \"Actual Loops\": 1, +\n \"Filter\": \"((a <= $1) AND (abs(b) < $0))\",+\n \"Rows Removed by Filter\": 0 +\n }, +\n { +\n \"Node Type\": \"Seq Scan\", +\n \"Parent Relationship\": \"Member\", +\n \"Parallel Aware\": false, +\n \"Relation Name\": \"mc3p1\", +\n \"Alias\": \"mc3p_2\", +\n \"Actual Rows\": 1, +\n \"Actual Loops\": 1, +\n \"Filter\": \"((a <= $1) AND (abs(b) < $0))\",+\n \"Rows Removed by Filter\": 0 +\n } +\n ] +\n }, +\n \"Triggers\": [ +\n ] +\n } +\n ]\n(1 row)\n\nMoving the \"Subplans Removed\" into a Plan group seems like the least bad option\nto clearly identify it while keeping the formatting legal. 
The attached patch\ngenerates the following output for JSON instead:\n\n\n \"Plans\": [ +\n { +\n \"Node Type\": \"Result\", +\n \"Parent Relationship\": \"InitPlan\", +\n \"Subplan Name\": \"InitPlan 1 (returns $0)\",+\n \"Parallel Aware\": false, +\n \"Actual Rows\": 1, +\n \"Actual Loops\": 1 +\n }, +\n { +\n \"Subplans Removed\": 2 +\n }, +\n { +\n \"Node Type\": \"Seq Scan\", +\n \"Parent Relationship\": \"Member\", +\n \"Parallel Aware\": false, +\n \"Relation Name\": \"mc3p1\", +\n \"Alias\": \"mc3p_1\", +\n \"Actual Rows\": 1, +\n \"Actual Loops\": 1, +\n \"Filter\": \"((a = $1) AND (abs(b) < $0))\", +\n \"Rows Removed by Filter\": 0 +\n } +\n\ncheers ./daniel",
"msg_date": "Wed, 18 Dec 2019 16:15:29 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: tested, passed\nSpec compliant: tested, passed\nDocumentation: tested, passed\n\nI've reviewed and verified this patch and IMHO, this is ready to be committed.\r\n\r\n\r\nSo I have verified this patch against the tip of REL_12_STABLE branch (commit c4c76d19).\r\n- git apply <patch> works without issues.\r\n\r\nAlthough the patch description mentions that it fixes a malformed JSON in explain output by creating a \"Plan\" group, this also fixes the same malformation issue for XML and YAML formats. It does not impact the text output though.\r\n\r\nI'm sharing the problematic part of output for an unpatched version for JSON, XML, and YAML formats:\r\n-- JSON\r\n \"Plans\": [\r\n \"Subplans Removed\": 1,\r\n {\r\n \"Node Type\": \"Seq Scan\",\r\n\r\n-- XML\r\n <Plans>\r\n <Subplans-Removed>1</Subplans-Removed>\r\n <Plan>\r\n\r\n-- YAML\r\n Plans:\r\n Subplans Removed: 1\r\n - Node Type: \"Seq Scan\"\r\n\r\nThe patched version gives the following and correct output:\r\n-- JSON\r\n \"Plans\": [\r\n {\r\n \"Subplans Removed\": 1\r\n }, \r\n {\r\n \"Node Type\": \"Seq Scan\",\r\n\r\n-- XML\r\n <Plans>\r\n <Plan>\r\n <Subplans-Removed>1</Subplans-Removed>\r\n </Plan>\r\n <Plan>\r\n\r\n-- YAML\r\n Plans:\r\n - Subplans Removed: 1\r\n - Node Type: \"Seq Scan\"\r\n\r\nFollowing is the query that I used for validating the output. I picked it up (and simplified) from \"src/test/regress/sql/partition_prune.sql\". You can probably come up with a simpler query, but this does the job. 
The query below gives the output in JSON format:\r\n----\r\ncreate table ab (a int not null, b int not null) partition by list (a);\r\ncreate table ab_a2 partition of ab for values in(2) partition by list (b);\r\ncreate table ab_a2_b1 partition of ab_a2 for values in (1);\r\ncreate table ab_a1 partition of ab for values in(1) partition by list (b);\r\ncreate table ab_a1_b1 partition of ab_a1 for values in (2);\r\n\r\n-- Disallow index only scans as concurrent transactions may stop visibility\r\n-- bits being set causing \"Heap Fetches\" to be unstable in the EXPLAIN ANALYZE\r\n-- output.\r\nset enable_indexonlyscan = off;\r\n\r\nprepare ab_q1 (int, int, int) as\r\nselect * from ab where a between $1 and $2 and b <= $3;\r\n\r\n-- Execute query 5 times to allow choose_custom_plan\r\n-- to start considering a generic plan.\r\nexecute ab_q1 (1, 8, 3);\r\nexecute ab_q1 (1, 8, 3);\r\nexecute ab_q1 (1, 8, 3);\r\nexecute ab_q1 (1, 8, 3);\r\nexecute ab_q1 (1, 8, 3);\r\n\r\nexplain (format json, analyze, costs off, summary off, timing off) execute ab_q1 (2, 2, 3);\r\n\r\ndeallocate ab_q1;\r\ndrop table ab;\r\n----\n\nThe new status of this patch is: Ready for Committer\n",
"msg_date": "Fri, 24 Jan 2020 09:28:50 +0000",
"msg_from": "Hamid Akhtar <hamid.akhtar@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n> I've reviewed and verified this patch and IMHO, this is ready to be committed.\n\nI took a look at this and I don't think it's really going in the right\ndirection. ISTM the clear intent of this code was to attach the \"Subplans\nRemoved\" item as a field of the parent [Merge]Append node, but the author\nforgot about the intermediate \"Plans\" array. So I think that, rather than\ndoubling down on a mistake, we ought to move where the field is generated\nso that it *is* a field of the parent node.\n\nAnother failure to follow the design conventions for EXPLAIN output is\nthat in non-text formats, the schema for each node type ought to be fixed;\nthat is, if a given field can appear for a particular node type and\nEXPLAIN options, it should appear always, not be omitted just because it's\nzero.\n\nSo that leads me to propose 0001 attached. This does lead to some field\norder rearrangement in text mode, as per the regression test changes,\nbut I think that's not a big deal. (A change can only happen if there\nare initplan(s) attached to the parent node.)\n\nAlso, I wondered whether we had any other violations of correct formatting\nin this code, which led me to the idea of running auto_explain in JSON\nmode and having it feed its result to json_in. This isn't a complete\ntest because it won't whine about duplicate field names, but it did\ncorrectly find the current bug --- and I couldn't find any others while\nrunning the core regression tests with various auto_explain options.\n0002 attached isn't committable, because nobody would want the overhead\nin production, but it seems like a good trick to keep up our sleeves.\n\nThoughts?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 01 Feb 2020 14:37:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "> On 1 Feb 2020, at 20:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Hamid Akhtar <hamid.akhtar@gmail.com> writes:\n>> I've reviewed and verified this patch and IMHO, this is ready to be committed.\n> \n> I took a look at this and I don't think it's really going in the right\n> direction. ISTM the clear intent of this code was to attach the \"Subplans\n> Removed\" item as a field of the parent [Merge]Append node, but the author\n> forgot about the intermediate \"Plans\" array. So I think that, rather than\n> doubling down on a mistake, we ought to move where the field is generated\n> so that it *is* a field of the parent node.\n\nRight, that makes sense; +1 on the attached 0001 patch.\n\n> This does lead to some field\n> order rearrangement in text mode, as per the regression test changes,\n> but I think that's not a big deal. (A change can only happen if there\n> are initplan(s) attached to the parent node.)\n\nDoes that prevent backpatching this, or are we Ok with EXPLAIN text output not\nbeing stable across minors? AFAICT Pg::Explain still works fine with this\nchange, but mileage may vary for other parsers.\n\n> 0002 attached isn't committable, because nobody would want the overhead\n> in production, but it seems like a good trick to keep up our sleeves.\n\nThats a neat trick, I wonder if it would be worth maintaining a curated list of\nthese tricks in a README under src/test to help others avoid/reduce wheel\nreinventing?\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 2 Feb 2020 13:08:07 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "[ cc'ing depesz to see what he thinks about this ]\n\nDaniel Gustafsson <daniel@yesql.se> writes:\n> On 1 Feb 2020, at 20:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> This does lead to some field\n>> order rearrangement in text mode, as per the regression test changes,\n>> but I think that's not a big deal. (A change can only happen if there\n>> are initplan(s) attached to the parent node.)\n\n> Does that prevent backpatching this, or are we Ok with EXPLAIN text output not\n> being stable across minors? AFAICT Pg::Explain still works fine with this\n> change, but mileage may vary for other parsers.\n\nI'm not sure about that either. It should be a clear win for parsers\nof the non-text formats, because now we're generating valid\nJSON-or-whatever where we were not before. But it's not too hard to\nimagine that someone's ad-hoc parser of text output would break,\ndepending on how much it relies on field order rather than indentation\nto make sense of things.\n\nIn the background of all this is that cases where it matters must be\npretty thin on the ground so far, else we'd have gotten complaints\nsooner. So we shouldn't really assume that everyone's parser handles\nsuch cases at all yet.\n\nI'm a little bit inclined to back-patch, on the grounds that JSON\noutput is hopelessly broken without this, and any text-mode parsers\nthat need work would need work by September anyway. But I could\neasily be argued into not back-patching.\n\nAnother approach we could consider is putting your patch in the\nback branches and mine in HEAD. I'm not sure that's a good idea;\nit trades short-term stability of the text format for a long-term\nmess in the non-text formats. But maybe somebody will argue for it.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 02 Feb 2020 11:48:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "On Sun, Feb 02, 2020 at 11:48:32AM -0500, Tom Lane wrote:\n> > Does that prevent backpatching this, or are we Ok with EXPLAIN text output not\n> > being stable across minors? AFAICT Pg::Explain still works fine with this\n> > change, but mileage may vary for other parsers.\n> I'm not sure about that either. It should be a clear win for parsers\n> of the non-text formats, because now we're generating valid\n> JSON-or-whatever where we were not before. But it's not too hard to\n> imagine that someone's ad-hoc parser of text output would break,\n> depending on how much it relies on field order rather than indentation\n> to make sense of things.\n\nChange looks reasonable to me.\n\nInterestingly Pg::Explain doesn't handle either current JSON output in\nthis case (as it's not a valid JSON), nor the new one - but this can be\nfixed easily.\n\nBest regards,\n\ndepesz\n\n\n\n",
"msg_date": "Mon, 3 Feb 2020 12:40:22 +0100",
"msg_from": "hubert depesz lubaczewski <depesz@depesz.com>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "> On 2 Feb 2020, at 17:48, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Thoughts?\n\nKeeping TEXT explain stable across minor versions is very appealing, but more\nso from a policy standpoint than a technical one. The real-world implication\nis probably quite small, but that's a very unscientific guess (searching Github\ndidn't turn up anything but thats far from conclusive). Having broken JSON is\nhowever a clear negative, and so is having a different JSON format in back-\nbranches for something which has never worked in the first place.\n\nI guess I'm leaning towards backpatching, but it's not entirely clear-cut.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 3 Feb 2020 13:55:19 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> I guess I'm leaning towards backpatching, but it's not entirely clear-cut.\n\nThat's where I stand too. I'll wait a day or so to see if anyone\nelse comments; but if not, I'll back-patch.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 03 Feb 2020 09:15:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "Daniel Gustafsson <daniel@yesql.se> writes:\n> On 1 Feb 2020, at 20:37, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> 0002 attached isn't committable, because nobody would want the overhead\n>> in production, but it seems like a good trick to keep up our sleeves.\n\n> Thats a neat trick, I wonder if it would be worth maintaining a curated list of\n> these tricks in a README under src/test to help others avoid/reduce wheel\n> reinventing?\n\nIt occurred to me that as long as this is an uncommittable hack anyway,\nwe could feed the EXPLAIN data to jsonb_in and then hot-wire the jsonb\ncode to whine about duplicate keys. So attached, for the archives' sake,\nis an improved version that does that. I still don't find any problems\n(other than the one we're fixing here); though no doubt if I reverted\n100136849 it'd complain about that.\n\n\t\t\tregards, tom lane",
"msg_date": "Mon, 03 Feb 2020 10:56:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
},
{
"msg_contents": "I wrote:\n> Daniel Gustafsson <daniel@yesql.se> writes:\n>> I guess I'm leaning towards backpatching, but it's not entirely clear-cut.\n\n> That's where I stand too. I'll wait a day or so to see if anyone\n> else comments; but if not, I'll back-patch.\n\nHearing no objections, done that way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 Feb 2020 13:08:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BUG #16171: Potential malformed JSON in explain output"
}
] |
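To see concretely why the pre-fix EXPLAIN output is rejected by any JSON parser, and why Tom Lane's fix (attaching "Subplans Removed" to the parent [Merge]Append node rather than inside the "Plans" array) parses cleanly, one can round-trip both shapes through Python's json module. Both strings below are hand-built from the examples in the thread, not actual EXPLAIN output:

```python
import json

# Shape produced before the fix: a bare key/value pair dropped into
# the middle of the "Plans" array -- not legal JSON.
malformed = '''
{"Plan": {"Node Type": "Append",
          "Plans": [
              {"Node Type": "Result"},
              "Subplans Removed": 2,
              {"Node Type": "Seq Scan"}
          ]}}
'''

# Shape after the committed fix: the field belongs to the parent
# Append node itself, as a sibling of "Plans".
fixed = '''
{"Plan": {"Node Type": "Append",
          "Subplans Removed": 2,
          "Plans": [
              {"Node Type": "Result"},
              {"Node Type": "Seq Scan"}
          ]}}
'''

try:
    json.loads(malformed)
    parsed_ok = True
except json.JSONDecodeError:
    parsed_ok = False
assert parsed_ok is False  # the old shape cannot be parsed at all

plan = json.loads(fixed)["Plan"]
assert plan["Subplans Removed"] == 2
assert len(plan["Plans"]) == 2
```

Feeding real output through a strict parser like this is essentially the trick behind the uncommittable 0002 patch: json_in catches structural breakage that eyeballing the text format misses.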
[
{
"msg_contents": "I think the same test failed previously as well, but not as consistently\nas it is now. I see the same test case failing consistently.\n\nDec 17 01:35:33 t/010_logical_decoding_timelines....# Looks like you\nplanned 13 tests but ran 9.\nDec 17 01:35:33 # Looks like your test exited with 255 just after 9.\nDec 17 01:35:33 dubious\nDec 17 01:35:33 Test returned status 255 (wstat 65280, 0xff00)\nDec 17 01:35:33 DIED. FAILED tests 10-13\nDec 17 01:35:33 Failed 4/13 tests, 69.23% okay\n\nOne of the buildfarm logs [1] seems to indicate that it started\nfailing after commit 95f43fee9 (On Windows, wait a little to see if\nERROR_ACCESS_DENIED goes away).\n\nI haven't done any detailed analysis, so I could be wrong as well.\n\nThoughts?\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=jacana&dt=2019-12-17%2004%3A16%3A40\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 18 Dec 2019 16:27:55 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "jacana seems to be failing in recoverycheck from last few runs"
},
{
"msg_contents": "Amit Kapila <amit.kapila16@gmail.com> writes:\n> I think previously also the same test failed but not as consistently\n> as it is now. I see that consistently the same test case is failing.\n\nYeah, this is already being discussed over in the -bugs thread.\n\nhttps://www.postgresql.org/message-id/23073.1576626626%40sss.pgh.pa.us\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 09:14:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: jacana seems to be failing in recoverycheck from last few runs"
}
] |
[
{
"msg_contents": "Hello.\n\nI cannot find the reason why the EDITOR value on Windows is quoted. It\nshould not be. One may force-quote the env var value if desired.\n\nRight now, for example, one cannot use Sublime Text, since to use it in a\nproper way you should\nSET EDITOR=\"C:\\Program Files\\Sublime\\subl.exe\" --wait\n\nThe problem can be solved by introducing a PSQL_EDITOR_ARGS env var, but\njust not quoting the EDITOR command on Windows would work too.\n\npsql\\command.c:\n\nstatic bool\neditFile(const char *fname, int lineno)\n{\n ...\n\n /*\n * On Unix the EDITOR value should *not* be quoted, since it might include\n * switches, eg, EDITOR=\"pico -t\"; it's up to the user to put quotes in it\n * if necessary. But this policy is not very workable on Windows, due to\n * severe brain damage in their command shell plus the fact that standard\n * program paths include spaces.\n */\n ...\n if (lineno > 0)\n sys = psprintf(\"\\\"%s\\\" %s%d \\\"%s\\\"\",\n editorName, editor_lineno_arg, lineno, fname);\n else\n sys = psprintf(\"\\\"%s\\\" \\\"%s\\\"\",\n editorName, fname);\n ...\n}",
"msg_date": "Wed, 18 Dec 2019 14:56:49 +0100",
"msg_from": "Pavlo Golub <pavlo.golub@cybertec.at>",
"msg_from_op": true,
"msg_subject": "psql's EDITOR behavior on Windows"
},
{
"msg_contents": "Pavlo Golub <pavlo.golub@cybertec.at> writes:\n> I cannot find the reason why EDITOR value on Windows is quoted.\n\nThe comment you quoted explains it: apparently people expect\npaths-with-spaces to work in that value without any manual quoting.\n\n> One may force quote env var if he wants to.\n\nGiven the lack of prior complaints, I'm not sure we should change\nthis to support an editor that can't be made to work without special\ncommand line switches. We're likely to make more people unhappy\nthan happy ... especially if it's then impossible to set the variable\nso that it works with both older and newer psql's.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 09:43:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql's EDITOR behavior on Windows"
},
{
"msg_contents": "I wrote:\n> Pavlo Golub <pavlo.golub@cybertec.at> writes:\n>> I cannot find the reason why EDITOR value on Windows is quoted.\n\n> The comment you quoted explains it: apparently people expect\n> paths-with-spaces to work in that value without any manual quoting.\n\nActually, after digging in the git history and archives, the current\nbehavior seems to trace back to a discussion on pgsql-hackers on\n2004-11-15. The thread linkage in the archives seems rather incomplete,\nbut it boiled down to this:\n\nhttps://www.postgresql.org/message-id/9045.1100539151%40sss.pgh.pa.us\n\nie, the argument that people could handle space-containing paths by\nputting double quotes into the environment variable's value is just wrong.\nPossibly Microsoft fixed that in the fifteen years since, but I'd want to\nsee some proof.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 10:11:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: psql's EDITOR behavior on Windows"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 4:11 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Pavlo Golub <pavlo.golub@cybertec.at> writes:\n> >> I cannot find the reason why EDITOR value on Windows is quoted.\n>\n> > The comment you quoted explains it: apparently people expect\n> > paths-with-spaces to work in that value without any manual quoting.\n>\n> Actually, after digging in the git history and archives, the current\n> behavior seems to trace back to a discussion on pgsql-hackers on\n> 2004-11-15. The thread linkage in the archives seems rather incomplete,\n> but it boiled down to this:\n>\n> https://www.postgresql.org/message-id/9045.1100539151%40sss.pgh.pa.us\n>\n> ie, the argument that people could handle space-containing paths by\n> putting double quotes into the environment variable's value is just wrong.\n> Possibly Microsoft fixed that in the fifteen years since, but I'd want to\n> see some proof.\n\nInteresting. Thanks. I'll prepare a patch showing the case.\n\n>\n> regards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 16:22:21 +0100",
"msg_from": "Pavlo Golub <pavlo.golub@cybertec.at>",
"msg_from_op": true,
"msg_subject": "Re: psql's EDITOR behavior on Windows"
}
] |
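The failure mode in the thread above is easy to reproduce without psql: mimicking the psprintf() call from the quoted editFile() shows what cmd.exe receives for a plain path versus a value that already carries its own quotes and a switch (the Sublime example from the first message). This is a rough Python model of the string construction only, not of cmd.exe's parsing:

```python
def windows_edit_command(editor_value, fname):
    # Mirrors psql's Windows branch of editFile(): the whole EDITOR
    # value is wrapped in double quotes, then the file name is
    # appended, also quoted.
    return f'"{editor_value}" "{fname}"'


# A bare path containing spaces works, which is why the automatic
# quoting was added in the first place.
cmd = windows_edit_command(r"C:\Program Files\Vim\vim.exe", "query.sql")
assert cmd == r'"C:\Program Files\Vim\vim.exe" "query.sql"'

# A value that already carries its own quotes plus a switch is
# mangled: the leading "" makes cmd.exe misparse the program name,
# and --wait ends up inside the outer quoting psql added.
cmd = windows_edit_command(r'"C:\Program Files\Sublime\subl.exe" --wait',
                           "query.sql")
assert cmd.startswith('""C:')
```

Either behavior alone can be made to work; the model just shows why the two expectations (paths-with-spaces unquoted, editors-with-switches pre-quoted) cannot both be satisfied by a single wrapping rule.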
[
{
"msg_contents": "Hi,\n\nIs there somewhere a script that can do a restore backup file \"with \noids\" in some newer DB where there is no term \"with oids\"?\n\n\nVladimir Koković, DP senior(69)\n\nSerbia, Belgrade, 18.December 2019",
"msg_date": "Wed, 18 Dec 2019 18:28:07 +0100",
"msg_from": "=?UTF-8?Q?gmail_Vladimir_Kokovi=c4=87?= <vladimir.kokovic@gmail.com>",
"msg_from_op": true,
"msg_subject": "Restore backup file \"with oids\""
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 2:28 PM gmail Vladimir Koković <\nvladimir.kokovic@gmail.com> wrote:\n\n> Hi,\n>\n> Is there somewhere a script that can do a restore backup file \"with oids\"\n> in some newer DB where there is no term \"with oids\"?\n>\n>\n>\nShort answer: No!\n\nLong answer: try use the pg_dump binary from your new version (12)\nconnecting to your old version... is the best practice when you want to\nupgrade your PostgreSQL using dump/restore strategy.\n\nRegards,\n\n-- \n Fabrízio de Royes Mello Timbira - http://www.timbira.com.br/\n PostgreSQL: Consultoria, Desenvolvimento, Suporte 24x7 e Treinamento",
"msg_date": "Wed, 18 Dec 2019 15:06:58 -0300",
"msg_from": "=?UTF-8?Q?Fabr=C3=ADzio_de_Royes_Mello?= <fabriziomello@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Restore backup file \"with oids\""
},
{
"msg_contents": "On 18 December 2019 19:28:07 EET, \"gmail Vladimir Koković\" <vladimir.kokovic@gmail.com> wrote:\n>Is there somewhere a script that can do a restore backup file \"with \n>oids\" in some newer DB where there is no term \"with oids\"?\n\nThere is nothing automatic, some manual work is needed. A couple of ideas:\n\n1. Install an older version of PostgreSQL that still supports WITH OIDS, and restore the dump there. Use ALTER TABLE SET WITHOUT OIDS, and dump the database again (preferably with v12 of pg_dump).\n\n2. Create the table manually, but with a regular oid-type column in place of the system column. Do a data-only restore. \n\nYou didn't mention what format the backup dump is. If it's in custom format, you can do a data-only restore easily. If it's a text sql file, and it's not too large, you can load it in a text editor and edit manually.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 18 Dec 2019 20:34:57 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: Restore backup file \"with oids\""
},
{
"msg_contents": "HI,\n\nThe first thing I have to say is that I only have a plain backup file \n\"with oids\" from \"PostgreSQL 8.4.2, 32-bit\",\n\nfrom which I should take the contents of some tables.\nAny help from someone who was in a similar situation would help me a lot \nin resolving this situation.\n\nThank you Heikki for your solution.\n\n\nVladimir Koković, DP senior(69)\n\nSerbia, Belgrade, 18.December 2019\n\n\nOn 18.12.19. 19:34, Heikki Linnakangas wrote:\n> On 18 December 2019 19:28:07 EET, \"gmail Vladimir Koković\" <vladimir.kokovic@gmail.com> wrote:\n>> Is there somewhere a script that can do a restore backup file \"with\n>> oids\" in some newer DB where there is no term \"with oids\"?\n> There is nothing automatic, some manual work is needed. A couple of ideas:\n>\n> 1. Install an older version of PostgreSQL that still supports WITH OIDS, and restore the dump there. Use ALTER TABLE SET WITHOUT OIDS, and dump the database again (preferably with v12 of pg_dump).\n>\n> 2. Create the table manually, but with a regular oid-type column in place of the system column. Do a data-only restore.\n>\n> You didn't mention what format the backup dump is. If it's in custom format, you can do a data-only restore easily. If it's a text sql file, and it's not too large, you can load it in a text editor and edit manually.\n>\n> - Heikki\n>\n>",
"msg_date": "Wed, 18 Dec 2019 20:57:26 +0100",
"msg_from": "=?UTF-8?Q?Vladimir_Kokovi=c4=87?= <vladimir.kokovic@a-asoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Restore backup file \"with oids\""
},
{
"msg_contents": "On Wednesday, December 18, 2019, Vladimir Koković <\nvladimir.kokovic@a-asoft.com> wrote:\n\n> HI,\n>\n> The first thing I have to say is that I only have a plain backup file\n> \"with oids\" from \"PostgreSQL 8.4.2, 32-bit\",\n>\n> from which I should take the contents of some tables.\n> Any help from someone who was in a similar situation would help me a lot\n> in resolving this situation\n>\n> Restore it into 9.4 and see if you can get what you need from there?\n\nDavid J.",
"msg_date": "Wed, 18 Dec 2019 13:36:48 -0700",
"msg_from": "\"David G. Johnston\" <david.g.johnston@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Restore backup file \"with oids\""
}
] |
[
{
"msg_contents": "According to [1], Windows does not support setenv.\nWith the possibility of setenv going further [2], I am submitting in this\nthread the patch to add setenv support on the Windows side.\nIt is based on pre-existing functions, and seeks to correctly emulate the\nfunctioning of the POSIX setenv, but it has not yet been tested.\nregards,\n\nRanier Vilela\n\n[1] https://www.postgresql.org/message-id/29478.1576537771%40sss.pgh.pa.us\n<https://www.postgresql.org/message-id/29478.1576537771@sss.pgh.pa.us>\n[2] https://www.postgresql.org/message-id/30119.1576538578%40sss.pgh.pa.us\n<https://www.postgresql.org/message-id/30119.1576538578@sss.pgh.pa.us>",
"msg_date": "Wed, 18 Dec 2019 15:20:40 -0300",
"msg_from": "Ranier Vf <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Windows port: add support to setenv function"
}
] |
[
{
"msg_contents": "Hi\n\nI had a talk with one boy about development in plpgsql. He uses table's\nfunctions. More times he uses returns types based on some table type + few\nattributes. Now he use a ugly hack - he create a view on table plus some\ncolumns - and then he use the view related type as table function result\ntype. For similar uses cases there can be interesting to have a possibility\nto create types by extending other types. Probably almost all functionality\nis inside now - so it should not be hard work.\n\nMy idea is implement inherits clause for CREATE TYPE command.\n\nSome like\n\nCREATE TYPE fx_rt (xx int) INHERITS(pg_class);\n\nWhat do you think about this idea?\n\nRegards\n\nPavel",
"msg_date": "Wed, 18 Dec 2019 19:37:28 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "inherits clause for CREATE TYPE? -"
},
{
"msg_contents": "Pavel Stehule <pavel.stehule@gmail.com> writes:\n> My idea is implement inherits clause for CREATE TYPE command.\n\n-1. We have enough problems dealing with alterations of inherited\nrowtypes already. As long as inheritance is restricted to tables,\nwe can use table locking to help prevent problems --- but there's\nno provision for locking free-standing types. And introducing\nlocking on types would cost way more than this seems likely to be\nworth.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 14:06:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: inherits clause for CREATE TYPE? -"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 12:38 PM Pavel Stehule <pavel.stehule@gmail.com> wrote:\n>\n> Hi\n>\n> I had a talk with one boy about development in plpgsql. He uses table's functions. More times he uses returns types based on some table type + few attributes. Now he use a ugly hack - he create a view on table plus some columns - and then he use the view related type as table function result type. For similar uses cases there can be interesting to have a possibility to create types by extending other types. Probably almost all functionality is inside now - so it should not be hard work.\n>\n> My idea is implement inherits clause for CREATE TYPE command.\n>\n> Some like\n>\n> CREATE TYPE fx_rt (xx int) INHERITS(pg_class);\n>\n> What do you think about this idea?\n\nHow about using composition style approaches?\n\ncreate type base as (...)\n\ncreate type extended as (b base, ...)\n\ncreate function foo() returns extended as ...\n\nmerlin\n\n\n",
"msg_date": "Wed, 18 Dec 2019 14:11:50 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: inherits clause for CREATE TYPE? -"
},
{
"msg_contents": "st 18. 12. 2019 v 21:12 odesílatel Merlin Moncure <mmoncure@gmail.com>\nnapsal:\n\n> On Wed, Dec 18, 2019 at 12:38 PM Pavel Stehule <pavel.stehule@gmail.com>\n> wrote:\n> >\n> > Hi\n> >\n> > I had a talk with one boy about development in plpgsql. He uses table's\n> functions. More times he uses returns types based on some table type + few\n> attributes. Now he use a ugly hack - he create a view on table plus some\n> columns - and then he use the view related type as table function result\n> type. For similar uses cases there can be interesting to have a possibility\n> to create types by extending other types. Probably almost all functionality\n> is inside now - so it should not be hard work.\n> >\n> > My idea is implement inherits clause for CREATE TYPE command.\n> >\n> > Some like\n> >\n> > CREATE TYPE fx_rt (xx int) INHERITS(pg_class);\n> >\n> > What do you think about this idea?\n>\n> How about using composition style approaches?\n>\n> create type base as (...)\n>\n> create type extended as (b base, ...)\n>\n> create function foo() returns extended as ...\n>\n\nIt is a possibility, but it is not practical, because base type will be\nnested, it is hard to access to nested fields ..\n\nCurrently I can do\n\nCREATE TABLE base (...); -- instead CREATE TYPE\nCREATE TABLE extended (...) -- INHERITS (base)\n\nCREATE FUNCTION foo() RETURNS SETOF extended AS ..\n\nThis is working perfect - just disadvantage is garbage table \"extended\"\n\n\n\n> merlin\n>",
"msg_date": "Thu, 19 Dec 2019 11:52:38 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: inherits clause for CREATE TYPE? -"
}
] |
[
{
"msg_contents": "Hi,\n || curpages <= 0\nexpression is always false and can be safely removed.\nReasons:\n1. curpages is of type uint32\n2. it is already tested for zero before.\n3. It can never be negative\n\nregards,\nRanier Vilela",
"msg_date": "Wed, 18 Dec 2019 17:12:04 -0300",
"msg_from": "Ranier Vf <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] remove expression always false"
}
] |
[
{
"msg_contents": "Commit bc8036fc666a (12 years ago) seems to have introduced an\nunnecessary catalog heap_open/close in order to do syscache accesses.\nI presume a preliminary version of the patch used sysscans or something.\nI don't think that's necessary, so this patch removes it.\n\n(I noticed while reading Paul Jungwirth's patch that changes how this\nworks.)\n\n-- \nÁlvaro Herrera Developer, https://www.PostgreSQL.org/",
"msg_date": "Wed, 18 Dec 2019 19:13:26 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove unnecessary table_open/close from makeArrayTypeName"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> Commit bc8036fc666a (12 years ago) seems to have introduced an\n> unnecessary catalog heap_open/close in order to do syscache accesses.\n\nHuh. Not sure how that got past me, but I agree it's bogus.\n\n> I presume a preliminary version of the patch used sysscans or something.\n> I don't think that's necessary, so this patch removes it.\n\n+1, but please collapse up the whitespace too.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 18 Dec 2019 17:22:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: remove unnecessary table_open/close from makeArrayTypeName"
},
{
"msg_contents": "On Wed, Dec 18, 2019 at 05:22:06PM -0500, Tom Lane wrote:\n> +1, but please collapse up the whitespace too.\n\n+1.\n--\nMichael",
"msg_date": "Thu, 19 Dec 2019 11:17:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove unnecessary table_open/close from makeArrayTypeName"
}
] |
[
{
"msg_contents": "Hi all,\n\nAs discussed here, there is in the tree a couple of things related to\npast versions of Windows:\nhttps://www.postgresql.org/message-id/20191218021954.GE1836@paquier.xyz\n\nSo I have been looking at that more closely, and found more:\n- MIN_WINNT can be removed from win32.h thanks to d9dd406 which has\nadded a requirement on C99 with Windows 7 as minimum platform\nsupported. (The issue mentioned previously.)\n- pipe_read_line(), used when finding another binary for a given\ninstallation via find_other_exec() has some special handling related\nto Windows 2000 and older versions.\n- When trying to load getaddrinfo(), we try to load it from\nwship6.dll, which was something needed in Windows 2000, but newer\nWindows versions include it in ws2_32.dll.\n- A portion of the docs still refer to Windows 98.\n\nThoughts?\n--\nMichael",
"msg_date": "Thu, 19 Dec 2019 11:15:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Clean up some old cruft related to Windows"
},
{
"msg_contents": "At Thu, 19 Dec 2019 11:15:26 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Hi all,\n> \n> As discussed here, there is in the tree a couple of things related to\n> past versions of Windows:\n> https://www.postgresql.org/message-id/20191218021954.GE1836@paquier.xyz\n> \n> So I have been looking at that more closely, and found more:\n> - MIN_WINNT can be removed from win32.h thanks to d9dd406 which has\n> added a requirement on C99 with Windows 7 as minimum platform\n> supported. (The issue mentioned previously.)\n> - pipe_read_line(), used when finding another binary for a given\n> installation via find_other_exec() has some special handling related\n> to Windows 2000 and older versions.\n> - When trying to load getaddrinfo(), we try to load it from\n> wship6.dll, which was something needed in Windows 2000, but newer\n> Windows versions include it in ws2_32.dll.\n> - A portion of the docs still refer to Windows 98.\n> \n> Thoughts?\n\nI think MIN_WINNT is definitely removable.\n\npopen already has the platform-dependent implementation so I think it can\nbe removed irrelevantly for the C99 discussion.\n\nI found some similar places by grep'ing for windows version names the\nwhole source tree.\n\n- The comment for trapsig is mentioning win98/Me/NT/2000/XP.\n\n- We don't need the (only) caller site of IsWindows7OrGreater()?\n\n- The comment for AddUserToTokenDacl() is mentioning \"XP/2K3,\n Vista/2008\".\n\n- InitializeLDAPConnection dynamically loads WLDAP32.DLL for Windows\n 2000. 
It could either be statically loaded or could be left as it\n is, but the comment seems to need a change in either case.\n\n- The comment for IsoLocaleName mentioning Vista and Visual Studio\n 2012.\n\n- install-windows.sgml is mentioning \"XP and later\" around line 117.\n\n- installation.sgml is mentioning NT/2000/XP as platforms that don't\n support adduser/su, command.\n\n- \"of Windows 2000 or later\" is found at installation.sgml:2467\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 19 Dec 2019 13:46:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 5:47 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n> At Thu, 19 Dec 2019 11:15:26 +0900, Michael Paquier <michael@paquier.xyz>\n> wrote in\n> > Hi all,\n> >\n> > As discussed here, there is in the tree a couple of things related to\n> > past versions of Windows:\n> >\n> https://www.postgresql.org/message-id/20191218021954.GE1836@paquier.xyz\n> >\n> > So I have been looking at that more closely, and found more:\n> > - MIN_WINNT can be removed from win32.h thanks to d9dd406 which has\n> > added a requirement on C99 with Windows 7 as minimum platform\n> > supported. (The issue mentioned previously.)\n> > - pipe_read_line(), used when finding another binary for a given\n> > installation via find_other_exec() has some special handling related\n> > to Windows 2000 and older versions.\n> > - When trying to load getaddrinfo(), we try to load it from\n> > wship6.dll, which was something needed in Windows 2000, but newer\n> > Windows versions include it in ws2_32.dll.\n> > - A portion of the docs still refer to Windows 98.\n> >\n> > Thoughts?\n>\n> I think MIN_WINNT is definitely removable.\n>\n>\nThis is probably not an issue for the supported MSVC and their SDK, but\ncurrent MinGW defaults to Windows 2003 [1]. So I would suggest a logic like:\n\n#define WINNTVER(ver) ((ver) >> 16)\n#define NTDDI_VERSION 0x06000100\n#define _WIN32_WINNT WINNTVER(NTDDI_VERSION)\n\n[1]\nhttps://github.com/mirror/mingw-w64/blob/master/mingw-w64-headers/include/sdkddkver.h\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 19 Dec 2019 20:09:45 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 08:09:45PM +0100, Juan José Santamaría Flecha wrote:\n> This is probably not an issue for the supported MSVC and their SDK, but\n> current MinGW defaults to Windows 2003 [1]. So I would suggest a logic like:\n> \n> #define WINNTVER(ver) ((ver) >> 16)\n> #define NTDDI_VERSION 0x06000100\n> #define _WIN32_WINNT WINNTVER(NTDDI_VERSION)\n> \n> [1]\n> https://github.com/mirror/mingw-w64/blob/master/mingw-w64-headers/include/sdkddkver.h\n\nYou're right, thanks for the pointer. This is this part of the\nheader:\n#define NTDDI_VERSION NTDDI_WS03\n\nThinking more about that, the changes in win32.h are giving me cold\nfeet.\n--\nMichael",
"msg_date": "Tue, 18 Feb 2020 15:54:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 01:46:33PM +0900, Kyotaro Horiguchi wrote:\n> I found some similar places by grep'ing for windows version names the\n> whole source tree.\n> \n> - The comment for trapsig is mentioning win98/Me/NT/2000/XP.\n\nLet's refresh the comment here, as per the following:\nhttps://docs.microsoft.com/en-us/previous-versions/xdkz3x12(v=vs.140)\n\n> - We don't need the (only) caller site of IsWindows7OrGreater()?\n\nThe compiled code can still run with Windows Server 2008. \n\n> - The comment for AddUserToTokenDacl() is mentioning \"XP/2K3,\n> Vista/2008\".\n\nKeeping some context is still good here IMO.\n\n> - InitializeLDAPConnection dynamically loads WLDAP32.DLL for Windows\n> 2000. It could either be statically loaded or could be left as it\n> is, but the comment seems to need a change in either case.\n\nLooks safer to me to keep it.\n\n> - The comment for IsoLocaleName mentioning Vista and Visual Studio\n> 2012.\n\nIt is good to keep some history in this context.\n\n> - install-windows.sgml is mentioning \"XP and later\" around line 117.\n\nBut this still applies to XP, even if compilation is supported from\nWindows 7.\n\n> - installation.sgml is mentioning NT/2000/XP as platforms that don't\n> support adduser/su, command.\n\nNo objections to simplify that a bit.\n\nAttached is a simplified version. It is smaller than the previous\none, but that's already a good cut. I have also done some testing\nwith the service manager to check after pipe_read_line(), and that\nworks.\n\nThoughts?\n--\nMichael",
"msg_date": "Tue, 18 Feb 2020 16:44:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "Hello.\n\nI understand that this is not for back-patching.\n\nAt Tue, 18 Feb 2020 16:44:59 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Thu, Dec 19, 2019 at 01:46:33PM +0900, Kyotaro Horiguchi wrote:\n> > I found some similar places by grep'ing for windows version names the\n> > whole source tree.\n> > \n> > - The comment for trapsig is mentioning win98/Me/NT/2000/XP.\n> \n> Let's refresh the comment here, as per the following:\n> https://docs.microsoft.com/en-us/previous-versions/xdkz3x12(v=vs.140)\n\n * The Windows runtime docs at\n * http://msdn.microsoft.com/library/en-us/vclib/html/_crt_signal.asp\n...\n- *\t Win32 operating systems generate a new thread to specifically handle\n- *\t that interrupt. This can cause a single-thread application such as UNIX,\n- *\t to become multithreaded, resulting in unexpected behavior.\n+ * SIGINT is not supported for any Win32 application. When a CTRL+C interrupt\n+ * occurs, Win32 operating systems generate a new thread to specifically\n+ * handle that interrupt. This can cause a single-thread application, such as\n+ * one in UNIX, to become multithreaded and cause unexpected behavior.\n *\n * I have no idea how to handle this. (Strange they call UNIX an application!)\n * So this will need some testing on Windows.\n\nThe unmodified section just above is griping that \"Strange they call\nUNIX an application\". The expression \"application such as UNIX\" seems\nto correspond to the gripe. I tried to find the source of the phrase\nbut the above URL (.._crt_signal.asp) sent me \"We're sorry, the page\nyou requested cannot be found.\":(\n\nThank you for checking the items below.\n\n> > - We don't need the (only) caller site of IsWindows7OrGreater()?\n> \n> The compiled code can still run with Windows Server 2008. \n\nDo we let the new PG version run on already-unsupported platforms? 
If I'm\nnot missing anything Windows Server 2008 is already\nEnd-Of-Extended-Support (2020/1/14) along with Windows 7.\n\n> > - The comment for AddUserToTokenDacl() is mentioning \"XP/2K3,\n> > Vista/2008\".\n> \n> Keeping some context is still good here IMO.\n\nI'm fine with that.\n\n> > - InitializeLDAPConnection dynamically loads WLDAP32.DLL for Windows\n> > 2000. It could either be statically loaded or could be left as it\n> > is, but the comment seems to need a change in either case.\n> \n> Looks safer to me to keep it.\n\nIf it is still possible that the file is missing on Windows 8/ Server\n2012 or later, the comment should be updated accordingly.\n\n> > - The comment for IsoLocaleName mentioning Vista and Visual Studio\n> > 2012.\n> \n> It is good to keep some history in this context.\n\nAgreed.\n\n> > - install-windows.sgml is mentioning \"XP and later\" around line 117.\n> \n> But this still applies to XP, even if compilation is supported from\n> Windows 7.\n\nHmm. \"/xp\" can be the reason to preserve it.\n\nBy the way that phrase is considering Windows environment and perhaps\ncmd.exe. So the following description:\n\nhttps://www.postgresql.org/docs/current/install-windows-full.html\n> In recent SDK versions you can change the targeted CPU architecture,\n> build type, and target OS by using the setenv command, e.g. setenv\n> /x86 /release /xp to target Windows XP or later with a 32-bit\n> release build. See /? for other options to setenv. All commands\n> should be run from the src\\tools\\msvc directory.\n\nAFAICS we cannot use \"setenv command\" on cmd.exe, or no such command\nfound in the msvc directory.\n\n> > - installation.sgml is mentioning NT/2000/XP as platforms that don't\n> > support adduser/su, command.\n> \n> No objections to simplify that a bit.\n\nSorry for the ambiguity. 
I meant the following part\n\ninstallation.sgml\n> <para>\n> <productname>PostgreSQL</productname> can be expected to work on these operating\n> systems: Linux (all recent distributions), Windows (Win2000 SP4 and later),\n> FreeBSD, OpenBSD, NetBSD, macOS, AIX, HP/UX, and Solaris.\n> Other Unix-like systems may also work but are not currently\n> being tested. In most cases, all CPU architectures supported by\n\n(The coming version of) PostgreSQL doesn't support Win2000 SP4.\n\n> Attached is a simplified version. It is smaller than the previous\n> one, but that's already a good cut. I have also done some testing\n> with the service manager to check after pipe_read_line(), and that\n> works.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 18 Feb 2020 17:54:43 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 7:54 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Dec 19, 2019 at 08:09:45PM +0100, Juan José Santamaría Flecha\n> wrote:\n> > This is probably not an issue for the supported MSVC and their SDK, but\n> > current MinGW defaults to Windows 2003 [1]. So I would suggest a logic\n> like:\n> >\n> > #define WINNTVER(ver) ((ver) >> 16)\n> > #define NTDDI_VERSION 0x06000100\n> > #define _WIN32_WINNT WINNTVER(NTDDI_VERSION)\n> >\n> > [1]\n> >\n> https://github.com/mirror/mingw-w64/blob/master/mingw-w64-headers/include/sdkddkver.h\n>\n> You're right, thanks for the pointer. This is this part of the\n> header:\n> #define NTDDI_VERSION NTDDI_WS03\n>\n> Thinking more about that, the changes in win32.h are giving me cold\n> feet.\n>\n>\nMaybe this needs a specific thread, as it is not quite cruft but something\nthat will require maintenance.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 18 Feb 2020 12:05:42 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 9:56 AM Kyotaro Horiguchi <horikyota.ntt@gmail.com>\nwrote:\n\n>\n> https://www.postgresql.org/docs/current/install-windows-full.html\n> > In recent SDK versions you can change the targeted CPU architecture,\n> > build type, and target OS by using the setenv command, e.g. setenv\n> > /x86 /release /xp to target Windows XP or later with a 32-bit\n> > release build. See /? for other options to setenv. All commands\n> > should be run from the src\\tools\\msvc directory.\n>\n> AFAICS we cannot use \"setenv command\" on cmd.exe, or no such command\n> found in the msvc directory.\n>\n>\nI cannot point when SetEnv.bat was exactly dropped, probably Windows 7 SDK\nwas the place where it was included [1], so that needs to be updated.\n\nUsing VS2019 and VS2017 this would be done using VsDevCmd.bat [2], while\nVS2015 and VS2013 use VSVARS32.bat.\n\n[1]\nhttps://docs.microsoft.com/en-us/previous-versions/visualstudio/windows-sdk/ff660764(v=vs.100)?redirectedfrom=MSDN\n[2]\nhttps://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-options/how-to-set-environment-variables-for-the-visual-studio-command-line\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Tue, 18 Feb 2020 12:26:06 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 12:26 PM Juan José Santamaría Flecha <\njuanjo.santamaria@gmail.com> wrote:\n\n>\n> [Edit] ... probably Windows 7 SDK was the *last* place where it was\nincluded [1]...\n\n>\n> Regards,\n>\n> Juan José Santamaría Flecha\n>",
"msg_date": "Tue, 18 Feb 2020 16:46:04 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 12:05:42PM +0100, Juan José Santamaría Flecha wrote:\n> Maybe this needs a specific thread, as it is not quite cruft but something\n> that will require maintenance.\n\nMakes sense. I have discarded that part for now.\n--\nMichael",
"msg_date": "Wed, 19 Feb 2020 12:42:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 12:26:06PM +0100, Juan José Santamaría Flecha wrote:\n> I cannot point when SetEnv.bat was exactly dropped, probably Windows 7 SDK\n> was the place where it was included [1], so that needs to be updated.\n> \n> Using VS2019 and VS2017 this would be done using VsDevCmd.bat [2], while\n> VS2015 and VS2013 use VSVARS32.bat.\n\nWould you like to write a patch for that part?\n--\nMichael",
"msg_date": "Wed, 19 Feb 2020 12:49:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Tue, Feb 18, 2020 at 05:54:43PM +0900, Kyotaro Horiguchi wrote:\n> I understand that this is not for back-patching.\n\nCleanups don't go to back-branches.\n\n> At Tue, 18 Feb 2020 16:44:59 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> The unmodified section just above is griping that \"Strange they call\n> UNIX an application\". The expression \"application such as UNIX\" seems\n> corresponding to the gripe. I tried to find the source of the phrase\n> but the above URL (.._crt_signal.asp) sent me \"We're sorry, the page\n> you requested cannot be found.\":(\n\nYes, we should use that instead:\nhttps://docs.microsoft.com/en-us/cpp/c-runtime-library/reference/signal\n\n> Do we let the new PG version for already-unsupported platforms? If I\n> am not missing anything Windows Server 2008 is already\n> End-Of-Extended-Support (2020/1/14) along with Windows 7.\n\nWindows is known for keeping things backward compatible, so I don't\nsee any reason to not allow Postgres to run on those versions.\nOutdated of course, still they could be used at runtime even if they\ncannot compile the code.\n\n> By the way that phrase is considering Windows environment and perhaps\n> cmd.exe. So the following description:\n> \n> https://www.postgresql.org/docs/current/install-windows-full.html\n\nLet's tackle that as a separate patch as this is MSVC-dependent.\n\n>> <para>\n>> <productname>PostgreSQL</productname> can be expected to work on these operating\n>> systems: Linux (all recent distributions), Windows (Win2000 SP4 and later),\n>> FreeBSD, OpenBSD, NetBSD, macOS, AIX, HP/UX, and Solaris.\n>> Other Unix-like systems may also work but are not currently\n>> being tested. In most cases, all CPU architectures supported by\n> \n> (The coming version of) PostgreSQL doesn't support Win2000 SP4.\n\nRight, per the change for src/common/exec.c. I am wondering though if\nwe don't have more portability issues if we try to run Postgres on\nsomething older than XP as there have been many changes in the last\ncouple of years, and we have no more buildfarm members that old.\nAnyway, that's not worth the cost. For now I have applied to the tree\nthe smaller version as that's still a good cut.\n--\nMichael",
"msg_date": "Wed, 19 Feb 2020 13:22:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 4:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Feb 18, 2020 at 12:26:06PM +0100, Juan José Santamaría Flecha\n> wrote:\n> > I cannot point when SetEnv.bat was exactly dropped, probably Windows 7\n> SDK\n> > was the place where it was *last* included [1], so that needs to be\n> updated.\n> >\n> > Using VS2019 and VS2017 this would be done using VsDevCmd.bat [2], while\n> > VS2015 and VS2013 use VSVARS32.bat.\n>\n> Would you like to write a patch for that part?\n>\n\nPlease find attached a patch for this. I have tried to make it more version neutral.\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Wed, 19 Feb 2020 19:02:55 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Wed, Feb 19, 2020 at 07:02:55PM +0100, Juan José Santamaría Flecha wrote:\n> Please find a patched for so. I have tried to make it more version\n> neutral.\n\n+ You can change certain build options, such as the targeted CPU\n+ architecture, build type, and the selection of the SDK by using either\n+ <command>VSVARS32.bat</command> or <command>VsDevCmd.bat</command> depending\n+ on your <productname>Visual Studio</productname> release. All commands\n+ should be run from the <filename>src\\tools\\msvc</filename> directory.\n\nBoth commands have different ways of doing things, and don't really\nshine with their documentation, so it could save time to the reader to\nadd more explicit details of how to switch to the 32-bit mode, like\nwith \"VsDevCmd -arch=x86\". And I am not actually sure which\nenvironment variable to touch when using VSVARS32.bat or\nVSVARSALL.bat with MSVC <= 2017.\n--\nMichael",
"msg_date": "Thu, 20 Feb 2020 12:23:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Thu, Feb 20, 2020 at 4:23 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n>\n> + You can change certain build options, such as the targeted CPU\n> + architecture, build type, and the selection of the SDK by using either\n> + <command>VSVARS32.bat</command> or <command>VsDevCmd.bat</command>\n> depending\n> + on your <productname>Visual Studio</productname> release. All commands\n> + should be run from the <filename>src\\tools\\msvc</filename> directory.\n>\n\nI think more parts of this paragraph need tuning, like:\n\n\"In Visual Studio, start the Visual Studio Command Prompt. If you wish to\nbuild a 64-bit version, you must use the 64-bit version of the command, and\nvice versa.\"\n\nThis is what VsDevCmd.bat does, setting up the Visual Studio Command Prompt,\nbut from the command-line.\n\nAlso the following:\n\n\"In the Microsoft Windows SDK, start the CMD shell listed under the SDK on\nthe Start Menu.\"\n\nThis is not the case; you would be working in the CMD setup previously from\nVisual Studio.\n\n\n> And I am not actually sure which\n> environment variable to touch when using VSVARS32.bat or\n> VSVARSALL.bat with MSVC <= 2017.\n>\n\nActually, you can still use the vcvars% scripts to configure architecture,\nplatform_type and winsdk_version with current VS [1].\n\nBoth commands have different ways of doing things, and don't really\n> shine with their documentation\n>\n\nI hear you.\n\nPlease find attached a new version that addresses these issues.\n\n[1]\nhttps://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=vs-2019\n\nRegards,\n\nJuan José Santamaría Flecha",
"msg_date": "Thu, 20 Feb 2020 12:39:56 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Clean up some old cruft related to Windows"
},
{
"msg_contents": "On Thu, Feb 20, 2020 at 12:39:56PM +0100, Juan José Santamaría Flecha wrote:\n> Actually, you can still use the vcvars% scripts to configure architecture,\n> platform_type and winsdk_version with current VS [1].\n\nWe still support the build down to MSVC 2013, so I think that it is\ngood to mention the options available for 2013 and 2015 as well, as\nnoted here:\nhttps://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-options/how-to-set-environment-variables-for-the-visual-studio-command-line\n\"Visual Studio 2015 and earlier versions used VSVARS32.bat, not\nVsDevCmd.bat for the same purpose.\"\n\n> Please find attached a new version that addresses these issues.\n> \n> [1]\n> https://docs.microsoft.com/en-us/cpp/build/building-on-the-command-line?view=vs-2019\n\nThanks, applied after tweaking the text a bit. I have applied that\ndown to 12 where we support MSVC from 2013.\n--\nMichael",
"msg_date": "Fri, 21 Feb 2020 12:08:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Clean up some old cruft related to Windows"
}
] |
[
{
"msg_contents": "The publication exists but for some reason the function can't find it\n\nSELECT * FROM pg_logical_slot_get_binary_changes('debezium', NULL,\nNULL,'proto_version','1','publication_names','dbz_publication');\nERROR: publication \"dbz_publication\" does not exist\nCONTEXT: slot \"debezium\", output plugin \"pgoutput\", in the change\ncallback, associated LSN 0/307D8E8\npostgres=# select * from pg_publication;\n pubname | pubowner | puballtables | pubinsert | pubupdate |\npubdelete | pubtruncate\n-----------------+----------+--------------+-----------+-----------+-----------+-------------\n dbz_publication | 10 | t | t | t | t\n | t\n(1 row)\n\npostgres=# SELECT * FROM pg_logical_slot_get_binary_changes('debezium',\nNULL, NULL,'proto_version','1','publication_names','dbz_publication');\nERROR: publication \"dbz_publication\" does not exist\nCONTEXT: slot \"debezium\", output plugin \"pgoutput\", in the change\ncallback, associated LSN 0/307D8E8\n\nDave Cramer",
"msg_date": "Thu, 19 Dec 2019 11:59:05 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Thu, 19 Dec 2019 at 11:59, Dave Cramer <davecramer@gmail.com> wrote:\n\n> The publication exists but for some reason the function can't find it\n>\n> SELECT * FROM pg_logical_slot_get_binary_changes('debezium', NULL,\n> NULL,'proto_version','1','publication_names','dbz_publication');\n> ERROR: publication \"dbz_publication\" does not exist\n> CONTEXT: slot \"debezium\", output plugin \"pgoutput\", in the change\n> callback, associated LSN 0/307D8E8\n> postgres=# select * from pg_publication;\n> pubname | pubowner | puballtables | pubinsert | pubupdate |\n> pubdelete | pubtruncate\n>\n> -----------------+----------+--------------+-----------+-----------+-----------+-------------\n> dbz_publication | 10 | t | t | t | t\n> | t\n> (1 row)\n>\n> postgres=# SELECT * FROM pg_logical_slot_get_binary_changes('debezium',\n> NULL, NULL,'proto_version','1','publication_names','dbz_publication');\n> ERROR: publication \"dbz_publication\" does not exist\n> CONTEXT: slot \"debezium\", output plugin \"pgoutput\", in the change\n> callback, associated LSN 0/307D8E8\n>\n\nIt seems that if you drop the publication on an existing slot it needs to\nbe recreated. Is this expected behaviour\n\ndrop publication dbz_publication ;\nDROP PUBLICATION\npostgres=# create publication dbz_publication for all tables;\nCREATE PUBLICATION\npostgres=# SELECT * FROM pg_logical_slot_get_binary_changes('debezium',\nNULL, NULL,'proto_version','1','publication_names','dbz_publication');\nERROR: publication \"dbz_publication\" does not exist\nCONTEXT: slot \"debezium\", output plugin \"pgoutput\", in the change\ncallback, associated LSN 0/4324180\n\nDave Cramer",
"msg_date": "Thu, 19 Dec 2019 13:15:20 -0500",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On 2019-12-19 19:15, Dave Cramer wrote:\n> It seems that if you drop the publication on an existing slot it needs \n> to be recreated. Is this expected behaviour\n\nA publication is not associated with a slot. Only a subscription is \nassociated with a slot.\n\n> drop publication dbz_publication ;\n> DROP PUBLICATION\n> postgres=# create publication dbz_publication for all tables;\n> CREATE PUBLICATION\n> postgres=# SELECT * FROM pg_logical_slot_get_binary_changes('debezium', \n> NULL, NULL,'proto_version','1','publication_names','dbz_publication');\n> ERROR: publication \"dbz_publication\" does not exist\n> CONTEXT: slot \"debezium\", output plugin \"pgoutput\", in the change \n> callback, associated LSN 0/4324180\n\nThis must be something particular to Debezium.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Dec 2019 19:19:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 07:19:56PM +0100, Peter Eisentraut wrote:\n>On 2019-12-19 19:15, Dave Cramer wrote:\n>>It seems that if you drop the publication on an existing slot it \n>>needs to be recreated. Is this expected behaviour\n>\n>A publication is not associated with a slot. Only a subscription is \n>associated with a slot.\n>\n>>drop publication dbz_publication ;\n>>DROP PUBLICATION\n>>postgres=# create publication dbz_publication for all tables;\n>>CREATE PUBLICATION\n>>postgres=# SELECT * FROM \n>>pg_logical_slot_get_binary_changes('debezium', NULL, \n>>NULL,'proto_version','1','publication_names','dbz_publication');\n>>ERROR: publication \"dbz_publication\" does not exist\n>>CONTEXT: slot \"debezium\", output plugin \"pgoutput\", in the change \n>>callback, associated LSN 0/4324180\n>\n>This must be something particular to Debezium.\n>\n\nYeah, I don't see this error message anywhere in our sources on 11 or\n12, so perhaps debezium does something funny? It's not clear to me\nwhere, though, as this suggests it uses the pgoutput module.\n\nWhile trying to reproduce this I however ran into a related issue with\npgoutput/pg_logical_slot_get_binary_changes. If you call the function\nrepeatedly (~10x) you'll get an error like this:\n\nFATAL: out of relcache_callback_list slots\nCONTEXT: slot \"slot\", output plugin \"pgoutput\", in the startup callback\nserver closed the connection unexpectedly\n\tThis probably means the server terminated abnormally\n\tbefore or while processing the request.\n\nThe reason is very simple - each call executes pgoutput_startup, which\ndoes CacheRegisterRelcacheCallback in init_rel_sync_cache. 
And we do\nthis on each pg_logical_slot_get_binary_changes() call and never remove\nthe callbacks, so we simply run out of MAX_RELCACHE_CALLBACKS slots.\n\nNot sure if this is a known issue/behavior, but it seems a bit annoying\nand possibly related to the issue reported by Dave.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 20 Dec 2019 02:39:11 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Thu, 19 Dec 2019 19:19:56 +0100\nPeter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2019-12-19 19:15, Dave Cramer wrote:\n> > It seems that if you drop the publication on an existing slot it needs \n> > to be recreated. Is this expected behaviour \n> \n> A publication is not associated with a slot. Only a subscription is \n> associated with a slot.\n\nCouldn't it be the same scenario I reported back in october? See:\n\n Subject: Logical replication dead but synching\n Date: Thu, 10 Oct 2019 11:57:52 +0200\n\n\n\n",
"msg_date": "Fri, 20 Dec 2019 10:55:25 +0100",
"msg_from": "Jehan-Guillaume de Rorthais <jgdr@dalibo.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "I've been running into a similar issue and am a little puzzled by it,\nespecially since it survives restarts.\n\nOn Fri, Dec 20, 2019 at 2:39 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> Yeah, I don't see this error message anywhere in our sources on 11 or\n> 12, so perhaps debezium does something funny? It's not clear to me\n> where, though, as this suggests it uses the pgoutput module.\n\nThe error message comes from GetPublicationByName and the context is\nadded by output_plugin_error_callback in logical.c. Stack trace of\nwhere the error occurs below.\n\n# SELECT * FROM pg_publication;\n pubname | pubowner | puballtables | pubinsert | pubupdate |\npubdelete | pubtruncate\n----------------+----------+--------------+-----------+-----------+-----------+-------------\n migration_pub | 10 | f | t | t | t | t\n(1 row)\n\n# SELECT * FROM pg_replication_slots ;\n slot_name | plugin | slot_type | datoid | database | temporary\n| active | active_pid | xmin | catalog_xmin | restart_lsn |\nconfirmed_flush_lsn\n----------------+----------+-----------+--------+----------+-----------+--------+------------+------+--------------+-------------+---------------------\n migration_slot | pgoutput | logical | 13121 | postgres | f\n| f | | | 17153 | 0/CDFC840 | 0/CDFC878\n(1 row)\n\n# SELECT * FROM pg_logical_slot_get_binary_changes('migration_slot',\nNULL, NULL,'proto_version','1','publication_names','migration_pub');\nERROR: publication \"migration_pub\" does not exist\nCONTEXT: slot \"migration_slot\", output plugin \"pgoutput\", in the\nchange callback, associated LSN 0/CDFC878\n\n#0 errstart (elevel=elevel@entry=20,\nfilename=filename@entry=0x5581958a6c70 \"pg_publication.c\",\nlineno=lineno@entry=401,\n funcname=funcname@entry=0x5581958a6ea0 <__func__.24991>\n\"GetPublicationByName\", domain=domain@entry=0x0) at elog.c:251\n#1 0x00005581954771e5 in GetPublicationByName (pubname=0x558196d107a0\n\"migration_pub\", missing_ok=missing_ok@entry=false) 
at\npg_publication.c:401\n#2 0x00007f77ba1cd704 in LoadPublications (pubnames=<optimized out>)\nat pgoutput.c:467\n#3 0x00007f77ba1cd7e3 in get_rel_sync_entry\n(data=data@entry=0x558196cedee8, relid=<optimized out>) at\npgoutput.c:559\n#4 0x00007f77ba1cdb52 in pgoutput_change (ctx=0x558196d7b4f8,\ntxn=<optimized out>, relation=0x7f77ba1e67c8, change=0x558196cdbab8)\nat pgoutput.c:315\n#5 0x000055819566a2e6 in change_cb_wrapper (cache=<optimized out>,\ntxn=<optimized out>, relation=<optimized out>, change=<optimized out>)\nat logical.c:747\n#6 0x0000558195675785 in ReorderBufferCommit (rb=0x558196d35d38,\nxid=xid@entry=17153, commit_lsn=215994160, end_lsn=<optimized out>,\n commit_time=commit_time@entry=662061745906576,\norigin_id=origin_id@entry=0, origin_lsn=0) at reorderbuffer.c:1592\n#7 0x0000558195667407 in DecodeCommit (ctx=ctx@entry=0x558196d7b4f8,\nbuf=buf@entry=0x7ffd61faae60, parsed=parsed@entry=0x7ffd61faacf0,\nxid=17153) at decode.c:641\n#8 0x00005581956675a0 in DecodeXactOp (ctx=0x558196d7b4f8,\nbuf=buf@entry=0x7ffd61faae60) at decode.c:249\n#9 0x00005581956684cb in LogicalDecodingProcessRecord\n(ctx=ctx@entry=0x558196d7b4f8, record=<optimized out>) at decode.c:117\n#10 0x000055819566c108 in pg_logical_slot_get_changes_guts\n(fcinfo=0x7ffd61fab120, confirm=confirm@entry=true,\nbinary=binary@entry=true) at logicalfuncs.c:309\n#11 0x000055819566c35d in pg_logical_slot_get_binary_changes\n(fcinfo=<optimized out>) at logicalfuncs.c:391\n\ncheers,\nMarco\n\n\n",
"msg_date": "Thu, 24 Dec 2020 12:50:23 +0100",
"msg_from": "Marco Slot <marco@citusdata.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On 12/24/20 12:50 PM, Marco Slot wrote:\n> I've been running into a similar issue and am a little puzzled by it,\n> especially since it survives restarts.\n> \n\nInteresting. Which PostgreSQL version are you using? Any idea how to \nreproduce it? Were there any failures right before the issue appeared?\n\nI wonder if this might be a case of index corruption. Can you try \nforcing an index scan on pg_publication?\n\n SET enable_seqscan = false;\n SET enable_bitmapscan = off;\n SELECT * FROM pg_publication WHERE pubname = 'migration_pub';\n\nAlso, it might be helpful to know why get_rel_sync_entry ended up \ncalling LoadPublications - did we just create the entry?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 24 Dec 2020 13:38:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "> On 20 Dec 2019, at 06:39, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> \n> While trying to reproduce this I however ran into a related issue with\n> pgoutput/pg_logical_slot_get_binary_changes. If you call the function\n> repeatedly (~10x) you'll get an error like this:\n> \n> FATAL: out of relcache_callback_list slots\n> CONTEXT: slot \"slot\", output plugin \"pgoutput\", in the startup callback\n> server closed the connection unexpectedly\n> \tThis probably means the server terminated abnormally\n> \tbefore or while processing the request.\n> \n> The reason is very simple - each call executes pgoutput_startup, which\n> does CacheRegisterRelcacheCallback in init_rel_sync_cache. And we do\n> this on each pg_logical_slot_get_binary_changes() call and never remove\n> the callbacks, so we simply run out of MAX_RELCACHE_CALLBACKS slots.\n> \n> Not sure if this is a known issue/behavior, but it seems a bit annoying\n> and possibly related to the issue reported by Dave.\n\nSorry for bumping an old thread.\nI was involved in troubleshooting logical replication recently, and found out that it sometimes has really annoying error reporting.\nWhile the source of the problem was allegedly a low max_replication_slots, users were getting only this error about relcache_callback_list.\n\nMaybe we could fix this particular error by deduplicating callbacks?\n\n\nBest regards, Andrey Borodin.",
"msg_date": "Fri, 23 Jul 2021 16:02:43 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "Reviving this thread\n\n2021-08-10 19:05:09.096 UTC [3738] LOG: logical replication apply\nworker for subscription \"sub_mycluster_alltables\" has started\n2021-08-10 19:05:09.107 UTC [3739] LOG: logical replication table\nsynchronization worker for subscription \"sub_mycluster_alltables\",\ntable \"t_random\" has started\n2021-08-10 19:05:12.222 UTC [3739] LOG: logical replication table\nsynchronization worker for subscription \"sub_mycluster_alltables\",\ntable \"t_random\" has finished\n2021-08-10 19:05:14.806 UTC [3738] ERROR: could not receive data from\nWAL stream: ERROR: publication \"sub_mycluster_alltables\" does not\nexist\n CONTEXT: slot \"sub_mycluster_alltables\", output plugin\n\"pgoutput\", in the change callback, associated LSN 0/4015DF0\n2021-08-10 19:05:14.811 UTC [175] LOG: background worker \"logical\nreplication worker\" (PID 3738) exited with exit code 1\n\n\nselect * from pg_publication;\n-[ RECORD 1 ]+------------------------\noid | 16415\npubname | sub_mycluster_alltables\npubowner | 10\npuballtables | t\npubinsert | t\npubupdate | t\npubdelete | t\npubtruncate | t\n\n\nselect * from pg_replication_slots;\n-[ RECORD 1 ]-------+--------------------------------\nslot_name | mycluster_cjvq_68cf55677c_6vgcf\nplugin |\nslot_type | physical\ndatoid |\ndatabase |\ntemporary | f\nactive | t\nactive_pid | 433\nxmin |\ncatalog_xmin |\nrestart_lsn | 0/D000000\nconfirmed_flush_lsn |\n-[ RECORD 2 ]-------+--------------------------------\nslot_name | sub_mycluster_alltables\nplugin | pgoutput\nslot_type | logical\ndatoid | 16395\ndatabase | mycluster\ntemporary | f\nactive | t\nactive_pid | 8799\nxmin |\ncatalog_xmin | 500\nrestart_lsn | 0/40011C0\n\nconfirmed_flush_lsn | 0/40011C0\n\n\nI'm at a loss as to where to even look at this point.\n\nDave",
"msg_date": "Tue, 10 Aug 2021 15:47:35 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 1:18 AM Dave Cramer <davecramer@gmail.com> wrote:\n>\n> Reviving this thread\n>\n> 2021-08-10 19:05:09.096 UTC [3738] LOG: logical replication apply worker for subscription \"sub_mycluster_alltables\" has started\n> 2021-08-10 19:05:09.107 UTC [3739] LOG: logical replication table synchronization worker for subscription \"sub_mycluster_alltables\", table \"t_random\" has started\n> 2021-08-10 19:05:12.222 UTC [3739] LOG: logical replication table synchronization worker for subscription \"sub_mycluster_alltables\", table \"t_random\" has finished\n> 2021-08-10 19:05:14.806 UTC [3738] ERROR: could not receive data from WAL stream: ERROR: publication \"sub_mycluster_alltables\" does not exist\n> CONTEXT: slot \"sub_mycluster_alltables\", output plugin \"pgoutput\", in the change callback, associated LSN 0/4015DF0\n> 2021-08-10 19:05:14.811 UTC [175] LOG: background worker \"logical replication worker\" (PID 3738) exited with exit code 1\n>\n>\n> select * from pg_publication;\n> -[ RECORD 1 ]+------------------------\n> oid | 16415\n> pubname | sub_mycluster_alltables\n> pubowner | 10\n> puballtables | t\n> pubinsert | t\n> pubupdate | t\n> pubdelete | t\n> pubtruncate | t\n>\n\nBy any chance, did you dropped and recreated this publication as\nmentioned in your first email? If so, I think this can happen because\nof our use of historical snapshots to consult system catalogs.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Aug 2021 16:54:02 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Wed, 11 Aug 2021 at 07:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Aug 11, 2021 at 1:18 AM Dave Cramer <davecramer@gmail.com> wrote:\n> >\n> > Reviving this thread\n> >\n> > 2021-08-10 19:05:09.096 UTC [3738] LOG: logical replication apply\n> worker for subscription \"sub_mycluster_alltables\" has started\n> > 2021-08-10 19:05:09.107 UTC [3739] LOG: logical replication table\n> synchronization worker for subscription \"sub_mycluster_alltables\", table\n> \"t_random\" has started\n> > 2021-08-10 19:05:12.222 UTC [3739] LOG: logical replication table\n> synchronization worker for subscription \"sub_mycluster_alltables\", table\n> \"t_random\" has finished\n> > 2021-08-10 19:05:14.806 UTC [3738] ERROR: could not receive data from\n> WAL stream: ERROR: publication \"sub_mycluster_alltables\" does not exist\n> > CONTEXT: slot \"sub_mycluster_alltables\", output plugin\n> \"pgoutput\", in the change callback, associated LSN 0/4015DF0\n> > 2021-08-10 19:05:14.811 UTC [175] LOG: background worker \"logical\n> replication worker\" (PID 3738) exited with exit code 1\n> >\n> >\n> > select * from pg_publication;\n> > -[ RECORD 1 ]+------------------------\n> > oid | 16415\n> > pubname | sub_mycluster_alltables\n> > pubowner | 10\n> > puballtables | t\n> > pubinsert | t\n> > pubupdate | t\n> > pubdelete | t\n> > pubtruncate | t\n> >\n>\n> By any chance, did you dropped and recreated this publication as\n> mentioned in your first email? 
If so, I think this can happen because\n> of our use of historical snapshots to consult system catalogs.\n>\n\nIn this case, no.\n\nI am suspecting this error comes from pgoutput though.\n\nDave",
"msg_date": "Wed, 11 Aug 2021 07:27:37 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 4:57 PM Dave Cramer <davecramer@gmail.com> wrote:\n>\n> On Wed, 11 Aug 2021 at 07:24, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> On Wed, Aug 11, 2021 at 1:18 AM Dave Cramer <davecramer@gmail.com> wrote:\n>> >\n>> > Reviving this thread\n>> >\n>> > 2021-08-10 19:05:09.096 UTC [3738] LOG: logical replication apply worker for subscription \"sub_mycluster_alltables\" has started\n>> > 2021-08-10 19:05:09.107 UTC [3739] LOG: logical replication table synchronization worker for subscription \"sub_mycluster_alltables\", table \"t_random\" has started\n>> > 2021-08-10 19:05:12.222 UTC [3739] LOG: logical replication table synchronization worker for subscription \"sub_mycluster_alltables\", table \"t_random\" has finished\n>> > 2021-08-10 19:05:14.806 UTC [3738] ERROR: could not receive data from WAL stream: ERROR: publication \"sub_mycluster_alltables\" does not exist\n>> > CONTEXT: slot \"sub_mycluster_alltables\", output plugin \"pgoutput\", in the change callback, associated LSN 0/4015DF0\n>> > 2021-08-10 19:05:14.811 UTC [175] LOG: background worker \"logical replication worker\" (PID 3738) exited with exit code 1\n>> >\n>> >\n>> > select * from pg_publication;\n>> > -[ RECORD 1 ]+------------------------\n>> > oid | 16415\n>> > pubname | sub_mycluster_alltables\n>> > pubowner | 10\n>> > puballtables | t\n>> > pubinsert | t\n>> > pubupdate | t\n>> > pubdelete | t\n>> > pubtruncate | t\n>> >\n>>\n>> By any chance, did you dropped and recreated this publication as\n>> mentioned in your first email? If so, I think this can happen because\n>> of our use of historical snapshots to consult system catalogs.\n>\n>\n> In this case, no.\n>\n> I am suspecting this error comes from pgoutput though.\n>\n\nI think it is and the context is generated via\noutput_plugin_error_callback. Is this reproducible for you and if so,\ncan you share a test case or some steps to reproduce this? 
Does this\nwork and suddenly start giving errors or it happens the very first\ntime you tried to set up publication/subscription? I think some more\ndetails are required about your setup and steps to analyze this\nproblem. You might want to check publication-side logs but not sure if\nget any better clue there.\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Wed, 11 Aug 2021 17:06:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Wed, 11 Aug 2021 at 07:37, Amit Kapila <amit.kapila16@gmail.com> wrote:\n\n> On Wed, Aug 11, 2021 at 4:57 PM Dave Cramer <davecramer@gmail.com> wrote:\n> >\n> > On Wed, 11 Aug 2021 at 07:24, Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> >>\n> >> On Wed, Aug 11, 2021 at 1:18 AM Dave Cramer <davecramer@gmail.com>\n> wrote:\n> >> >\n> >> > Reviving this thread\n> >> >\n> >> > 2021-08-10 19:05:09.096 UTC [3738] LOG: logical replication apply\n> worker for subscription \"sub_mycluster_alltables\" has started\n> >> > 2021-08-10 19:05:09.107 UTC [3739] LOG: logical replication table\n> synchronization worker for subscription \"sub_mycluster_alltables\", table\n> \"t_random\" has started\n> >> > 2021-08-10 19:05:12.222 UTC [3739] LOG: logical replication table\n> synchronization worker for subscription \"sub_mycluster_alltables\", table\n> \"t_random\" has finished\n> >> > 2021-08-10 19:05:14.806 UTC [3738] ERROR: could not receive data\n> from WAL stream: ERROR: publication \"sub_mycluster_alltables\" does not\n> exist\n> >> > CONTEXT: slot \"sub_mycluster_alltables\", output plugin\n> \"pgoutput\", in the change callback, associated LSN 0/4015DF0\n> >> > 2021-08-10 19:05:14.811 UTC [175] LOG: background worker \"logical\n> replication worker\" (PID 3738) exited with exit code 1\n> >> >\n> >> >\n> >> > select * from pg_publication;\n> >> > -[ RECORD 1 ]+------------------------\n> >> > oid | 16415\n> >> > pubname | sub_mycluster_alltables\n> >> > pubowner | 10\n> >> > puballtables | t\n> >> > pubinsert | t\n> >> > pubupdate | t\n> >> > pubdelete | t\n> >> > pubtruncate | t\n> >> >\n> >>\n> >> By any chance, did you dropped and recreated this publication as\n> >> mentioned in your first email? 
If so, I think this can happen because\n> >> of our use of historical snapshots to consult system catalogs.\n> >\n> >\n> > In this case, no.\n> >\n> > I am suspecting this error comes from pgoutput though.\n> >\n>\n> I think it is and the context is generated via\n> output_plugin_error_callback. Is this reproducible for you and if so,\n> can you share a test case or some steps to reproduce this? Does this\n> work and suddenly start giving errors or it happens the very first\n> time you tried to set up publication/subscription? I think some more\n> details are required about your setup and steps to analyze this\n> problem. You might want to check publication-side logs but not sure if\n> get any better clue there.\n>\n\nIn this case I am the messenger. I will try to get a repeatable test case.\n\nDave",
"msg_date": "Wed, 11 Aug 2021 07:40:20 -0400",
"msg_from": "Dave Cramer <davecramer@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "On Wed, Aug 11, 2021 at 1:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> I think it is and the context is generated via\n> output_plugin_error_callback. Is this reproducible for you and if so,\n> can you share a test case or some steps to reproduce this? Does this\n> work and suddenly start giving errors or it happens the very first\n> time you tried to set up publication/subscription? I think some more\n> details are required about your setup and steps to analyze this\n> problem. You might want to check publication-side logs but not sure if\n> get any better clue there.\n\nThis seems to regularly reproduce the issue on PostgreSQL 14.4:\n\ndrop subscription if exists local_sub;\ndrop publication if exists local_pub;\ndrop table if exists local;\n\nselect pg_create_logical_replication_slot('test','pgoutput');\ncreate table local (x int, y int);\ninsert into local values (1,2);\ncreate publication local_pub for table local;\ncreate subscription local_sub connection 'host=localhost port=5432'\npublication local_pub with (create_slot = false, slot_name = 'test',\ncopy_data = false);\n\nThe log on the publisher then repeatedly shows:\n2022-08-04 10:46:56.140 CEST [12785] ERROR: publication \"local_pub\"\ndoes not exist\n2022-08-04 10:46:56.140 CEST [12785] CONTEXT: slot \"test\", output\nplugin \"pgoutput\", in the change callback, associated LSN 8/6C01A270\n2022-08-04 10:46:56.140 CEST [12785] STATEMENT: START_REPLICATION\nSLOT \"test\" LOGICAL 0/0 (proto_version '2', publication_names\n'\"local_pub\"')\n\n(fails in the same way when setting up the subscription on a different node)\n\nThe local_pub does appear in pg_publication, but it seems a bit like\nthe change_cb is using an old snapshot when reading from the catalog\nin GetPublicationByName.\n\ncheers,\nMarco\n\n\n",
"msg_date": "Thu, 4 Aug 2022 10:56:45 +0200",
"msg_from": "Marco Slot <marco.slot@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
},
{
"msg_contents": "At Thu, 4 Aug 2022 10:56:45 +0200, Marco Slot <marco.slot@gmail.com> wrote in \n> On Wed, Aug 11, 2021 at 1:37 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> > I think it is and the context is generated via\n> > output_plugin_error_callback. Is this reproducible for you and if so,\n> > can you share a test case or some steps to reproduce this? Does this\n> > work and suddenly start giving errors or it happens the very first\n> > time you tried to set up publication/subscription? I think some more\n> > details are required about your setup and steps to analyze this\n> > problem. You might want to check publication-side logs but not sure if\n> > get any better clue there.\n> \n> This seems to regularly reproduce the issue on PostgreSQL 14.4:\n> \n> drop subscription if exists local_sub;\n> drop publication if exists local_pub;\n> drop table if exists local;\n> \n> select pg_create_logical_replication_slot('test','pgoutput');\n> create table local (x int, y int);\n> insert into local values (1,2);\n> create publication local_pub for table local;\n> create subscription local_sub connection 'host=localhost port=5432'\n> publication local_pub with (create_slot = false, slot_name = 'test',\n> copy_data = false);\n\n> The local_pub does appear in pg_publication, but it seems a bit like\n> the change_cb is using an old snapshot when reading from the catalog\n> in GetPublicationByName.\n\nYes. So the slot should be created *after* the publication is\ncreated. A discussion [1] was held on allowing missing publications\nin such a case but it has not reached a conclusion.\n\n[1] https://www.postgresql.org/message-id/CAA4eK1LwQAEPJMTwVe3UYODeNMkK2QHf-WZF5aXp5ZcjDRcrUA%40mail.gmail.com\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Fri, 05 Aug 2022 13:09:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: How is this possible \"publication does not exist\""
}
] |
[
{
"msg_contents": "macOS does not support the socket option TCP_USER_TIMEOUT. Yet, I can \nstart a server with postgres -D ... --tcp-user-timeout=100 without a \ndiagnostic. Only when I connect I get a log entry\n\nLOG: setsockopt(TCP_USER_TIMEOUT) not supported\n\nPerhaps the logic in pq_settcpusertimeout() should be changed like this:\n\n int\n pq_settcpusertimeout(int timeout, Port *port)\n {\n+#ifdef TCP_USER_TIMEOUT\n if (port == NULL || IS_AF_UNIX(port->laddr.addr.ss_family))\n return STATUS_OK;\n\n-#ifdef TCP_USER_TIMEOUT\n if (timeout == port->tcp_user_timeout)\n return STATUS_OK;\n\nSo that the #else branch that is supposed to check this will also be run \nin the postmaster (where port == NULL).\n\nOr perhaps there should be a separate GUC check hook that just does\n\n#ifndef TCP_USER_TIMEOUT\n if (val != 0)\n return false;\n#endif\n return true;\n\nThe same considerations apply to the various TCP keepalive settings, but \nsince those are widely supported the unsupported code paths probably \nhaven't gotten much attention.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 19 Dec 2019 19:26:19 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "TCP option assign hook doesn't work well if option not supported"
},
{
"msg_contents": "On Thu, Dec 19, 2019 at 07:26:19PM +0100, Peter Eisentraut wrote:\n> macOS does not support the socket option TCP_USER_TIMEOUT. Yet, I can start\n> a server with postgres -D ... --tcp-user-timeout=100 without a diagnostic.\n> Only when I connect I get a log entry\n> \n> LOG: setsockopt(TCP_USER_TIMEOUT) not supported\n\nYeah, this choice was made to be consistent with what we have for the\nother TCP parameters.\n\n> So that the #else branch that is supposed to check this will also be run in\n> the postmaster (where port == NULL).\n\nHm. That would partially revisit cc3bda3. No actual objections from\nme to generate a LOG when starting the postmaster as that won't be\ninvasive, though I think that it should be done consistently for all\nthe TCP parameters.\n\n> Or perhaps there should be a separate GUC check hook that just does\n> \n> #ifndef TCP_USER_TIMEOUT\n> if (val != 0)\n> return false;\n> #endif\n> return true;\n> \n> The same considerations apply to the various TCP keepalive settings, but\n> since those are widely supported the unsupported code paths probably haven't\n> gotten much attention.\n\nYeah, Windows does not support tcp_keepalives_count for one, so\nsetting it in postgresql.conf generates the same LOG message for each\nconnection attempt.\n--\nMichael",
"msg_date": "Mon, 23 Dec 2019 11:35:50 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TCP option assign hook doesn't work well if option not supported"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile testing something unrelated, Tomas reported[1] that he could\nmake a parallel worker ignore a SIGTERM and hang forever in\nConditionVariableSleep(). I looked into this and realised that it's\nmore likely in master. Commit 1321509f refactored the latch wait loop\nto look a little bit more like other examples* by putting\nCHECK_FOR_INTERRUPTS() after ResetLatch(), whereas previously it was\nat the top of the loop. ConditionVariablePrepareToSleep() was\neffectively relying on that order when it reset the latch without its\nown CFI().\n\nThe bug goes back to the introduction of CVs however, because there\nwas no guarantee that you'd ever reach ConditionVariableSleep(). You\ncould call ConditionVariablePrepareToSleep(), test your condition,\ndecide you're done, then call ConditionVariableCancelSleep(), then\nreach some other WaitLatch() with no intervening CFI(). It might be\nhard to find a code path that actually does that without a\ncoincidental CFI() to save you, but at least in theory the problem\nexists.\n\nI think we should probably just remove the unusual ResetLatch() call,\nrather than adding a CFI(). See attached. Thoughts?\n\n*It can't quite be exactly like the two patterns shown in latch.h,\nnamely { Reset, Test, Wait } and { Test, Wait, Reset }, because the\nreal test is external to this function; we have the other possible\nrotation { Wait, Reset, Test }, and this code is only run if the\nclient's test failed. Really it's a nested loop, with the outer loop\nbelonging to the caller, so we have { Test', { Wait, Reset, Test } }.\nHowever, CHECK_FOR_INTERRUPTS() also counts as a test of work to do,\nand AFAICS it always belongs between Reset and Wait, no matter how far\nyou rotate the loop. I wonder if latch.h should mention that.\n\n[1] https://www.postgresql.org/message-id/20191217232124.3dtrycatgfm6oxxb%40development",
"msg_date": "Fri, 20 Dec 2019 12:05:34 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Condition variables vs interrupts"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 12:05:34PM +1300, Thomas Munro wrote:\n\n> I think we should probably just remove the unusual ResetLatch() call,\n> rather than adding a CFI(). See attached. Thoughts?\n\nI agree: removing the ResetLatch() and having the wait event code deal \nwith it is a better way to go. I wonder why the reset was done in the \nfirst place?\n\n-- \nShawn Debnath\nAmazon Web Services (AWS)\n\n\n",
"msg_date": "Fri, 20 Dec 2019 17:09:46 -0800",
"msg_from": "Shawn Debnath <sdn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: Condition variables vs interrupts"
},
{
"msg_contents": "On Sat, Dec 21, 2019 at 2:10 PM Shawn Debnath <sdn@amazon.com> wrote:\n> On Fri, Dec 20, 2019 at 12:05:34PM +1300, Thomas Munro wrote:\n> > I think we should probably just remove the unusual ResetLatch() call,\n> > rather than adding a CFI(). See attached. Thoughts?\n>\n> I agree: removing the ResetLatch() and having the wait event code deal\n> with it is a better way to go. I wonder why the reset was done in the\n> first place?\n\nThanks for the review. I was preparing to commit this today, but I\nthink I've spotted another latch protocol problem in the stable\nbranches only and I'd like to get some more eyeballs on it. I bet it's\nreally hard to hit, but ConditionVariableSleep()'s return path (ie\nwhen the CV has been signalled) forgets that the latch is\nmultiplexed and latches are merged: just because it was set by\nConditionVariableSignal() doesn't mean it wasn't *also* set by die()\nor some other interrupt, and yet at the point we return, we've reset\nthe latch and not run CFI(), and there's nothing to say the caller\nwon't then immediately wait on the latch in some other code path, and\nsleep like a log despite the earlier delivery of (say) SIGTERM. If\nI'm right about that, I think the solution is to move that CFI down in\nthe stable branches (which you already did for master in commit\n1321509f).",
"msg_date": "Tue, 24 Dec 2019 15:10:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Condition variables vs interrupts"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 3:10 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Sat, Dec 21, 2019 at 2:10 PM Shawn Debnath <sdn@amazon.com> wrote:\n> > On Fri, Dec 20, 2019 at 12:05:34PM +1300, Thomas Munro wrote:\n> > > I think we should probably just remove the unusual ResetLatch() call,\n> > > rather than adding a CFI(). See attached. Thoughts?\n> >\n> > I agree: removing the ResetLatch() and having the wait event code deal\n> > with it is a better way to go. I wonder why the reset was done in the\n> > first place?\n\nI have pushed this on master only.\n\n> Thanks for the review. I was preparing to commit this today, but I\n> think I've spotted another latch protocol problem in the stable\n> branches only and I'd like to get some more eyeballs on it. I bet it's\n> really hard to hit, but ConditionVariableSleep()'s return path (ie\n> when the CV has been signalled) forgets that the latch is\n> multiplexed and latches are merged: just because it was set by\n> ConditionVariableSignal() doesn't mean it wasn't *also* set by die()\n> or some other interrupt, and yet at the point we return, we've reset\n> the latch and not run CFI(), and there's nothing to say the caller\n> won't then immediately wait on the latch in some other code path, and\n> sleep like a log despite the earlier delivery of (say) SIGTERM. If\n> I'm right about that, I think the solution is to move that CFI down in\n> the stable branches (which you already did for master in commit\n> 1321509f).\n\nI'm not going to back-patch this (ie the back-branches version from my\nprevious email) without more discussion, but I still think it's subtly\nbroken.\n\n\n",
"msg_date": "Tue, 28 Jan 2020 15:31:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Condition variables vs interrupts"
}
] |
[
{
"msg_contents": "More about expressions always false.\n1. /src/backend/executor/execExprInterp.c\nndims can never be <= 0, because ndims is always incremented (+1)\n2. src/backend/utils/adt/formatting.c\nresult is declared long. Comparison with int limits is always false.\n3. src/backend/utils/adt/jsonfuncs.c\nlindex is declared long. Comparison with int limits is always false.\n4. src/backend/utils/adt/network.c\nip_addrsize is a macro and always returns 4 or 16\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 19 Dec 2019 20:29:39 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix expressions always false"
},
{
"msg_contents": "Ranier Vilela <ranier.vf@gmail.com> writes:\n\n> More about expressions always false.\n> 2. src/backend/utils/adt/formatting.c\n> result is declared long. Comparison with int limits is always false.\n> 3. src/backend/utils/adt/jsonfuncs.c\n> lindex is declared long. . Comparison with int limits is always false.\n\n1) long is 64 bits on Unix-like platforms\n2) checking a long against LONG_MIN/LONG_MAX is _definitely_ pointless\n3) it's being cast to an int for the from_char_set_int() call below\n\nPlease take your time to read the whole context of the code you're\nchanging, and consider other platforms than just Windows.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen\n\n\n",
"msg_date": "Thu, 19 Dec 2019 23:57:59 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix expressions always false"
},
{
"msg_contents": ">1) long is 64 bits on Unix-like platforms\n>2) checking a long against LONG_MIN/LONG_MAX is _definitely_ pointless\n>3) it's being cast to an int for the from_char_set_int() call below\n>Please take your time to read the whole context of the code you're\n>changing, and consider other platforms than just Windows.\nThank you for pointing me to this.\n\nregards,\nRanier Vilela\n\nOn Thu, 19 Dec 2019 at 20:58, Dagfinn Ilmari Mannsåker <\nilmari@ilmari.org> wrote:\n\n> Ranier Vilela <ranier.vf@gmail.com> writes:\n>\n> > More about expressions always false.\n> > 2. src/backend/utils/adt/formatting.c\n> > result is declared long. Comparison with int limits is always false.\n> > 3. src/backend/utils/adt/jsonfuncs.c\n> > lindex is declared long. . Comparison with int limits is always false.\n>\n> 1) long is 64 bits on Unix-like platforms\n> 2) checking a long against LONG_MIN/LONG_MAX is _definitely_ pointless\n> 3) it's being cast to an int for the from_char_set_int() call below\n>\n> Please take your time to read the whole context of the code you're\n> changing, and consider other platforms than just Windows.\n>\n> - ilmari\n> --\n> \"A disappointingly low fraction of the human race is,\n> at any given time, on fire.\" - Stig Sandbeck Mathisen\n>",
"msg_date": "Thu, 19 Dec 2019 21:42:06 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] Fix expressions always false"
}
] |
[
{
"msg_contents": "Continuing on always false expressions.\nThere are three difficult cases, whose solutions need to be well\nthought out.\nThis is not a case of simply removing the expressions; we have to\nbe sure.\n\nFirst case:\nsrc \\ backend \\ executor \\ nodeSubplan.c (line 507)\n\nif (node-> hashtable)\n\nnode-> hastable is assigned with NULL at line 498, so the test will always\nfail.\n\nSecond case:\nHere the case is similar, but worse.\n\nsrc \\ backend \\ executor \\ nodeSubplan.c (line 535)\nif (node-> hashnulls)\n ResetTupleHashTable (node-> hashtable);\n\nnode-> hashnulls is assigned with NULL at line 499, so the test will always\nfail.\nOtherwise, it would have already been discovered, because an access\nviolation would inevitably occur, since node->hashtable would be accessed.\n\nThird case:\n\\ src \\ backend \\ utils \\ cache \\ relcache.c (line 5190)\nif (relation-> rd_pubactions)\n\nIt will never be executed, because if relation-> rd_pubactions is true, the\nfunction returns on line 5154.\n\nregards,\nRanier Vilela",
"msg_date": "Thu, 19 Dec 2019 21:01:11 -0300",
"msg_from": "Ranier Vilela <ranier.vf@gmail.com>",
"msg_from_op": true,
"msg_subject": "More issues with expressions always false (no patch)"
},
{
"msg_contents": "On 12/20/19 1:01 AM, Ranier Vilela wrote:> First case:\n> src \\ backend \\ executor \\ nodeSubplan.c (line 507)\n> \n> if (node-> hashtable)\n> \n> node-> hastable is assigned with NULL at line 498, so the test will \n> always fail.\n> \n> Second case:\n> Here the case is similar, but worse.\n> \n> src \\ backend \\ executor \\ nodeSubplan.c (line 535)\n> if (node-> hashnulls)\n> ResetTupleHashTable (node-> hashtable);\n\nThese two look like likely bugs. It looks like the code will always \ncreate new hash tables despite commit \nhttps://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=356687bd825e5ca7230d43c1bffe7a59ad2e77bd \nintending to reset them if they already exist.\n\nAdditionally it looks like the code would reset the wrong hash table in \nthe second place if the bug was fixed.\n\nI have attached a patch.\n\n> Third case:\n> \\ src \\ backend \\ utils \\ cache \\ relcache.c (line 5190)\n> if (relation-> rd_pubactions)\n> \n> It will never be executed, because if relation-> rd_pubactions is true, \n> the function returns on line 5154.\n\nI have not looked into this one in detail, but the free at line 5192 \nlooks like potentially dead code.\n\nAndreas",
"msg_date": "Fri, 20 Dec 2019 01:54:53 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: More issues with expressions always false (no patch)"
},
{
"msg_contents": "On 12/20/19 1:54 AM, Andreas Karlsson wrote:\n> On 12/20/19 1:01 AM, Ranier Vilela wrote:> First case:\n>> Third case:\n>> \\ src \\ backend \\ utils \\ cache \\ relcache.c (line 5190)\n>> if (relation-> rd_pubactions)\n>>\n>> It will never be executed, because if relation-> rd_pubactions is \n>> true, the function returns on line 5154.\n> \n> I have not looked into this one in detail, but the free at line 5192 \n> looks like potentially dead code.\n\nI have looked at it now and it seems like this code has been dead since \nthe function was originally implemented in 665d1fad99e.\n\nPeter, what do you think?\n\nAndreas",
"msg_date": "Fri, 20 Dec 2019 16:35:25 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: More issues with expressions always false (no patch)"
},
{
"msg_contents": "Andreas Karlsson <andreas@proxel.se> writes:\n> On 12/20/19 1:54 AM, Andreas Karlsson wrote:\n>> On 12/20/19 1:01 AM, Ranier Vilela wrote:> First case:\n>>> Third case:\n>>> \\ src \\ backend \\ utils \\ cache \\ relcache.c (line 5190)\n>>> if (relation-> rd_pubactions)\n>>> \n>>> It will never be executed, because if relation-> rd_pubactions is \n>>> true, the function returns on line 5154.\n\n>> I have not looked into this one in detail, but the free at line 5192 \n>> looks like potentially dead code.\n\n> I have looked at it now and it seems like this code has been dead since \n> the function was originally implemented in 665d1fad99e.\n\nI would not put a whole lot of faith in that. This argument supposes\nthat nothing else can touch the relcache entry while we are doing\nGetRelationPublications and the pg_publication syscache accesses inside\nthe foreach loop. Now in practice, yeah, it's somewhat unlikely that\nanything down inside there would take an interest in our relation's\npublication actions, especially if our relation isn't a system catalog.\nBut there are closely related situations in other relcache functions\nthat compute cached values like this where we *do* have to worry about\nreentrant/recursive use of the function. I think the \"useless\" free\nis cheap insurance against a permanent memory leak, as well as more\nlike the coding in nearby functions like RelationGetIndexAttrBitmap.\nI wouldn't change it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:34:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: More issues with expressions always false (no patch)"
},
{
"msg_contents": "On 12/20/19 10:34 PM, Tom Lane wrote:\n> I think the \"useless\" free\n> is cheap insurance against a permanent memory leak, as well as more\n> like the coding in nearby functions like RelationGetIndexAttrBitmap.\n> I wouldn't change it.\n\nGood point; if there is a pattern it is good to follow it. But I am \npretty sure that the other issue Ranier's static analysis discovered is \na real bug and not just about shaving off a few clock cycles \n(but I am not 100% sure my fix is correct). Will submit it to the \ncommitfest so people can take a look.\n\nAndreas\n\n\n",
"msg_date": "Wed, 8 Jan 2020 19:41:55 +0100",
"msg_from": "Andreas Karlsson <andreas@proxel.se>",
"msg_from_op": false,
"msg_subject": "Re: More issues with expressions always false (no patch)"
}
] |
[
{
"msg_contents": "Hi, hackers!\n\nThis is a continuation of thread [0] in pgsql-general with proposed changes. As Maksim pointed out, this topic was raised before here [1].\n\nCurrently, we can get a split brain with the combination of the following steps:\n0. Set up a cluster with synchronous replication. Isolate the primary from the standbys.\n1. Issue an upsert query INSERT .. ON CONFLICT DO NOTHING\n2. CANCEL 1 during the wait for synchronous replication\n3. Retry 1. The idempotent query will succeed and the client has confirmation of the written data, while it is not present on any standby.\n\nThread [0] contains a reproduction from psql.\n\nIn certain situations we cannot avoid cancellation of timed-out queries. Yes, we can interpret warnings and treat them as errors, but the warning is emitted on step 1, not on step 3.\n\nI think the proper solution here would be to add a GUC to disallow cancellation of synchronous replication. The retry in step 3 will wait on the locks held by the hanging 1 and the data will be consistent.\nThere is still a problem when the backend is not canceled, but terminated [2]. The ideal solution would be to keep the locks on the changed data. Some well-known databases treat termination of synchronous replication as a system failure and refuse to operate until standbys appear (see Maximum Protection mode). From my point of view it's enough to PANIC once so that the HA tool is informed that something is going wrong.\nAnyway, the situation with cancellation is more dangerous. We've observed it in some user cases.\n\nPlease find attached a draft of the proposed change.\n\nBest regards, Andrey Borodin.\n\n[0] https://www.postgresql.org/message-id/flat/B70260F9-D0EC-438D-9A59-31CB996B320A%40yandex-team.ru\n[1] https://www.postgresql.org/message-id/flat/CAEET0ZHG5oFF7iEcbY6TZadh1mosLmfz1HLm311P9VOt7Z%2Bjeg%40mail.gmail.com\n[2] https://www.postgresql.org/docs/current/warm-standby.html#SYNCHRONOUS-REPLICATION-HA",
"msg_date": "Fri, 20 Dec 2019 10:03:57 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Disallow cancellation of waiting for synchronous replication"
},
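The three-step race described above can be sketched as a toy Python model (my own illustration, not PostgreSQL code, and the class and method names are invented): the commit record becomes durable locally, the sync-rep wait is canceled, and the retried idempotent upsert then reports success without any standby ever having seen the row.

```python
# Toy model of the split-brain scenario: a commit is durable locally,
# the backend waits for a synchronous standby, the client cancels the
# wait, and a retried INSERT .. ON CONFLICT DO NOTHING then "succeeds"
# even though no standby has the row.  All names here are hypothetical.

class ToyPrimary:
    def __init__(self):
        self.local_rows = set()       # rows committed locally (on disk)
        self.replicated_rows = set()  # rows acknowledged by a standby
        self.standby_reachable = False

    def upsert(self, key):
        """INSERT .. ON CONFLICT DO NOTHING followed by a sync-rep wait."""
        if key in self.local_rows:
            # Conflict: nothing to write, nothing to replicate -> instant OK.
            return "ok"
        self.local_rows.add(key)      # commit record flushed locally
        if self.standby_reachable:
            self.replicated_rows.add(key)
            return "ok"
        # Standby unreachable: the backend blocks waiting for replication.
        return "waiting"

    def cancel_wait(self, key):
        """Client sends a query cancel while the backend is waiting."""
        # Only the wait ends; the transaction stays committed locally.
        return "warning: canceled wait, transaction may not be replicated"

primary = ToyPrimary()

# Step 1: upsert while the primary is isolated -> stuck waiting.
assert primary.upsert("row1") == "waiting"
# Step 2: client cancels the wait (gets only a WARNING).
primary.cancel_wait("row1")
# Step 3: client retries; the idempotent upsert now reports success...
assert primary.upsert("row1") == "ok"
# ...even though no standby has the row: acknowledged but unreplicated.
assert "row1" not in primary.replicated_rows
```

The key point the model shows is that the retry succeeds precisely because it has nothing left to write, so it never waits for replication at all.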
{
"msg_contents": "On Fri, Dec 20, 2019 at 6:04 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> I think proper solution here would be to add GUC to disallow cancellation of synchronous replication. Retry step 3 will wait on locks after hanging 1 and data will be consistent.\n> Three is still a problem when backend is not canceled, but terminated [2]. Ideal solution would be to keep locks on changed data. Some well known databases threat termination of synchronous replication as system failure and refuse to operate until standbys appear (see Maximum Protection mode). From my point of view it's enough to PANIC once so that HA tool be informed that something is going wrong.\n\nSending a cancellation is currently the only way to resume after\ndisabling synchronous replication. Some HA solutions (e.g.\npg_auto_failover) rely on this behaviour. Would it be worth checking\nwhether synchronous replication is still required?\n\nMarco\n\n\n",
"msg_date": "Fri, 20 Dec 2019 08:23:10 +0100",
"msg_from": "Marco Slot <marco@citusdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "\n\n> On Dec 20, 2019, at 12:23, Marco Slot <marco@citusdata.com> wrote:\n> \n> On Fri, Dec 20, 2019 at 6:04 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> I think the proper solution here would be to add a GUC to disallow cancellation of synchronous replication. The retry in step 3 will wait on the locks held by the hanging 1 and the data will be consistent.\n>> There is still a problem when the backend is not canceled, but terminated [2]. The ideal solution would be to keep the locks on the changed data. Some well-known databases treat termination of synchronous replication as a system failure and refuse to operate until standbys appear (see Maximum Protection mode). From my point of view it's enough to PANIC once so that the HA tool is informed that something is going wrong.\n> \n> Sending a cancellation is currently the only way to resume after\n> disabling synchronous replication. Some HA solutions (e.g.\n> pg_auto_failover) rely on this behaviour. Would it be worth checking\n> whether synchronous replication is still required?\n\nI think changing synchronous_standby_names to some available standbys will resume all backends waiting for synchronous replication.\nDo we need to check the necessity of synchronous replication in any other case?\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 20 Dec 2019 15:07:26 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "Andrey Borodin <x4mmm@yandex-team.ru> writes:\n> I think the proper solution here would be to add a GUC to disallow cancellation of synchronous replication.\n\nThis sounds entirely insane to me. There is no possibility that you\ncan prevent a failure from occurring at this step.\n\n> There is still a problem when the backend is not canceled, but terminated [2].\n\nExactly. If you don't have a fix that handles that case, you don't have\nanything. In fact, you've arguably made things worse, by increasing the\ntemptation to terminate or \"kill -9\" the nonresponsive session.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 16:19:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 03:07:26PM +0500, Andrey Borodin wrote:\n>> Sending a cancellation is currently the only way to resume after\n>> disabling synchronous replication. Some HA solutions (e.g.\n>> pg_auto_failover) rely on this behaviour. Would it be worth checking\n>> whether synchronous replication is still required?\n> \n> I think changing synchronous_standby_names to some available\n> standbys will resume all backends waiting for synchronous\n> replication. Do we need to check necessity of synchronous\n> replication in any other case? \n\nYeah, I am not on board with the concept of this thread. Depending\non your HA configuration you can also reset synchronous_standby_names\nafter a certain small-ish threshold has been reached in WAL to get at\nthe same result by disabling synchronous replication, though your\ncluster cannot perform safely a failover so you need to keep track of\nthat state. Something which would be useful is to improve some cases\nwhere you still want to use synchronous replication by switching to a\ndifferent standby. I recall that sometimes that can be rather slow..\n--\nMichael",
"msg_date": "Sat, 21 Dec 2019 11:39:22 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 11:07 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> I think changing synchronous_standby_names to some available standbys will resume all backends waiting for synchronous replication.\n> Do we need to check the necessity of synchronous replication in any other case?\n\nThe GUCs are not re-checked in the main loop in SyncRepWaitForLSN, so\nbackends will remain stuck there even if synchronous replication has\nbeen (temporarily) disabled while they were waiting.\n\nI do agree with the general sentiment that terminating the connection\nis preferable over sending a response to the client (except when\nsynchronous replication was already disabled). Synchronous replication\ndoes not guarantee that a committed write is actually on any replica,\nbut it does in general guarantee that a commit has been replicated\nbefore sending a response to the client. That's arguably more\nimportant because the rest of what the application does might depend on the\ntransaction completing and replicating successfully. I don't know of\ncases other than cancellation in which a response is sent to the\nclient without replication when synchronous replication is enabled.\n\nThe error level should be FATAL instead of PANIC, since PANIC restarts\nthe database and I don't think there is a reason to do that.\n\nMarco\n\n\n",
"msg_date": "Sat, 21 Dec 2019 11:34:05 +0100",
"msg_from": "Marco Slot <marco@citusdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "\n\n> On Dec 21, 2019, at 2:19, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Andrey Borodin <x4mmm@yandex-team.ru> writes:\n>> I think the proper solution here would be to add a GUC to disallow cancellation of synchronous replication.\n> \n> This sounds entirely insane to me. There is no possibility that you\n> can prevent a failure from occurring at this step.\nYes, we cannot prevent failure. If we wait here for a long time and someone cancels the query, probably the node has failed. The database already lives in some other Availability Zone.\nAll we should do is refuse to commit anything here. Any committed data will be lost.\n\n>> There is still a problem when the backend is not canceled, but terminated [2].\n> \n> Exactly. If you don't have a fix that handles that case, you don't have\n> anything. In fact, you've arguably made things worse, by increasing the\n> temptation to terminate or \"kill -9\" the nonresponsive session.\nCurrently, any Postgres HA solution can lose data when the application issues INSERT ... ON CONFLICT DO NOTHING with a retry. There is no need for any DBA mistake. Just a driver capable of issuing a cancel on timeout.\n\nAn administrator issuing kill -9 is OK; the database must shut down to prevent split-brain. Preferably, the database should refuse to start after such a shutdown.\nI'm not proposing this behavior as the default. If an administrator (or HA tool) configured the DB in this mode, they probably know what they are doing.\n\n> On Dec 21, 2019, at 15:34, Marco Slot <marco@citusdata.com> wrote:\n> \n> On Fri, Dec 20, 2019 at 11:07 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> I think changing synchronous_standby_names to some available standbys will resume all backends waiting for synchronous replication.\n>> Do we need to check the necessity of synchronous replication in any other case?\n> \n> The GUCs are not re-checked in the main loop in SyncRepWaitForLSN, so\n> backends will remain stuck there even if synchronous replication has\n> been (temporarily) disabled while they were waiting.\nSyncRepInitConfig() will be called on SIGHUP. Backends waiting for synchronous replication will wake up on WAIT_EVENT_SYNC_REP and happily succeed.\n\n> I do agree with the general sentiment that terminating the connection\n> is preferable over sending a response to the client (except when\n> synchronous replication was already disabled). Synchronous replication\n> does not guarantee that a committed write is actually on any replica,\n> but it does in general guarantee that a commit has been replicated\n> before sending a response to the client. That's arguably more\n> important because the rest of what the application does might depend on the\n> transaction completing and replicating successfully. I don't know of\n> cases other than cancellation in which a response is sent to the\n> client without replication when synchronous replication is enabled.\n> \n> The error level should be FATAL instead of PANIC, since PANIC restarts\n> the database and I don't think there is a reason to do that.\n\nTerminating the connection is unacceptable, actually. The client will retry the idempotent query. This query now does not need to write anything and will be committed.\nWe need to shut down the database and prevent it from starting. We should not acknowledge any data before the synchronous replication configuration allows us to.\n\nWhen the client tries to cancel his query, we refuse to do so and hold his write locks. If anyone terminates the connection, the locks will be released. It is better to shut down the whole DB than to release these locks and continue to receive queries.\n\n\nAll this does not apply to simple cases where a user accidentally enabled synchronous replication. This is a setup for a quite sophisticated HA tool, which will rewind the local database once the transient network partition is over and the old timeline is archived, and attach it to the new primary.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sat, 21 Dec 2019 20:11:17 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
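The behavior under discussion (a GUC that refuses cancellation of the sync-rep wait, with a config reload as the only release path) can be sketched as a toy Python model. This is my own illustration under stated assumptions, not PostgreSQL code: the GUC name and the exact wakeup path are simplified.

```python
# Toy model of the proposed GUC: with cancellation disallowed, a query
# cancel during the sync-rep wait is refused, and the waiting backend is
# only released when synchronous_standby_names is changed (the SIGHUP
# path mentioned in the thread).  All names here are hypothetical.

class ToyBackend:
    def __init__(self, allow_cancel):
        self.allow_cancel = allow_cancel   # models the proposed GUC
        self.waiting = True                # stuck in the sync-rep wait

    def cancel(self):
        """Client sends a query cancel while we wait for a standby."""
        if self.allow_cancel:
            self.waiting = False
            return "warning: commit may not be replicated"
        return "error: cancellation of sync-rep wait disallowed"

    def reload_config(self, sync_standby_names):
        """SIGHUP path: disabling sync rep wakes up the waiters."""
        if sync_standby_names == "":
            self.waiting = False

b = ToyBackend(allow_cancel=False)
assert b.cancel().startswith("error")   # cancel refused, still waiting
assert b.waiting
b.reload_config("")                     # operator disables sync rep
assert not b.waiting                    # backend resumes
```

Note the trade-off the model makes explicit: the operator, not the client, decides when an unreplicated commit may be acknowledged.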
{
"msg_contents": "On 21.12.2019 00:19, Tom Lane wrote:\n\n>> There is still a problem when the backend is not canceled, but terminated [2].\n> Exactly. If you don't have a fix that handles that case, you don't have\n> anything. In fact, you've arguably made things worse, by increasing the\n> temptation to terminate or \"kill -9\" the nonresponsive session.\n\n\nI assume that the termination of a backend that causes termination of the \nPostgreSQL instance in Andrey's patch proposal has to be resolved by \nexternal HA agents that, as the parent process of the postmaster, could \nintercept such terminations and make appropriate decisions, e.g., restart \nthe PostgreSQL node in a state closed to external users (via pg_hba.conf \nmanipulation) until all sync replicas synchronize changes from the master. \nThe Stolon HA tool implements this strategy [1]. This logic (waiting for \nall replicas declared in synchronous_standby_names to replicate all WAL \nfrom the master) could be implemented inside the PostgreSQL kernel after \nthe recovery process starts, before the database is opened to users, and \nthis can be done separately later.\n\nAnother approach is to implement two-phase commit over the master and sync \nreplicas (as Oracle did in old versions [2]), where the risk of getting \nlocally committed data under instance restarts and query cancellation is \nminimal (after the final commitment phase has started). But this approach \nhas a latency penalty and the complexity of resolving partial (prepared \nbut not committed) transactions automatically under coordinator (in this \ncase master node) failure. It would be nice if this approach were later \nimplemented as an option of synchronous commit.\n\n\n1. \nhttps://github.com/sorintlab/stolon/blob/master/doc/syncrepl.md#handling-postgresql-sync-repl-limits-under-such-circumstances\n\n2. \nhttps://docs.oracle.com/cd/B28359_01/server.111/b28326/repmaster.htm#i33607\n\n-- \nBest regards,\nMaksim Milyutin\n\n\n\n",
"msg_date": "Wed, 25 Dec 2019 12:34:22 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On 21.12.2019 13:34, Marco Slot wrote:\n\n> I do agree with the general sentiment that terminating the connection\n> is preferable over sending a response to the client (except when\n> synchronous replication was already disabled).\n\n\nBut in this case locally committed data becomes visible to new incoming \ntransactions, which is a bad side-effect of this issue. Under failover those \nchanges are potentially undone.\n\n\n> Synchronous replication\n> does not guarantee that a committed write is actually on any replica,\n> but it does in general guarantee that a commit has been replicated\n> before sending a response to the client. That's arguably more\n> important because the rest of what the application does might depend on the\n> transaction completing and replicating successfully. I don't know of\n> cases other than cancellation in which a response is sent to the\n> client without replication when synchronous replication is enabled.\n\n\nYes, at query canceling (e.g. by a timeout from the client driver) the client \nreceives a response about a completed transaction (though with a warning, \nwhich not all client drivers can handle properly) and the guarantee about a \nsuccessfully replicated transaction is *violated*.\n\n\n-- \nBest regards,\nMaksim Milyutin\n\n\n\n",
"msg_date": "Wed, 25 Dec 2019 13:28:39 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "\n\n> On Dec 25, 2019, at 15:28, Maksim Milyutin <milyutinma@gmail.com> wrote:\n> \n>> Synchronous replication\n>> does not guarantee that a committed write is actually on any replica,\n>> but it does in general guarantee that a commit has been replicated\n>> before sending a response to the client. That's arguably more\n>> important because the rest of what the application does might depend on the\n>> transaction completing and replicating successfully. I don't know of\n>> cases other than cancellation in which a response is sent to the\n>> client without replication when synchronous replication is enabled.\n> \n> \n> Yes, at query canceling (e.g. by a timeout from the client driver) the client receives a response about a completed transaction (though with a warning, which not all client drivers can handle properly) and the guarantee about a successfully replicated transaction is *violated*.\n\nWe obviously need a design discussion here to address the issue. But the immediate question is: should we add this topic to the January CF items?\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Wed, 25 Dec 2019 15:45:23 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Wed, Dec 25, 2019, 11:28 Maksim Milyutin <milyutinma@gmail.com> wrote:\n\n> But in this case locally committed data becomes visible to new incoming\n> transactions that is bad side-effect of this issue.\n>\n\nYour application should be prepared for that in any case.\n\nAt the point where synchronous replication waits, the commit has already\nbeen written to disk on the primary. If postgres restarts while waiting for\nreplication then the write becomes immediately visible regardless of\nwhether it was replicated. I don't think throwing a PANIC actually prevents\nthat and if it does it's coincidental. Best you can do is signal to the\nclient that the commit status is unknown.\n\nThat's far from ideal, but fixing it requires a much bigger change to\nstreaming replication. The write should be replicated prior to commit on\nthe primary, but applied after in a way where unapplied writes on the\nsecondary can be overwritten/discarded if it turns out they did not commit\non the primary.\n\nMarco",
"msg_date": "Wed, 25 Dec 2019 12:27:57 +0100",
"msg_from": "Marco Slot <marco@citusdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On 25.12.2019 14:27, Marco Slot wrote:\n\n>\n>\n> On Wed, Dec 25, 2019, 11:28 Maksim Milyutin <milyutinma@gmail.com \n> <mailto:milyutinma@gmail.com>> wrote:\n>\n> But in this case locally committed data becomes visible to new\n> incoming\n> transactions that is bad side-effect of this issue.\n>\n>\n> Your application should be prepared for that in any case.\n>\n> At the point where synchronous replication waits, the commit has \n> already been written to disk on the primary. If postgres \n> restarts while waiting for replication then the write becomes \n> immediately visible regardless of whether it was replicated.\n\n\nYes, this write is recovered after the instance starts. At first, I want to \ndelegate this case to an external HA tool around PostgreSQL. It can \nhandle instance stopping and take a switchover to a sync replica, or start \nthe current instance with connections from external users closed until all \nwrites replicate to the sync replicas. Later, arguably, closing connections \nafter the recovery process could be implemented inside the kernel.\n\n\n> I don't think throwing a PANIC actually prevents that and if it does \n> it's coincidental.\n\n\nPANIC brings down the instance and doesn't let clients read locally \ncommitted data. The HA tool takes further steps to close access to these \ndata as described above.\n\n\n> That's far from ideal, but fixing it requires a much bigger change to \n> streaming replication. The write should be replicated prior to commit \n> on the primary, but applied after in a way where unapplied writes on \n> the secondary can be overwritten/discarded if it turns out they did \n> not commit on the primary.\n\n\nThanks for sharing your opinion about enhancement of the synchronous commit \nprotocol. My position is listed here [1]. I would like to see the positions \nof other members of the community.\n\n\n1. \nhttps://www.postgresql.org/message-id/f3ffc220-e601-cc43-3784-f9bba66dc382%40gmail.com\n\n-- \nBest regards,\nMaksim Milyutin",
"msg_date": "Wed, 25 Dec 2019 17:32:23 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On 25.12.2019 13:45, Andrey Borodin wrote:\n>> On Dec 25, 2019, at 15:28, Maksim Milyutin <milyutinma@gmail.com> wrote:\n>>\n>>> Synchronous replication\n>>> does not guarantee that a committed write is actually on any replica,\n>>> but it does in general guarantee that a commit has been replicated\n>>> before sending a response to the client. That's arguably more\n>>> important because the rest of what the application does might depend on the\n>>> transaction completing and replicating successfully. I don't know of\n>>> cases other than cancellation in which a response is sent to the\n>>> client without replication when synchronous replication is enabled.\n>>\n>> Yes, at query canceling (e.g. by a timeout from the client driver) the client receives a response about a completed transaction (though with a warning, which not all client drivers can handle properly) and the guarantee about a successfully replicated transaction is *violated*.\n> We obviously need a design discussion here to address the issue. But the immediate question is: should we add this topic to the January CF items?\n\n\n+1 on posting this topic to the January CF.\n\nAndrey, some fixes from me:\n\n1) pulled the clearing of QueryCancelPending out of the internal branch \nwhere synchronous_commit_cancelation is set, so as to avoid dummy \niterations printing the message \"canceling the wait for ...\"\n\n2) rewrote the errdetail message for query cancellation: I hold that in this \ncase we cannot assert that the transaction committed locally, because its \nchanges are not yet visible, so I propose to write about the locally flushed \ncommit WAL record instead.\n\nThe updated patch is attached.\n\n-- \nBest regards,\nMaksim Milyutin",
"msg_date": "Thu, 26 Dec 2019 16:14:40 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 12:04 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Currently, we can have split brain with the combination of following steps:\n> 0. Setup cluster with synchronous replication. Isolate primary from standbys.\n> 1. Issue upsert query INSERT .. ON CONFLICT DO NOTHING\n> 2. CANCEL 1 during wait for synchronous replication\n> 3. Retry 1. Idempotent query will succeed and client have confirmation of written data, while it is not present on any standby.\n\nIt seems to me that in order for synchronous replication to work\nreliably, you've got to be very careful about any situation where a\ncommit might or might not have completed, and this is one such\nsituation. When the client sends the query cancel, it does not and\ncannot know whether the INSERT statement has not yet completed, has\nalready completed but not yet replicated, or has completed and\nreplicated but not yet sent back a response. However, the server's\nresponse will be different in each of those cases, because in the\nsecond case, there will be a WARNING about synchronous replication\nhaving been interrupted. If the client ignores that WARNING, there are\ngoing to be problems.\n\nNow, even if you do pay attention to the warning, things are not\ntotally great here, because if you have inadvertently interrupted a\nreplication wait, how do you recover? You can't send a command that\nmeans \"oh, I want to wait after all.\" You would have to check the\nstandbys yourself, from the application code, and see whether the\nchanges that the query made have shown up there, or check the LSN on\nthe master and wait for the standbys to advance to that LSN. 
That's\nnot great, but might be doable for some applications.\n\nOne idea that I had during the initial discussion around synchronous\nreplication was that maybe there ought to be a distinction between\ntrying to cancel the query and trying to cancel the replication wait.\nImagine that you could send a cancel that would only cancel\nreplication waits but not queries, or only queries but not replication\nwaits. Then you could solve this problem by asking the server to\nPQcancelWithAdvancedMagic(conn, PQ_CANCEL_TYPE_QUERY). I wasn't sure\nthat people would want this, and it didn't seem essential for the\nversion of this feature, but maybe this example shows that it would be\nworthwhile. I don't really have any idea how you'd integrate such a\nfeature with psql, but maybe it would be good enough to have it\navailable through the C interface. Also, it's a wire-protocol change,\nso there are compatibility issues to think about.\n\nAll that being said, like Tom and Michael, I don't think teaching the\nbackend to ignore cancels is the right approach. We have had\ninnumerable problems over the years that were caused by the backend\nfailing to respond to cancels when we would really have liked it to do\nso, and users REALLY hate it when you tell them that they have to shut\ndown and restart (with crash recovery) the entire database because of\na single stuck backend.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 28 Dec 2019 16:55:55 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
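The "typed cancel" idea above can be sketched in a few lines of Python. This is a toy dispatch model of my own, not libpq's actual API: as the message itself says, `PQcancelWithAdvancedMagic` and the cancel-type constants are hypothetical, and a real implementation would be a wire-protocol change.

```python
# Sketch of the typed-cancel idea: a cancel request carries a target,
# and a cancel aimed only at queries is ignored while the backend is in
# the replication wait.  All names here are hypothetical.

CANCEL_QUERY = "query"
CANCEL_REPL_WAIT = "repl_wait"

def handle_cancel(state, cancel_type):
    """state is 'running_query' or 'waiting_for_sync_rep'."""
    if state == "running_query" and cancel_type == CANCEL_QUERY:
        return "query canceled"
    if state == "waiting_for_sync_rep" and cancel_type == CANCEL_REPL_WAIT:
        return "replication wait canceled (commit may be unreplicated)"
    # A query-only cancel arriving during the replication wait is the
    # interesting case: it no longer interrupts the wait.
    return "ignored"

assert handle_cancel("waiting_for_sync_rep", CANCEL_QUERY) == "ignored"
assert handle_cancel("running_query", CANCEL_QUERY) == "query canceled"
```

Under this model a client that only ever sends `CANCEL_QUERY` can never accidentally turn a replicated commit into an unreplicated one, which is the failure mode the thread is about.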
{
"msg_contents": "On 29.12.2019 00:55, Robert Haas wrote:\n\n> On Fri, Dec 20, 2019 at 12:04 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> Currently, we can get a split brain with the combination of the following steps:\n>> 0. Set up a cluster with synchronous replication. Isolate the primary from the standbys.\n>> 1. Issue an upsert query INSERT .. ON CONFLICT DO NOTHING\n>> 2. CANCEL 1 during the wait for synchronous replication\n>> 3. Retry 1. The idempotent query will succeed and the client has confirmation of the written data, while it is not present on any standby.\n> All that being said, like Tom and Michael, I don't think teaching the\n> backend to ignore cancels is the right approach. We have had\n> innumerable problems over the years that were caused by the backend\n> failing to respond to cancels when we would really have liked it to do\n> so, and users REALLY hate it when you tell them that they have to shut\n> down and restart (with crash recovery) the entire database because of\n> a single stuck backend.\n>\n\nThe stuck backend is not deadlocked here. To cancel the backend's wait \ncleanly, it is enough for the client to turn off synchronous replication \n(change synchronous_standby_names through a server reload) or to switch the \nsynchronous replica to another live one (again through changing \nsynchronous_standby_names). In the first case he explicitly agrees with the \nexistence of local (not replicated) commits on the master.\n\n\n-- \nBest regards,\nMaksim Milyutin\n\n\n\n",
"msg_date": "Sun, 29 Dec 2019 02:19:28 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 6:19 PM Maksim Milyutin <milyutinma@gmail.com> wrote:\n> The stuck backend is not deadlocked here. To cancel the backend's wait\n> cleanly, it is enough for the client to turn off synchronous replication\n> (change synchronous_standby_names through a server reload) or to switch the\n> synchronous replica to another live one (again through changing\n> synchronous_standby_names). In the first case he explicitly agrees with the\n> existence of local (not replicated) commits on the master.\n\nSure, that's true. But I still maintain that responding to ^C is an\nimportant property of the system. If you have to do some more\ncomplicated set of steps like the ones you propose here, a decent\nnumber of people aren't going to figure it out and will end up\nunhappy. Now, as it is, you're unhappy, so I guess you can't please\neveryone, but you asked for opinions so I'm giving you mine.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 28 Dec 2019 18:54:03 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "\n\n> On 29 Dec 2019, at 4:54, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Sat, Dec 28, 2019 at 6:19 PM Maksim Milyutin <milyutinma@gmail.com> wrote:\n>> The stuckness of backend is not deadlock here. To cancel waiting of\n>> backend fluently, client is enough to turn off synchronous replication\n>> (change synchronous_standby_names through server reload) or change\n>> synchronous replica to another livable one (again through changing of\n>> synchronous_standby_names). In first case he explicitly agrees with\n>> existence of local (not replicated) commits in master.\n> \n> Sure, that's true. But I still maintain that responding to ^C is an\n> important property of the system.\nNot losing data is a nice property of a database, too.\nCurrently, synchronous replication fails to provide its guarantee that no data will be acknowledged until it is replicated.\nWe want to create a mode where this guarantee is provided.\n\nWhen the user issues CANCEL we could return a warning or an error, but we should not drop data locks. Other transactions should not get acknowledged on the basis of non-replicated data.\n\n> If you have to do some more\n> complicated set of steps like the ones you propose here, a decent\n> number of people aren't going to figure it out and will end up\n> unhappy. Now, as it is, you're unhappy, so I guess you can't please\n> everyone, but you asked for opinions so I'm giving you mine.\n\nThere are many cases where we do not allow the user to shoot himself in the foot. For example, anti-wraparound vacuum. A single-user vacuum freeze is much less pain than split-brain. In the case of wraparound protection, there are deterministic steps to take to get your database back to consistency. But in the case of split-brain there is no single plan for a cure.\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 29 Dec 2019 14:13:27 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 04:55:55PM -0500, Robert Haas wrote:\n> On Fri, Dec 20, 2019 at 12:04 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> > Currently, we can have split brain with the combination of following steps:\n> > 0. Setup cluster with synchronous replication. Isolate primary from standbys.\n> > 1. Issue upsert query INSERT .. ON CONFLICT DO NOTHING\n> > 2. CANCEL 1 during wait for synchronous replication\n> > 3. Retry 1. Idempotent query will succeed and client have confirmation of written data, while it is not present on any standby.\n> \n> It seems to me that in order for synchronous replication to work\n> reliably, you've got to be very careful about any situation where a\n> commit might or might not have completed, and this is one such\n> situation. When the client sends the query cancel, it does not and\n> cannot know whether the INSERT statement has not yet completed, has\n> already completed but not yet replicated, or has completed and\n> replicated but not yet sent back a response. However, the server's\n> response will be different in each of those cases, because in the\n> second case, there will be a WARNING about synchronous replication\n> having been interrupted. If the client ignores that WARNING, there are\n> going to be problems.\n\nThis gets to the heart of something I was hoping to discuss. When is\nsomething committed? You would think it is when the client receives the\ncommit message, but Postgres can commit something, and try to inform the\nclient but fail to inform, perhaps due to network problems. In Robert's\ncase above, we send a \"success\", but it is only a success on the primary\nand not on the synchronous standby.\n\nIn the first case I mentioned, we commit without guaranteeing the client\nknows, but in the second case, we tell the client success with a warning\nthat the synchronous standby didn't get the commit. Are clients even\nchecking warning messages? 
You see it in psql, but what about\napplications that use Postgres? Do they even check for warnings?\nShould administrators be informed via email or some command when this\nhappens?\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Mon, 30 Dec 2019 09:39:10 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Sun, Dec 29, 2019 at 4:13 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> Not loosing data - is a nice property of the database either.\n\nSure, but there's more than one way to fix that problem, as I pointed\nout in my first response.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Jan 2020 09:13:27 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 9:39 AM Bruce Momjian <bruce@momjian.us> wrote:\n> This gets to the heart of something I was hoping to discuss. When is\n> something committed? You would think it is when the client receives the\n> commit message, but Postgres can commit something, and try to inform the\n> client but fail to inform, perhaps due to network problems.\n\nThis kind of problem can happen even without synchronous replication.\nI've alluded to this problem in a couple of blog posts I've done on\nsync rep.\n\nIf an application is connected to the database and sends a COMMIT\ncommand (or a data-modifying command outside a transaction that will\ncommit implicitly) and the connection is closed before it receives a\nresponse, it does not know whether the COMMIT actually happened. It\nwill have to wait until the database is back up and running and then\ngo examine the state of the database with SELECT statements and try to\nfigure out whether the changes it wanted actually got made. Otherwise\nit doesn't know whether the failure that resulted in a loss of network\nconnectivity occurred before or after the commit.\n\nI imagine that most applications are way too dumb to do this properly\nand just report errors to the user and let the user decide what to do\nto try to recover. And I imagine that most users are not terribly\ncareful about it and such events cause minor data loss/corruption on a\nregular basis. But there are also probably some applications where\npeople are really fanatical about it.\n\n> In Robert's\n> case above, we send a \"success\", but it is only a success on the primary\n> and not on the synchronous standby.\n>\n> In the first case I mentioned, we commit without guaranteeing the client\n> knows, but in the second case, we tell the client success with a warning\n> that the synchronous standby didn't get the commit. Are clients even\n> checking warning messages? You see it in psql, but what about\n> applications that use Postgres. 
Do they even check for warnings?\n> Should administrators be informed via email or some command when this\n> happens?\n\nI suspect a lot of clients are not checking warning messages, but\nwhether that's really the server's problem is arguable. I think we've\ndeveloped a general practice over the years of trying to avoid warning\nmessages as a way of telling users about problems, and that's a good\nidea in general precisely because they might just get ignored, but\nthere are cases where it is really the only possible way forward. It\nwould certainly be pretty bad to have the COMMIT succeed on the local\nnode but produce an ERROR; that would doubtless be much more confusing\nthan what it's doing now. There's nothing at all to prevent\nadministrators from watching the logs for such warnings and taking\nwhatever action they deem appropriate.\n\nI continue to think that the root cause of this issue is that we can't\ndistinguish between cancelling the query and cancelling the sync rep\nwait. The client in this case is asking for both when it really only\nwants the former, and then ignoring the warning that the latter is\nwhat actually occurred.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Jan 2020 09:26:16 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "\n\n> On 2 Jan 2020, at 19:13, Robert Haas <robertmhaas@gmail.com> wrote:\n> \n> On Sun, Dec 29, 2019 at 4:13 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n>> Not loosing data - is a nice property of the database either.\n> \n> Sure, but there's more than one way to fix that problem, as I pointed\n> out in my first response.\nSorry, it took a few more readings of your message for me to understand the problem you are writing about.\n\nYou proposed two solutions:\n1. The client analyzes the warning and understands that the data is not actually committed. This, as you pointed out, does not solve the problem: data is lost for another client, who never saw the warning.\nActually, the \"client\" is a stateless number of connections unable to communicate with each other by any means besides the database. They cannot share information about uncommitted transactions (they would need a database, thus a chicken-and-egg problem).\n\n2. Add another message \"CANCEL --force\" to stop synchronous replication for a specific backend.\nWe already have a way to stop synchronous replication: \"alter system set synchronous_standby_names to 'working.stand.by'; select pg_reload_conf();\". This will stop it for every backend, while \"CANCEL --force\" would be more selective.\nThe user can still lose data when they issue an idempotent query based on data committed by \"CANCEL --force\". Moreover, the user can lose data if their upsert is based on data committed by someone else with \"set synchronous_commit to off\".\nWe could fix upserts: make them wait for replication even if nothing was changed, but this will not cover the case when the user does a SELECT and decides not to insert anything.\nWe could fix SELECT: if the user asks for synchronous_commit=remote_write, give them a snapshot no newer than the synchronously committed data. ISTM this would solve all of the above problems, but I do not see all the implications of this approach. We would have to add all XIDs to XIP if their commit LSN > sync rep LSN. 
But I'm not sure all other transactional mechanics will be OK with this.\n\nFrom a practical point of view, when all writing queries use the same synchronous_commit level, the easiest solution is to just disallow cancellation of sync replication. In psql we can just reset the connection on a second CTRL+C. That's more generic than \"CANCEL --force\".\n\nWhen all queries run with the same synchronous_commit there is no point in a protocol message for canceling sync rep for a single connection. Just drop that connection. Ignoring cancel is the only way to satisfy the synchronous_commit level, which is constant for a transaction.\nWhen queries run with various synchronous_commit levels, things are much more complicated. Adding a protocol message to change synchronous_commit for running queries does not seem to be a viable option.\n\n> I continue to think that the root cause of this issue is that we can't\n> distinguish between cancelling the query and cancelling the sync rep\n> wait.\nYes, it is. But canceling the sync rep wait already exists: just change synchronous_standby_names. Canceling sync rep for one client is, effectively, changing the synchronous commit level for a running transaction. It opens the way to far more difficult complications.\n\n> The client in this case is asking for both when it really only\n> wants the former, and then ignoring the warning that the latter is\n> what actually occurred.\nThe client is not ignoring warnings. Data is lost for the client that never received the warning. If we could just fix our code, I would not be making so much noise. There are workarounds, but they are very unpleasant to explain.\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 2 Jan 2020 22:26:16 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 10:26:16PM +0500, Andrey Borodin wrote:\n> \n> \n> > 2 янв. 2020 г., в 19:13, Robert Haas <robertmhaas@gmail.com> написал(а):\n> > \n> > On Sun, Dec 29, 2019 at 4:13 AM Andrey Borodin <x4mmm@yandex-team.ru> wrote:\n> >> Not loosing data - is a nice property of the database either.\n> > \n> > Sure, but there's more than one way to fix that problem, as I pointed\n> > out in my first response.\n> Sorry, it took some more reading iterations of your message for me to understand the problem you are writing about.\n> \n> You proposed two solutions:\n> 1. Client analyze warning an understand that data is not actually committed. This, as you pointed out, does not solve the problem: data is lost for another client, who never saw the warning.\n> Actually, \"client\" is a stateless number of connections unable to communicate with each other by any means beside database. They cannot share information about not committed transactions (they would need a database, thus chicken and the egg problem).\n\nActually, it might be worse than that. In my reading of\nRecordTransactionCommit(), we do this:\n\n\twrite to WAL\n\tflush WAL (durable)\n\tmake visible to other backends\n\treplicate\n\tcommunicate to the client\n\nI think this means we make the transaction commit visible to all\nbackends _before_ we replicate it, and potentially wait until we get a\nreplication reply to return SUCCESS to the client. This means other\nclients are acting on data that is durable on the local machine, but not\non the replicated machine, even if synchronous_standby_names is set.\n\nI feel this topic needs a lot more thought before we consider changing\nanything.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 10 Jan 2020 21:34:40 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "\n\n> On 11 Jan 2020, at 7:34, Bruce Momjian <bruce@momjian.us> wrote:\n> \n> Actually, it might be worse than that. In my reading of\n> RecordTransactionCommit(), we do this:\n> \n> \twrite to WAL\n> \tflush WAL (durable)\n> \tmake visible to other backends\n> \treplicate\n> \tcommunicate to the client\n> \n> I think this means we make the transaction commit visible to all\n> backends _before_ we replicate it, and potentially wait until we get a\n> replication reply to return SUCCESS to the client.\nNo. Data is not visible to other backends while we await sync rep. It's easy to check.\nIn one psql session you can start waiting for sync rep:\npostgres=# \\d+ x\n Table \"public.x\"\n Column | Type | Collation | Nullable | Default | Storage | Stats target | Description \n--------+---------+-----------+----------+---------+----------+--------------+-------------\n key | integer | | not null | | plain | | \n data | text | | | | extended | | \nIndexes:\n \"x_pkey\" PRIMARY KEY, btree (key)\nAccess method: heap\n\npostgres=# alter system set synchronous_standby_names to 'nonexistent';\nALTER SYSTEM\npostgres=# select pg_reload_conf();\n2020-01-12 16:09:58.167 +05 [45677] LOG: received SIGHUP, reloading configuration files\n pg_reload_conf \n----------------\n t\n(1 row)\n\npostgres=# insert into x values (7, '7');\n\n\nIn another session, try to see the inserted (already locally committed) data:\n\npostgres=# select * from x where key = 7;\n key | data \n-----+------\n(0 rows)\n\n\nTry to insert the same data and the backend will hang on locks:\n\npostgres=# insert into x values (7,'7') on conflict do nothing;\n\n ProcessQuery (in postgres) + 189 [0x1014b05bd]\n standard_ExecutorRun (in postgres) + 301 [0x101339fcd]\n ExecModifyTable (in postgres) + 1106 [0x101362b62]\n ExecInsert (in postgres) + 494 [0x10136344e]\n ExecCheckIndexConstraints (in postgres) + 570 [0x10133910a]\n check_exclusion_or_unique_constraint (in postgres) + 977 [0x101338db1]\n XactLockTableWait (in postgres) + 
176 [0x101492770]\n LockAcquireExtended (in postgres) + 1274 [0x101493aaa]\n\n\n\nThanks!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Sun, 12 Jan 2020 16:18:38 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-12 16:18:38 +0500, Andrey Borodin wrote:\n> > On 11 Jan 2020, at 7:34, Bruce Momjian <bruce@momjian.us> wrote:\n> > \n> > Actually, it might be worse than that. In my reading of\n> > RecordTransactionCommit(), we do this:\n> > \n> > \twrite to WAL\n> > \tflush WAL (durable)\n> > \tmake visible to other backends\n> > \treplicate\n> > \tcommunicate to the client\n> > \n> > I think this means we make the transaction commit visible to all\n> > backends _before_ we replicate it, and potentially wait until we get a\n> > replication reply to return SUCCESS to the client.\n> No. Data is not visible to other backend when we await sync rep.\n\nYea, as the relevant comment in RecordTransactionCommit() says:\n\n\t * Note that at this stage we have marked clog, but still show as running\n\t * in the procarray and continue to hold locks.\n\t */\n\tif (wrote_xlog && markXidCommitted)\n\t\tSyncRepWaitForLSN(XactLastRecEnd, true);\n\n\nBut it's worthwhile to emphasize that data at that stage actually *can*\nbe visible on standbys. The fact that the transaction still shows as\nrunning via procarray, on the primary, does not influence visibility\ndeterminations on the standby.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Jan 2020 14:53:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On 15.01.2020 01:53, Andres Freund wrote:\n\n> On 2020-01-12 16:18:38 +0500, Andrey Borodin wrote:\n>>> On 11 Jan 2020, at 7:34, Bruce Momjian <bruce@momjian.us> wrote:\n>>>\n>>> Actually, it might be worse than that. In my reading of\n>>> RecordTransactionCommit(), we do this:\n>>>\n>>> \twrite to WAL\n>>> \tflush WAL (durable)\n>>> \tmake visible to other backends\n>>> \treplicate\n>>> \tcommunicate to the client\n>>>\n>>> I think this means we make the transaction commit visible to all\n>>> backends _before_ we replicate it, and potentially wait until we get a\n>>> replication reply to return SUCCESS to the client.\n>> No. Data is not visible to other backend when we await sync rep.\n> Yea, as the relevant comment in RecordTransactionCommit() says;\n>\n> \t * Note that at this stage we have marked clog, but still show as running\n> \t * in the procarray and continue to hold locks.\n> \t */\n> \tif (wrote_xlog && markXidCommitted)\n> \t\tSyncRepWaitForLSN(XactLastRecEnd, true);\n>\n>\n> But it's worthwhile to emphasize that data at that stage actually *can*\n> be visible on standbys. The fact that the transaction still shows as\n> running via procarray, on the primary, does not influence visibility\n> determinations on the standby.\n\n\nIn the general case, consistent reading in the cluster (even with remote_apply \non) is available (and has to be) only on the master node. For example, if a \nrandom load balancer for read-only queries is set up over the master and \nsync replicas (with remote_apply on), it's possible to hit the case where \npreceding reads return data that is absent in subsequent ones.\nMoreover, such visible commits on a sync standby are not durable from the \npoint of view of the cluster. For example, if we have two sync standbys, then \nunder failover we can switch the master to a sync standby on which the waiting \ncommit was not replicated, even though it was applied (and visible) on the other \nstandby. 
This switching is completely safe because the client hasn't \nreceived an acknowledgment of the commit request, so that transaction is in an \nindeterminate state: it can end up either committed or aborted depending on \nwhich standby is promoted.\n\n\n-- \nBest regards,\nMaksim Milyutin\n\n\n\n",
"msg_date": "Wed, 15 Jan 2020 13:49:35 +0300",
"msg_from": "Maksim Milyutin <milyutinma@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "Hello.\n\nI just want to share some thoughts about how this looks from the perspective\nof a high-availability web-service application developer.\nBecause sometimes things look different from the other side. And,\nto be honest, everything looks like a disaster.\n\nBut let's take it one thing at a time.\n\nFirst, the problem is not related to upsert queries only. It can be\nreproduced with plain INSERTs or UPDATEs. For example:\n\n* client 1 inserts new records and waits for synchronous replication\n* client 1 cancels the query\n* clients 2, 3, 4 and 5 see the new data and perform some actions outside\nof the database in external systems\n* master is switched to the replica with no WAL of the new records replicated yet\n\nAs a result: the newly inserted data is just gone, but external systems\nalready rely on it.\nAnd this is just a huge pain for the application and its developer.\n\nSecond, it is not at all about the client who canceled the query. That client\nmay be super clever and totally understand all of the tricky aspects and\nrisks of such an action.\nIt is about *other* clients who become able to see the\n\"non-existing\" data. They don't even have an option to detect such a\nsituation.\n\nYes, currently there are a few ways for non-synchronously-replicated\ndata to become visible (for complex reasons of course):\n1) a client cancels the query while waiting for synchronous replication\n2) database restart\n3) kill -9 of a backend waiting for synchronous replication\n\nWhat is the main difference between 1 and 2 or 3? Case 1 is performed\nnot only by humans.\nMoreover, it is performed mostly by applications. And it is happening\nright now on thousands of servers!\n\nCheck [1] and [2]. This is the official JDBC driver for PostgreSQL. I am\nsure it is the most popular way to communicate with PostgreSQL these\ndays.\nAnd the implementation of Statement::setQueryTimeout creates a timer to send\na cancellation after the timeout. 
It is the official and recommended way to\nlimit statement execution to some interval in JDBC.\nIn my project (and its libraries) it is used in dozens of places. It\nis also possible to search GitHub [4] to see how widely it is\nused.\nFor example, it is used by the Spring framework [5], probably the most\npopular framework in the world for the rank-2 programming language.\n\nAnd the situation is even worse. When does setQueryTimeout\nstart to cancel queries stuck in synchronous replication like crazy?\nAt the moment the connection between the master and the sync\nreplica is lost (because all backends are now stuck in synrep). A new master\nwill be elected in a few seconds (or maybe is already elected and\nworking).\nAnd then totally correct code cancels hundreds of queries stuck in\nsynrep, making \"non-existing\" data available to be read by other\nclients in the same availability zone.\n\nIt is just a nightmare, to be honest.\n\nThese days almost every web service needs HA for postgres. And,\npractically, if your code (or some library code) calls\nStatement::setQueryTimeout, your HA (for example, Patroni) is\nbroken.\nAnd it is really not easy to control setQueryTimeout calls in a modern\napplication with thousands of third-party libraries. Also, a lot of\napplications are in the support phase.\n\nAs for me, I am going to hack the postgres jdbc driver to ignore\nsetQueryTimeout entirely for now.\n\n>> I think proper solution here would be to add GUC to disallow cancellation of synchronous replication.\n> This sounds entirely insane to me. There is no possibility that you\n> can prevent a failure from occurring at this step.\n\nYes, maybe it is insane, but it looks like the whole java-postgres-HA world\n(and maybe others) is going down. So I believe we should even backport\nsuch an insane knob.\n\n\nAs developers of distributed systems we don't have many things to rely\non. 
And they are:\n1) if the database clearly says something is committed, it is committed\nwith ACID guarantees\n2) anything else: it may be committed, may not be committed, or may be\nwaiting to be committed\n\nAnd we've practically just lost the letter D from ACID.\n\nThanks,\nMichail.\n\n[1] https://github.com/pgjdbc/pgjdbc/blob/23cce8ad35d9af6e2a1cb97fac69fdc0a7f94b42/pgjdbc/src/main/java/org/postgresql/core/QueryExecutorBase.java#L164-L200\n[2] https://github.com/pgjdbc/pgjdbc/blob/ed09fd1165f046ae956bf21b6c7882f1267fb8d7/pgjdbc/src/main/java/org/postgresql/jdbc/PgStatement.java#L538-L540\n[3] https://docs.oracle.com/javase/7/docs/api/java/sql/Statement.html#setQueryTimeout(int)\n[4] https://github.com/search?l=Java&q=statement.setQueryTimeout&type=Code\n[5] https://github.com/spring-projects/spring-framework/blob/master/spring-jdbc/src/main/java/org/springframework/jdbc/datasource/DataSourceUtils.java#L329-L343\n\n\n",
"msg_date": "Thu, 20 Feb 2020 18:51:09 +0300",
"msg_from": "Michail Nikolaev <michail.nikolaev@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On Sat, 2019-12-21 at 11:34 +0100, Marco Slot wrote:\n> The GUCs are not re-checked in the main loop in SyncRepWaitForLSN, so\n> backends will remain stuck there even if synchronous replication has\n> been (temporarily) disabled while they were waiting.\n\nIf you do:\n\n alter system set synchronous_standby_names='';\n select pg_reload_conf();\n\nit will release the backends waiting on sync rep.\n\nRegards,\n\tJeff Davis\n\n\n\n\n",
"msg_date": "Tue, 09 Jun 2020 11:32:51 -0700",
"msg_from": "Jeff Davis <pgsql@j-davis.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "> On 9 Jun 2020, at 23:32, Jeff Davis <pgsql@j-davis.com> wrote:\n> \n> \n\nAfter using the patch for a while it became obvious that PANICking during termination is not a good idea, even when we are waiting for synchronous replication: it generates undesired core dumps.\nI think that in the presence of SIGTERM it's reasonable to say that we cannot protect the user anymore.\n\nPFA v3.\n\nBest regards, Andrey Borodin.",
"msg_date": "Wed, 9 Dec 2020 14:07:29 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "On 12/9/20 4:07 AM, Andrey Borodin wrote:\n>> 9 июня 2020 г., в 23:32, Jeff Davis <pgsql@j-davis.com> написал(а):\n>>\n> After using a patch for a while it became obvious that PANICing during termination is not a good idea. Even when we wait for synchronous replication. It generates undesired coredumps.\n> I think in presence of SIGTERM it's reasonable to say that we cannot protect user anymore.\n> \n> PFA v3.\n\nMaksim, Michail, thoughts on this new patch?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Thu, 11 Mar 2021 08:09:39 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "\n\nOn 2020/12/09 18:07, Andrey Borodin wrote:\n> \n> \n>> 9 июня 2020 г., в 23:32, Jeff Davis <pgsql@j-davis.com> написал(а):\n>>\n>>\n> \n> After using a patch for a while it became obvious that PANICing during termination is not a good idea. Even when we wait for synchronous replication. It generates undesired coredumps.\n> I think in presence of SIGTERM it's reasonable to say that we cannot protect user anymore.\n> \n> PFA v3.\n\nI don't think that preventing a backend from being canceled while waiting for\nsync rep actually addresses your issue. As mentioned upthread, there are\nother cases that can cause the issue, for example, a restart of the server while\nbackends are waiting for sync rep.\n\nAs far as I understand your idea, what we should do is to make a new transaction\nwait until WAL has been replicated to the standby up to the latest WAL record\ncommitted locally, before starting? We don't need to prevent the cancellation\nduring the sync rep wait.\n\nIf we do that, a new transaction cannot see any changes by another transaction\nthat was canceled during sync rep until all the committed WAL records are\nreplicated. Doesn't this address your issue? I think that this idea works\nnot only in the cancellation case but also in other cases.\n\nIf we want to control this new wait at the application level, we can implement\nsomething like a pg_wait_for_syncrep(pg_lsn) function. This function would wait\nuntil WAL is replicated to the standby up to the specified lsn. For example,\nwe could execute pg_wait_for_syncrep(pg_current_wal_lsn()) in the application\nwhenever we need that consistent point.\n\nAnother idea is to add a new GUC. If this GUC is enabled, a transaction waits for\nall the committed records to be replicated whenever it takes a new snapshot\n(probably the transaction needs to wait not only when starting but also when\ntaking a new snapshot). This prevents the transaction from seeing any data that\nhas not been replicated yet.\n\nRegards,\n\n-- \nFujii Masao\nAdvanced Computing Technology Center\nResearch and Development Headquarters\nNTT DATA CORPORATION\n\n\n",
"msg_date": "Thu, 11 Mar 2021 23:15:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "Thanks for looking into this!\n\n> On 11 Mar 2021, at 19:15, Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> \n> \n> \n> On 2020/12/09 18:07, Andrey Borodin wrote:\n>>> On 9 Jun 2020, at 23:32, Jeff Davis <pgsql@j-davis.com> wrote:\n>>> \n>>> \n>> After using a patch for a while it became obvious that PANICing during termination is not a good idea. Even when we wait for synchronous replication. It generates undesired coredumps.\n>> I think in presence of SIGTERM it's reasonable to say that we cannot protect user anymore.\n>> PFA v3.\n> \n> I don't think that preventing a backend from being canceled during waiting for\n> sync rep actually addresses your issue. As mentioned upthread, there are\n> other cases that can cause the issue, for example, restart of the server while\n> backends are waiting for sync rep.\nWell, the patch fully addresses _my_ issue :) My issue is breaking the guarantees of synchronous replication by sending a \"cancel\" message, which is sent by most drivers automatically.\n\nThe patch does not need to address the issue of a server restart - it's the job of the HA tool to prevent the start of the database service in the case when a new primary has been elected.\n\nThe only case the patch does not handle is a sudden backend crash after which Postgres recovers without a restart. I think it is a very small problem compared to \"cancel\". One needs not only a failover but also a SIGSEGV in a backend to encounter this problem. Anyway, we can address this issue by adding one more GUC preventing PostmasterStateMachine() from invoking crash recovery when (FatalError && pmState == PM_NO_CHILDREN).\n\n\n> As far as I understand your idea, what we should do is to make new transaction\n> wait until WAL has been replicated to the standby up to the latest WAL record\n> committed locally before starting? 
We don't need to prevent the cancellation\n> during sync rep wait.\n> If we do that, new transaction cannot see any changes by another transaction\n> that was canceled during sync rep, until all the committed WAL records are\n> replicated. Doesn't this address your issue?\nPreventing any new transaction from starting during sync replication wait is not really an option. It would double the latency cost of synchronous replication for writing transactions (wait for RTT on start, wait for RTT on commit). And incur the same cost on reading transactions (which did not need it before).\n\n> I think that this idea works in\n> not only cancellation case but also other cases.\n> \n> If we want to control this new wait in application level, we can implement\n> something like pg_wait_for_syncrep(pg_lsn) function. This function waits\n> until WAL is replicated to the standby up to the specified lsn. For example,\n> we can execute pg_wait_for_syncrep(pg_current_wal_lsn()) in the application\n> whenever we need that consistent point.\nWe want this for every transaction running with synchronous_commit > local. We should not ask users to run one more \"make transaction durable\" statement. The \"COMMIT\" is this statement.\n\n> Other idea is to add new GUC. If this GUC is enabled, transaction waits for\n> all the committed records to be replicated whenever it takes new snapshot\n> (probably transaction needs to wait not only when starting but also taking\n> new snapshot). This prevents the transaction from seeing any data that\n> have not been replicated yet.\nIf we block new snapshots after local commit until successful replication we, in fact, linearize reads from standbys. The cost will be immense. The whole idea of MVCC is that writers do not block readers.\n\nThanks for the ideas!\n\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Thu, 11 Mar 2021 21:28:26 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "Hi hackers,\n\n> >> After using a patch for a while it became obvious that PANICing during termination is not a good idea. Even when we wait for synchronous replication. It generates undesired coredumps.\n> >> I think in presence of SIGTERM it's reasonable to say that we cannot protect user anymore.\n> >> PFA v3.\n\nThis patch, although solving a concrete and important problem, looks\nmore like a quick workaround than an appropriate solution. Or is it\njust me?\n\nIdeally, the transaction should be committed only after getting a\nreply from the standby. If the user cancels the transaction, it\ndoesn't get committed anywhere. This is what people into distributed\nsystems would expect unless stated otherwise, at least. Although I\nrealize how complicated it is to implement, especially considering all\nthe possible corner cases (netsplit right after getting a reply, etc).\nMaybe we could come up with a less than ideal, but still sound and\neasy-to-understand model, which, as soon as you learned it, doesn't\nbring unexpected surprises to the user.\n\nI believe at this point it's important to agree if the community is\nready to accept a patch as is to make existing users suffer less and\niterate afterward. Or we choose not to do it and to come up with\nanother idea. Personally, I don't have any better ideas, thus maybe\naccepting Andrey's patch would be the lesser of two evils.\n\n-- \nBest regards,\nAleksander Alekseev\n\n\n",
"msg_date": "Fri, 23 Apr 2021 12:30:28 +0300",
"msg_from": "Aleksander Alekseev <aleksander@timescale.com>",
"msg_from_op": false,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "Hi Aleksander!\n\nThanks for looking into this.\n\n> 23 апр. 2021 г., в 14:30, Aleksander Alekseev <aleksander@timescale.com> написал(а):\n> \n> Hi hackers,\n> \n>>>> After using a patch for a while it became obvious that PANICing during termination is not a good idea. Even when we wait for synchronous replication. It generates undesired coredumps.\n>>>> I think in presence of SIGTERM it's reasonable to say that we cannot protect user anymore.\n>>>> PFA v3.\n> \n> This patch, although solving a concrete and important problem, looks\n> more like a quick workaround than an appropriate solution. Or is it\n> just me?\n> \n> Ideally, the transaction should be committed only after getting a\n> reply from the standby.\nGetting reply from the standby is a part of a commit. Commit is completed only when WAL reached standby. Commit, certainly, was initiated before getting reply from standby. We cannot commit only after we commit.\n\n> If the user cancels the transaction, it\n> doesn't get committed anywhere.\nThe problem is user tries to cancel a transaction after they asked for commit. We never promised rolling back committed transaction.\nWhen user asks for commit we insert commit record into WAL. And then wait when it is acknowledged by quorum of standbys and local storage.\nWe cannot discard this record on standbys. Or, at one point we will have to discard discard records. 
Or discard discard discard records.\n\n> This is what people into distributed\n> systems would expect unless stated otherwise, at least.\nI think, our transaction semantics is stated clearly in documentation.\n\n> Although I\n> realize how complicated it is to implement, especially considering all\n> the possible corner cases (netsplit right after getting a reply, etc).\n> Maybe we could come up with a less than ideal, but still sound and\n> easy-to-understand model, which, as soon as you learned it, doesn't\n> bring unexpected surprises to the user.\nThe model proposed by my patch sounds as follows:\ntransaction effects should not be observable on primary until requirements of synchronous_commit are satisfied.\n\nE.g. even if user issues cancel of committed locally transaction, we should not release locks held by this transaction.\nWhat unexpected surprises do you see in this model?\n\nThanks for reviewing!\n\nBest regards, Andrey Borodin.\n\n",
"msg_date": "Fri, 23 Apr 2021 15:19:49 +0500",
"msg_from": "Andrey Borodin <x4mmm@yandex-team.ru>",
"msg_from_op": true,
"msg_subject": "Re: Disallow cancellation of waiting for synchronous replication"
},
{
"msg_contents": "I came across this thread [1] to disallow canceling a transaction not\nyet confirmed by a synchronous replica. I think my proposed patch\nmight help that case as well, hence adding all involved in that thread\nto BCC, for one-time notification.\n\nAs mentioned in that thread, when sending a cancellation signal, the\nclient cannot be sure if the cancel signal was honored, and if the\ntransaction was cancelled successfully. In the attached patch, the\nbackend emits a NotificationResponse containing the current full\ntransaction id. It does so only if the relevant GUC is enabled, and\nwhen the top-transaction is being assigned the ID.\n\nThis information can be useful to the client, when:\ni) it wants to cancel a transaction _after_ issuing a COMMIT, and\nii) it wants to check the status of its transaction that it sent\nCOMMIT for, but never received a response (perhaps because the server\ncrashed).\n\nAdditionally, this information can be useful for middleware, like\nTransaction Processing Monitors, which can now transparently (without\nany change in application code) monitor the status of transactions (by\nwatching for the transaction status indicator in the ReadyForQuery\nprotocol message). They can use the transaction ID from the\nNotificationResponse to open a watcher, and on seeing either an 'E' or\n'I' payload in subsequent ReadyForQuery messages, close the watcher.\nOn server crash, or other adverse events, they can then use the\ntransaction IDs still being watched to check status of those\ntransactions, and take appropriate actions, e.g. retry any aborted\ntransactions.\n\nWe cannot use the elog() mechanism for this notification because it is\nsensitive to the value of client_min_messages. Hence I used the NOTIFY\ninfrastructure for this message. 
I understand that this usage violates\nsome expectations as to how NOTIFY messages are supposed to behave\n(see [2] below), but I think these are acceptable violations; open to\nhearing if/why this might not be acceptable, and any possible\nalternatives.\n\nI'm not very familiar with the parallel workers infrastructure, so the\npatch is missing any consideration for those.\n\nReviews welcome.\n\n[1]: subject was: Re: Disallow cancellation of waiting for synchronous\nreplication\nthread: https://www.postgresql.org/message-id/flat/C1F7905E-5DB2-497D-ABCC-E14D4DEE506C%40yandex-team.ru\n\n[2]:\n At present, NotificationResponse can only be sent outside a\n transaction, and thus it will not occur in the middle of a\n command-response series, though it might occur just before ReadyForQuery.\n It is unwise to design frontend logic that assumes that, however.\n Good practice is to be able to accept NotificationResponse at any\n point in the protocol.\n\nBest regards,\n--\nGurjeet Singh http://gurjeet.singh.im/",
"msg_date": "Tue, 22 Jun 2021 21:37:30 -0700",
"msg_from": "Gurjeet Singh <gurjeet@singh.im>",
"msg_from_op": false,
"msg_subject": "Automatic notification for top transaction IDs"
}
]
[
{
"msg_contents": "Superuser can permit passwordless connections on postgres_fdw\n\nCurrently postgres_fdw doesn't permit a non-superuser to connect to a\nforeign server without specifying a password, or to use an\nauthentication mechanism that doesn't use the password. This is to avoid\nusing the settings and identity of the user running Postgres.\n\nHowever, this doesn't make sense for all authentication methods. We\ntherefore allow a superuser to set \"password_required 'false'\" for user\nmappings for the postgres_fdw. The superuser must ensure that the\nforeign server won't try to rely solely on the server identity (e.g.\ntrust, peer, ident) or use an authentication mechanism that relies on the\npassword settings (e.g. md5, scram-sha-256).\n\nThis feature is a prelude to better support for sslcert and sslkey\nsettings in user mappings.\n\nAuthor: Craig Ringer.\nDiscussion: https://postgr.es/m/075135da-545c-f958-fed0-5dcb462d6dae@2ndQuadrant.com\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/6136e94dcb88c50b6156aa646746565400e373d4\n\nModified Files\n--------------\ncontrib/postgres_fdw/connection.c | 42 +++++++++---\ncontrib/postgres_fdw/expected/postgres_fdw.out | 94 ++++++++++++++++++++++++++\ncontrib/postgres_fdw/option.c | 19 ++++++\ncontrib/postgres_fdw/sql/postgres_fdw.sql | 86 +++++++++++++++++++++++\ndoc/src/sgml/postgres-fdw.sgml | 24 +++++++\n5 files changed, 257 insertions(+), 8 deletions(-)",
"msg_date": "Fri, 20 Dec 2019 05:55:10 +0000",
"msg_from": "Andrew Dunstan <andrew@dunslane.net>",
"msg_from_op": true,
"msg_subject": "pgsql: Superuser can permit passwordless connections on postgres_fdw"
},
{
"msg_contents": "Hi Andrew,\n\nOn Fri, Dec 20, 2019 at 05:55:10AM +0000, Andrew Dunstan wrote:\n> Superuser can permit passwordless connections on postgres_fdw\n> \n> Currently postgres_fdw doesn't permit a non-superuser to connect to a\n> foreign server without specifying a password, or to use an\n> authentication mechanism that doesn't use the password. This is to avoid\n> using the settings and identity of the user running Postgres.\n> \n> However, this doesn't make sense for all authentication methods. We\n> therefore allow a superuser to set \"password_required 'false'\" for user\n> mappings for the postgres_fdw. The superuser must ensure that the\n> foreign server won't try to rely solely on the server identity (e.g.\n> trust, peer, ident) or use an authentication mechanism that relies on the\n> password settings (e.g. md5, scram-sha-256).\n> \n> This feature is a prelude to better support for sslcert and sslkey\n> settings in user mappings.\n\nAfter this commit a couple of buildfarm animals are unhappy with the\nregression tests of postgres_fdw:\n CREATE ROLE nosuper NOSUPERUSER;\n+WARNING: roles created by regression test cases should have names\n starting with \"regress_\"\n GRANT USAGE ON FOREIGN DATA WRAPPER postgres_fdw TO nosuper;\nIt is a project policy to only user roles prefixed by \"regress_\" in\nregression tests.\n\nThese is also a second type of failure:\n-HINT: Valid options in this context are: [...] krbsrvname [...]\n+HINT: Valid options in this context are: [...]\nThe diff here is that krbsrvname is not part of the list of valid\noptions. Anyway, as this list is build-dependent, I think that this\ntest needs some more design effort.\n--\nMichael",
"msg_date": "Fri, 20 Dec 2019 21:02:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
},
{
"msg_contents": "[ redirecting to -hackers ]\n\nMichael Paquier <michael@paquier.xyz> writes:\n> On Fri, Dec 20, 2019 at 05:55:10AM +0000, Andrew Dunstan wrote:\n>> Superuser can permit passwordless connections on postgres_fdw\n\n> After this commit a couple of buildfarm animals are unhappy with the\n> regression tests of postgres_fdw:\n\nYeah, the buildfarm is *very* unhappy with this.\n\n> CREATE ROLE nosuper NOSUPERUSER;\n> +WARNING: roles created by regression test cases should have names\n> starting with \"regress_\"\n\nThat one is just failure to follow the guidelines, and is easily\nfixed by adjusting the test case.\n\n> These is also a second type of failure:\n> -HINT: Valid options in this context are: [...] krbsrvname [...]\n> +HINT: Valid options in this context are: [...]\n> The diff here is that krbsrvname is not part of the list of valid\n> options. Anyway, as this list is build-dependent, I think that this\n> test needs some more design effort.\n\nThis is a bit messier. But I think that the discrepancy is not\nreally the fault of this patch: rather, it's a bug in the way the\nGSS support was put into libpq. I thought we had a policy that\nall builds would recognize all possible parameters and then\nperhaps fail later. Certainly the SSL parameters are implemented\nthat way. The #if's disabling GSS stuff in PQconninfoOptions[]\nare just broken, according to that policy.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 14:04:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
},
{
"msg_contents": "I wrote:\n> This is a bit messier. But I think that the discrepancy is not\n> really the fault of this patch: rather, it's a bug in the way the\n> GSS support was put into libpq. I thought we had a policy that\n> all builds would recognize all possible parameters and then\n> perhaps fail later. Certainly the SSL parameters are implemented\n> that way. The #if's disabling GSS stuff in PQconninfoOptions[]\n> are just broken, according to that policy.\n\nConcretely, I think we ought to do (and back-patch) the attached.\n\nI notice in testing this that the \"nosuper\" business added by\n6136e94dc is broken in more ways than what the buildfarm is\ncomplaining about: it leaves the role around at the end of the\ntest. That's a HUGE violation of project policy, for security\nreasons as well as the fact that it makes it impossible to run\n\"make installcheck\" twice without getting different results.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 20 Dec 2019 14:42:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 02:42:22PM -0500, Tom Lane wrote:\n> Concretely, I think we ought to do (and back-patch) the attached.\n\nThanks for the fix, I have not been able to look at that.\n\n> I notice in testing this that the \"nosuper\" business added by\n> 6136e94dc is broken in more ways than what the buildfarm is\n> complaining about: it leaves the role around at the end of the\n> test. That's a HUGE violation of project policy, for security\n> reasons as well as the fact that it makes it impossible to run\n> \"make installcheck\" twice without getting different results.\n\nRoles left behind at the end of a test are annoying. Here is an idea:\nmake pg_regress check if any roles prefixed by \"regress_\" are left\nbehind at the end of a test. This will not work until test_pg_dump is\ncleaned up, just a thought.\n--\nMichael",
"msg_date": "Sat, 21 Dec 2019 11:18:06 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Dec 20, 2019 at 02:42:22PM -0500, Tom Lane wrote:\n>> I notice in testing this that the \"nosuper\" business added by\n>> 6136e94dc is broken in more ways than what the buildfarm is\n>> complaining about: it leaves the role around at the end of the\n>> test.\n\n> Roles left behind at the end of a test are annoying. Here is an idea:\n> make pg_regress check if any roles prefixed by \"regress_\" are left\n> behind at the end of a test. This will not work until test_pg_dump is\n> cleaned up, just a thought.\n\nYeah, it's sort of annoying that the buildfarm didn't notice this\naspect of things. I'm not sure I want to spend cycles on checking\nit in every test run, though.\n\nMaybe we could have -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\nenable a check for that aspect along with what it does now?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 22:17:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 10:17:20PM -0500, Tom Lane wrote:\n> Yeah, it's sort of annoying that the buildfarm didn't notice this\n> aspect of things. I'm not sure I want to spend cycles on checking\n> it in every test run, though.\n> \n> Maybe we could have -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n> enable a check for that aspect along with what it does now?\n\nMakes sense to restrict that under the flag. Perhaps a catalog scan\nof pg_authid at the end of pg_regress and isolationtester then?\n--\nMichael",
"msg_date": "Wed, 25 Dec 2019 11:25:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 12:56 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Dec 20, 2019 at 10:17:20PM -0500, Tom Lane wrote:\n> > Yeah, it's sort of annoying that the buildfarm didn't notice this\n> > aspect of things. I'm not sure I want to spend cycles on checking\n> > it in every test run, though.\n> >\n> > Maybe we could have -DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\n> > enable a check for that aspect along with what it does now?\n>\n> Makes sense to restrict that under the flag. Perhaps a catalog scan\n> of pg_authid at the end of pg_regress and isolationtester then?\n\n\nWhat's the preferred way to set that?\n\n\"configure CPPFLAGS=-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\"?\n\nMaybe I should add that to the sample buildfarm config ...\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 09:21:29 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> What's the preferred way to set that?\n> \"configure CPPFLAGS=-DENFORCE_REGRESSION_TEST_NAME_RESTRICTIONS\"?\n\nlongfin is doing it via config_env. I have no opinion on whether\nthat's the \"preferred\" way.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Dec 2019 19:07:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Superuser can permit passwordless connections on\n postgres_fdw"
}
]
[
{
"msg_contents": "Hi\nI created a read-only role as follows:\npsql -p 5434 kidsdpn03\nCREATE ROLE kidsdpn03_ro PASSWORD 'xxx';\nALTER ROLE kidsdpn03_ro WITH LOGIN;\nGRANT CONNECT ON DATABASE kidsdpn03 TO kidsdpn03_ro;\nGRANT USAGE ON SCHEMA kidsdpn03 TO kidsdpn03_ro;\nGRANT SELECT ON ALL TABLES IN SCHEMA kidsdpn03 TO kidsdpn03_ro;\nGRANT SELECT ON ALL SEQUENCES IN SCHEMA kidsdpn03 TO kidsdpn03_ro;\nALTER DEFAULT PRIVILEGES IN SCHEMA kidsdpn03 GRANT SELECT ON TABLES TO kidsdpn03_ro;\nALTER ROLE kidsdpn03_ro SET search_path TO kidsdpn03;\n\nbut when i create new tables, i don't have read access to those new tables. \nAnybody can help to solve this problem ?\nThank you in advance\n\nDidier ROS\ndidier.ros@edf.fr\n\n\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. 
Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.",
"msg_date": "Fri, 20 Dec 2019 13:01:50 +0000",
"msg_from": "ROS Didier <didier.ros@edf.fr>",
"msg_from_op": true,
"msg_subject": "problem with read-only user"
},
{
"msg_contents": "ROS Didier <didier.ros@edf.fr> writes:\n> I created a read-only role as follows:\n> psql -p 5434 kidsdpn03\n> CREATE ROLE kidsdpn03_ro PASSWORD 'xxx';\n> ALTER ROLE kidsdpn03_ro WITH LOGIN;\n> GRANT CONNECT ON DATABASE kidsdpn03 TO kidsdpn03_ro;\n> GRANT USAGE ON SCHEMA kidsdpn03 TO kidsdpn03_ro;\n> GRANT SELECT ON ALL TABLES IN SCHEMA kidsdpn03 TO kidsdpn03_ro;\n> GRANT SELECT ON ALL SEQUENCES IN SCHEMA kidsdpn03 TO kidsdpn03_ro;\n> ALTER DEFAULT PRIVILEGES IN SCHEMA kidsdpn03 GRANT SELECT ON TABLES TO kidsdpn03_ro;\n> ALTER ROLE kidsdpn03_ro SET search_path TO kidsdpn03;\n\n> but when i create new tables, i don't have read access to those new tables. \n\nYou only showed us part of what you did ... but IIRC, \nALTER DEFAULT PRIVILEGES only affects privileges for objects\nsubsequently made by the same user that issued the command.\n(Otherwise it'd be a security issue.) So maybe you didn't\nmake the tables as the same user?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 09:04:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: problem with read-only user"
},
{
"msg_contents": "Hi Tom\n\tThanks for your answer.\n\tActually, you're right, the tables, the sequences are created by the user kidsdpn03 and another read-only role (kidsdpn03_ro) must interrogate these objects.\n\tSo every time the kidsdpn03 role creates a new table, the kidsdpn03_ro role will not have the rights to read them. Kidsdpn03_ro must be explicitly granted read rights on this objects.\n\tCan you confirm that if it was the kidsdpn03_ro role that created the tables, there would be no problem when accessing new tables?\n\tThanks in advance.\n\nDidier ROS\ndidier.ros@edf.fr\nTél. : +33 6 49 51 11 88\n\n-----Message d'origine-----\nDe : tgl@sss.pgh.pa.us [mailto:tgl@sss.pgh.pa.us] \nEnvoyé : vendredi 20 décembre 2019 15:05\nÀ : ROS Didier <didier.ros@edf.fr>\nCc : pgsql-hackers@postgresql.org; pgsql-sql@postgresql.org\nObjet : Re: problem with read-only user\n\nROS Didier <didier.ros@edf.fr> writes:\n> I created a read-only role as follows:\n> psql -p 5434 kidsdpn03\n> CREATE ROLE kidsdpn03_ro PASSWORD 'xxx'; ALTER ROLE kidsdpn03_ro WITH \n> LOGIN; GRANT CONNECT ON DATABASE kidsdpn03 TO kidsdpn03_ro; GRANT \n> USAGE ON SCHEMA kidsdpn03 TO kidsdpn03_ro; GRANT SELECT ON ALL TABLES \n> IN SCHEMA kidsdpn03 TO kidsdpn03_ro; GRANT SELECT ON ALL SEQUENCES IN \n> SCHEMA kidsdpn03 TO kidsdpn03_ro; ALTER DEFAULT PRIVILEGES IN SCHEMA \n> kidsdpn03 GRANT SELECT ON TABLES TO kidsdpn03_ro; ALTER ROLE \n> kidsdpn03_ro SET search_path TO kidsdpn03;\n\n> but when i create new tables, i don't have read access to those new tables. \n\nYou only showed us part of what you did ... but IIRC, ALTER DEFAULT PRIVILEGES only affects privileges for objects subsequently made by the same user that issued the command.\n(Otherwise it'd be a security issue.) 
So maybe you didn't make the tables as the same user?\n\n\t\t\tregards, tom lane\n\n\n\nCe message et toutes les pièces jointes (ci-après le 'Message') sont établis à l'intention exclusive des destinataires et les informations qui y figurent sont strictement confidentielles. Toute utilisation de ce Message non conforme à sa destination, toute diffusion ou toute publication totale ou partielle, est interdite sauf autorisation expresse.\n\nSi vous n'êtes pas le destinataire de ce Message, il vous est interdit de le copier, de le faire suivre, de le divulguer ou d'en utiliser tout ou partie. Si vous avez reçu ce Message par erreur, merci de le supprimer de votre système, ainsi que toutes ses copies, et de n'en garder aucune trace sur quelque support que ce soit. Nous vous remercions également d'en avertir immédiatement l'expéditeur par retour du message.\n\nIl est impossible de garantir que les communications par messagerie électronique arrivent en temps utile, sont sécurisées ou dénuées de toute erreur ou virus.\n____________________________________________________\n\nThis message and any attachments (the 'Message') are intended solely for the addressees. The information contained in this Message is confidential. Any use of information contained in this Message not in accord with its purpose, any dissemination or disclosure, either whole or partial, is prohibited except formal approval.\n\nIf you are not the addressee, you may not copy, forward, disclose or use any part of it. If you have received this message in error, please delete it and all copies from your system and notify the sender immediately by return message.\n\nE-mail communication cannot be guaranteed to be timely secure, error or virus-free.\n\n\n\n",
"msg_date": "Fri, 27 Dec 2019 08:56:55 +0000",
"msg_from": "ROS Didier <didier.ros@edf.fr>",
"msg_from_op": true,
"msg_subject": "RE: problem with read-only user"
}
]
[
{
"msg_contents": "This is a usability complaint. If one knows enough about vacuum and/or\nlogging, I'm sure there's no issue.\n\nRight now vacuum shows:\n\n| 1 postgres=# VACUUM t; \n| 2 DEBUG: vacuuming \"public.t\"\n| 3 DEBUG: scanned index \"t_i_key\" to remove 999 row versions\n| 4 DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n| 5 DEBUG: \"t\": removed 999 row versions in 5 pages\n| 6 DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n| 7 DEBUG: index \"t_i_key\" now contains 999 row versions in 11 pages\n| 8 DETAIL: 999 index row versions were removed.\n| 9 0 index pages have been deleted, 0 are currently reusable.\n| 10 CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n| 11 DEBUG: \"t\": found 999 removable, 999 nonremovable row versions in 9 out of 9 pages\n| 12 DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 130886944\n| 13 There were 0 unused item identifiers.\n| 14 Skipped 0 pages due to buffer pins, 0 frozen pages.\n| 15 0 pages are entirely empty.\n| 16 CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\n| 17 VACUUM\n\n2: line showing action to be performed on table - good\n3-4: line showing action which WAS performed on index, but only after it's done\n5-6: line showing actions conditionally performed on table, but only after it's done\n7-10: line showing status of on index, but only after it's done\n11-16: line showing status of table; unconditional; good\n\nI'm proposing to output a message before 3, 5, and 7, like in the attached.\nThe messages are just placeholders; if there's any agreement this is an\nimprovement, I'll accept suggestions for better content.\n\nThis is confusing, at least to me. For example, the rusage output is shown\nnumerous times (depending on the number of indices and dead tuples). I (at\nleast) tend to think think that a past-tense followed by an \"elapsed\" time\nindicates that the process is neary done, and maybe just waiting on a fsync or\nmaybe some other synchronization. 
If one sees multiple indexes output quickly,\nyou can infer that the process is looping over them. When it's done with the\nindexes, it starts another phase, but doesn't say that (or what).\n\n#2 \"vacuuming\" line shows no rusage, and it's not very clear that the rusage\n\"DETAIL\" in line#3 (in this example) applies to line#2 \"scanning index\", so\nit's easy to think that the output is reporting that the whole command took\n0.00s elapsed, which is irritating when the command hasn't yet finished.\n\nAnother example from CSV logs, to show log times (keep in mind that VACUUM\nVERBOSE is less clear than the logfile, which has the adantage of a separate\ncolumn for DETAIL).\n\n| 1 2019-12-16 09:59:22.568+10 | vacuuming \"public.alarms\" | \n| 2 2019-12-16 09:59:47.662+10 | scanned index \"alarms_active_idx\" to remove 211746 row versions | CPU: user: 0.22 s, system: 0.00 s, elapsed: 0.46 s\n| 3 2019-12-16 09:59:48.036+10 | scanned index \"alarms_displayable_idx\" to remove 211746 row versions | CPU: user: 0.22 s, system: 0.00 s, elapsed: 0.37 s\n| 4 2019-12-16 09:59:48.788+10 | scanned index \"alarms_raw_current_idx\" to remove 211746 row versions | CPU: user: 0.28 s, system: 0.00 s, elapsed: 0.75 s\n| 5 2019-12-16 09:59:51.379+10 | scanned index \"alarms_alarm_id_linkage_back_idx\" to remove 211746 row versions | CPU: user: 1.04 s, system: 0.05 s, elapsed: 2.59 s\n| 6 2019-12-16 09:59:53.75+10 | scanned index \"alarms_alarm_id_linkage_idx\" to remove 211746 row versions | CPU: user: 0.99 s, system: 0.08 s, elapsed: 2.37 s\n| 7 2019-12-16 09:59:56.473+10 | scanned index \"alarms_pkey\" to remove 211746 row versions | CPU: user: 1.11 s, system: 0.08 s, elapsed: 2.72 s\n| 8 2019-12-16 10:00:35.142+10 | scanned index \"alarms_alarm_time_idx\" to remove 211746 row versions | CPU: user: 0.94 s, system: 0.08 s, elapsed: 38.66 s\n| 9 2019-12-16 10:00:37.002+10 | scanned index \"alarms_alarm_clear_time_idx\" to remove 211746 row versions | CPU: user: 0.72 s, system: 0.08 s, 
elapsed: 1.85 s\n| 10 2019-12-16 10:03:57.42+10 | \"alarms\": removed 211746 row versions in 83923 pages | CPU: user: 10.24 s, system: 2.28 s, elapsed: 200.41 s\n| 11 2019-12-16 10:03:57.425+10 | index \"alarms_active_idx\" now contains 32 row versions in 1077 pages | 57251 index row versions were removed. +\n| 13 2019-12-16 10:03:57.426+10 | index \"alarms_raw_current_idx\" now contains 1495 row versions in 1753 pages | 96957 index row versions were removed. +\n| 15 2019-12-16 10:03:57.426+10 | index \"alarms_displayable_idx\" now contains 32 row versions in 1129 pages | 55220 index row versions were removed. +\n| 16 2019-12-16 10:03:57.427+10 | index \"alarms_pkey\" now contains 2269786 row versions in 9909 pages | 197172 index row versions were removed. +\n| 17 2019-12-16 10:03:57.427+10 | index \"alarms_alarm_time_idx\" now contains 2269791 row versions in 10306 pages | 211745 index row versions were removed. +\n| 17 2019-12-16 10:03:57.427+10 | index \"alarms_alarm_id_linkage_idx\" now contains 2269786 row versions in 11141 pages | 211746 index row versions were removed. +\n| 19 2019-12-16 10:03:57.427+10 | index \"alarms_alarm_id_linkage_back_idx\" now contains 2269786 row versions in 11352 pages | 211746 index row versions were removed. +\n| 20 2019-12-16 10:03:57.428+10 | index \"alarms_alarm_clear_time_idx\" now contains 2269791 row versions in 9875 pages | 166886 index row versions were removed. +\n| 21 2019-12-16 10:03:57.43+10 | \"alarms\": found 9534 removable, 1093069 nonremovable row versions in 211956 out of 430749 pages | 1 dead row versions cannot be removed yet, oldest xmin: 133809389+\n| | | There were 562588 unused item identifiers. +\n| | | Skipped 0 pages due to buffer pins, 7066 frozen pages. +\n| | | 0 pages are entirely empty. 
+\n| | | CPU: user: 17.85 s, system: 5.40 s, elapsed: 274.86 s.\n| 22 2019-12-16 10:03:58.795+10 | \"pg_toast_17781\": removed 28 row versions in 7 pages | CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s\n| 23 2019-12-16 10:03:58.795+10 | scanned index \"pg_toast_17781_index\" to remove 28 row versions | CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.07 s\n| 24 2019-12-16 10:03:58.816+10 | index \"pg_toast_17781_index\" now contains 1503 row versions in 7 pages | 23 index row versions were removed. +\n| 25 2019-12-16 10:03:58.816+10 | \"pg_toast_17781\": found 23 removable, 1503 nonremovable row versions in 375 out of 375 pages | 0 dead row versions cannot be removed yet, oldest xmin:\n \n#9 shows result of action performed on index, followed by 3.5 minutes of\nsilence... This isn't very amusing when the last output says \"elapsed: 1.85s\",\nand when you don't know how many \"elapsed\" lines to expect (as bad as any\nprogress bar with multiple phases).\n\nAnother approach would be to somehow make it more clear (for vacuum or in\ngeneral) that the \"detail\" line is associated with the preceding output.\n\nJustin\n\n\n",
"msg_date": "Fri, 20 Dec 2019 11:11:32 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "vacuum verbose detail logs are unclear (show debug lines at *start*\n of each stage?)"
},
{
"msg_contents": "On Fri, Dec 20, 2019 at 12:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n\n> This is a usability complaint. If one knows enough about vacuum and/or\n> logging, I'm sure there's no issue.\n>\n\n\n> | 11 DEBUG: \"t\": found 999 removable, 999 nonremovable row versions in 9\n> out of 9 pages\n>\n\nI agree the mixture of pre-action and after-action reporting is rather\nconfusing sometimes. I'm more concerned about what the user sees in their\nterminal, though, rather than the server's log file.\n\nAlso, the above quoted line is confusing. It makes it sound like it found\nremovable items, but didn't actually remove them. I think that that is\ntaking grammatical parallelism too far. How about something like:\n\nDEBUG: \"t\": removed 999 row versions, found 999 nonremovable row versions\nin 9 out of 9 pages\n\nAlso, I'd appreciate a report on how many hint-bits were set, and how many\npages were marked all-visible and/or frozen. When I do a manual vacuum, it\nis more often for those purposes than it is for removing removable rows\n(which autovac generally does a good enough job of).\n\nAlso, is not so clear that \"nonremovable rows\" includes both live and\nrecently dead. Although hopefully reading the next line will clarify that,\nto the person who has enough background knowledge.\n\n\n\n> | 12 DETAIL: 0 dead row versions cannot be removed yet, oldest xmin:\n> 130886944\n> | 13 There were 0 unused item identifiers.\n> | 14 Skipped 0 pages due to buffer pins, 0 frozen pages.\n>\n\nIt is a bit weird that we don't report skipped all-visible pages here. It\nwas implicitly reported in the \"in 9 out of 9 pages\" message, but I think\nit should be reported explicitly as well.\n\nCheers,\n\nJeff\n\nOn Fri, Dec 20, 2019 at 12:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:This is a usability complaint. If one knows enough about vacuum and/or\nlogging, I'm sure there's no issue. 
| 11 DEBUG: \"t\": found 999 removable, 999 nonremovable row versions in 9 out of 9 pagesI agree the mixture of pre-action and after-action reporting is rather confusing sometimes. I'm more concerned about what the user sees in their terminal, though, rather than the server's log file.Also, the above quoted line is confusing. It makes it sound like it found removable items, but didn't actually remove them. I think that that is taking grammatical parallelism too far. How about something like:DEBUG: \"t\": removed 999 row versions, found 999 nonremovable row versions in 9 out of 9 pages Also, I'd appreciate a report on how many hint-bits were set, and how many pages were marked all-visible and/or frozen. When I do a manual vacuum, it is more often for those purposes than it is for removing removable rows (which autovac generally does a good enough job of). Also, is not so clear that \"nonremovable rows\" includes both live and recently dead. Although hopefully reading the next line will clarify that, to the person who has enough background knowledge. \n| 12 DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 130886944\n| 13 There were 0 unused item identifiers.\n| 14 Skipped 0 pages due to buffer pins, 0 frozen pages.It is a bit weird that we don't report skipped all-visible pages here. It was implicitly reported in the \"in 9 out of 9 pages\" message, but I think it should be reported explicitly as well.Cheers,Jeff",
"msg_date": "Sun, 29 Dec 2019 13:15:24 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose detail logs are unclear (show debug lines at\n *start* of each stage?)"
},
{
"msg_contents": "On Sun, Dec 29, 2019 at 01:15:24PM -0500, Jeff Janes wrote:\n> On Fri, Dec 20, 2019 at 12:11 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> \n> > This is a usability complaint. If one knows enough about vacuum and/or\n> > logging, I'm sure there's no issue.\n> \n> > | 11 DEBUG: \"t\": found 999 removable, 999 nonremovable row versions in 9 out of 9 pages\n> \n> I agree the mixture of pre-action and after-action reporting is rather\n> confusing sometimes. I'm more concerned about what the user sees in their\n> terminal, though, rather than the server's log file.\n\nSorry, I ran vacuum (not verbose) with client_min_messages=debug, which was confusing.\n\n> Also, the above quoted line is confusing. It makes it sound like it found\n> removable items, but didn't actually remove them. I think that that is\n> taking grammatical parallelism too far. How about something like:\n> \n> DEBUG: \"t\": removed 999 row versions, found 999 nonremovable row versions in 9 out of 9 pages\n\nSince da4ed8bf, lazy_vacuum_heap() actually says: \"removed %d [row versions] in\n%d pages\". Strangely, the \"found .. removable, .. nonremovable\" in\nlazy_scan_heap() is also from da4ed8bf. 
Should we change them to match ?\n\n> Also, I'd appreciate a report on how many hint-bits were set\n> and how many pages were marked all-visible and/or frozen.\n\nPossibly should fork this part to a different thread, but..\nhint bits are being set by heap_prune_chain():\n\n|#0 HeapTupleSatisfiesVacuum (htup=htup@entry=0x7fffabfcccc0, OldestXmin=OldestXmin@entry=536, buffer=buffer@entry=167) at heapam_visibility.c:1245\n|#1 0x00007fb6eb3eb848 in heap_prune_chain (prstate=0x7fffabfccf30, OldestXmin=536, rootoffnum=1, buffer=167, relation=0x7fb6eb1e6858) at pruneheap.c:488\n|#2 heap_page_prune (relation=relation@entry=0x7fb6eb1e6858, buffer=buffer@entry=167, OldestXmin=536, report_stats=report_stats@entry=false, latestRemovedXid=latestRemovedXid@entry=0x7fb6ed84a13c) at pruneheap.c:223\n|#3 0x00007fb6eb3f02a2 in lazy_scan_heap (aggressive=false, nindexes=0, Irel=0x0, vacrelstats=0x7fb6ed84a0c0, params=0x7fffabfcdfd0, onerel=0x7fb6eb1e6858) at vacuumlazy.c:970\n|#4 heap_vacuum_rel (onerel=0x7fb6eb1e6858, params=0x7fffabfcdfd0, bstrategy=<optimized out>) at vacuumlazy.c:302\n\nIn the attached, I moved heap_page_prune to avoid a second loop over items.\nThen, initdb crashed until I avoided calling heap_prepare_freeze_tuple() for\nHEAPTUPLE_DEAD. I'm not sure that's ok or maybe if it's exposing an issue.\nI'm also not sure if t_infomask!=oldt_infomask is the right test.\n\nOne of my usability complaints was that the DETAIL includes newlines, which\nmakes it not apparent that it's detail, or that it's associated with the\npreceding INFO. Should those all be separate DETAIL messages (currently, only\nthe first errdetail is used, but maybe they should be catted together\nusefully). Should errdetail do something with newlines, like change them to\n\\n\\t for output to the client (but not logfile). 
Should vacuum itself do\nsomething (but probably no change to logfiles).\n\nI remembered that log_statement_stats looks like this:\n\n2020-01-01 11:28:33.758 CST [3916] LOG: EXECUTOR STATISTICS\n2020-01-01 11:28:33.758 CST [3916] DETAIL: ! system usage stats:\n ! 0.050185 s user, 0.000217 s system, 0.050555 s elapsed\n ! [2.292346 s user, 0.215656 s system total]\n [...]\n\n\nIt calls errdetail_internal(\"%s\", str.data), same as vacuum, but the multi-line\ndetail messages are written like this:\n|appendStringInfo(&str, \"!\\t...\")\n|...\n|ereport(LOG,\n|\t(errmsg_internal(\"%s\", title),\n|\terrdetail_internal(\"%s\", str.data)));\n\nSince they can run multiple times, including rusage, and there's not currently\nany message shown before their action, I propose that lazy_vacuum_index/heap\nshould write VACUUM VERBOSE logs at DEBUG level. Or otherwise show a log\nbefore starting each action, at least those for which it logs completion.\n\nI'm not sure why this one doesn't use ngettext() ? Missed at a8d585c0 ?\n|appendStringInfo(&buf, _(\"There were %.0f unused item identifiers.\\n\"),\n\nOr why this one uses _/gettext() ? (580ddcec suggests that I'm missing\nsomething?).\n|appendStringInfo(&buf, _(\"%s.\"), pg_rusage_show(&ru0));\n\nAnyway, now it looks like this:\npostgres=# VACUUM VERBOSE t;\nINFO: vacuuming \"pg_temp_3.t\"\nINFO: \"t\": removed 1998 row versions in 5 pages\nINFO: \"t\": removed 1998, found 999 nonremovable row versions in 9 out of 9 pages\nDETAIL: ! 0 dead row versions cannot be removed yet, oldest xmin: 4505\n! There were 0 unused item identifiers.\n! Skipped 0 pages due to buffer pins, 0 frozen pages.\n! 0 pages are entirely empty.\n! Marked 9 pages all visible, 4 pages frozen.\n! Wrote 1998 hint bits.\n! CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s.\nVACUUM\n\nThanks for your input.\n\nJustin",
"msg_date": "Sun, 12 Jan 2020 18:45:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage; show allvisible/frozen/hintbits"
},
{
"msg_contents": "Rebased against 40d964ec997f64227bc0ff5e058dc4a5770a70a9\nI added to March CF https://commitfest.postgresql.org/27/2425/",
"msg_date": "Tue, 21 Jan 2020 07:49:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage; show allvisible/frozen/hintbits"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 07:49:34AM -0600, Justin Pryzby wrote:\n> Rebased against 40d964ec997f64227bc0ff5e058dc4a5770a70a9\n> I added to March CF https://commitfest.postgresql.org/27/2425/\n\nPlease be careful with the sets of patches sent to a thread, just to\nsay that what you are sending is organized in a messy way, and that\nthis is not the only thread (I can see sometimes the same patches sent\nto multiple threads for no actual reason). First, patches 0001 and\n0002 have nothing to do with this thread. Patches 0003 and 0005 could\njust be merged together, visibly with 0004 as well as they treat of\nthe same concepts, actually related to this thread. My point is that\nit is harder to understand what you are trying to do, and that this is\ninconsistent with the threads created.\n\nFrom patch 0002:\n- * If the all-visible page is turned out to be all-frozen but not\n+ * If the all-visible page turned out to be all-frozen but not\n * marked, we should so mark it. Note that all_frozen is only valid\n * if all_visible is true, so we must check both.\nShouldn't the last part of the sentence be \"we should mark it so\"\ninstead of \"we should so mark it\"? I would rephrase the whole as\nfollows:\n\"If the all-visible page is all-frozen but not marked as such yet,\nmark it as all-frozen.\"\n\nFrom patch 0003:\n /*\n+ * Indent multi-line DETAIL if being sent to client (verbose)\n+ * We don't know if it's sent to the client (client_min_messages);\n+ * Also, that affects output to the logfile, too; assume that it's more\n+ * important to format messages requested by the client than to make\n+ * verbose logs pretty when also sent to the logfile.\n+ */\n+ msgprefix = elevel==INFO ? \"!\\t\" : \"\";\nSuch stuff gets a -1 from me. This is not project-like, and you make\nthe translation of those messages much harder than they should be.\n--\nMichael",
"msg_date": "Wed, 22 Jan 2020 14:34:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage; show allvisible/frozen/hintbits"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 02:34:57PM +0900, Michael Paquier wrote:\n> Shouldn't the last part of the sentence be \"we should mark it so\"\n> instead of \"we should so mark it\"? I would rephrase the whole as\n> follows:\n> \"If the all-visible page is all-frozen but not marked as such yet,\n> mark it as all-frozen.\"\n\nApplied this one to HEAD after chewing on it a bit.\n--\nMichael",
"msg_date": "Thu, 23 Jan 2020 15:57:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage; show allvisible/frozen/hintbits"
},
{
"msg_contents": "On Wed, Jan 22, 2020 at 02:34:57PM +0900, Michael Paquier wrote:\n> From patch 0003:\n> /*\n> + * Indent multi-line DETAIL if being sent to client (verbose)\n> + * We don't know if it's sent to the client (client_min_messages);\n> + * Also, that affects output to the logfile, too; assume that it's more\n> + * important to format messages requested by the client than to make\n> + * verbose logs pretty when also sent to the logfile.\n> + */\n> + msgprefix = elevel==INFO ? \"!\\t\" : \"\";\n> Such stuff gets a -1 from me. This is not project-like, and you make\n> the translation of those messages much harder than they should be.\n\nI don't see why its harder to translate ? Do you mean because it changes the\nstrings by adding %s ?\n\n- appendStringInfo(&sbuf, ngettext(\"%u page is entirely empty.\\n\",\n- \"%u pages are entirely empty.\\n\",\n+ appendStringInfo(&sbuf, ngettext(\"%s%u page is entirely empty.\\n\",\n+ \"%s%u pages are entirely empty.\\n\",\n...\n\nI did raise two questions regarding translation:\n\nI'm not sure why this one doesn't use get ngettext() ? Seems to have been\nmissed at a8d585c0.\n|appendStringInfo(&buf, _(\"There were %.0f unused item identifiers.\\n\"),\n\nOr why this one does use _/gettext() ? (580ddcec suggests that I'm missing\nsomething, but I just experimented, and it really seems to do nothing, since\n\"%s\" shouldn't be translated).\n|appendStringInfo(&buf, _(\"%s.\"), pg_rusage_show(&ru0));\n\nAlso, I realized it's possible to write different strings to the log vs the\nclient (with and without a prefix) by calling errdetail_internal() and\nerrdetail_log().\n\nHere's a version rebased on top of f942dfb9, and making use of errdetail_log.\nI'm not sure if it address your concern about translation, but it doesn't\nchange strings.\n\nI think it's not needed or desirable to change what's written to the logfile,\nsince CSV logs have a separate \"detail\" field, and text logs are indented. 
The\nserver log is unchanged:\n\n> 2020-01-25 23:08:40.451 CST [13971] INFO: \"t\": removed 0, found 160 nonremovable row versions in 1 out of 888 pages\n> 2020-01-25 23:08:40.451 CST [13971] DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 781\n> There were 0 unused item identifiers.\n> Skipped 0 pages due to buffer pins, 444 frozen pages.\n> 0 pages are entirely empty.\n> CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s.\n\nIf VERBOSE, then the client log has ! prefixes, with the style borrowed from\nShowUsage:\n\n> INFO: \"t\": removed 0, found 160 nonremovable row versions in 1 out of 888 pages\n> DETAIL: ! 0 dead row versions cannot be removed yet, oldest xmin: 781\n> ! There were 0 unused item identifiers.\n> ! Skipped 0 pages due to buffer pins, 444 frozen pages.\n> ! 0 pages are entirely empty.\n> ! CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.01 s.\n\nI mentioned before that maybe the client's messages with newlines should be\nindented similarly to how the they're done in the text logfile. I looked,\nthat's append_with_tabs() in elog.c. So that's a different possible\nimplementation, which would apply to any message with newlines (or possibly\njust DETAIL).\n\nI'll also fork the allvisible/frozen/hintbits patches to a separate thread.\n\nThanks,\nJustin",
"msg_date": "Sat, 25 Jan 2020 23:36:29 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage"
},
{
"msg_contents": "On 2020-01-26 06:36, Justin Pryzby wrote:\n> From a3d0b41435655615ab13f808ec7c30e53e596e50 Mon Sep 17 00:00:00 2001\n> From: Justin Pryzby<pryzbyj@telsasoft.com>\n> Date: Sat, 25 Jan 2020 21:25:37 -0600\n> Subject: [PATCH v3 1/4] Remove gettext erronously readded at 580ddce\n> \n> ---\n> src/backend/access/heap/vacuumlazy.c | 2 +-\n> 1 file changed, 1 insertion(+), 1 deletion(-)\n> \n> diff --git a/src/backend/access/heap/vacuumlazy.c b/src/backend/access/heap/vacuumlazy.c\n> index 8ce5011..8e8ea9d 100644\n> --- a/src/backend/access/heap/vacuumlazy.c\n> +++ b/src/backend/access/heap/vacuumlazy.c\n> @@ -1690,7 +1690,7 @@ lazy_scan_heap(Relation onerel, VacuumParams *params, LVRelStats *vacrelstats,\n> \t\t\t\t\t\t\t\t\t\"%u pages are entirely empty.\\n\",\n> \t\t\t\t\t\t\t\t\tempty_pages),\n> \t\t\t\t\t empty_pages);\n> -\tappendStringInfo(&buf, _(\"%s.\"), pg_rusage_show(&ru0));\n> +\tappendStringInfo(&buf, \"%s.\", pg_rusage_show(&ru0));\n> \n> \tereport(elevel,\n> \t\t\t(errmsg(\"\\\"%s\\\": found %.0f removable, %.0f nonremovable row versions in %u out of %u pages\",\n> -- 2.7.4\n\nWhy do you think it was erroneously added?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 27 Feb 2020 10:10:57 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage"
},
{
"msg_contents": "On Thu, Feb 27, 2020 at 10:10:57AM +0100, Peter Eisentraut wrote:\n> On 2020-01-26 06:36, Justin Pryzby wrote:\n> >Subject: [PATCH v3 1/4] Remove gettext erronously readded at 580ddce\n> >\n> >-\tappendStringInfo(&buf, _(\"%s.\"), pg_rusage_show(&ru0));\n> >+\tappendStringInfo(&buf, \"%s.\", pg_rusage_show(&ru0));\n> \n> Why do you think it was erroneously added?\n\nI was wrong.\n\nIt seemed useless to me to translate \"%s.\", but I see now that at least JA.PO\nuses a different terminator than \".\".\n\n$ git grep -C3 'msgid \"%s.\"$' '*.po' |grep msgstr\nsrc/backend/po/de.po-msgstr \"%s.\"\nsrc/backend/po/es.po-msgstr \"%s.\"\nsrc/backend/po/fr.po-msgstr \"%s.\"\nsrc/backend/po/id.po-msgstr \"%s.\"\nsrc/backend/po/it.po-msgstr \"%s.\"\nsrc/backend/po/ja.po-msgstr \"%s。\"\n\nBTW I only *happened* to see your message on the www interface. I didn't get\nthe original message. And the \"Resend email\" button didn't get it to me,\neither..\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 29 Feb 2020 13:59:42 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Wed, Jan 22, 2020 at 02:34:57PM +0900, Michael Paquier wrote:\n>> From patch 0003:\n>> /*\n>> + * Indent multi-line DETAIL if being sent to client (verbose)\n>> + * We don't know if it's sent to the client (client_min_messages);\n>> + * Also, that affects output to the logfile, too; assume that it's more\n>> + * important to format messages requested by the client than to make\n>> + * verbose logs pretty when also sent to the logfile.\n>> + */\n>> + msgprefix = elevel==INFO ? \"!\\t\" : \"\";\n>> Such stuff gets a -1 from me. This is not project-like, and you make\n>> the translation of those messages much harder than they should be.\n\n> I don't see why its harder to translate ?\n\nThe really fundamental problem with this is that you are trying to make\nthe server do what is properly the client's job, namely format messages\nnicely. Please read the message style guidelines [1], particularly\nthe bit about \"Formatting\", which basically says \"don't\":\n\n Formatting\n\n Don't put any specific assumptions about formatting into the message\n texts. Expect clients and the server log to wrap lines to fit their\n own needs. In long messages, newline characters (\\n) can be used to\n indicate suggested paragraph breaks. Don't end a message with a\n newline. Don't use tabs or other formatting characters. (In error\n context displays, newlines are automatically added to separate levels\n of context such as function calls.)\n\n Rationale: Messages are not necessarily displayed on terminal-type\n displays. In GUI displays or browsers these formatting instructions\n are at best ignored.\n\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/docs/devel/error-style-guide.html\n\n\n",
"msg_date": "Tue, 24 Mar 2020 17:58:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose detail logs are unclear;\n log at *start* of each stage"
},
{
"msg_contents": "> On 24 Mar 2020, at 22:58, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> The really fundamental problem with this is that you are trying to make\n> the server do what is properly the client's job, namely format messages\n> nicely. Please read the message style guidelines [1], particularly\n> the bit about \"Formatting\", which basically says \"don't\":\n\nThis thread has stalled since the last CF with Tom's raised issue unanswered,\nand the patch no longer applies. I'm closing this as Returned with Feedback,\nif there is an updated patchset then please re-open the entry.\n\ncheers ./daniel\n\n\n",
"msg_date": "Sun, 5 Jul 2020 22:48:42 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: vacuum verbose detail logs are unclear; log at *start* of each\n stage"
}
] |
[
{
"msg_contents": "This very small patch removes some duplicated code in pg_publication.\n\n-- \n�lvaro Herrera http://www.linkedin.com/in/alvherre",
"msg_date": "Fri, 20 Dec 2019 17:10:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "pg_publication repetitious code"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> This very small patch removes some duplicated code in pg_publication.\n\nSeems like the extra test on missing_oid is unnecessary:\n\n+\toid = get_publication_oid(pubname, missing_ok);\n+\tif (!OidIsValid(oid) && missing_ok)\n+\t\treturn NULL;\n\nAs coded, it's get_publication_oid's job to deal with that.\n\nOtherwise +1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Dec 2019 15:54:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_publication repetitious code"
}
] |
[
{
"msg_contents": "When updating a table row with generated columns, we only need to \nrecompute those generated columns whose base columns have changed in \nthis update and keep the rest unchanged. This can result in a \nsignificant performance benefit (easy to reproduce for example with a \ntsvector column). The required information was already kept in \nRangeTblEntry.extraUpdatedCols; we just have to make use of it.\n\nA small problem is that right now ExecSimpleRelationUpdate() does not \npopulate extraUpdatedCols. That needs fixing first. This is also \nrelated to the issue discussed in \"logical replication does not fire \nper-column triggers\"[0]. I'll leave my patch here while that issue is \nbeing resolved.\n\n\n[0]: \nhttps://www.postgresql.org/message-id/flat/21673e2d-597c-6afe-637e-e8b10425b240%402ndquadrant.com\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 21 Dec 2019 07:47:29 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Optimize update of tables with generated columns"
},
{
"msg_contents": "On 2019-12-21 07:47, Peter Eisentraut wrote:\n> When updating a table row with generated columns, we only need to\n> recompute those generated columns whose base columns have changed in\n> this update and keep the rest unchanged. This can result in a\n> significant performance benefit (easy to reproduce for example with a\n> tsvector column). The required information was already kept in\n> RangeTblEntry.extraUpdatedCols; we just have to make use of it.\n> \n> A small problem is that right now ExecSimpleRelationUpdate() does not\n> populate extraUpdatedCols. That needs fixing first.\n\nHere is an updated patch set that contains a fix for the issue above \n(should be backpatched IMO) and the actual performance patch as before.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 13 Feb 2020 14:39:41 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize update of tables with generated columns"
},
{
"msg_contents": "čt 13. 2. 2020 v 14:40 odesílatel Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> napsal:\n\n> On 2019-12-21 07:47, Peter Eisentraut wrote:\n> > When updating a table row with generated columns, we only need to\n> > recompute those generated columns whose base columns have changed in\n> > this update and keep the rest unchanged. This can result in a\n> > significant performance benefit (easy to reproduce for example with a\n> > tsvector column). The required information was already kept in\n> > RangeTblEntry.extraUpdatedCols; we just have to make use of it.\n> >\n> > A small problem is that right now ExecSimpleRelationUpdate() does not\n> > populate extraUpdatedCols. That needs fixing first.\n>\n> Here is an updated patch set that contains a fix for the issue above\n> (should be backpatched IMO) and the actual performance patch as before.\n>\n\n+ 1\n\nI tested check-world without problems, and changes of patch has sense for\nme.\n\nRegards\n\nPavel\n\n\n> --\n> Peter Eisentraut http://www.2ndQuadrant.com/\n> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n>\n\nčt 13. 2. 2020 v 14:40 odesílatel Peter Eisentraut <peter.eisentraut@2ndquadrant.com> napsal:On 2019-12-21 07:47, Peter Eisentraut wrote:\n> When updating a table row with generated columns, we only need to\n> recompute those generated columns whose base columns have changed in\n> this update and keep the rest unchanged. This can result in a\n> significant performance benefit (easy to reproduce for example with a\n> tsvector column). The required information was already kept in\n> RangeTblEntry.extraUpdatedCols; we just have to make use of it.\n> \n> A small problem is that right now ExecSimpleRelationUpdate() does not\n> populate extraUpdatedCols. 
That needs fixing first.\n\nHere is an updated patch set that contains a fix for the issue above \n(should be backpatched IMO) and the actual performance patch as before.+ 1I tested check-world without problems, and changes of patch has sense for me.RegardsPavel\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 13 Feb 2020 16:16:46 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Optimize update of tables with generated columns"
},
{
"msg_contents": "On 2020-02-13 16:16, Pavel Stehule wrote:\n> čt 13. 2. 2020 v 14:40 odesílatel Peter Eisentraut \n> <peter.eisentraut@2ndquadrant.com \n> <mailto:peter.eisentraut@2ndquadrant.com>> napsal:\n> \n> On 2019-12-21 07:47, Peter Eisentraut wrote:\n> > When updating a table row with generated columns, we only need to\n> > recompute those generated columns whose base columns have changed in\n> > this update and keep the rest unchanged. This can result in a\n> > significant performance benefit (easy to reproduce for example with a\n> > tsvector column). The required information was already kept in\n> > RangeTblEntry.extraUpdatedCols; we just have to make use of it.\n> >\n> > A small problem is that right now ExecSimpleRelationUpdate() does not\n> > populate extraUpdatedCols. That needs fixing first.\n> \n> Here is an updated patch set that contains a fix for the issue above\n> (should be backpatched IMO) and the actual performance patch as before.\n> \n> \n> + 1\n> \n> I tested check-world without problems, and changes of patch has sense \n> for me.\n\ncommitted, thanks\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 17 Feb 2020 16:16:40 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Optimize update of tables with generated columns"
}
] |
[
{
"msg_contents": "I just had to retrieve my jaw from the floor after reading this\nbit in RelationBuildPartitionDesc:\n\n * The system cache may be out of date; if so, we may find no pg_class\n * tuple or an old one where relpartbound is NULL. In that case, try\n * the table directly. We can't just AcceptInvalidationMessages() and\n * retry the system cache lookup because it's possible that a\n * concurrent ATTACH PARTITION operation has removed itself to the\n * ProcArray but yet added invalidation messages to the shared queue;\n * InvalidateSystemCaches() would work, but seems excessive.\n\nAs far as I can see, this argument is wrong on every count, and if it\nwere right, the code it is defending would still be wrong.\n\nIn the first place, it's claiming, based on no evidence, that our whole\napproach to syscaches is wrong. If this code needs to deal with obsolete\nsyscache entries then so do probably thousands of other places.\n\nIn the second place, the argument that AcceptInvalidationMessages wouldn't\nwork is BS, because the way that that *actually* works is that transaction\ncommit updates clog and sends inval messages before it releases locks.\nSo if you've acquired enough of a lock to be sure that the data you want\nto read is stable, then you do not need to worry about whether you've\nreceived any relevant inval messages. You have --- and you don't even\nneed to call AcceptInvalidationMessages for yourself, because lock\nacquisition already did, see e.g. LockRelationOid.\n\nIn the third place, if this imaginary risk that the syscache was out of\ndate were real, the code would be completely failing to deal with it,\nbecause all it is testing is whether it found a null relpartbound value.\nThat wouldn't handle the case where a non-null relpartbound is obsolete,\nwhich is what you'd expect after ATTACH PARTITION.\n\nFurthermore, if all of the above can be rebutted, then what's the argument\nthat reading pg_class directly will produce a better answer? 
The only way\nthat any of this could be useful is if you're trying to read data that is\nchanging under you because you didn't take an adequate lock. In that case\nthere's no guarantee that what you will read from pg_class is up-to-date\neither.\n\nIn reality, what this code is doing is examining relations that it found\nby reading pg_inherit, using an MVCC snapshot, so I do not see what is the\nargument for supposing that the pg_class cache is more out-of-date than\nthe pg_inherit data.\n\nUnsurprisingly, the code coverage report shows that this code path is\nnever taken. I think we could dike out partdesc.c lines 113-151\naltogether, and make the code just above there look more like every\nother syscache access in the backend.\n\nIf somebody's got some actual evidence that this is necessary, and not\na flight of feverish imagination, let's hear it. (And maybe let's\ndevelop an isolation test that exercises the code path, because there's\nsure little reason to believe it works right now.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 21 Dec 2019 13:28:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Bogus logic in RelationBuildPartitionDesc"
},
{
"msg_contents": "On Sat, Dec 21, 2019 at 10:28 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I just had to retrieve my jaw from the floor after reading this\n> bit in RelationBuildPartitionDesc:\n>\n> * The system cache may be out of date; if so, we may find no pg_class\n> * tuple or an old one where relpartbound is NULL. In that case, try\n> * the table directly. We can't just AcceptInvalidationMessages() and\n> * retry the system cache lookup because it's possible that a\n> * concurrent ATTACH PARTITION operation has removed itself to the\n> * ProcArray but yet added invalidation messages to the shared queue;\n> * InvalidateSystemCaches() would work, but seems excessive.\n>\n> As far as I can see, this argument is wrong on every count, and if it\n> were right, the code it is defending would still be wrong.\n>\n> In the first place, it's claiming, based on no evidence, that our whole\n> approach to syscaches is wrong. If this code needs to deal with obsolete\n> syscache entries then so do probably thousands of other places.\n\nNo, because ATTACH PARTITION uses only ShareUpdateExclusiveLock,\nunlike most other DDL.\n\n> In the second place, the argument that AcceptInvalidationMessages wouldn't\n> work is BS, because the way that that *actually* works is that transaction\n> commit updates clog and sends inval messages before it releases locks.\n> So if you've acquired enough of a lock to be sure that the data you want\n> to read is stable, then you do not need to worry about whether you've\n> received any relevant inval messages. You have --- and you don't even\n> need to call AcceptInvalidationMessages for yourself, because lock\n> acquisition already did, see e.g. 
LockRelationOid.\n\nNo, because ATTACH PARTITION uses only ShareUpdateExclusiveLock, which\ndoes not conflict with the AccessShareLock required to build a cache\nentry.\n\n> In the third place, if this imaginary risk that the syscache was out of\n> date were real, the code would be completely failing to deal with it,\n> because all it is testing is whether it found a null relpartbound value.\n> That wouldn't handle the case where a non-null relpartbound is obsolete,\n> which is what you'd expect after ATTACH PARTITION.\n\nNo, that's what you'd expect after DETACH PARTITION, but that takes\nAccessExclusiveLock, so the problem doesn't occur.\n\n> Furthermore, if all of the above can be rebutted, then what's the argument\n> that reading pg_class directly will produce a better answer? The only way\n> that any of this could be useful is if you're trying to read data that is\n> changing under you because you didn't take an adequate lock. In that case\n> there's no guarantee that what you will read from pg_class is up-to-date\n> either.\n\nThe argument is that the only possible change is the concurrent\naddition of a partition, and therefore the only thing that can happen is\nto go from NULL to non-NULL. It can't go from non-NULL to NULL, nor\nfrom one non-NULL value to another.\n\n> Unsurprisingly, the code coverage report shows that this code path is\n> never taken. I think we could dike out partdesc.c lines 113-151\n> altogether, and make the code just above there look more like every\n> other syscache access in the backend.\n\nIt turns out that I tested this, and that if you do that, it's\npossible to produce failures. It's very hard to do so in the context\nof a regression test because they are low-probability, but they do\nhappen. I believe some of the testing details are in the original\nthread that's probably linked from the commit message that added those\nlines.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 23 Dec 2019 20:48:23 -0800",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Bogus logic in RelationBuildPartitionDesc"
}
] |
[
{
"msg_contents": "Forking thread \"WAL logging problem in 9.4.3?\" for this tangent:\n\nOn Mon, Dec 09, 2019 at 06:04:06PM +0900, Kyotaro Horiguchi wrote:\n> I don't understand why mdclose checks for (v->mdfd_vfd >= 0) of open\n> segment but anyway mdimmedsync is believing that that won't happen and\n> I follow the assumption. (I suspect that the if condition in mdclose\n> should be an assertion..)\n\nThat check helps when data_sync_retry=on and FileClose() raised an error in a\nprevious mdclose() invocation. However, the check is not sufficient to make\nthat case work; the attached test case (not for commit) gets an assertion\nfailure or SIGSEGV.\n\nI am inclined to fix this by decrementing md_num_open_segs before modifying\nmd_seg_fds (second attachment). An alternative would be to call\n_fdvec_resize() after every FileClose(), like mdtruncate() does; however, the\nrepalloc() overhead could be noticeable. (mdclose() is called much more\nfrequently than mdtruncate().)\n\n\nIncidentally, _mdfd_openseg() has this:\n\n\tif (segno <= reln->md_num_open_segs[forknum])\n\t\t_fdvec_resize(reln, forknum, segno + 1);\n\nThat should be >=, not <=. If the less-than case happened, this would delete\nthe record of a vfd for a higher-numbered segno. There's no live bug, because\nonly segno == reln->md_num_open_segs[forknum] actually happens. I am inclined\nto make an assertion of that and remove the condition:\n\n\tAssert(segno == reln->md_num_open_segs[forknum]);\n\t_fdvec_resize(reln, forknum, segno + 1);",
"msg_date": "Sun, 22 Dec 2019 01:19:30 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 01:19:30AM -0800, Noah Misch wrote:\n> I am inclined to fix this by decrementing md_num_open_segs before modifying\n> md_seg_fds (second attachment).\n\nThat leaked memory, since _fdvec_resize() assumes md_num_open_segs is also the\nallocated array length. The alternative is looking better:\n\n> An alternative would be to call\n> _fdvec_resize() after every FileClose(), like mdtruncate() does; however, the\n> repalloc() overhead could be noticeable. (mdclose() is called much more\n> frequently than mdtruncate().)\n\nI can skip repalloc() when the array length decreases, to assuage mdclose()'s\nworry. In the mdclose() case, the final _fdvec_resize(reln, fork, 0) will\nstill pfree() the array. Array elements that mdtruncate() frees today will\ninstead persist to end of transaction. That is okay, since mdtruncate()\ncrossing more than one segment boundary is fairly infrequent. For it to\nhappen, you must either create a >2G relation and then TRUNCATE it in the same\ntransaction, or VACUUM must find >1-2G of unused space at the end of the\nrelation. I'm now inclined to do it that way, attached.",
"msg_date": "Sun, 22 Dec 2019 12:21:00 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 10:19 PM Noah Misch <noah@leadboat.com> wrote:\n> Assert(segno == reln->md_num_open_segs[forknum]);\n> _fdvec_resize(reln, forknum, segno + 1);\n\nOh yeah, I spotted that part too but didn't follow up.\n\nhttps://www.postgresql.org/message-id/flat/CA%2BhUKG%2BNBw%2BuSzxF1os-SO6gUuw%3DcqO5DAybk6KnHKzgGvxhxA%40mail.gmail.com\n\n\n",
"msg_date": "Mon, 23 Dec 2019 09:33:29 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 09:33:29AM +1300, Thomas Munro wrote:\n> On Sun, Dec 22, 2019 at 10:19 PM Noah Misch <noah@leadboat.com> wrote:\n> > Assert(segno == reln->md_num_open_segs[forknum]);\n> > _fdvec_resize(reln, forknum, segno + 1);\n> \n> Oh yeah, I spotted that part too but didn't follow up.\n> \n> https://www.postgresql.org/message-id/flat/CA%2BhUKG%2BNBw%2BuSzxF1os-SO6gUuw%3DcqO5DAybk6KnHKzgGvxhxA%40mail.gmail.com\n\nThat patch of yours looks good.\n\n\n",
"msg_date": "Sun, 22 Dec 2019 12:47:24 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "Hello.\n\nAt Sun, 22 Dec 2019 12:21:00 -0800, Noah Misch <noah@leadboat.com> wrote in \n> On Sun, Dec 22, 2019 at 01:19:30AM -0800, Noah Misch wrote:\n> > I am inclined to fix this by decrementing md_num_open_segs before modifying\n> > md_seg_fds (second attachment).\n> \n> That leaked memory, since _fdvec_resize() assumes md_num_open_segs is also the\n> allocated array length. The alternative is looking better:\n\nI agree that v2 is cleaner in the light of modularity and fixes the\nmemory leak happens at re-open.\n\n> > An alternative would be to call\n> > _fdvec_resize() after every FileClose(), like mdtruncate() does; however, the\n> > repalloc() overhead could be noticeable. (mdclose() is called much more\n> > frequently than mdtruncate().)\n> \n> I can skip repalloc() when the array length decreases, to assuage mdclose()'s\n> worry. In the mdclose() case, the final _fdvec_resize(reln, fork, 0) will\n> still pfree() the array. Array elements that mdtruncate() frees today will\n> instead persist to end of transaction. That is okay, since mdtruncate()\n> crossing more than one segment boundary is fairly infrequent. For it to\n> happen, you must either create a >2G relation and then TRUNCATE it in the same\n> transaction, or VACUUM must find >1-2G of unused space at the end of the\n> relation. I'm now inclined to do it that way, attached.\n\n\t\t * It doesn't seem worthwhile complicating the code by having a more\n\t\t * aggressive growth strategy here; the number of segments doesn't\n\t\t * grow that fast, and the memory context internally will sometimes\n-\t\t * avoid doing an actual reallocation.\n+\t\t * avoid doing an actual reallocation. Likewise, since the number of\n+\t\t * segments doesn't shrink that fast, don't shrink at all. 
During\n+\t\t * mdclose(), we'll pfree the array at nseg==0.\n\nIf I understand it correctly, it is mentioning the number of the all\nsegment files in a fork, not the length of md_seg_fds arrays at a\ncertain moment. But actually _fdvec_resize is called for every segment\nopening during mdnblocks (just-after mdopen), and every segment\nclosing during mdclose and mdtruncate as mentioned here. We are going\nto omit pallocs only in the decreasing case.\n\nIf we regard repalloc as far faster than FileOpen/FileClose or we care\nabout only increase of segment number of mdopen'ed files and don't\ncare the frequent resize that happens during the functions above, then\nthe comment is right and we may resize the array in the\nsegment-by-segment manner.\n\nBut if they are comparable each other, or we don't want the array gets\nresized frequently, we might need to prevent repalloc from happening\non every segment increase, too.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 23 Dec 2019 19:41:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 07:41:49PM +0900, Kyotaro Horiguchi wrote:\n> At Sun, 22 Dec 2019 12:21:00 -0800, Noah Misch <noah@leadboat.com> wrote in \n> > On Sun, Dec 22, 2019 at 01:19:30AM -0800, Noah Misch wrote:\n> > > An alternative would be to call\n> > > _fdvec_resize() after every FileClose(), like mdtruncate() does; however, the\n> > > repalloc() overhead could be noticeable. (mdclose() is called much more\n> > > frequently than mdtruncate().)\n> > \n> > I can skip repalloc() when the array length decreases, to assuage mdclose()'s\n> > worry. In the mdclose() case, the final _fdvec_resize(reln, fork, 0) will\n> > still pfree() the array. Array elements that mdtruncate() frees today will\n> > instead persist to end of transaction. That is okay, since mdtruncate()\n> > crossing more than one segment boundary is fairly infrequent. For it to\n> > happen, you must either create a >2G relation and then TRUNCATE it in the same\n> > transaction, or VACUUM must find >1-2G of unused space at the end of the\n> > relation. I'm now inclined to do it that way, attached.\n> \n> \t\t * It doesn't seem worthwhile complicating the code by having a more\n> \t\t * aggressive growth strategy here; the number of segments doesn't\n> \t\t * grow that fast, and the memory context internally will sometimes\n> -\t\t * avoid doing an actual reallocation.\n> +\t\t * avoid doing an actual reallocation. Likewise, since the number of\n> +\t\t * segments doesn't shrink that fast, don't shrink at all. During\n> +\t\t * mdclose(), we'll pfree the array at nseg==0.\n> \n> If I understand it correctly, it is mentioning the number of the all\n> segment files in a fork, not the length of md_seg_fds arrays at a\n> certain moment. But actually _fdvec_resize is called for every segment\n> opening during mdnblocks (just-after mdopen), and every segment\n> closing during mdclose and mdtruncate as mentioned here. 
We are going\n> to omit pallocs only in the decreasing case.\n\nThat is a good point. How frequently one adds 1 GiB of data is not the main\nissue. mdclose() and subsequent re-opening of all segments will be more\nrelevant to overall performance.\n\n> If we regard repalloc as far faster than FileOpen/FileClose or we care\n> about only increase of segment number of mdopen'ed files and don't\n> care the frequent resize that happens during the functions above, then\n> the comment is right and we may resize the array in the\n> segment-by-segment manner.\n\nIn most cases, the array will fit into a power-of-two chunk, so repalloc()\nalready does the right thing. Once the table has more than ~1000 segments (~1\nTiB table size), the allocation will get a single-chunk block, and every\nsubsequent repalloc() will call realloc(). Even then, repalloc() probably is\nfar faster than File operations. Likely, I should just accept the extra\nrepalloc() calls and drop the \"else if\" change in _fdvec_resize().\n\n\n",
"msg_date": "Tue, 24 Dec 2019 11:57:39 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "At Tue, 24 Dec 2019 11:57:39 -0800, Noah Misch <noah@leadboat.com> wrote in \n> On Mon, Dec 23, 2019 at 07:41:49PM +0900, Kyotaro Horiguchi wrote:\n> > If I understand it correctly, it is mentioning the number of the all\n> > segment files in a fork, not the length of md_seg_fds arrays at a\n> > certain moment. But actually _fdvec_resize is called for every segment\n> > opening during mdnblocks (just-after mdopen), and every segment\n> > closing during mdclose and mdtruncate as mentioned here. We are going\n> > to omit pallocs only in the decreasing case.\n> \n> That is a good point. How frequently one adds 1 GiB of data is not the main\n> issue. mdclose() and subsequent re-opening of all segments will be more\n> relevant to overall performance.\n\nYes, that's exactly what I meant.\n\n> > If we regard repalloc as far faster than FileOpen/FileClose or we care\n> > about only increase of segment number of mdopen'ed files and don't\n> > care the frequent resize that happens during the functions above, then\n> > the comment is right and we may resize the array in the\n> > segment-by-segment manner.\n> \n> In most cases, the array will fit into a power-of-two chunk, so repalloc()\n> already does the right thing. Once the table has more than ~1000 segments (~1\n> TiB table size), the allocation will get a single-chunk block, and every\n> subsequent repalloc() will call realloc(). Even then, repalloc() probably is\n> far faster than File operations. Likely, I should just accept the extra\n> repalloc() calls and drop the \"else if\" change in _fdvec_resize().\n\nI'm not sure which is better. If we say we know that\nrepalloc(AllocSetRealloc) doesn't free memory at all, there's no point\nin calling repalloc for shrinking and we could omit that under the\nname of optimization. 
If we say we want to free memory as much as\npossible, we should call repalloc pretending to believe that that\nhappens.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Dec 2019 10:39:32 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 10:39:32AM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 24 Dec 2019 11:57:39 -0800, Noah Misch <noah@leadboat.com> wrote in \n> > On Mon, Dec 23, 2019 at 07:41:49PM +0900, Kyotaro Horiguchi wrote:\n> > > If we regard repalloc as far faster than FileOpen/FileClose or we care\n> > > about only increase of segment number of mdopen'ed files and don't\n> > > care the frequent resize that happens during the functions above, then\n> > > the comment is right and we may resize the array in the\n> > > segment-by-segment manner.\n> > \n> > In most cases, the array will fit into a power-of-two chunk, so repalloc()\n> > already does the right thing. Once the table has more than ~1000 segments (~1\n> > TiB table size), the allocation will get a single-chunk block, and every\n> > subsequent repalloc() will call realloc(). Even then, repalloc() probably is\n> > far faster than File operations. Likely, I should just accept the extra\n> > repalloc() calls and drop the \"else if\" change in _fdvec_resize().\n> \n> I'm not sure which is better. If we say we know that\n> repalloc(AllocSetRealloc) doesn't free memory at all, there's no point\n> in calling repalloc for shrinking and we could omit that under the\n> name of optimization. If we say we want to free memory as much as\n> possible, we should call repalloc pretending to believe that that\n> happens.\n\nAs long as we free the memory by the end of mdclose(), I think it doesn't\nmatter whether we freed memory in the middle of mdclose().\n\nI ran a crude benchmark that found PathNameOpenFile()+FileClose() costing at\nleast two hundred times as much as the repalloc() pair. Hence, I now plan not\nto avoid repalloc(), as attached. 
Crude benchmark code:\n\n\t#define NSEG 9000\n\tfor (i = 0; i < count1; i++)\n\t{\n\t\tint j;\n\n\t\tfor (j = 0; j < NSEG; ++j)\n\t\t{\n\t\t\tFile f = PathNameOpenFile(\"/etc/services\", O_RDONLY);\n\t\t\tif (f < 0)\n\t\t\t\telog(ERROR, \"fail open: %m\");\n\t\t\tFileClose(f);\n\t\t}\n\t}\n\n\tfor (i = 0; i < count2; i++)\n\t{\n\t\tint j;\n\t\tvoid *buf = palloc(1);\n\n\t\tfor (j = 2; j < NSEG; ++j)\n\t\t\tbuf = repalloc(buf, j * 8);\n\t\twhile (--j > 0)\n\t\t\tbuf = repalloc(buf, j * 8);\n\t}",
"msg_date": "Wed, 1 Jan 2020 23:46:02 -0800",
"msg_from": "Noah Misch <noah@leadboat.com>",
"msg_from_op": true,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
},
{
"msg_contents": "At Wed, 1 Jan 2020 23:46:02 -0800, Noah Misch <noah@leadboat.com> wrote in \n> On Wed, Dec 25, 2019 at 10:39:32AM +0900, Kyotaro Horiguchi wrote:\n> > I'm not sure which is better. If we say we know that\n> > repalloc(AllocSetRealloc) doesn't free memory at all, there's no point\n> > in calling repalloc for shrinking and we could omit that under the\n> > name of optimization. If we say we want to free memory as much as\n> > possible, we should call repalloc pretending to believe that that\n> > happens.\n> \n> As long as we free the memory by the end of mdclose(), I think it doesn't\n> matter whether we freed memory in the middle of mdclose().\n\nAgreed.\n\n> I ran a crude benchmark that found PathNameOpenFile()+FileClose() costing at\n> least two hundred times as much as the repalloc() pair. Hence, I now plan not\n> to avoid repalloc(), as attached. Crude benchmark code:\n\nI got about 25 times difference with -O0 and about 50 times with -O2.\n(xfs / CentOS8) It's smaller than I intuitively expected but perhaps\n50 times difference is large enough.\n\nThe patch looks good to me.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Jan 2020 10:13:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: mdclose() does not cope w/ FileClose() failure"
}
] |
[
{
"msg_contents": "I noticed a buildfarm failure here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=skate&dt=2019-12-22%2007%3A49%3A22\n\n================== pgsql.build/src/test/regress/regression.diffs ==================\n*** /home/pgbf/buildroot/REL_10_STABLE/pgsql.build/src/test/regress/expected/timestamptz.out\t2019-12-13 08:51:47.000000000 +0100\n--- /home/pgbf/buildroot/REL_10_STABLE/pgsql.build/src/test/regress/results/timestamptz.out\t2019-12-22 09:00:00.000000000 +0100\n***************\n*** 27,33 ****\n SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'today';\n one \n -----\n! 1\n (1 row)\n \n SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'tomorrow';\n--- 27,33 ----\n SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'today';\n one \n -----\n! 2\n (1 row)\n \n SELECT count(*) AS One FROM TIMESTAMPTZ_TBL WHERE d1 = timestamp with time zone 'tomorrow';\n\n\nJudging by the reported timestamp on the results file, this is an instance\nof the problem mentioned in the comments in timestamptz.sql:\n\n-- NOTE: it is possible for this part of the test to fail if the transaction\n-- block is entered exactly at local midnight; then 'now' and 'today' have\n-- the same values and the counts will come out different.\n\nOn most machines it'd be pretty hard to hit that window; I speculate that\n\"skate\" has got a very low-resolution system clock, making the window\nlarger. Nonetheless, a test that's got designed-in failure modes is\nannoying. We can dodge this by separating the test for \"now\" from the\ntests for the today/tomorrow/etc input strings, as attached.\nAny objections?\n\n\t\t\tregards, tom lane",
"msg_date": "Sun, 22 Dec 2019 11:11:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Avoiding a small risk of failure in timestamp(tz) regression tests"
}
] |
[
{
"msg_contents": "Buildfarm member drongo has been failing the initdb TAP test in the\n9.4 branch for the last week or two:\n\n# Running: rm -rf 'C:\\prog\\bf\\root\\REL9_4_STABLE\\pgsql.build\\src\\bin\\initdb\\tmp_check\\tmp_testAHN7'/*\n'rm' is not recognized as an internal or external command,\noperable program or batch file.\nBail out! system rm -rf 'C:\\prog\\bf\\root\\REL9_4_STABLE\\pgsql.build\\src\\bin\\initdb\\tmp_check\\tmp_testAHN7'/* failed: 256\n\nThe test has not changed; rather, it looks like drongo wasn't\ntrying to run it before.\n\nThis test is passing in the newer branches --- evidently due to\nthe 9.5-era commit 1a629c1b1, which removed this TAP script's\ndependency on \"rm -rf\". So we should either back-patch that\ncommit into 9.4 or undo whatever configuration change caused\ndrongo to try to run more tests. I favor the former.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Dec 2019 19:24:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Drongo vs. 9.4 initdb TAP test"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 07:24:09PM -0500, Tom Lane wrote:\n> This test is passing in the newer branches --- evidently due to\n> the 9.5-era commit 1a629c1b1, which removed this TAP script's\n> dependency on \"rm -rf\". So we should either back-patch that\n> commit into 9.4 or undo whatever configuration change caused\n> drongo to try to run more tests. I favor the former.\n\nI would prefer simply removing the dependency of rm -rf in the tests,\neven if that's for a short time as 9.4 is EOL in two months. A\nback-patch applies without conflicts, and the tests are able to pass.\nWould you prefer doing it yourself? I have not checked yet on\nWindows, better to make sure that it does not fail.\n--\nMichael",
"msg_date": "Mon, 23 Dec 2019 09:53:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Drongo vs. 9.4 initdb TAP test"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Sun, Dec 22, 2019 at 07:24:09PM -0500, Tom Lane wrote:\n>> This test is passing in the newer branches --- evidently due to\n>> the 9.5-era commit 1a629c1b1, which removed this TAP script's\n>> dependency on \"rm -rf\". So we should either back-patch that\n>> commit into 9.4 or undo whatever configuration change caused\n>> drongo to try to run more tests. I favor the former.\n\n> I would prefer simply removing the dependency of rm -rf in the tests,\n> even if that's for a short time as 9.4 is EOL in two months.\n\nI'd vote for back-patching 1a629c1b1 as-is, or is that what you meant?\n\n> A back-patch applies without conflicts, and the tests are able to pass.\n> Would you prefer doing it yourself? I have not checked yet on\n> Windows, better to make sure that it does not fail.\n\nI don't have the ability to test it on Windows --- if you want to do that,\nfeel free to do so and push.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 22 Dec 2019 19:57:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Drongo vs. 9.4 initdb TAP test"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 07:57:34PM -0500, Tom Lane wrote:\n> Michael Paquier <michael@paquier.xyz> writes:\n>> On Sun, Dec 22, 2019 at 07:24:09PM -0500, Tom Lane wrote:\n>>> This test is passing in the newer branches --- evidently due to\n>>> the 9.5-era commit 1a629c1b1, which removed this TAP script's\n>>> dependency on \"rm -rf\". So we should either back-patch that\n>>> commit into 9.4 or undo whatever configuration change caused\n>>> drongo to try to run more tests. I favor the former.\n> \n>> I would prefer simply removing the dependency of rm -rf in the tests,\n>> even if that's for a short time as 9.4 is EOL in two months.\n> \n> I'd vote for back-patching 1a629c1b1 as-is, or is that what you meant?\n\nYes, that's what I meant.\n\n>> A back-patch applies without conflicts, and the tests are able to pass.\n>> Would you prefer doing it yourself? I have not checked yet on\n>> Windows, better to make sure that it does not fail.\n> \n> I don't have the ability to test it on Windows --- if you want to do that,\n> feel free to do so and push.\n\nThanks, done. The original commit had a typo in one comment, fixed by\na9793e07 later on so I have included this fix as well here.\n--\nMichael",
"msg_date": "Mon, 23 Dec 2019 10:53:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Drongo vs. 9.4 initdb TAP test"
}
] |
[
{
"msg_contents": "Hi all,\n\nI was working on some stuff for table AMs, and I got to wonder it we\nhad better rename amapi.h to indexam.h and amapi.c to indexam.c, so as\nthings are more consistent with table AM. It is a bit annoying to\nname the files dedicated to index AMs with what looks like now a too\ngeneric name. That would require switching a couple of header files\nfor existing module developers, which is always annoying, but the move\nmakes sense thinking long-term?\n\nAny thoughts?\n--\nMichael",
"msg_date": "Mon, 23 Dec 2019 14:34:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Should we rename amapi.h and amapi.c? "
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 02:34:34PM +0900, Michael Paquier wrote:\n> Hi all,\n> \n> I was working on some stuff for table AMs, and I got to wonder it we\n> had better rename amapi.h to indexam.h and amapi.c to indexam.c, so as\n> things are more consistent with table AM. It is a bit annoying to\n> name the files dedicated to index AMs with what looks like now a too\n> generic name. That would require switching a couple of header files\n> for existing module developers, which is always annoying, but the move\n> makes sense thinking long-term?\n\n+1 for being more specific about which AM we're talking about.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 23 Dec 2019 21:08:10 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
},
{
"msg_contents": "On Sun, Dec 22, 2019 at 9:34 PM Michael Paquier <michael@paquier.xyz> wrote:\n\n> Hi all,\n>\n> I was working on some stuff for table AMs, and I got to wonder it we\n> had better rename amapi.h to indexam.h and amapi.c to indexam.c, so as\n> things are more consistent with table AM. It is a bit annoying to\n> name the files dedicated to index AMs with what looks like now a too\n> generic name. That would require switching a couple of header files\n> for existing module developers, which is always annoying, but the move\n> makes sense thinking long-term?\n>\n> Any thoughts?\n>\n\nI had raised the same earlier and [1] has response from Andres, which was\n\"We probably should rename it, but not in 12...\"\n\n[1]\nhttps://www.postgresql.org/message-id/20190508215135.4eljnhnle5xp3jwb%40alap3.anarazel.de\n\nOn Sun, Dec 22, 2019 at 9:34 PM Michael Paquier <michael@paquier.xyz> wrote:Hi all,\n\nI was working on some stuff for table AMs, and I got to wonder it we\nhad better rename amapi.h to indexam.h and amapi.c to indexam.c, so as\nthings are more consistent with table AM. It is a bit annoying to\nname the files dedicated to index AMs with what looks like now a too\ngeneric name. That would require switching a couple of header files\nfor existing module developers, which is always annoying, but the move\nmakes sense thinking long-term?\n\nAny thoughts?I had raised the same earlier and [1] has response from Andres, which was \"We probably should rename it, but not in 12...\"[1] https://www.postgresql.org/message-id/20190508215135.4eljnhnle5xp3jwb%40alap3.anarazel.de",
"msg_date": "Mon, 23 Dec 2019 12:28:36 -0800",
"msg_from": "Ashwin Agrawal <aagrawal@pivotal.io>",
"msg_from_op": false,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 12:28:36PM -0800, Ashwin Agrawal wrote:\n> I had raised the same earlier and [1] has response from Andres, which was\n> \"We probably should rename it, but not in 12...\"\n> \n> [1]\n> https://www.postgresql.org/message-id/20190508215135.4eljnhnle5xp3jwb%40alap3.anarazel.de\n\nOkay, glad to see that this has been mentioned. So let's do some\nrenaming for v13 then. I have studied first if we had better remove\namapi.c, then move amvalidate() to amvalidate.c and the handler lookup\nroutine to indexam.c as it already exists, but keeping things ordered\nas they are makes sense to limit spreading too much dependencies with\nthe syscache mainly, so instead the attached patch does the following\nchanges:\n- amapi.h -> indexam.h\n- amapi.c -> indexamapi.c. Here we have an equivalent in access/table/\nas tableamapi.c.\n- amvalidate.c -> indexamvalidate.c\n- amvalidate.h -> indexamvalidate.h\n- genam.c -> indexgenam.c\n\nPlease note that we have also amcmds.c and amcmds.c in the code, but\nthe former could be extended to have utilities for table AMs, and the\nlatter applies to both, so they are better left untouched in my\nopinion.\n--\nMichael",
"msg_date": "Tue, 24 Dec 2019 11:57:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 3:57 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Dec 23, 2019 at 12:28:36PM -0800, Ashwin Agrawal wrote:\n> > I had raised the same earlier and [1] has response from Andres, which was\n> > \"We probably should rename it, but not in 12...\"\n> >\n> > [1]\n> > https://www.postgresql.org/message-id/20190508215135.4eljnhnle5xp3jwb%40alap3.anarazel.de\n>\n> Okay, glad to see that this has been mentioned. So let's do some\n> renaming for v13 then. I have studied first if we had better remove\n> amapi.c, then move amvalidate() to amvalidate.c and the handler lookup\n> routine to indexam.c as it already exists, but keeping things ordered\n> as they are makes sense to limit spreading too much dependencies with\n> the syscache mainly, so instead the attached patch does the following\n> changes:\n> - amapi.h -> indexam.h\n> - amapi.c -> indexamapi.c. Here we have an equivalent in access/table/\n> as tableamapi.c.\n> - amvalidate.c -> indexamvalidate.c\n> - amvalidate.h -> indexamvalidate.h\n> - genam.c -> indexgenam.c\n>\n> Please note that we have also amcmds.c and amcmds.c in the code, but\n> the former could be extended to have utilities for table AMs, and the\n> latter applies to both, so they are better left untouched in my\n> opinion.\n\nLooks good to me. There are still references to amapi.c in various\n.po files, but those should rather be taken care of with the next\nupdate-po cycle right?\n\n\n",
"msg_date": "Tue, 24 Dec 2019 09:32:23 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 09:32:23AM +0100, Julien Rouhaud wrote:\n> Looks good to me. There are still references to amapi.c in various\n> .po files, but those should rather be taken care of with the next\n> update-po cycle right?\n\nYes, these are updated as part of the translation updates.\n--\nMichael",
"msg_date": "Tue, 24 Dec 2019 17:58:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n> the syscache mainly, so instead the attached patch does the following\n> changes:\n> - amapi.h -> indexam.h\n> - amapi.c -> indexamapi.c. Here we have an equivalent in access/table/\n> as tableamapi.c.\n> - amvalidate.c -> indexamvalidate.c\n> - amvalidate.h -> indexamvalidate.h\n> - genam.c -> indexgenam.c\n>\n\nPatch applies cleanly, compiles, make check-world ok.\n\nThe change does not attempt to keep included files in ab order. Should it \ndo that, or is it fixed later by some reindentation phase?\n\n-- \nFabien.",
"msg_date": "Tue, 24 Dec 2019 14:22:22 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 02:22:22PM +0100, Fabien COELHO wrote:\n> The change does not attempt to keep included files in ab order. Should it do\n> that, or is it fixed later by some reindentation phase?\n\nYeah, it should. Committed after fixing all that stuff.\n--\nMichael",
"msg_date": "Wed, 25 Dec 2019 10:26:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
},
{
"msg_contents": "Hi,\n\n(Moving discussion from [1] to this thread)\n\nOn 2019-12-28 11:32:26 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2019-12-27 08:20:17 +0900, Michael Paquier wrote:\n> >> Hm, I am not sure that it is actually that much used, such stuff is\n> >> very specialized.\n> \n> > That's true for some of this, but e.g. genam.h is pretty widely\n> > included. I mean, you had to adapt like 100+ files and while like 30 or\n> > so of those are in implementation details of individual indexes, the\n> > rest is not.\n> \n> This may suggest that we should think about an actual refactoring,\n> rather than just mechanical renaming. Do these results mean that\n> we've allowed index API details to bleed into the wrong places?\n\nI think the biggest API bleed is systable_* - that's legitimately needed\nin a lot of places. But not actually appropriately a part of\n\"generalized index access method definitions.\".\n\nFurthermore I think genam.h suffers from trying to provide somewhat\ndistinct sets of interfaces:\n- general handling of indexes: index_open/close ...\n- index scan implementation: index_beginscan, ...\n index_parallelscan_initialize, ...\n- systable scan implementation: systable_*\n- low level index interaction helpers: IndexBuildResult, IndexVacuumInfo,\n- index implementation helpers: index_store_float8_orderby_distances, ...\n\nNow obviously we'd not want to split things quite that granular, but it\ndoes seem like separating out external interface, systable_*, and AM\noriented things into a header each would make some sense.\n\nGreetings,\n\nAndres Freund\n\n[1] https://postgr.es/m/18016.1577550746%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 2 Jan 2020 06:21:09 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Should we rename amapi.h and amapi.c?"
}
] |
[
{
"msg_contents": "Hi all,\n\nCurrently, we include the function name string in each FmgrBuiltin\nstruct, whose size is 24 bytes on 64 bit platforms. As far as I can\ntell, the name is usually unused, so the attached (WIP, untested)\npatch stores it separately, reducing this struct to 16 bytes.\n\nWe can go one step further and allocate the names as a single\ncharacter string, reducing the binary size. It doesn't help much to\nstore offsets, since there are ~40k characters, requiring 32-bit\noffsets. If we instead compute the offset on the fly from stored name\nlengths, we can use 8-bit values, saving 19kB of space in the binary\nover using string pointers.\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 23 Dec 2019 09:52:18 -0500",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "reduce size of fmgr_builtins array"
},
{
"msg_contents": "I wrote:\n\n> Currently, we include the function name string in each FmgrBuiltin\n> struct, whose size is 24 bytes on 64 bit platforms. As far as I can\n> tell, the name is usually unused, so the attached (WIP, untested)\n> patch stores it separately, reducing this struct to 16 bytes.\n>\n> We can go one step further and allocate the names as a single\n> character string, reducing the binary size. It doesn't help much to\n> store offsets, since there are ~40k characters, requiring 32-bit\n> offsets. If we instead compute the offset on the fly from stored name\n> lengths, we can use 8-bit values, saving 19kB of space in the binary\n> over using string pointers.\n\nI tested with the attached C function to make sure\nfmgr_internal_function() still returned the correct answer. I assume\nthis is not a performance critical function, but I still wanted to see\nif there was a visible performance regression. I get this when calling\nfmgr_internal_function() 100k times:\n\nmaster: 833ms\npatch: 886ms\n\nThe point of the patch is to increase the likelihood of\nfmgr_isbuiltin() finding the fmgr_builtins[] element in L1 cache. It\nseems harder to put a number on that for a realistic workload, but\nreducing the array size by 1/3 couldn't hurt. I'll go ahead and add\nthis to the commitfest.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Wed, 1 Jan 2020 17:15:47 -0600",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: reduce size of fmgr_builtins array"
},
{
"msg_contents": "On 02/01/2020 01:15, John Naylor wrote:\n> I wrote:\n> \n>> Currently, we include the function name string in each FmgrBuiltin\n>> struct, whose size is 24 bytes on 64 bit platforms. As far as I can\n>> tell, the name is usually unused, so the attached (WIP, untested)\n>> patch stores it separately, reducing this struct to 16 bytes.\n>>\n>> We can go one step further and allocate the names as a single\n>> character string, reducing the binary size. It doesn't help much to\n>> store offsets, since there are ~40k characters, requiring 32-bit\n>> offsets. If we instead compute the offset on the fly from stored name\n>> lengths, we can use 8-bit values, saving 19kB of space in the binary\n>> over using string pointers.\n> \n> I tested with the attached C function to make sure\n> fmgr_internal_function() still returned the correct answer. I assume\n> this is not a performance critical function, but I still wanted to see\n> if there was a visible performance regression. I get this when calling\n> fmgr_internal_function() 100k times:\n> \n> master: 833ms\n> patch: 886ms\n\nHmm. I was actually expecting this to slightly speed up \nfmgr_internal_function(), now that all the names fit in a smaller amount \nof cache. I guess there are more branches or a data dependency or \nsomething now. I'm not too worried about that though. If it mattered, we \nshould switch to binary searching the array.\n\n> The point of the patch is to increase the likelihood of\n> fmgr_isbuiltin() finding the fmgr_builtins[] element in L1 cache. It\n> seems harder to put a number on that for a realistic workload, but\n> reducing the array size by 1/3 couldn't hurt.\n\nYeah. Nevertheless, it would be nice to be able to demonstrate the \nbenefit in some test, at least. It feels hard to justify committing a \nperformance patch if we can't show the benefit. 
Otherwise, we should \njust try to keep it as simple as possible, to optimize for readability.\n\nA similar approach was actually discussed a couple of years back: \nhttps://www.postgresql.org/message-id/bd13812c-c4ae-3788-5b28-5633beed2929%40iki.fi. \nThe conclusion then was that it's not worth the trouble or the code \ncomplication. So I think this patch is Rejected, unless you can come up \nwith a test case that concretely shows the benefit.\n\n- Heikki\n\n\n",
"msg_date": "Tue, 7 Jan 2020 15:08:46 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: reduce size of fmgr_builtins array"
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 9:08 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n> Yeah. Nevertheless, it would be nice to be able to demonstrate the\n> benefit in some test, at least. It feels hard to justify committing a\n> performance patch if we can't show the benefit. Otherwise, we should\n> just try to keep it as simple as possible, to optimize for readability.\n>\n> A similar approach was actually discussed a couple of years back:\n> https://www.postgresql.org/message-id/bd13812c-c4ae-3788-5b28-5633beed2929%40iki.fi.\n> The conclusion then was that it's not worth the trouble or the code\n> complication. So I think this patch is Rejected, unless you can come up\n> with a test case that concretely shows the benefit.\n\nThanks for reviewing! As expected, a microbenchmark didn't show a\ndifference. I could try profiling in some workload, but I don't think\nthe benefit would be worth the effort involved. I've marked it\nrejected.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 18 Jan 2020 12:52:26 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: reduce size of fmgr_builtins array"
}
] |
[
{
"msg_contents": "Hi All,\n\nWhile doing testing of \"parallel vacuum\" patch, I found that size of index\nrelation is not reducing even after deleting all the tuples and firing\nvacuum command. I am not sure that this is expected behavior or not. For\nreference, below I am giving one example.\n\npostgres=# create table test (a int);\nCREATE TABLE\npostgres=# create index indx1 on test (a);\nCREATE INDEX\npostgres=# insert into test (select a from generate_series(1,100000) a);\nINSERT 0 100000\npostgres=# analyze ;\nANALYZE\npostgres=# select relpages, relname from pg_class where relname = 'indx1';\n relpages | relname\n----------+---------\n 276 | indx1\n(1 row)\n\n-- delete all the tuples from table.\npostgres=# delete from test ;\nDELETE 100000\n\n-- do vacuum to test tables\npostgres=# vacuum test ;\nVACUUM\n\n-- check relpages in 'indx1' and 'test'\npostgres=# select relpages, relname from pg_class where relname = 'indx1';\n relpages | relname\n----------+---------\n 276 | indx1\n(1 row)\n\n-- do vacuum to all the tables and check relpages in 'indx1'\npostgres=# vacuum ;\nVACUUM\npostgres=# select relpages, relname from pg_class where relname = 'indx1';\n relpages | relname\n----------+---------\n 276 | indx1\n(1 row)\n\n-- check relpages in 'test' table\npostgres=# select relpages, relname from pg_class where relname = 'test';\n relpages | relname\n----------+---------\n 0 | test\n(1 row)\n\n\n From above example, we can see that after deleting all the tuples from\ntable and firing vacuum command, size of table is reduced but size of index\nrelation is same as before vacuum.\n\nPlease let me your thoughts.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\nHi All,While doing testing of \"parallel vacuum\" patch, I found that size of index relation is not reducing even after deleting all the tuples and firing vacuum command. I am not sure that this is expected behavior or not. 
For reference, below I am giving one example.postgres=# create table test (a int);CREATE TABLEpostgres=# create index indx1 on test (a);CREATE INDEXpostgres=# insert into test (select a from generate_series(1,100000) a);INSERT 0 100000postgres=# analyze ;ANALYZEpostgres=# select relpages, relname from pg_class where relname = 'indx1'; relpages | relname ----------+--------- 276 | indx1(1 row)-- delete all the tuples from table.postgres=# delete from test ;DELETE 100000-- do vacuum to test tablespostgres=# vacuum test ;VACUUM-- check relpages in 'indx1' and 'test'postgres=# select relpages, relname from pg_class where relname = 'indx1'; relpages | relname ----------+--------- 276 | indx1(1 row)-- do vacuum to all the tables and check relpages in 'indx1' postgres=# vacuum ;VACUUMpostgres=# select relpages, relname from pg_class where relname = 'indx1'; relpages | relname ----------+--------- 276 | indx1(1 row)-- check relpages in 'test' tablepostgres=# select relpages, relname from pg_class where relname = 'test'; relpages | relname ----------+--------- 0 | test(1 row)From above example, we can see that after deleting all the tuples from table and firing vacuum command, size of table is reduced but size of index relation is same as before vacuum.Please let me your thoughts.Thanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 Dec 2019 00:35:08 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "relpages of btree indexes are not truncating even after deleting all\n the tuples from table and doing vacuum"
},
{
"msg_contents": "On Mon, Dec 23, 2019 at 11:05 AM Mahendra Singh <mahi6run@gmail.com> wrote:\n> From above example, we can see that after deleting all the tuples from table and firing vacuum command, size of table is reduced but size of index relation is same as before vacuum.\n\nVACUUM is only able to make existing empty pages in indexes recyclable\nby future page splits within the same index. It is not possible for it\nto reclaim space for the filesystem. Workload characteristics tend to\ndetermine whether or not this limitation is truly important.\n\nYou can observe which pages are \"free\" in this sense (i.e. whether\nthey've been placed by the FSM for recycling) by using\ncontrib/pg_freespacemap.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Mon, 23 Dec 2019 13:11:30 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: relpages of btree indexes are not truncating even after deleting\n all the tuples from table and doing vacuum"
},
{
"msg_contents": "On Tue, 24 Dec 2019 at 02:41, Peter Geoghegan <pg@bowt.ie> wrote:\n>\n> On Mon, Dec 23, 2019 at 11:05 AM Mahendra Singh <mahi6run@gmail.com> wrote:\n> > From above example, we can see that after deleting all the tuples from table and firing vacuum command, size of table is reduced but size of index relation is same as before vacuum.\n>\n> VACUUM is only able to make existing empty pages in indexes recyclable\n> by future page splits within the same index. It is not possible for it\n> to reclaim space for the filesystem. Workload characteristics tend to\n> determine whether or not this limitation is truly important.\n\nThank you for the clarification.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Dec 2019 10:32:07 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: relpages of btree indexes are not truncating even after deleting\n all the tuples from table and doing vacuum"
}
] |
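Peter's pointer to contrib/pg_freespacemap can be turned into a concrete check. The sketch below is hypothetical session input continuing the example above; it assumes the module is available in the installation and uses the default 8 kB block size, under which a btree page that VACUUM has made recyclable is reported as entirely free:

```sql
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- After the VACUUM above, deleted btree pages remain in the index file
-- but are marked reusable; pg_freespace() reports them as whole free
-- blocks (avail is in bytes, so 8192 = one full default-size page).
SELECT count(*) AS recyclable_pages
FROM pg_freespace('indx1')
WHERE avail = 8192;
```

This makes the distinction visible: relpages stays at 276, while the free space map shows how many of those pages future page splits in the same index can reuse.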
[
{
"msg_contents": "Per a recent thread, these patches remove string literals split with\n\\-escaped newlines. The first is for the message \"materialize mode\nrequired, but it is not allowed in this context\" where it's more\nprevalent, and we keep perpetuating it; the second is for other\nmessages, whose bulk is in dblink and tablefunc. I think the split is\npointless and therefore propose to push both together as a single\ncommit, but maybe somebody would like me to leave those contrib modules\nalone.\n\nThere are many other error messages that are split with no \\; I would\nprefer not to have them, but maybe it would be too intrusive to change\nthem all. So let's do this for now and remove this one point of\nugliness.\n\n-- \n�lvaro Herrera 39�49'30\"S 73�17'W",
"msg_date": "Mon, 23 Dec 2019 16:51:56 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "string literal continuations in C"
},
{
"msg_contents": "On 12/23/19 2:51 PM, Alvaro Herrera wrote:\n> Per a recent thread, these patches remove string literals split with\n> \\-escaped newlines. The first is for the message \"materialize mode\n> required, but it is not allowed in this context\" where it's more\n> prevalent, and we keep perpetuating it; the second is for other\n> messages, whose bulk is in dblink and tablefunc. I think the split is\n> pointless and therefore propose to push both together as a single\n> commit, but maybe somebody would like me to leave those contrib modules\n> alone.\n\nI take it since I was explicitly CC'd that the contrib comment was aimed\nat me? I likely copied the convention from somewhere else in the\nbackend, but I don't care either way if you want to change them. However\nI guess we should coordinate since I've been berated regarding error\ncodes and will likely go change at least two of them in tablefunc soon\n(not likely before Thursday though)...\n\nJoe\n\n-- \nCrunchy Data - http://crunchydata.com\nPostgreSQL Support for Secure Enterprises\nConsulting, Training, & Open Source Development",
"msg_date": "Tue, 24 Dec 2019 06:43:20 -0500",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: string literal continuations in C"
}
] |
[
{
"msg_contents": "Bonjour Michaᅵl, hello devs,\n\nAs suggested in \"cce64a51\", this patch make pgbench use postgres logging \ncapabilities.\n\nI tried to use fatal/error/warning/info/debug where appropriate.\n\nSome printing to stderr remain for some pgbench specific output.\n\nThe patch fixes a inconsistent test case name that I noticed in passing.\n\n-- \nFabien.",
"msg_date": "Tue, 24 Dec 2019 11:17:31 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 11:17:31AM +0100, Fabien COELHO wrote:\n> Some printing to stderr remain for some pgbench specific output.\n\nHmm. Wouldn't it make sense to output the log generated as\ninformation from the test using pg_log_info() instead of using\nfprintf(stderr) (the logs of the initial data load, progress report)?\nIt seems to me that this would be consistent with the other tools we\nhave, and being able to make a difference with the level of logs is\nkind of a nice property of logging.c as you can grep easily for one\nproblems instead of looking at multiple patterns matching an error in\nthe logs. Note also an error in the scripts does not report an\nerror. Another thing is that messages logged would need to be\ntranslated. I think that's nice, but perhaps others don't like that\nor may think that's not a good idea. Who knows..\n\n> The patch fixes a inconsistent test case name that I noticed in passing.\n>\n> @@ -157,7 +157,7 @@ my @options = (\n> \t\t\tqr{error while setting random seed from --random-seed option}\n> \t\t]\n> \t],\n> -\t[ 'bad partition type', '-i --partition-method=BAD', [qr{\"range\"}, qr{\"hash\"}, qr{\"BAD\"}] ],\n> +\t[ 'bad partition method', '-i --partition-method=BAD', [qr{\"range\"}, qr{\"hash\"}, qr{\"BAD\"}] ],\n> \t[ 'bad partition number', '-i --partitions -1', [ qr{invalid number of partitions: \"-1\"} ] ],\n\nNo problem with this one from me, I'll fix it if there are no\nobjections.\n--\nMichael",
"msg_date": "Sun, 29 Dec 2019 20:11:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On 2019-12-24 11:17, Fabien COELHO wrote:\n> As suggested in \"cce64a51\", this patch make pgbench use postgres logging\n> capabilities.\n> \n> I tried to use fatal/error/warning/info/debug where appropriate.\n> \n> Some printing to stderr remain for some pgbench specific output.\n\nThe patch seems pretty straightforward, but this\n\n+/*\n+ * Convenient shorcuts\n+ */\n+#define fatal pg_log_fatal\n+#define error pg_log_error\n+#define warning pg_log_warning\n+#define info pg_log_info\n+#define debug pg_log_debug\n\nseems counterproductive. Let's just use the normal function names.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 12:10:13 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "\n\nOn December 31, 2019 8:10:13 PM GMT+09:00, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\n> seems counterproductive. Let's just use the normal function names.\n\n+1.\n-- \nMichael\n\n\n",
"msg_date": "Tue, 31 Dec 2019 21:39:36 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Bonjour Michaᅵl, et excellente annᅵe 2020 !\n\n> Hmm. Wouldn't it make sense to output the log generated as\n> information from the test using pg_log_info() instead of using\n> fprintf(stderr) (the logs of the initial data load, progress report)?\n\nFor the progress report, the reason I decided against is that the lines \nare already long enough with data (for the progress report: tps, latency, \netc.), and prepending \"pgbench info\" or equivalent in front of every line \ndoes not look very useful and make it more likely that actually useful \ndata could be pushed out of the terminal width.\n\nFor data load, ISTM that people are used to it like that. Moreover, I do \nnot think that the \\r recently-added trick can work with the logging \nstuff, so I left it out as well altogether.\n\n> It seems to me that this would be consistent with the other tools we\n> have, and being able to make a difference with the level of logs is\n> kind of a nice property of logging.c as you can grep easily for one\n> problems instead of looking at multiple patterns matching an error in\n> the logs. Note also an error in the scripts does not report an\n> error. Another thing is that messages logged would need to be\n> translated. I think that's nice, but perhaps others don't like that\n> or may think that's not a good idea. Who knows..\n\nDunno about translation. ISTM that pgbench is mostly not translated, not \nsure why.\n\n-- \nFabien.",
"msg_date": "Wed, 1 Jan 2020 22:19:52 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Hello Peter,\n\n> The patch seems pretty straightforward, but this\n>\n> +/*\n> + * Convenient shorcuts\n> + */\n> +#define fatal pg_log_fatal\n> +#define error pg_log_error\n> +#define warning pg_log_warning\n> +#define info pg_log_info\n> +#define debug pg_log_debug\n>\n> seems counterproductive. Let's just use the normal function names.\n\nI'm trying to keep the column width under control, but if you like it \nwider, here it is.\n\nCompared to v1 I have also made a few changes to be more consistent when \nusing fatal/error/info.\n\n-- \nFabien.",
"msg_date": "Wed, 1 Jan 2020 22:55:29 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Wed, Jan 01, 2020 at 10:19:52PM +0100, Fabien COELHO wrote:\n> Bonjour Michaël, et excellente année 2020 !\n\nToi aussi! Bonne année.\n\n>> Hmm. Wouldn't it make sense to output the log generated as\n>> information from the test using pg_log_info() instead of using\n>> fprintf(stderr) (the logs of the initial data load, progress report)?\n> \n> For the progress report, the reason I decided against is that the lines are\n> already long enough with data (for the progress report: tps, latency, etc.),\n> and prepending \"pgbench info\" or equivalent in front of every line does not\n> look very useful and make it more likely that actually useful data could be\n> pushed out of the terminal width.\n\nHm. Okay. That would limit the patch to only report errors in the\nfirst round of changes, which is fine by me.\n\n> For data load, ISTM that people are used to it like that. Moreover, I do not\n> think that the \\r recently-added trick can work with the logging stuff, so I\n> left it out as well altogether.\n\nIt could be possible to create new custom options for logging.c. We\nalready have one as of PG_LOG_FLAG_TERSE to make the output of psql\ncompatible with regression tests and such. These are just thoughts\nabout the control of:\n- the progname is appended to the error string or not.\n- CR/LF as last character.\n\n> Dunno about translation. ISTM that pgbench is mostly not translated, not\n> sure why.\n\nBecause as a benchmark tool that's not really worth it and its output\nis rather technical hence translating it would be more challenging?\nPerhaps others more used to translation work could chime in the\ndiscussion?\n--\nMichael",
"msg_date": "Thu, 2 Jan 2020 22:37:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On 2020-01-01 22:55, Fabien COELHO wrote:\n> I'm trying to keep the column width under control, but if you like it\n> wider, here it is.\n> \n> Compared to v1 I have also made a few changes to be more consistent when\n> using fatal/error/info.\n\nThe renaming of debug to debug_level seems unnecessary and unrelated.\n\nIn runShellCommand(), you removed some but not all argv[0] from the \noutput messages. I'm not sure what the intent was there.\n\nI would also recommend these changes:\n\n- pg_log_fatal(\"query failed: %s\", sql);\n- pg_log_error(\"%s\", PQerrorMessage(con));\n+ pg_log_fatal(\"query failed: %s\", PQerrorMessage(con));\n+ pg_log_info(\"query was: %s\", sql);\n\nThis puts the most important information first.\n\n- pg_log_error(\"connection to database \\\"%s\\\" failed\", dbName);\n- pg_log_error(\"%s\", PQerrorMessage(conn));\n+ pg_log_error(\"connection to database \\\"%s\\\" failed: %s\",\n+ dbName, PQerrorMessage(conn));\n\nLine break here is unnecessary.\n\nIn both cases, pg_dump has similar messages that can serve as reference.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 11:55:20 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Hello Peter,\n\n>> Compared to v1 I have also made a few changes to be more consistent when\n>> using fatal/error/info.\n>\n> The renaming of debug to debug_level seems unnecessary and unrelated.\n\nIndeed. It was when I used \"debug\" as a shorthand for \"pg_log_debug\".\n\n> In runShellCommand(), you removed some but not all argv[0] from the output \n> messages. I'm not sure what the intent was there.\n\nWithout looking at the context I thought that argv[0] was the program \nname, which is not the case here. I put it back everywhere, including the \nDEBUG message.\n\n> I would also recommend these changes:\n>\n> - pg_log_fatal(\"query failed: %s\", sql);\n> - pg_log_error(\"%s\", PQerrorMessage(con));\n> + pg_log_fatal(\"query failed: %s\", PQerrorMessage(con));\n> + pg_log_info(\"query was: %s\", sql);\n>\n> This puts the most important information first.\n\nOk.\n\n> - pg_log_error(\"connection to database \\\"%s\\\" failed\", dbName);\n> - pg_log_error(\"%s\", PQerrorMessage(conn));\n> + pg_log_error(\"connection to database \\\"%s\\\" failed: %s\",\n> + dbName, PQerrorMessage(conn));\n>\n> Line break here is unnecessary.\n\nOk. I homogeneised another similar message.\n\nPatch v3 attached hopefully fixes all of the above.\n\n-- \nFabien.",
"msg_date": "Fri, 3 Jan 2020 13:01:18 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 01:01:18PM +0100, Fabien COELHO wrote:\n> Without looking at the context I thought that argv[0] was the program name,\n> which is not the case here. I put it back everywhere, including the DEBUG\n> message.\n\nThe variable names in Command are confusing IMO...\n\n> Ok. I homogeneised another similar message.\n> \n> Patch v3 attached hopefully fixes all of the above.\n\n+ pg_log_error(\"gaussian parameter must be at least \"\n+ \"%f (not %f)\", MIN_GAUSSIAN_PARAM, param);\nI would keep all the error message strings to be on the same line.\nThat makes grepping for them easier on the same, and that's the usual\nconvention even if these are larger than 72-80 characters.\n\n #ifdef DEBUG\n- printf(\"shell parameter name: \\\"%s\\\", value: \\\"%s\\\"\\n\", argv[1], res);\n+ pg_log_debug(\"%s: shell parameter name: \\\"%s\\\", value: \\\"%s\\\"\", argv[0], argv[1], res);\n #endif\nWorth removing this ifdef?\n\n- fprintf(stderr, \"%s\", PQerrorMessage(con));\n+ pg_log_fatal(\"unexpected copy in result\");\n+ pg_log_error(\"%s\", PQerrorMessage(con));\n exit(1);\n[...]\n- fprintf(stderr, \"%s\", PQerrorMessage(con));\n+ pg_log_fatal(\"cannot count number of branches\");\n+ pg_log_error(\"%s\", PQerrorMessage(con));\nThese are inconsistent with the rest, why not combining both?\n\nI think that I would just remove the \"debug\" variable defined in\npgbench.c all together, and switch the messages for the duration and\nthe one in executeMetaCommand to use info-level logging..\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 17:15:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Bonjour Michaël,\n\n>> Without looking at the context I thought that argv[0] was the program name,\n>> which is not the case here. I put it back everywhere, including the DEBUG\n>> message.\n>\n> The variable names in Command are confusing IMO...\n\nSomehow, yes. Note that there is a logic, it will indeed be the argv of \nthe called shell command… And I do not think it is the point of this patch \nto solve this possible confusion.\n\n> + pg_log_error(\"gaussian parameter must be at least \"\n> + \"%f (not %f)\", MIN_GAUSSIAN_PARAM, param);\n\n> I would keep all the error message strings to be on the same line.\n> That makes grepping for them easier on the same, and that's the usual\n> convention even if these are larger than 72-80 characters.\n\nOk. I also did other similar cases accordingly.\n\n> #ifdef DEBUG\n> Worth removing this ifdef?\n\nYep, especially as it is the only instance. Done.\n\n> + pg_log_fatal(\"unexpected copy in result\");\n> + pg_log_error(\"%s\", PQerrorMessage(con));\n\n> + pg_log_fatal(\"cannot count number of branches\");\n> + pg_log_error(\"%s\", PQerrorMessage(con));\n\n> These are inconsistent with the rest, why not combining both?\n\nOk, done.\n\n> I think that I would just remove the \"debug\" variable defined in \n> pgbench.c all together, and switch the messages for the duration and the \n> one in executeMetaCommand to use info-level logging..\n\nOk, done.\n\nPatch v4 attached addresses all these points.\n\n-- \nFabien.",
"msg_date": "Mon, 6 Jan 2020 13:36:23 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Mon, Jan 06, 2020 at 01:36:23PM +0100, Fabien COELHO wrote:\n>> I think that I would just remove the \"debug\" variable defined in\n>> pgbench.c all together, and switch the messages for the duration and the\n>> one in executeMetaCommand to use info-level logging..\n> \n> Ok, done.\n\nThanks. The output then gets kind of inconsistent when using --debug:\npgbench: client 2 executing script \"<builtin: TPC-B (sort of)>\"\nclient 2 executing \\set aid\nclient 2 executing \\set bid\nclient 2 executing \\set tid\nclient 2 executing \\set delta\n\nMy point was to just modify the code so as this uses pg_log_debug(),\nwith a routine doing some reverse-engineering of the Command data to\ngenerate a string to show up in the logs. Sorry for the confusion..\nAnd there is no need to use __pg_log_level either which should remain\ninternal to logging.h IMO.\n\nWe'd likely want a similar business in syntax_error() to be fully\nconsistent with all other code paths dealing with an error showing up\nbefore exiting.\n\nNo idea what others think here. I may be too much pedantic.\n--\nMichael",
"msg_date": "Tue, 7 Jan 2020 17:24:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n>>> I think that I would just remove the \"debug\" variable defined in\n>>> pgbench.c all together, and switch the messages for the duration and the\n>>> one in executeMetaCommand to use info-level logging..\n>>\n>> Ok, done.\n>\n> Thanks. The output then gets kind of inconsistent when using --debug:\n> pgbench: client 2 executing script \"<builtin: TPC-B (sort of)>\"\n> client 2 executing \\set aid\n> client 2 executing \\set bid\n> client 2 executing \\set tid\n> client 2 executing \\set delta\n>\n> My point was to just modify the code so as this uses pg_log_debug(),\n> with a routine doing some reverse-engineering of the Command data to\n> generate a string to show up in the logs. Sorry for the confusion..\n> And there is no need to use __pg_log_level either which should remain\n> internal to logging.h IMO.\n\nFor the first case with the output you point out, there is a loop to \ngenerate the output. I do not think that we want to pay the cost of \ngenerating the string and then throw it away afterwards when not under \ndebug, esp as string manipulation is not that cheap, so we need to enter \nthe thing only when under debug. However, there is no easy way to do that \nwithout accessing __pg_log_level. It could be hidden in a macro to create, \nbut that's it.\n\nFor the second case I called pg_log_debug just once.\n\n> We'd likely want a similar business in syntax_error() to be fully\n> consistent with all other code paths dealing with an error showing up\n> before exiting.\n\nThe syntax error is kind of complicated because there is the location \ninformation which is better left as is, IMHO. I moved remainder to a \nPQExpBuffer and pg_log_fatal.\n\n> No idea what others think here. 
I may be too much pedantic.\n\nMaybe a little:-)\n\nNote that I submitted another patch to use PQExpBuffer wherever possible \nin pgbench, especially to get rid of doubtful snprintf/strlen patterns.\n\nPatch v5 attached tries to follow your above suggestions.\n\n-- \nFabien.",
"msg_date": "Tue, 7 Jan 2020 10:32:41 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "> Patch v5 attached tries to follow your above suggestions.\n\nPatch v6 makes syntax error location code more compact and tests the case.\n\n-- \nFabien.",
"msg_date": "Wed, 8 Jan 2020 13:07:41 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On 2020-01-08 13:07, Fabien COELHO wrote:\n> \n>> Patch v5 attached tries to follow your above suggestions.\n> \n> Patch v6 makes syntax error location code more compact and tests the case.\n\nCommitted.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jan 2020 14:27:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "\n\n>> Patch v6 makes syntax error location code more compact and tests the case.\n>\n> Committed.\n\nThanks!\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:02:31 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 02:27:46PM +0100, Peter Eisentraut wrote:\n> Committed.\n\nThat was fast.\n\n- if (debug)\n+ if (unlikely(__pg_log_level <= PG_LOG_DEBUG))\n {\nI am surprised that you kept this one, while syntax_error() has been\nchanged in a more modular way.\n--\nMichael",
"msg_date": "Wed, 8 Jan 2020 23:12:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On 2020-01-08 15:12, Michael Paquier wrote:\n> On Wed, Jan 08, 2020 at 02:27:46PM +0100, Peter Eisentraut wrote:\n>> Committed.\n> \n> That was fast.\n> \n> - if (debug)\n> + if (unlikely(__pg_log_level <= PG_LOG_DEBUG))\n> {\n> I am surprised that you kept this one,\n\nI'm not happy about it, but it seems OK for now. We can continue to \nimprove here.\n\n> while syntax_error() has been\n> changed in a more modular way.\n\nI don't follow what you mean by that.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:31:46 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 03:31:46PM +0100, Peter Eisentraut wrote:\n> On 2020-01-08 15:12, Michael Paquier wrote:\n>> while syntax_error() has been\n>> changed in a more modular way.\n> \n> I don't follow what you mean by that.\n\nThe first versions of the patch did not change syntax_error(), and the\nversion committed has switched to use PQExpBufferData there. I think\nthat we should just do the same for the debug logs executing the meta\ncommands. This way, we get an output consistent with what's printed\nout for sending or receiving stuff. Please see the attached.\n--\nMichael",
"msg_date": "Thu, 9 Jan 2020 13:02:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n>> I don't follow what you mean by that.\n>\n> The first versions of the patch did not change syntax_error(), and the\n> version committed has switched to use PQExpBufferData there. I think\n> that we should just do the same for the debug logs executing the meta\n> commands. This way, we get an output consistent with what's printed\n> out for sending or receiving stuff. Please see the attached.\n\nYep, I thought of it, but I was not very keen on having a malloc/free \ncycle just for one debug message. However under debug this is probably not \nan issue.\n\nYour patch works for me. IT can avoid some level of format interpretation \noverheads by switching to Char/Str functions, see first attachement.\n\nThe other point is the test on __pg_log_level, see second attached.\n\n-- \nFabien.",
"msg_date": "Thu, 9 Jan 2020 10:28:21 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 10:28:21AM +0100, Fabien COELHO wrote:\n> Yep, I thought of it, but I was not very keen on having a malloc/free cycle\n> just for one debug message. However under debug this is probably not an\n> issue.\n\nConsistency is more important here IMO, so applied.\n\n> Your patch works for me. IT can avoid some level of format interpretation\n> overheads by switching to Char/Str functions, see first attachement.\n\nI kept both grouped to avoid any unnecessary churn with the\nmanipulation of PQExpBufferData.\n\n> The other point is the test on __pg_log_level, see second attached.\n\nMay be better to discuss that on a separate thread as that's not only\nrelated to pgbench.\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 09:06:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On 2020-Jan-09, Fabien COELHO wrote:\n\n> -\tif (unlikely(__pg_log_level <= PG_LOG_DEBUG))\n> +\tif (pg_log_debug_level)\n> \t{\n\nUmm ... I find the original exceedingly ugly, but the new line is\ntotally impenetrable.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 21:27:42 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 09:27:42PM -0300, Alvaro Herrera wrote:\n> On 2020-Jan-09, Fabien COELHO wrote:\n>> -\tif (unlikely(__pg_log_level <= PG_LOG_DEBUG))\n>> +\tif (pg_log_debug_level)\n>> \t{\n> \n> Umm ... I find the original exceedingly ugly, but the new line is\n> totally impenetrable.\n\nMaybe just a pg_logging_get_level() for consistency with the\n_set_level() one, and then compare the returned result with\nPG_LOG_DEBUG in pgbench?\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 09:45:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-09, Fabien COELHO wrote:\n>> -\tif (unlikely(__pg_log_level <= PG_LOG_DEBUG))\n>> +\tif (pg_log_debug_level)\n>> \t{\n\n> Umm ... I find the original exceedingly ugly, but the new line is\n> totally impenetrable.\n\nSo, I had not been paying any attention to this thread, but that\nsnippet is already enough to set off alarm bells.\n\n1. (problem with already-committed code, evidently) The C standard is\nquite clear that\n\n -- All identifiers that begin with an underscore and\n either an uppercase letter or another underscore are\n always reserved for any use.\n\n -- All identifiers that begin with an underscore are\n always reserved for use as identifiers with file scope\n in both the ordinary and tag name spaces.\n\n\"Reserved\" in this context appears to mean \"reserved for use by\nsystem headers and/or compiler special behaviors\".\n\nDeclaring our own global variables with double-underscore prefixes is not\njust asking for trouble, it's waving a red flag in front of a bull.\n\n\n2. (problem with proposed patch) I share Alvaro's allergy for replacing\nuses of a common variable with a bunch of macros, especially macros that\ndon't look like macros. That's not reducing the reader's cognitive\nburden. I'd even say it's actively misleading the reader, because what\nthe new code *looks* like it's doing is referencing several independent\nglobal variables. We don't need our code to qualify as an entry for\nthe Obfuscated C Contest.\n\nThe notational confusion could be solved perhaps by writing the macros\nwith function-like parentheses, but it still doesn't seem like an\nimprovement. In particular, the whole point here is to have a common\nidiom for logging, but I'm unconvinced that every frontend program\nshould be using unlikely() in this particular way. 
Maybe it's unlikely\nfor pgbench's usage that verbose logging would be turned on, but why\nshould we build in an assumption that that's universally the case?\n\nTBH, my recommendation would be to drop *all* of these likely()\nand unlikely() calls. What evidence have you got that those are\nmeaningfully improving the quality of the generated code? And if\nthey're buried inside macros, they certainly aren't doing anything\nuseful in terms of documenting the code.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 20:09:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 08:09:29PM -0500, Tom Lane wrote:\n> TBH, my recommendation would be to drop *all* of these likely()\n> and unlikely() calls. What evidence have you got that those are\n> meaningfully improving the quality of the generated code? And if\n> they're buried inside macros, they certainly aren't doing anything\n> useful in terms of documenting the code.\n\nYes. I am wondering if we should not rework this part of the logging\nwith something like the attached. My 2c, thoughts welcome.\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 13:08:44 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Bonjour Michaᅵl,\n\n>> TBH, my recommendation would be to drop *all* of these likely()\n>> and unlikely() calls. What evidence have you got that those are\n>> meaningfully improving the quality of the generated code? And if\n>> they're buried inside macros, they certainly aren't doing anything\n>> useful in terms of documenting the code.\n>\n> Yes. I am wondering if we should not rework this part of the logging\n> with something like the attached. My 2c, thoughts welcome.\n\nISTM that the intent is to minimise the performance impact of ignored \npg_log calls, especially when under debug where it is most likely to be \nthe case AND that they may be in critical places.\n\nCompared to dealing with the level inside the call, the use of the level \nvariable avoids a call-test-return cycle in this case, and the unlikely \nshould help the compiler reorder instructions so that no actual branch is \ntaken under the common case.\n\nSo I think that the current situation is a good thing at least for debug.\n\nFor other levels, they are on by default AND would not be placed at \ncritical performance points, so the whole effort of avoiding the call are \nmoot.\n\nI agree with Tom that __pg_log_level variable name violates usages.\n\nISTM that switching the variable to explicitely global solves the issues, \nand that possible the level test can be moved to inside the function for \nall but the debug level. See attached which reprises some of your idea, \nbut keep the outside filtering for the debug level.\n\n-- \nFabien.",
"msg_date": "Fri, 10 Jan 2020 08:52:17 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 08:52:17AM +0100, Fabien COELHO wrote:\n> Compared to dealing with the level inside the call, the use of the level\n> variable avoids a call-test-return cycle in this case, and the unlikely\n> should help the compiler reorder instructions so that no actual branch is\n> taken under the common case.\n> \n> So I think that the current situation is a good thing at least for debug.\n\nIf you look at some of my messages on other threads, you would likely\nnotice that my mood of the day is to not design things which try to\noutsmart a user's expectations :)\n\nSo I would stand on the position to just remove those likely/unlikely\nparts if we want this logging to be generic.\n\n> For other levels, they are on by default AND would not be placed at critical\n> performance points, so the whole effort of avoiding the call are moot.\n> \n> I agree with Tom that __pg_log_level variable name violates usages.\n\nMy own taste would be to still keep the variable local to logging.c,\nand use a \"get\"-like routine to be consistent with the \"set\" part. I\ndon't have to be right, let's see where this discussion leads us.\n\n(I mentioned that upthread, but I don't think it is a good idea to\ndiscuss about a redesign of those routines on a thread about pgbench\nbased on $subject. All the main players are here so it likely does\nnot matter, but..)\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 17:27:51 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "Michaᅵl,\n\n>> So I think that the current situation is a good thing at least for debug.\n>\n> If you look at some of my messages on other threads, you would likely\n> notice that my mood of the day is to not design things which try to\n> outsmart a user's expectations :)\n\nI'm not following you.\n\nISTM that user expectations is that the message is printed when the level \nrequires it, and that the performance impact is minimal otherwise.\n\nI'm not aiming at anything different.\n\n> So I would stand on the position to just remove those likely/unlikely\n> parts if we want this logging to be generic.\n\nIt is unclear to me whether your point is about the whole \"if\", or only \nthe compiler directive itself (i.e. \"likely\" and \"unlikely\").\n\nI'll assume the former. I do not think we should \"want\" logging to be \ngeneric per se, but only if it makes sense from a performance and feature \npoint of view.\n\nFor the normal case (standard level, no debug), there is basically no \ndifference because the message is going to be printed anyway: either it is \ncheck+call+work, or call+check+work. Anything is fine. The directive would \nhelp the compiler reorder instructions so that usual case does not inccur \na jump.\n\nFor debug messages, things are different: removing the external test & \nunlikely would have a detrimental effect on performance when not \ndebugging, which is most of the time, because you would pay the cost of \nevaluating arguments and call/return cycle on each message anyway. That \ncan be bad if a debug message is place in some critical place.\n\nSo the right place of the the debug check is early. Once this is done, \nthen why not doing that for all other level for consistency? This is the \ncurrent situation.\n\nIf the check is moved inside the call, then there is a performance benefit \nto repeat it for debug, which is a pain because then it would be there \ntwice in that case, and it creates an exception. 
The fact that some macros \nare simplified is not very useful because this is not really user visible.\n\nSo IMHO the current situation is fine, apart from the __variable name. So ISTM \nthat the attached is the simplest and most reasonable option to fix this.\n\n>> For other levels, they are on by default AND would not be placed at critical\n>> performance points, so the whole effort of avoiding the call are moot.\n>>\n>> I agree with Tom that __pg_log_level variable name violates usages.\n>\n> My own taste would be to still keep the variable local to logging.c,\n> and use a \"get\"-like routine to be consistent with the \"set\" part. I\n> don't have to be right, let's see where this discussion leads us.\n\nThis would defeat the point of avoiding a function call, if a function \ncall is needed to check whether the other function call is needed:-)\n\nHence the macro.\n\n> (I mentioned that upthread, but I don't think it is a good idea to\n> discuss about a redesign of those routines on a thread about pgbench\n> based on $subject. All the main players are here so it likely does\n> not matter, but..)\n\nYep. I hesitated to be the one to do it, and ISTM that the problem is \nsmall enough so that it can be resolved without a new thread. I may be \nnaïvely wrong:-)\n\n-- \nFabien.",
"msg_date": "Fri, 10 Jan 2020 17:39:40 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": true,
"msg_subject": "Re: pgbench - use pg logging capabilities"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 05:39:40PM +0100, Fabien COELHO wrote:\n> So IMHO the current situation is fine, but the __variable name. So ISTM that\n> the attached is the simplest and more reasonable option to fix this.\n\nI'd rather hear more from others at this point. Peter's opinion, as\nthe main author behind logging.c/h, would be good to have here.\n--\nMichael",
"msg_date": "Sat, 11 Jan 2020 16:37:08 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgbench - use pg logging capabilities"
}
] |
[
{
"msg_contents": "Hi hackers,\n\nIn other thread \"[HACKERS] Block level parallel vacuum\"[1], Prabhat Kumar\nSahu reported a random assert failure but he got only once and he was not\nable to reproduce it. In that thread [2], Amit Kapila suggested some points\nto reproduce assert. I tried to reproduce and I was able to reproduce it\nconsistently.\n\nBelow are the steps to reproduce assert:\n*Configure sett*ing:\nlog_min_messages=debug1\nautovacuum_naptime = 5s\nautovacuum = on\n\npostgres=# create temporary table temp1(c1 int);\nCREATE TABLE\npostgres=# \\d+\n List of relations\n Schema | Name | Type | Owner | Persistence | Size | Description\n-----------+-------+-------+----------+-------------+---------+-------------\n pg_temp_3 | temp1 | table | mahendra | temporary | 0 bytes |\n(1 row)\n\npostgres=# drop schema pg_temp_3 cascade;\nNOTICE: drop cascades to table temp1\nDROP SCHEMA\npostgres=# \\d+\nDid not find any relations.\npostgres=# create temporary table temp2(c1 int);\nCREATE TABLE\npostgres=# \\d+\nDid not find any relations.\npostgres=# select pg_sleep(6);\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited\nabnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\npostgres=#\n\n\n*Stack Trace:*\nelinux-2.5-12.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64\nopenssl-libs-1.0.2k-12.el7.x86_64 pcre-8.32-17.el7.x86_64\nxz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64\n(gdb) bt\n#0 0x00007f80b2ef9277 in __GI_raise (sig=sig@entry=6) at\n../nptl/sysdeps/unix/sysv/linux/raise.c:56\n#1 0x00007f80b2efa968 in __GI_abort () at abort.c:90\n#2 0x0000000000ecdd4e in ExceptionalCondition 
(conditionName=0x11a9bcb\n\"strvalue != NULL\", errorType=0x11a9bbb \"FailedAssertion\",\nfileName=0x11a9bb0 \"snprintf.c\", lineNumber=442)\n at assert.c:67\n#3 0x0000000000f80122 in dopr (target=0x7ffe902e44d0, format=0x10e8fe5\n\".%s\\\"\", args=0x7ffe902e45b8) at snprintf.c:442\n#4 0x0000000000f7f821 in pg_vsnprintf (str=0x18cd480 \"autovacuum: dropping\norphan temp table \\\"postgres.\", '\\177' <repeats 151 times>..., count=1024,\n fmt=0x10e8fb8 \"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\",\nargs=0x7ffe902e45b8) at snprintf.c:195\n#5 0x0000000000f74cb3 in pvsnprintf (buf=0x18cd480 \"autovacuum: dropping\norphan temp table \\\"postgres.\", '\\177' <repeats 151 times>..., len=1024,\n fmt=0x10e8fb8 \"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\",\nargs=0x7ffe902e45b8) at psprintf.c:110\n#6 0x0000000000f7775b in appendStringInfoVA (str=0x7ffe902e45d0,\nfmt=0x10e8fb8 \"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\",\nargs=0x7ffe902e45b8) at stringinfo.c:149\n#7 0x0000000000ecf5de in errmsg (fmt=0x10e8fb8 \"autovacuum: dropping\norphan temp table \\\"%s.%s.%s\\\"\") at elog.c:832\n#8 0x0000000000aef625 in do_autovacuum () at autovacuum.c:2253\n#9 0x0000000000aedfae in AutoVacWorkerMain (argc=0, argv=0x0) at\nautovacuum.c:1693\n#10 0x0000000000aed82f in StartAutoVacWorker () at autovacuum.c:1487\n#11 0x0000000000b1773a in StartAutovacuumWorker () at postmaster.c:5562\n#12 0x0000000000b16c13 in sigusr1_handler (postgres_signal_arg=10) at\npostmaster.c:5279\n#13 <signal handler called>\n#14 0x00007f80b2fb8c53 in __select_nocancel () at\n../sysdeps/unix/syscall-template.S:81\n#15 0x0000000000b0da27 in ServerLoop () at postmaster.c:1691\n#16 0x0000000000b0cfa2 in PostmasterMain (argc=3, argv=0x18cb290) at\npostmaster.c:1400\n#17 0x000000000097868a in main (argc=3, argv=0x18cb290) at main.c:210\n\nereport(LOG,\n(errmsg(\"autovacuum: dropping orphan temp table 
\\\"%s.%s.%s\\\"\",\nget_database_name(MyDatabaseId),\nget_namespace_name(classForm->relnamespace),\nNameStr(classForm->relname))));\n\nI debugged and found that \"get_namespace_name(classForm->relnamespace)\" was\nnull so it was crashing.\n\nThis bug is introduced or exposed from below mentioned commit:\n*commit 246a6c8f7b237cc1943efbbb8a7417da9288f5c4*\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: Mon Aug 13 11:49:04 2018 +0200\n\n Make autovacuum more aggressive to remove orphaned temp tables\n\n Commit dafa084, added in 10, made the removal of temporary orphaned\n tables more aggressive. This commit makes an extra step into the\n\nBefore above commit, we were not getting any assert failure but \\d+ was not\nshowing any temp table info after \"drop schema pg_temp_3 cascade\" (for\nthose tables are created after drooping schema) .\n\nAs per my analysis, I can see that while drooping schema of temporary\ntable, we are not setting myTempNamespace to invalid so at the time of\ncreating again temporary table, we are not creating proper schema.\n\nWe can fix this problem by either one way 1) reset myTempNamespace to\ninvalid while drooping schema of temp table 2) should not allow to drop\ntemporary table schema\n\nPlease let me know your thoughts to fix this problem.\n\n[1]:\nhttps://www.postgresql.org/message-id/CANEvxPorfG2Ck3kuDkm5tWpK%2B3uCzRiibOJ-Lk4ZJ6wHP4KJfA%40mail.gmail.com\n[2]:\nhttps://www.postgresql.org/message-id/CAA4eK1L-Y7vyo%2BypH55kFHy1HS%3D4h1ZWQ%2B5fthKBgOdQzz4hOw%40mail.gmail.com\n\nThanks and Regards\nMahendra Siingh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\nHi hackers,In other thread \"[HACKERS] Block level parallel vacuum\"[1], Prabhat Kumar Sahu reported a random assert failure but he got only once and he was not able to reproduce it. In that thread [2], Amit Kapila suggested some points to reproduce assert. 
I tried to reproduce and I was able to reproduce it consistently.Below are the steps to reproduce assert:Configure setting:log_min_messages=debug1autovacuum_naptime = 5sautovacuum = onpostgres=# create temporary table temp1(c1 int);CREATE TABLEpostgres=# \\d+ List of relations Schema | Name | Type | Owner | Persistence | Size | Description -----------+-------+-------+----------+-------------+---------+------------- pg_temp_3 | temp1 | table | mahendra | temporary | 0 bytes | (1 row)postgres=# drop schema pg_temp_3 cascade;NOTICE: drop cascades to table temp1DROP SCHEMApostgres=# \\d+ Did not find any relations.postgres=# create temporary table temp2(c1 int);CREATE TABLEpostgres=# \\d+ Did not find any relations.postgres=# select pg_sleep(6);WARNING: terminating connection because of crash of another server processDETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.HINT: In a moment you should be able to reconnect to the database and repeat your command.server closed the connection unexpectedly\tThis probably means the server terminated abnormally\tbefore or while processing the request.postgres=# Stack Trace:elinux-2.5-12.el7.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 openssl-libs-1.0.2k-12.el7.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64(gdb) bt#0 0x00007f80b2ef9277 in __GI_raise (sig=sig@entry=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:56#1 0x00007f80b2efa968 in __GI_abort () at abort.c:90#2 0x0000000000ecdd4e in ExceptionalCondition (conditionName=0x11a9bcb \"strvalue != NULL\", errorType=0x11a9bbb \"FailedAssertion\", fileName=0x11a9bb0 \"snprintf.c\", lineNumber=442) at assert.c:67#3 0x0000000000f80122 in dopr (target=0x7ffe902e44d0, format=0x10e8fe5 \".%s\\\"\", args=0x7ffe902e45b8) at snprintf.c:442#4 0x0000000000f7f821 in pg_vsnprintf (str=0x18cd480 \"autovacuum: dropping orphan temp 
table \\\"postgres.\", '\\177' <repeats 151 times>..., count=1024, fmt=0x10e8fb8 \"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\", args=0x7ffe902e45b8) at snprintf.c:195#5 0x0000000000f74cb3 in pvsnprintf (buf=0x18cd480 \"autovacuum: dropping orphan temp table \\\"postgres.\", '\\177' <repeats 151 times>..., len=1024, fmt=0x10e8fb8 \"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\", args=0x7ffe902e45b8) at psprintf.c:110#6 0x0000000000f7775b in appendStringInfoVA (str=0x7ffe902e45d0, fmt=0x10e8fb8 \"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\", args=0x7ffe902e45b8) at stringinfo.c:149#7 0x0000000000ecf5de in errmsg (fmt=0x10e8fb8 \"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\") at elog.c:832#8 0x0000000000aef625 in do_autovacuum () at autovacuum.c:2253#9 0x0000000000aedfae in AutoVacWorkerMain (argc=0, argv=0x0) at autovacuum.c:1693#10 0x0000000000aed82f in StartAutoVacWorker () at autovacuum.c:1487#11 0x0000000000b1773a in StartAutovacuumWorker () at postmaster.c:5562#12 0x0000000000b16c13 in sigusr1_handler (postgres_signal_arg=10) at postmaster.c:5279#13 <signal handler called>#14 0x00007f80b2fb8c53 in __select_nocancel () at ../sysdeps/unix/syscall-template.S:81#15 0x0000000000b0da27 in ServerLoop () at postmaster.c:1691#16 0x0000000000b0cfa2 in PostmasterMain (argc=3, argv=0x18cb290) at postmaster.c:1400#17 0x000000000097868a in main (argc=3, argv=0x18cb290) at main.c:210ereport(LOG,(errmsg(\"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"\",get_database_name(MyDatabaseId),get_namespace_name(classForm->relnamespace),NameStr(classForm->relname))));I debugged and found that \"get_namespace_name(classForm->relnamespace)\" was null so it was crashing.This bug is introduced or exposed from below mentioned commit:commit 246a6c8f7b237cc1943efbbb8a7417da9288f5c4Author: Michael Paquier <michael@paquier.xyz>Date: Mon Aug 13 11:49:04 2018 +0200 Make autovacuum more aggressive to remove orphaned temp tables Commit 
dafa084, added in 10, made the removal of temporary orphaned tables more aggressive. This commit makes an extra step into theBefore above commit, we were not getting any assert failure but \\d+ was not showing any temp table info after \"drop schema pg_temp_3 cascade\" (for those tables are created after drooping schema) .As per my analysis, I can see that while drooping schema of temporary table, we are not setting myTempNamespace to invalid so at the time of creating again temporary table, we are not creating proper schema.We can fix this problem by either one way 1) reset myTempNamespace to invalid while drooping schema of temp table 2) should not allow to drop temporary table schemaPlease let me know your thoughts to fix this problem.[1]: https://www.postgresql.org/message-id/CANEvxPorfG2Ck3kuDkm5tWpK%2B3uCzRiibOJ-Lk4ZJ6wHP4KJfA%40mail.gmail.com[2]: https://www.postgresql.org/message-id/CAA4eK1L-Y7vyo%2BypH55kFHy1HS%3D4h1ZWQ%2B5fthKBgOdQzz4hOw%40mail.gmail.comThanks and Regards\nMahendra Siingh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Tue, 24 Dec 2019 16:50:58 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assert failure due to \"drop schema pg_temp_3 cascade\" for temporary\n tables and \\d+ is not showing any info after drooping temp table schema"
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 04:50:58PM +0530, Mahendra Singh wrote:\n> We can fix this problem by either one way 1) reset myTempNamespace to\n> invalid while drooping schema of temp table 2) should not allow to drop\n> temporary table schema\n\n(Please note that it is better not to cross-post on multiple lists, so\nI have removed pgsql-bugs from CC.) \n\nThere is a little bit more to that, as we would basically need to do\nthe work of RemoveTempRelationsCallback() once the temp schema is\ndropped, callback registered when the schema is correctly created at\ntransaction commit (also we need to make sure that\nRemoveTempRelationsCallback is not called or unregistered if we were\nto authorize DROP SCHEMA on a temp schema). And then all the reset\ndone at the beginning of AtEOXact_Namespace() would need to happen.\n\nAnyway, as dropping a temporary schema leads to an inconsistent\nbehavior when recreating new temporary objects in a session that\ndropped it, that nobody has actually complained on the matter, and\nthat in concept a temporary schema is linked to the session that\ncreated it, I think that we have a lot of arguments to just forbid the\noperation from happening. Please note as well that it is possible to\ndrop temporary schemas of other sessions, still this is limited to\nowners of the schema.\n\nIn short, let's tighten the logic, and we had better back-patch this\none all the way down, 9.4 being broken. Attached is a patch to do\nthat. The error message generated depends on the state of the session\nso I have not added a test for this reason, and the check is added\nbefore the ACL check. We could make the error message more generic,\nlike \"cannot drop temporary namespace\". Any thoughts?\n--\nMichael",
"msg_date": "Wed, 25 Dec 2019 11:22:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "At Wed, 25 Dec 2019 11:22:03 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> Anyway, as dropping a temporary schema leads to an inconsistent\n> behavior when recreating new temporary objects in a session that\n> dropped it, that nobody has actually complained on the matter, and\n> that in concept a temporary schema is linked to the session that\n\nAgreed.\n\n> created it, I think that we have a lot of arguments to just forbid the\n> operation from happening. Please note as well that it is possible to\n> drop temporary schemas of other sessions, still this is limited to\n> owners of the schema.\n> \n> In short, let's tighten the logic, and we had better back-patch this\n> one all the way down, 9.4 being broken. Attached is a patch to do\n> that. The error message generated depends on the state of the session\n> so I have not added a test for this reason, and the check is added\n> before the ACL check. We could make the error message more generic,\n> like \"cannot drop temporary namespace\". Any thoughts?\n\nJust inhibiting the action seems reasonable to me.\n\nStill the owner can drop temporary namespace on another session or\npg_toast_temp_x of the current session.\n\nisTempnamespace(address.objectId) doesn't work for the purpose.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Dec 2019 12:18:26 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 12:18:26PM +0900, Kyotaro Horiguchi wrote:\n> Still the owner can drop temporary namespace on another session or\n> pg_toast_temp_x of the current session.\n\nArf. Yes, this had better be isAnyTempNamespace() so as we complain\nabout all of them.\n--\nMichael",
"msg_date": "Wed, 25 Dec 2019 12:24:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Wed, 25 Dec 2019 at 07:52, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Dec 24, 2019 at 04:50:58PM +0530, Mahendra Singh wrote:\n> > We can fix this problem by either one way 1) reset myTempNamespace to\n> > invalid while drooping schema of temp table 2) should not allow to drop\n> > temporary table schema\n>\n> (Please note that it is better not to cross-post on multiple lists, so\n\nSorry. I was not aware of multiple mail ids. I will take care in future mails.\n\n> I have removed pgsql-bugs from CC.)\n\nThanks.\n\n> There is a little bit more to that, as we would basically need to do\n> the work of RemoveTempRelationsCallback() once the temp schema is\n> dropped, callback registered when the schema is correctly created at\n> transaction commit (also we need to make sure that\n> RemoveTempRelationsCallback is not called or unregistered if we were\n> to authorize DROP SCHEMA on a temp schema). And then all the reset\n> done at the beginning of AtEOXact_Namespace() would need to happen.\n>\n\nThanks for quick detailed analysis.\n\n> Anyway, as dropping a temporary schema leads to an inconsistent\n> behavior when recreating new temporary objects in a session that\n> dropped it, that nobody has actually complained on the matter, and\n> that in concept a temporary schema is linked to the session that\n> created it, I think that we have a lot of arguments to just forbid the\n> operation from happening. Please note as well that it is possible to\n> drop temporary schemas of other sessions, still this is limited to\n> owners of the schema.\n\nYes, you are right that we can drop temporary schema of other sessions.\n\nEven after applying your attached patch, I am getting same assert\nfailure because I am able to drop \" temporary schema\" from other\nsession so I think, we should not allow to drop any temporary schema\nfrom any session.\n\n> In short, let's tighten the logic, and we had better back-patch this\n> one all the way down, 9.4 being broken. Attached is a patch to do\n\nYes, I also verified that we have to back-patch till v9.4.\n\n> that. The error message generated depends on the state of the session\n> so I have not added a test for this reason, and the check is added\n> before the ACL check. We could make the error message more generic,\n> like \"cannot drop temporary namespace\". Any thoughts?\n\nI think, we can make error message as \"cannot drop temporary schema\"\n\nWhile applying attached patch on HEAD, I got below warnings:\n\n[mahendra@localhost postgres]$ git apply drop-temp-schema-v1.patch\ndrop-temp-schema-v1.patch:9: trailing whitespace.\n /*\ndrop-temp-schema-v1.patch:10: trailing whitespace.\n * Prevent drop of a temporary schema as this would mess up with\ndrop-temp-schema-v1.patch:11: trailing whitespace.\n * the end-of-session callback cleaning up all temporary objects.\ndrop-temp-schema-v1.patch:12: trailing whitespace.\n * As the in-memory state is not cleaned up either here, upon\ndrop-temp-schema-v1.patch:13: trailing whitespace.\n * recreation of a temporary schema within the same session the\nerror: patch failed: src/backend/commands/dropcmds.c:101\nerror: src/backend/commands/dropcmds.c: patch does not apply\n\nI think, above warnings are due to \"trailing CRs\" in patch.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 25 Dec 2019 10:07:58 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 10:07:58AM +0530, Mahendra Singh wrote:\n> Yes, you are right that we can drop temporary schema of other sessions.\n\nI have mentioned that upthread, and basically we need to use\nisAnyTempNamespace() here. My mistake.\n\n> While applying attached patch on HEAD, I got below warnings:\n\nThe patch applies cleanly for me.\n--\nMichael",
"msg_date": "Wed, 25 Dec 2019 17:13:37 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 12:24:10PM +0900, Michael Paquier wrote:\n> Arf. Yes, this had better be isAnyTempNamespace() so as we complain\n> about all of them.\n\nOkay, finally coming back to that. Attached is an updated patch with\npolished comments and the fixed logic.\n--\nMichael",
"msg_date": "Thu, 26 Dec 2019 22:53:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Thu, 26 Dec 2019 at 19:23, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Dec 25, 2019 at 12:24:10PM +0900, Michael Paquier wrote:\n> > Arf. Yes, this had better be isAnyTempNamespace() so as we complain\n> > about all of them.\n>\n> Okay, finally coming back to that. Attached is an updated patch with\n> polished comments and the fixed logic.\n\nThanks Michael for patch.\n\nPatch is fixing all the issues.\n\nI think, we can add a regression test for this.\npostgres=# create temporary table temp(c1 int);\nCREATE TABLE\npostgres=# drop schema pg_temp_3 cascade ;\nERROR: cannot drop temporary namespace \"pg_temp_3\"\npostgres=#\n\nI have one doubt. Please give me your opinion on below doubt.\nLet suppose, I connected 10 sessions at a time and created 1 temporary\ntable to each session. Then it is creating schema from pg_temp_3 to\npg_temp_12 (one schema for each temp table session). After that, I\nclosed all the 10 sessions but if I connect again any session and\nchecking all the schema, it is still showing pg_temp_3 to pg_temp_12.\nIs this expected behavior? or we should not display any temp table\nschema after closing session. I thought that auto_vacuum will drop all\nthe temp table schema but it is not dropping.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Dec 2019 20:20:14 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "Mahendra Singh <mahi6run@gmail.com> writes:\n> I think, we can add a regression test for this.\n> postgres=# create temporary table temp(c1 int);\n> CREATE TABLE\n> postgres=# drop schema pg_temp_3 cascade ;\n> ERROR: cannot drop temporary namespace \"pg_temp_3\"\n> postgres=#\n\nNo, we can't, because the particular temp namespace used by a given\nsession isn't stable.\n\n> I thought that auto_vacuum will drop all\n> the temp table schema but it is not dropping.\n\nGenerally speaking, once a particular pg_temp_N schema exists it's\nnever dropped, just recycled for use by subsequent sessions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Dec 2019 12:51:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Thu, 26 Dec 2019 at 23:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Mahendra Singh <mahi6run@gmail.com> writes:\n> > I think, we can add a regression test for this.\n> > postgres=# create temporary table temp(c1 int);\n> > CREATE TABLE\n> > postgres=# drop schema pg_temp_3 cascade ;\n> > ERROR: cannot drop temporary namespace \"pg_temp_3\"\n> > postgres=#\n>\n> No, we can't, because the particular temp namespace used by a given\n> session isn't stable.\n>\n> > I thought that auto_vacuum will drop all\n> > the temp table schema but it is not dropping.\n>\n> Generally speaking, once a particular pg_temp_N schema exists it's\n> never dropped, just recycled for use by subsequent sessions.\n\nOkay. Understood. Thanks for clarification.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Dec 2019 00:33:03 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 12:33:03AM +0530, Mahendra Singh wrote:\n> On Thu, 26 Dec 2019 at 23:21, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> No, we can't, because the particular temp namespace used by a given\n>> session isn't stable.\n\nAnd I'd prefer keep the name of the namespace in the error message,\nbecause the information is helpful.\n\n>>> I thought that auto_vacuum wlll drop all\n>>> the temp table schema but it is not drooping.\n>>\n>> Generally speaking, once a particular pg_temp_N schema exists it's\n>> never dropped, just recycled for use by subsequent sessions.\n> \n> Okay. Understood. Thanks for clarification.\n\nPlease see RemoveTempRelations() for the details, which uses\nPERFORM_DELETION_SKIP_ORIGINAL to avoid a drop of the temp schema, and\njust work on all the objects the schema includes.\n\nAnd committed down to 9.4. We use much more \"temporary schema\" in\nerror messages actually, so I have switched to that.\n--\nMichael",
"msg_date": "Fri, 27 Dec 2019 18:06:04 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 4:06 AM Michael Paquier <michael@paquier.xyz> wrote:\n> And committed down to 9.4. We use much more \"temporary schema\" in\n> error messages actually, so I have switched to that.\n\nI think this was a bad idea and that it should be reverted. It seems\nto me that the problem here is that you introduced a feature which had\na bug, namely that it couldn't tolerate concurrency, and when somebody\ndiscovered the bug, you \"fixed\" it not by making the code able to\ntolerate concurrent activity but by preventing concurrent activity\nfrom happening in the first place. I think that's wrong on general\nprinciple.\n\nIn this specific case, DROP SCHEMA on another session's temporary\nschema is a feature which has existed for a very long time and which I\nhave used on multiple occasions to repair damaged databases. Suppose,\nfor example, there's a catalog entry that prevents the schema from\nbeing dropped. Before this commit, you could fix it or delete the\nentry and then retry the drop. Now, you can't. You can maybe wait for\nautovacuum to retry it or something, assuming autovacuum is working\nand you're in multi-user mode.\n\nBut even if that weren't the case, this seems like a very fragile fix.\nMaybe someday we'll allow multiple autovacuum workers in the same\ndatabase, and the problem comes back. Maybe some user who can't drop\nthe schema because of this arbitrary prohibition will find themselves\nforced to delete the pg_namespace row by hand and then crash the\nserver. Most server code is pretty careful to either tolerate\nmissing system catalog tuples or elog(ERROR), not crash (e.g. cache\nlookup failed for ...). This code shouldn't be an exception to that\nrule.\n\nAlso, as a matter of procedure, 3 days from first post to commit is\nnot a lot, especially when the day something is posted is Christmas\nEve.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Dec 2019 07:37:15 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Sun, Dec 29, 2019 at 07:37:15AM -0500, Robert Haas wrote:\n> I think this was a bad idea and that it should be reverted. It seems\n> to me that the problem here is that you introduced a feature which had\n> a bug, namely that it couldn't tolerate concurrency, and when somebody\n> discovered the bug, you \"fixed\" it not by making the code able to\n> tolerate concurrent activity but by preventing concurrent activity\n> from happening in the first place. I think that's wrong on general\n> principle.\n\nSorry for the delay, there was a long period off here so I could not\nhave a serious look.\n\nThe behavior of the code in 246a6c8 has changed so as a non-existing\ntemporary namespace is considered as not in use, in which case\nautovacuum would consider this relation to be orphaned, and it would\nthen try to drop it. Anyway, just a revert of the patch is not a good\nidea either, because keeping around the old behavior allows any user\nto create orphaned relations that autovacuum would just ignore in\n9.4~10, leading to ultimately a forced shutdown of the instance as no\ncleanup can happen if this goes unnoticed. This also puts pg_class\ninto an inconsistent state as pg_class entries would include\nreferences to a namespace that does not exist for sessions still\nholding its own references to myTempNamespace/myTempToastNamespace.\n\n> In this specific case, DROP SCHEMA on another session's temporary\n> schema is a feature which has existed for a very long time and which I\n> have used on multiple occasions to repair damaged databases. Suppose,\n> for example, there's a catalog entry that prevents the schema from\n> being dropped. Before this commit, you could fix it or delete the\n> entry and then retry the drop. Now, you can't. You can maybe wait for\n> autovacuum to retry it or something, assuming autovacuum is working\n> and you're in multi-user mode.\n\nThis behavior is broken since its introduction then per the above. If\nwe were to allow DROP SCHEMA to work properly on temporary schema, we\nwould need to do more than what we have now, and that does not involve\njust mimicking DISCARD TEMP if you really wish to be able to drop the\nschema entirely and not only the objects it includes. Allowing a\ntemporary schema to be dropped only if it is owned by the current\nsession would be simple enough to implement, but I think that allowing\nthat to work properly for a schema owned by another session would be\nrather difficult to implement for little gains. Now, if you still\nwish to be able to do a DROP SCHEMA on a temporary schema, I have no\nobjections to allow doing that, but under some conditions. So I would\nrecommend to restrict it so as this operation is not allowed by\ndefault, and I think we ought to use allow_system_table_mods to\ncontrol that, because if you were to do that you are an operator and\nyou know what you are doing. Normally :)\n\n> But even if that weren't the case, this seems like a very fragile fix.\n> Maybe someday we'll allow multiple autovacuum workers in the same\n> database, and the problem comes back. Maybe some user who can't drop\n> the schema because of this arbitrary prohibition will find themselves\n> forced to delete the pg_namespace row by hand and then crash the\n> server. Most server code is pretty careful to either tolerate\n> missing system catalog tuples or elog(ERROR), not crash (e.g. cache\n> lookup failed for ...). This code shouldn't be an exception to that\n> rule.\n\nYou are right here, things could be done better in 11 and newer\nversions, still there are multiple ways to do that. Here are three\nsuggestions:\n1) Issue an elog(ERROR) as that's what we do usually for lookup errors\nand such when seeing an orphaned relation which refers to a\nnon-existing namespace. But this would prevent autovacuum to do\nany kind of work and just loop over-and-over on the same error, just\nbloating the database involved.\n2) Ignore the relation and leave it around, though we really have been\nfighting to make autovacuum more aggressive, so that would defeat the\nwork done lately for that purpose.\n3) Still drop the orphaned relation even if it references to a\nnon-existing schema, generating an appropriate LOG message so as the\nproblem comes from an incorrect lookup at the namespace name.\n\nAttached is a patch doing two things:\na) Control DROP SCHEMA on a temporary namespace using\nallow_system_table_mods.\nb) Generate a non-buggy LOG message if trying to remove a temp\nrelation referring to a temporary schema that does not exist, using\n\"(null)\" as a replacement for the schema name.\n\nMy suggestion is to do a) down to 9.4 if that's thought to be helpful\nto have, and at least Robert visibly thinks so, then b) in 11~ as\nthat's where 246a6c8 exists. Comments welcome.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 10:42:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 8:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> The behavior of the code in 246a6c8 has changed so as a non-existing\n> temporary namespace is considered as not in use, in which case\n> autovacuum would consider this relation to be orphaned, and it would\n> then try to drop it. Anyway, just a revert of the patch is not a good\n> idea either, because keeping around the old behavior allows any user\n> to create orphaned relations that autovacuum would just ignore in\n> 9.4~10, leading to ultimately a forced shutdown of the instance as no\n> cleanup can happen if this goes unnoticed. This also puts pg_class\n> into an inconsistent state as pg_class entries would include\n> references to a namespace that does not exist for sessions still\n> holding its own references to myTempNamespace/myTempToastNamespace.\n\nI'm not arguing for a revert of 246a6c8. I think we should just change this:\n\n ereport(LOG,\n (errmsg(\"autovacuum: dropping orphan\ntemp table \\\"%s.%s.%s\\\"\",\n get_database_name(MyDatabaseId),\n\nget_namespace_name(classForm->relnamespace),\n NameStr(classForm->relname))));\n\nTo look more like:\n\nchar *nspname = get_namespace_name(classForm->relnamespace);\nif (nspname != NULL)\n ereport(...\"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"...)\nelse\n ereport(...\"autovacuum: dropping orphan temp table with OID %u\"....)\n\nIf we do that, then I think we can just revert\na052f6cbb84e5630d50b68586cecc127e64be639 completely. As a side\nbenefit, this would also provide some insurance against other\npossibly-problematic situations, like a corrupted pg_class row with a\ngarbage value in the relnamespace field, which is something I've seen\nmultiple times in the field.\n\nI can't quite understand your comments about why we shouldn't do that,\nbut the reported bug is just a null pointer reference. Incredibly,\nautovacuum.c seems to have been using get_namespace_name() without a\nnull check since 2006, so it's not really the fault of your patch as I\nhad originally thought. I wonder how in the world we've managed to get\naway with it for as long as we have.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Jan 2020 12:25:19 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> I'm not arguing for a revert of 246a6c8. I think we should just change this:\n> ...\n> To look more like:\n\n> char *nspname = get_namespace_name(classForm->relnamespace);\n> if (nspname != NULL)\n> ereport(...\"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"...)\n> else\n> ereport(...\"autovacuum: dropping orphan temp table with OID %u\"....)\n\n> If we do that, then I think we can just revert\n> a052f6cbb84e5630d50b68586cecc127e64be639 completely.\n\n+1 to both of those --- although I think we could still provide the\ntable name in the null-nspname case.\n\n> autovacuum.c seems to have been using get_namespace_name() without a\n> null check since 2006, so it's not really the fault of your patch as I\n> had originally thought. I wonder how in the world we've managed to get\n> away with it for as long as we have.\n\nMaybe we haven't. It's not clear that infrequent autovac crashes would\nget reported to us, or that we'd successfully find the cause if they were.\n\nI think what you propose above is a back-patchable bug fix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 12:33:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Mon, Jan 06, 2020 at 12:33:47PM -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> I'm not arguing for a revert of 246a6c8. I think we should just change this:\n>> ...\n>> To look more like:\n> \n>> char *nspname = get_namespace_name(classForm->relnamespace);\n>> if (nspname != NULL)\n>> ereport(...\"autovacuum: dropping orphan temp table \\\"%s.%s.%s\\\"...)\n>> else\n>> ereport(...\"autovacuum: dropping orphan temp table with OID %u\"....)\n> \n>> If we do that, then I think we can just revert\n>> a052f6cbb84e5630d50b68586cecc127e64be639 completely.\n> \n> +1 to both of those --- although I think we could still provide the\n> table name in the null-nspname case.\n\nOkay for the first one, printing the OID sounds like a good idea.\nLike Tom, I would prefer keeping the relation name with \"(null)\" for\nthe schema name. Or even better, could we just print the OID all the\ntime? What's preventing us from showing that information in the first\nplace? And that still looks good to have when debugging issues IMO\nfor orphaned entries.\n\nFor the second one, I would really wish that we keep the restriction\nput in place by a052f6c until we actually figure out how to make the\noperation safe in the ways we want it to work because this puts\nthe catalogs into an inconsistent state for any object type able to\nuse a temporary schema, like functions, domains etc. for example able\nto use \"pg_temp\" as a synonym for the temp namespace name. And any\nconnected user is able to do that. On top of that, except for tables,\nthese could remain as orphaned entries after a crash, no? Allowing\nthe operation only via allow_system_table_mods gives an exit path\nactually if you really wish to do so, which is fine by me as startup\ncontrols that, aka an administrator.\n\nIn short, I don't think that it is sane to keep in place the property,\nvisibly accidental (?) for any user to create inconsistent catalog\nentries using a static state in the session which is incorrect in\nnamespace.c, except if we make DROP SCHEMA on a temporary schema have\na behavior close to DISCARD TEMP. Again, for the owner of the session\nthat's simple, no clear idea how to do that safely when the drop is\ndone from another session not owning the temp schema.\n\n> Maybe we haven't. It's not clear that infrequent autovac crashes would\n> get reported to us, or that we'd successfully find the cause if they were.\n> \n> I think what you propose above is a back-patchable bug fix.\n\nYeah, likely it is safer to fix the logs in the long run down to 9.4.\n--\nMichael",
"msg_date": "Tue, 7 Jan 2020 09:22:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 7:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n> Okay for the first one, printing the OID sounds like a good idea.\n> Like Tom, I would prefer keeping the relation name with \"(null)\" for\n> the schema name. Or even better, could we just print the OID all the\n> time? What's preventing us from showing that information in the first\n> place? And that still looks good to have when debugging issues IMO\n> for orphaned entries.\n\nI think we should have two different messages, rather than trying to\nshoehorn things into one message using a fake schema name.\n\n> For the second one, I would really wish that we keep the restriction\n> put in place by a052f6c until we actually figure out how to make the\n> operation safe in the ways we want it to work because this puts\n> the catalogs into an inconsistent state for any object type able to\n> use a temporary schema, like functions, domains etc. for example able\n> to use \"pg_temp\" as a synonym for the temp namespace name. And any\n> connected user is able to do that.\n\nSo what?\n\n> On top of that, except for tables,\n> these could remain as orphaned entries after a crash, no?\n\nTables, too, although they won't have storage any more. But your patch\nin no way prevents that. It just makes it harder to fix when it does\nhappen. So I see no advantages of it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 7 Jan 2020 10:59:22 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Mon, Jan 6, 2020 at 7:22 PM Michael Paquier <michael@paquier.xyz> wrote:\n>> For the second one, I would really wish that we keep the restriction\n>> put in place by a052f6c until we actually figure out how to make the\n>> operation safe in the ways we want it to work because this puts\n>> the catalogs into an inconsistent state for any object type able to\n>> use a temporary schema, like functions, domains etc. for example able\n>> to use \"pg_temp\" as a synonym for the temp namespace name. And any\n>> connected user is able to do that.\n\n> So what?\n\nI still agree with Robert that a052f6c is a bad idea. It's not the case\nthat that's blocking \"any connected user\" from causing an issue. The\ntemp schemas are always owned by the bootstrap superuser, so only a\nsuperuser could delete them. All that that patch is doing is preventing\nsuperusers from doing something that they could reasonably wish to do,\nand that is perfectly safe when there's not concurrent usage of the\nschema. We are not normally that nanny-ish, and the case for being so\nhere seems pretty thin.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 13:06:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Tue, Jan 07, 2020 at 01:06:08PM -0500, Tom Lane wrote:\n> I still agree with Robert that a052f6c is a bad idea. It's not the case\n> that that's blocking \"any connected user\" from causing an issue. The\n> temp schemas are always owned by the bootstrap superuser, so only a\n> superuser could delete them. All that that patch is doing is preventing\n> superusers from doing something that they could reasonably wish to do,\n> and that is perfectly safe when there's not concurrent usage of the\n> schema. We are not normally that nanny-ish, and the case for being so\n> here seems pretty thin.\n\nOkay, I am running out of arguments then, so attached is a patch to\naddress things. I would also prefer if we keep the relation name in\nthe log even if the namespace is missing.\n--\nMichael",
"msg_date": "Wed, 8 Jan 2020 09:44:09 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> Okay, I am running out of arguments then, so attached is a patch to\n> address things. I would also prefer if we keep the relation name in\n> the log even if the namespace is missing.\n\nA couple of thoughts:\n\n* Please revert a052f6c as a separate commit specifically doing that,\nso that when it comes time to make the release notes, it's clear that\na052f6c doesn't require documentation.\n\n* I think the check on log_min_messages <= LOG is probably wrong, since\nLOG sorts out of order for this purpose. Compare is_log_level_output()\nin elog.c. I'd suggest not bothering with trying to optimize away the\nget_namespace_name call here; we shouldn't be in this code path often\nenough for performance to matter, and nobody ever cared about it before.\n\n* I don't greatly like the notation\n dropping orphan temp table \\\"%s.(null).%s\\\" ...\nand I bet Robert won't either. Not sure offhand about a better\nidea --- maybe\n dropping orphan temp table \\\"%s\\\" with OID %u in database \\\"%s\\\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 19:55:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Tue, Jan 07, 2020 at 07:55:17PM -0500, Tom Lane wrote:\n> * Please revert a052f6c as a separate commit specifically doing that,\n> so that when it comes time to make the release notes, it's clear that\n> a052f6c doesn't require documentation.\n\nOkay. Committed the revert first then.\n\n> * I think the check on log_min_messages <= LOG is probably wrong, since\n> LOG sorts out of order for this purpose. Compare is_log_level_output()\n> in elog.c. I'd suggest not bothering with trying to optimize away the\n> get_namespace_name call here; we shouldn't be in this code path often\n> enough for performance to matter, and nobody ever cared about it before.\n\nDone.\n\n> * I don't greatly like the notation\n> dropping orphan temp table \\\"%s.(null).%s\\\" ...\n> and I bet Robert won't either. Not sure offhand about a better\n> idea --- maybe\n> dropping orphan temp table \\\"%s\\\" with OID %u in database \\\"%s\\\"\n\nAnd done this way as per the attached. I am of course open to\nobjections or better ideas, though this formulation looks pretty\ngood to me. Robert?\n--\nMichael",
"msg_date": "Wed, 8 Jan 2020 10:56:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 10:56:01AM +0900, Michael Paquier wrote:\n> And done this way as per the attached. I am of course open to\n> objections or better ideas, though this looks formulation looks pretty\n> good to me. Robert?\n\nJust to be clear here, I would like to commit this patch and backpatch\nwith the current formulation in the error strings in the follow-up\ndays. In 9.4~10, the error cannot be reached, but that feels safer if\nwe begin to work again on this portion of the autovacuum code. So if\nyou would like to object, that's the moment..\n--\nMichael",
"msg_date": "Thu, 9 Jan 2020 13:06:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Thu, 9 Jan 2020 at 09:36, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Wed, Jan 08, 2020 at 10:56:01AM +0900, Michael Paquier wrote:\n> > And done this way as per the attached. I am of course open to\n> > objections or better ideas, though this looks formulation looks pretty\n> > good to me. Robert?\n>\n> Just to be clear here, I would like to commit this patch and backpatch\n> with the current formulation in the error strings in the follow-up\n> days. In 9.4~10, the error cannot be reached, but that feels safer if\n> we begin to work again on this portion of the autovacuum code. So if\n> you would like to object, that's the moment..\n> --\n\nHi,\nI reviewed and tested the patch. After applying patch, I am getting other\nassert failure.\n\npostgres=# CREATE TEMPORARY TABLE temp (a int);\nCREATE TABLE\npostgres=# \\d\n List of relations\n Schema | Name | Type | Owner\n-----------+------+-------+----------\n pg_temp_3 | temp | table | mahendra\n(1 row)\n\npostgres=# drop schema pg_temp_3 cascade ;\nNOTICE: drop cascades to table temp\nDROP SCHEMA\npostgres=# \\d\nDid not find any relations.\npostgres=# CREATE TEMPORARY TABLE temp (a int);\nCREATE TABLE\npostgres=# \\d\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited\nabnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\npostgres=#\n\n*Stack trace:*\n(gdb) bt\n#0 0x00007f7d749bd277 in __GI_raise (sig=sig@entry=6) at\n../nptl/sysdeps/unix/sysv/linux/raise.c:56\n#1 0x00007f7d749be968 in __GI_abort () at abort.c:90\n#2 0x0000000000eca3c4 in ExceptionalCondition 
(conditionName=0x114cc08\n\"relation->rd_backend != InvalidBackendId\", errorType=0x114ca8b\n\"FailedAssertion\",\n fileName=0x114c8b0 \"relcache.c\", lineNumber=1123) at assert.c:67\n#3 0x0000000000eaacb9 in RelationBuildDesc (targetRelId=16392,\ninsertIt=true) at relcache.c:1123\n#4 0x0000000000eadf61 in RelationIdGetRelation (relationId=16392) at\nrelcache.c:2021\n#5 0x000000000049f370 in relation_open (relationId=16392, lockmode=8) at\nrelation.c:59\n#6 0x000000000064ccda in heap_drop_with_catalog (relid=16392) at\nheap.c:1890\n#7 0x00000000006435f3 in doDeletion (object=0x2d623c0, flags=21) at\ndependency.c:1360\n#8 0x0000000000643180 in deleteOneObject (object=0x2d623c0,\ndepRel=0x7ffcb9636290, flags=21) at dependency.c:1261\n#9 0x0000000000640d97 in deleteObjectsInList (targetObjects=0x2dce438,\ndepRel=0x7ffcb9636290, flags=21) at dependency.c:271\n#10 0x0000000000640ed6 in performDeletion (object=0x7ffcb96363b0,\nbehavior=DROP_CASCADE, flags=21) at dependency.c:356\n#11 0x0000000000aebc3d in do_autovacuum () at autovacuum.c:2269\n#12 0x0000000000aea478 in AutoVacWorkerMain (argc=0, argv=0x0) at\nautovacuum.c:1693\n#13 0x0000000000ae9cf9 in StartAutoVacWorker () at autovacuum.c:1487\n#14 0x0000000000b13cdc in StartAutovacuumWorker () at postmaster.c:5562\n#15 0x0000000000b131b5 in sigusr1_handler (postgres_signal_arg=10) at\npostmaster.c:5279\n#16 <signal handler called>\n#17 0x00007f7d74a7cc53 in __select_nocancel () at\n../sysdeps/unix/syscall-template.S:81\n#18 0x0000000000b09fc9 in ServerLoop () at postmaster.c:1691\n#19 0x0000000000b09544 in PostmasterMain (argc=3, argv=0x2ce2290) at\npostmaster.c:1400\n#20 0x0000000000974b43 in main (argc=3, argv=0x2ce2290) at main.c:210\n\nI think, before committing 1st patch, we should fix this crash also and\nthen we should commit all the patches.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 10 Jan 2020 11:56:37 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 11:56:37AM +0530, Mahendra Singh Thalor wrote:\n> I reviewed and tested the patch. After applying patch, I am getting other\n> assert failure.\n>\n> I think, before committing 1st patch, we should fix this crash also and\n> then we should commit all the patches.\n\nI have somewhat managed to break my environment for a couple of days,\nso I got zero testing done with assertions, and I missed this one.\nThanks for the lookup! The environment has since been fixed.\n\nThis code path uses an assertion that would become incorrect once you\nare able to create in pg_class temporary relations which rely on a\ntemporary schema that does not exist anymore because it has been\ndropped, and that's what you are doing. The assertion does\nnot concern only autovacuum originally, as it would fail each time a\nsession tries to build a relation descriptor for its cache with a\nrelation using a non-existing namespace. I have not really dug into\nwhether that's actually possible to trigger.. Anyway.\n\nSo, on the one hand, saying that we allow orphaned temporary relations\nto be dropped even if their schema does not exist is what autovacuum\ndoes now more aggressively, so that can help to avoid having to clean\nup orphaned entries from the catalogs yourself, following up with their\ntoast entries, etc. And this approach makes the assertion lose its\nmeaning for autovacuum.\n\nOn the other hand, keeping this assertion makes sure that we never try\nto load incorrect relcache entries, and just makes autovacuum less\naggressive by ignoring orphaned entries with incorrect namespace\nreferences, though the user experience in fixing the cluster means\nmanual manipulation of the catalogs. This is something I understood\nwe'd like to avoid as much as possible, while keeping autovacuum\naggressive on the removal as that can ease the life of people fixing a\ncluster. So this would bring us back to a point intermediate of\n246a6c8.\n\nThis makes me wonder how much we should try to outsmart somebody who\nputs the catalogs in such an inconsistent state. Hmm. Perhaps at the\nend autovacuum should just ignore such entries and not help the\nuser at all, as this also comes with its own issues at the storage\nlevel, as smgr.c uses rd_backend. And if the user plays with\ntemporary namespaces like that with superuser rights, he likely knows\nwhat he is doing. Perhaps not :D, in which case autovacuum may not be\nthe best thing to decide that. I still think we should make the log\nof autovacuum.c for orphaned relations more careful with its coding\nthough, and fix it with the previous patch. The documentation of\nisTempNamespaceInUse() could gain in clarity, just a nit from me while\nlooking at the surroundings. And actually I found an issue with its\nlogic, as the routine would not consider a temp namespace in use for a\nsession's own MyBackendId. As that's only used for autovacuum, this\nhas no consequence, but let's be correct in the long run.\n\nAnd this gives the attached after a closer lookup. Thoughts?\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 17:01:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 05:01:25PM +0900, Michael Paquier wrote:\n> This makes me wonder how much we should try to outsmart somebody which\n> puts the catalogs in such a inconsistent state. Hmm. Perhaps at the\n> end autovacuum should just ignore such entries and just don't help the\n> user at all as this also comes with its own issues with the storage\n> level as well as smgr.c uses rd_backend. And if the user plays with\n> temporary namespaces like that with superuser rights, he likely knows\n> what he is doing. Perhaps not :D, in which case autovacuum may not be\n> the best thing to decide that. I still think we should make the log\n> of autovacuum.c for orphaned relations more careful with its coding\n> though, and fix it with the previous patch. The documentation of\n> isTempNamespaceInUse() could gain in clarity, just a nit from me while\n> looking at the surroundings. And actually I found an issue with its\n> logic, as the routine would not consider a temp namespace in use for a\n> session's own MyBackendId. As that's only used for autovacuum, this\n> has no consequence, but let's be correct in hte long run.\n> \n> And this gives the attached after a closer lookup. Thoughts?\n\nThinking more about it, this has a race condition if a temporary\nschema is removed after collecting the OIDs in the drop phase. So the\nupdated attached is actually much more conservative and does not need\nan update of the log message, without giving up on the improvements\ndone in v11~. In 9.4~10, the code of the second phase relies on\nGetTempNamespaceBackendId() which causes an orphaned relation to not\nbe dropped in the event of a missing namespace. I'll just leave that\nalone for a couple of days now..\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 20:07:48 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, 10 Jan 2020 at 16:37, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 10, 2020 at 05:01:25PM +0900, Michael Paquier wrote:\n> > This makes me wonder how much we should try to outsmart somebody which\n> > puts the catalogs in such a inconsistent state. Hmm. Perhaps at the\n> > end autovacuum should just ignore such entries and just don't help the\n> > user at all as this also comes with its own issues with the storage\n> > level as well as smgr.c uses rd_backend. And if the user plays with\n> > temporary namespaces like that with superuser rights, he likely knows\n> > what he is doing. Perhaps not :D, in which case autovacuum may not be\n> > the best thing to decide that. I still think we should make the log\n> > of autovacuum.c for orphaned relations more careful with its coding\n> > though, and fix it with the previous patch. The documentation of\n> > isTempNamespaceInUse() could gain in clarity, just a nit from me while\n> > looking at the surroundings. And actually I found an issue with its\n> > logic, as the routine would not consider a temp namespace in use for a\n> > session's own MyBackendId. As that's only used for autovacuum, this\n> > has no consequence, but let's be correct in hte long run.\n> >\n> > And this gives the attached after a closer lookup. Thoughts?\n>\n> Thinking more about it, this has a race condition if a temporary\n> schema is removed after collecting the OIDs in the drop phase. So the\n> updated attached is actually much more conservative and does not need\n> an update of the log message, without giving up on the improvements\n> done in v11~. In 9.4~10, the code of the second phase relies on\n> GetTempNamespaceBackendId() which causes an orphaned relation to not\n> be dropped in the event of a missing namespace. I'll just leave that\n> alone for a couple of days now..\n> --\n\nThanks for the patch. 
I am not getting any crash, but \\d is not showing\nthe temp table if we drop the temp schema and create a temp table\nagain.\n\npostgres=# create temporary table test1 (a int);\nCREATE TABLE\npostgres=# \\d\n List of relations\n Schema | Name | Type | Owner\n-----------+-------+-------+----------\n pg_temp_3 | test1 | table | mahendra\n(1 row)\n\npostgres=# drop schema pg_temp_3 cascade ;\nNOTICE: drop cascades to table test1\nDROP SCHEMA\npostgres=# \\d\nDid not find any relations.\npostgres=# create temporary table test1 (a int);\nCREATE TABLE\npostgres=# \\d\nDid not find any relations.\npostgres=#\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 10 Jan 2020 17:54:21 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> [ patch to skip tables if get_namespace_name fails ]\n\nThis doesn't seem like a very good idea to me. Is there any\nevidence that it's fixing an actual problem? What if the table\nyou're skipping is holding back datfrozenxid?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 10 Jan 2020 09:50:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 09:50:58AM -0500, Tom Lane wrote:\n> This doesn't seem like a very good idea to me. Is there any\n> evidence that it's fixing an actual problem? What if the table\n> you're skipping is holding back datfrozenxid?\n\nThat's the point I wanted to make sure of: we don't, because autovacuum\nhas never actually been able to do that and because the cluster is\nput in this state by a superuser after issuing DROP SCHEMA on its\ntemporary schema, which allows many fancy things based on the\ninconsistent state the session is in. Please see for example\nREL_10_STABLE where GetTempNamespaceBackendId() would return\nInvalidBackendId when the namespace does not exist, so the drop is\nskipped. 246a6c8 (designed to track if a backend slot is using a temp\nnamespace or not, allowing cleanup of orphaned tables if the namespace\nis around, still not used yet by the session it is assigned to) has\nchanged the logic, accidentally actually, to also allow an orphaned\ntemp table to be dropped even if its namespace does not exist\nanymore.\n\nIf we say that it's fine for autovacuum to allow the drop of such\ninconsistent pg_class entries, then we would need to either remove or\nrelax the assertion in relcache.c:1123 (RelationBuildDesc, should only\nautovacuum be allowed to do so?) to begin to allow autovacuum to\nremove temp relations. However, this does not sound like a correct\nthing to do IMO. So, note that if autovacuum is allowed to do so, you\npartially defeat the purpose of the assertion added by\ndebcec7d in relcache.c. Another noticeable thing is that if\nautovacuum does the pg_class entry drops, the on-disk files for the\ntemp relations would remain until the cluster is restarted, by the\nway.\n--\nMichael",
"msg_date": "Sat, 11 Jan 2020 09:03:23 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 05:54:21PM +0530, Mahendra Singh Thalor wrote:\n> Thanks for the patch. I am not getting any crash but \\d is not showing\n> any temp table if we drop temp schema and create again temp table.\n\nThat's expected. As discussed on this thread, the schema has been\ndropped by a superuser and there are cases where it is helpful to do\nso, so the relation you have created after DROP SCHEMA relies on an\ninconsistent session state. If you actually try to use \\d with a\nrelation name that matches the one you just created, psql would just\nshow nothing for the namespace name.\n--\nMichael",
"msg_date": "Sat, 11 Jan 2020 10:41:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 08:07:48PM +0900, Michael Paquier wrote:\n> Thinking more about it, this has a race condition if a temporary\n> schema is removed after collecting the OIDs in the drop phase. So the\n> updated attached is actually much more conservative and does not need\n> an update of the log message, without giving up on the improvements\n> done in v11~. In 9.4~10, the code of the second phase relies on\n> GetTempNamespaceBackendId() which causes an orphaned relation to not\n> be dropped in the event of a missing namespace. I'll just leave that\n> alone for a couple of days now..\n\nAnd back on that one, I still prefer the solution in the attached,\nwhich skips any relations whose namespace has gone missing, as\n246a6c87's intention was only to allow orphaned temp relations to\nbe dropped by autovacuum when a backend slot is connected but not\nyet using its own temp namespace.\n\nIf we want the drop of temp relations to work properly, more thoughts\nare needed regarding the storage part, and I am not actually sure that\nit is autovacuum's job to handle that better.\n\nAny thoughts?\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 13:06:18 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Thu, 16 Jan 2020 at 09:36, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 10, 2020 at 08:07:48PM +0900, Michael Paquier wrote:\n> > Thinking more about it, this has a race condition if a temporary\n> > schema is removed after collecting the OIDs in the drop phase. So the\n> > updated attached is actually much more conservative and does not need\n> > an update of the log message, without giving up on the improvements\n> > done in v11~. In 9.4~10, the code of the second phase relies on\n> > GetTempNamespaceBackendId() which causes an orphaned relation to not\n> > be dropped in the event of a missing namespace. I'll just leave that\n> > alone for a couple of days now..\n>\n> And back on that one, I still like better the solution as of the\n> attached which skips any relations with their namespace gone missing\n> as 246a6c87's intention was only to allow orphaned temp relations to\n> be dropped by autovacuum when a backend slot is connected, but not\n> using yet its own temp namespace.\n>\n> If we want the drop of temp relations to work properly, more thoughts\n> are needed regarding the storage part, and I am not actually sure that\n> it is autovacuum's job to handle that better.\n>\n> Any thoughts?\nHi,\n\nPatch looks good to me and it fixes the assert failure.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 28 Feb 2020 11:29:56 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Feb 28, 2020 at 11:29:56AM +0530, Mahendra Singh Thalor wrote:\n> Patch looks good to me and it is fixing the assert failure.\n\nThanks for looking at the patch. Bringing the code to act\nconsistently with what was done in 246a6c8 still looks like the\ncorrect direction to take: in short, don't drop temp relations created\nwithout an existing temp schema, and ignore them instead of creating\nmore issues with the storage. So I'd like to apply and back-patch this\nstuff down to 11. First, let's wait a couple of extra days.\n--\nMichael",
"msg_date": "Fri, 28 Feb 2020 16:17:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> And back on that one, I still like better the solution as of the\n> attached which skips any relations with their namespace gone missing\n> as 246a6c87's intention was only to allow orphaned temp relations to\n> be dropped by autovacuum when a backend slot is connected, but not\n> using yet its own temp namespace.\n\nSimply skipping the drop looks like basically the right fix to me.\n\nA tiny nit is that using \"get_namespace_name(...) != NULL\" as a test for\nexistence of the namespace seems a bit weird/unreadable. I'd be more\ninclined to code that as a SearchSysCacheExists test, at least in the\nplace where you don't actually need the namespace name.\n\nAlso, I notice that isTempNamespaceInUse is already detecting the case\nwhere the namespace doesn't exist or isn't really a temp namespace.\nI wonder whether it'd be better to teach that to return an indicator about\nthe namespace not being what you think it is. That would force us to look\nat its other callers to see if any of them have related bugs, which seems\nlike a good thing to check --- and even if they don't, having to think\nabout the point in future call sites might forestall new bugs.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Feb 2020 12:20:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "I wrote:\n> Also, I notice that isTempNamespaceInUse is already detecting the case\n> where the namespace doesn't exist or isn't really a temp namespace.\n> I wonder whether it'd be better to teach that to return an indicator about\n> the namespace not being what you think it is. That would force us to look\n> at its other callers to see if any of them have related bugs, which seems\n> like a good thing to check --- and even if they don't, having to think\n> about the point in future call sites might forestall new bugs.\n\nAfter poking around, I see there aren't any other callers. But I think\nthat the cause of this bug is clearly failure to think carefully about\nthe different cases that isTempNamespaceInUse is recognizing, so that\nthe right way to fix it is more like the attached.\n\nIn the back branches, we could leave isTempNamespaceInUse() in place\nbut unused, just in case somebody is calling it. I kind of doubt that\nanyone is, given the small usage in core, but maybe.\n\n\t\t\tregards, tom lane",
"msg_date": "Fri, 28 Feb 2020 13:45:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Feb 28, 2020 at 01:45:29PM -0500, Tom Lane wrote:\n> After poking around, I see there aren't any other callers. But I think\n> that the cause of this bug is clearly failure to think carefully about\n> the different cases that isTempNamespaceInUse is recognizing, so that\n> the right way to fix it is more like the attached.\n\nGood idea, thanks. Your suggestion looks good to me.\n\n> In the back branches, we could leave isTempNamespaceInUse() in place\n> but unused, just in case somebody is calling it. I kind of doubt that\n> anyone is, given the small usage in core, but maybe.\n\nI doubt that there are any external callers, but I'd rather leave the\npast API in place on back-branches.\n--\nMichael",
"msg_date": "Sat, 29 Feb 2020 08:21:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> On Fri, Feb 28, 2020 at 01:45:29PM -0500, Tom Lane wrote:\n>> After poking around, I see there aren't any other callers. But I think\n>> that the cause of this bug is clearly failure to think carefully about\n>> the different cases that isTempNamespaceInUse is recognizing, so that\n>> the right way to fix it is more like the attached.\n\n> Good idea, thanks. Your suggestion looks good to me.\n\nWill push that, thanks for looking.\n\n>> In the back branches, we could leave isTempNamespaceInUse() in place\n>> but unused, just in case somebody is calling it. I kind of doubt that\n>> anyone is, given the small usage in core, but maybe.\n\n> I doubt that there are any external callers, but I'd rather leave the\n> past API in place on back-branches.\n\nAgreed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 28 Feb 2020 19:23:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
},
{
"msg_contents": "On Fri, Feb 28, 2020 at 07:23:38PM -0500, Tom Lane wrote:\n> Will push that, thanks for looking.\n\nThanks for the commit.\n--\nMichael",
"msg_date": "Sat, 29 Feb 2020 18:46:41 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Assert failure due to \"drop schema pg_temp_3 cascade\" for\n temporary tables and \\d+ is not showing any info after drooping temp table\n schema"
}
] |
[
{
"msg_contents": "Hi,\n\nWhile performing below operations with Master-Slave configuration, Slave is\ncrashed.\nBelow are the steps to reproduce:\n\n-- create a Slave using pg_basebackup and start:\n./pg_basebackup -v -R -D d2 -p 55510\nmkdir /home/centos/ts1\n\n-- Session 1(Master):\n./psql postgres -p 55510\n\nCREATE TABLESPACE ts1 location '/home/centos/ts1';\nCREATE TABLE tab1 (c1 INTEGER, c2 TEXT, c3 point) tablespace ts1;\ninsert into tab1 (select x, x||'_c2',point (x,x) from\ngenerate_series(1,100000) x);\n\n-- Cancel the below update query in middle and then vacuum:\nupdate tab1 set c1=c1+2 , c3=point(10,10) where c1 <=90000;\nvacuum(analyze) tab1(c3, c2);\n\npostgres=# update tab1 set c1=c1+2 , c3=point(10,10) where c1 <=90000;\n^CCancel request sent\nERROR: canceling statement due to user request\n\npostgres=# vacuum(analyze) tab1(c3, c2);\nVACUUM\n\nOR\n\npostgres=# vacuum(analyze) tab1(c3, c2);\nERROR: index \"pg_toast_16385_index\" contains unexpected zero page at block\n0\nHINT: Please REINDEX it.\n\n-- session 2: (slave)\n./psql postgres -p 55520\n\n-- Below select query is crashed:\nselect count(*) from tab1_2;\n\npostgres=# select count(*) from tab1_2;\nWARNING: terminating connection because of crash of another server process\nDETAIL: The postmaster has commanded this server process to roll back the\ncurrent transaction and exit, because another server process exited\nabnormally and possibly corrupted shared memory.\nHINT: In a moment you should be able to reconnect to the database and\nrepeat your command.\nserver closed the connection unexpectedly\nThis probably means the server terminated abnormally\nbefore or while processing the request.\n\n-- Below is the stack trace:\n[centos@parallel-vacuum-testing bin]$ gdb -q -c d2/core.20509 postgres\nReading symbols from /home/centos/PGsrc/postgresql/inst/bin/postgres...done.\n[New LWP 20509]\n[Thread debugging using libthread_db enabled]\nUsing host libthread_db library 
\"/lib64/libthread_db.so.1\".\nCore was generated by `postgres: startup recovering\n000000010000000000000006 '.\nProgram terminated with signal 6, Aborted.\n#0 0x00007f42d2565337 in raise () from /lib64/libc.so.6\nMissing separate debuginfos, use: debuginfo-install\nglibc-2.17-292.el7.x86_64 keyutils-libs-1.5.8-3.el7.x86_64\nkrb5-libs-1.15.1-37.el7_7.2.x86_64 libcom_err-1.42.9-16.el7.x86_64\nlibselinux-2.5-14.1.el7.x86_64 openssl-libs-1.0.2k-19.el7.x86_64\npcre-8.32-17.el7.x86_64 zlib-1.2.7-18.el7.x86_64\n(gdb) bt\n#0 0x00007f42d2565337 in raise () from /lib64/libc.so.6\n#1 0x00007f42d2566a28 in abort () from /lib64/libc.so.6\n#2 0x0000000000a94c55 in errfinish (dummy=0) at elog.c:590\n#3 0x0000000000a9729a in elog_finish (elevel=22, fmt=0xb30a10 \"WAL\ncontains references to invalid pages\") at elog.c:1465\n#4 0x000000000057cb10 in log_invalid_page (node=..., forkno=MAIN_FORKNUM,\nblkno=470, present=false) at xlogutils.c:96\n#5 0x000000000057d64e in XLogReadBufferExtended (rnode=...,\nforknum=MAIN_FORKNUM, blkno=470, mode=RBM_NORMAL) at xlogutils.c:472\n#6 0x000000000057d386 in XLogReadBufferForRedoExtended (record=0x1b4a9c8,\nblock_id=0 '\\000', mode=RBM_NORMAL, get_cleanup_lock=true,\nbuf=0x7ffda55b39d4)\n at xlogutils.c:390\n#7 0x00000000004f12b5 in heap_xlog_clean (record=0x1b4a9c8) at\nheapam.c:7744\n#8 0x00000000004f4ebe in heap2_redo (record=0x1b4a9c8) at heapam.c:8891\n#9 0x000000000056cceb in StartupXLOG () at xlog.c:7202\n#10 0x000000000086cb0c in StartupProcessMain () at startup.c:170\n#11 0x0000000000582150 in AuxiliaryProcessMain (argc=2,\nargv=0x7ffda55b4600) at bootstrap.c:451\n#12 0x000000000086ba0f in StartChildProcess (type=StartupProcess) at\npostmaster.c:5461\n#13 0x000000000086685d in PostmasterMain (argc=5, argv=0x1b49d50) at\npostmaster.c:1392\n#14 0x0000000000775bb1 in main (argc=5, argv=0x1b49d50) at main.c:210\n(gdb)\n\n-- \n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Software India Pvt. 
Ltd.\n\nThe Postgres Database Company",
"msg_date": "Tue, 24 Dec 2019 17:29:25 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Server crash with Master-Slave configuration."
},
{
"msg_contents": "On Tue, Dec 24, 2019 at 05:29:25PM +0530, Prabhat Sahu wrote:\n> While performing below operations with Master-Slave configuration, Slave is\n> crashed.\n> Below are the steps to reproduce:\n> \n> -- create a Slave using pg_basebackup and start:\n> ./pg_basebackup -v -R -D d2 -p 55510\n> mkdir /home/centos/ts1\n> \n> -- Session 1(Master):\n> ./psql postgres -p 55510\n> \n> CREATE TABLESPACE ts1 location '/home/centos/ts1';\n\nYour mistake is here. Both primary and standby are on the same host,\nso CREATE TABLESPACE would point to a path that overlap for both\nclusters as the tablespace path is registered the WAL replayed,\nleading to various weird behaviors. What you need to do instead is to\ncreate the tablespace before taking the base backup, and then take the\nbase backup using pg_basebackup's --tablespace-mapping.\n--\nMichael",
"msg_date": "Wed, 25 Dec 2019 11:31:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Server crash with Master-Slave configuration."
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 8:01 AM Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Tue, Dec 24, 2019 at 05:29:25PM +0530, Prabhat Sahu wrote:\n> > While performing below operations with Master-Slave configuration, Slave\n> is\n> > crashed.\n> > Below are the steps to reproduce:\n> >\n> > -- create a Slave using pg_basebackup and start:\n> > ./pg_basebackup -v -R -D d2 -p 55510\n> > mkdir /home/centos/ts1\n> >\n> > -- Session 1(Master):\n> > ./psql postgres -p 55510\n> >\n> > CREATE TABLESPACE ts1 location '/home/centos/ts1';\n>\n> Your mistake is here. Both primary and standby are on the same host,\n> so CREATE TABLESPACE would point to a path that overlap for both\n> clusters as the tablespace path is registered the WAL replayed,\n> leading to various weird behaviors. What you need to do instead is to\n> create the tablespace before taking the base backup, and then take the\n> base backup using pg_basebackup's --tablespace-mapping.\n\nThanks Michael for pointing it out, I have re-tested the scenario\nwith \"--tablespace-mapping=OLDDIR=NEWDIR\" option of pg_basebackup, and now\nits working fine.\nBut I think, instead of the crash, a proper error message would be better.\n\n\n> --\n> Michael\n>\n\n\n-- \n\nWith Regards,\n\nPrabhat Kumar Sahu\nSkype ID: prabhat.sahu1984\nEnterpriseDB Software India Pvt. Ltd.\n\nThe Postgres Database Company",
"msg_date": "Wed, 25 Dec 2019 11:58:45 +0530",
"msg_from": "Prabhat Sahu <prabhat.sahu@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "Re: Server crash with Master-Slave configuration."
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 1:29 AM Prabhat Sahu <prabhat.sahu@enterprisedb.com>\nwrote:\n\n> Thanks Michael for pointing it out, I have re-tested the scenario\n> with \"--tablespace-mapping=OLDDIR=NEWDIR\" option of pg_basebackup, and now\n> its working fine.\n> But I think, instead of the crash, a proper error message would be\n> better.\n>\n\nIt appears from the stack trace you sent that it emits a PANIC, which seems\nlike a proper error message to me.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Fri, 27 Dec 2019 21:46:09 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Server crash with Master-Slave configuration."
}
] |
[
{
"msg_contents": "I found this comment in fe-connect.c:\n\n /*\n * If GSSAPI is enabled and we have a credential cache, try to\n * set it up before sending startup messages. If it's already\n * operating, don't try SSL and instead just build the startup\n * packet.\n */\n\nI'm not sure I understand this correctly. Why does it say \"just build\nthe startup\" packet about the SSL thing, when in reality the SSL block\nbelow is unrelated to the GSS logic? If I consider that SSL is just a\ntypo for GSS, then the comment doesn't seem to describe the logic\neither, because what it does is go to CONNECTION_GSS_STARTUP state which\n*doesn't* \"build the startup packet\" in the sense of pqBuildStartupPacket2/3,\nbut instead it just does pqPacketSend (which is what the SSL block below\ncalls \"request SSL instead of sending the startup packet\").\n\nAlso, it says \"... and we have a credential cache, try to set it up...\" but I\nthink it should say \"if we *don't* have a credential cache\".\n\nNow that I've read this code half a dozen times, I think I'm starting to\nvaguely understand how it works, but I would have expected the comment\nto explain it so that I didn't have to do that.\n\nCan we discuss a better wording for this comment? I wrote this, but I\ndon't think it captures all the nuances in this code:\n\n /*\n * If GSSAPI is enabled, we need a credential cache; we may\n * already have it, or set it up if not. Then, if we don't\n * have a GSS context, request it and switch to\n * CONNECTION_GSS_STARTUP to wait for the response.\n *\n * Fail altogether if GSS is required but cannot be had.\n */\n\nThanks!\n\n-- \nÁlvaro Herrera http://www.twitter.com/alvherre\n\n\n",
"msg_date": "Tue, 24 Dec 2019 12:15:20 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "weird libpq GSSAPI comment"
},
{
"msg_contents": "Greetings,\n\n(I've added Robbie to this thread, so he can correct me if/when I go\nwrong in my descriptions regarding the depths of GSSAPI ;)\n\n* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> I found this comment in fe-connect.c:\n> \n> /*\n> * If GSSAPI is enabled and we have a credential cache, try to\n> * set it up before sending startup messages. If it's already\n> * operating, don't try SSL and instead just build the startup\n> * packet.\n> */\n> \n> I'm not sure I understand this correctly. Why does it say \"just build\n> the startup\" packet about the SSL thing, when in reality the SSL block\n> below is unrelated to the GSS logic? If I consider that SSL is just a\n> typo for GSS, then the comment doesn't seem to describe the logic\n> either, because what it does is go to CONNECTION_GSS_STARTUP state which\n> *doesn't* \"build the startup packet\" in the sense of pqBuildStartupPacket2/3,\n> but instead it just does pqPacketSend (which is what the SSL block below\n> calls \"request SSL instead of sending the startup packet\").\n\nSSL there isn't a typo for GSS. The \"startup packet\" being referred to\nin that comment is specifically the \"request GSS\" that's sent via the\nfollowing pqPacketSend, not the pqBuildStartupPacket one. I agree\nthat's a bit confusing and we should probably reword that, since\n\"Startup Packet\" has a specific meaning in this area of the code.\n\n> Also, it says \"... and we have a credential cache, try to set it up...\" but I\n> think it should say \"if we *don't* have a credential cache\".\n\nNo, we call pg_GSS_have_cred_cache() here, which goes on to call\ngss_acquire_cred(), which, as the comment above that function says,\nchecks to see if we can acquire credentials by making sure that we *do*\nhave a credential cache. 
If we *don't* have a credential cache, then we\nfall back to SSL (and then to non-SSL).\n\n> Now that I've read this code half a dozen times, I think I'm starting to\n> vaguely understand how it works, but I would have expected the comment\n> to explain it so that I didn't have to do that.\n\nI'm concerned that you don't quite understand it though, I'm afraid.\n\n> Can we discuss a better wording for this comment? I wrote this, but I\n> don't think it captures all the nuances in this code:\n> \n> /*\n> * If GSSAPI is enabled, we need a credential cache; we may\n> * already have it, or set it up if not. Then, if we don't\n> * have a GSS context, request it and switch to\n> * CONNECTION_GSS_STARTUP to wait for the response.\n> *\n> * Fail altogether if GSS is required but cannot be had.\n> */\n\nWe don't set up a credential cache at any point in this code, we only\ncheck to see if one exists, and only in that case do we send a request\nto start GSSAPI encryption (if it's allowed for us to do so).\n\nMaybe part of the confusion here is that there's two different things- a\ncredential cache, and then a credential *handle*. Calling\ngss_acquire_cred() will, if a credential *cache* exists, return to us a\ncredential *handle* (in the form of conn->gcred) that we then pass to\ngss_init_sec_context().\n\nThere's then also a GSS *context* (conn->gctx), which gets set up when\nwe first call gss_init_sec_context(), and is then used throughout a\nconnection.\n\nTypically, the credential cache is actually created when you log into a\nkerberized system, but if not, you can create one by using 'kinit'\nmanually.\n\nHopefully that helps. I'm certainly happy to work with you to reword\nthe comment, of course, but let's make sure there's agreement and\nunderstanding of what the code does first.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 27 Dec 2019 14:48:21 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "On 2019-Dec-27, Stephen Frost wrote:\n\n> Maybe part of the confusion here is that there's two different things- a\n> credential cache, and then a credential *handle*. Calling\n> gss_acquire_cred() will, if a credential *cache* exists, return to us a\n> credential *handle* (in the form of conn->gcred) that we then pass to\n> gss_init_sec_context().\n\nHmm, ok, yeah I certainly didn't understand that -- I was thinking that\nthe call was creating the credential cache itself, not a *handle* to\naccess it (I suppose that terminology must be clear to somebody familiar\nwith GSS).\n\n> Hopefully that helps. I'm certainly happy to work with you to reword\n> the comment, of course, but let's make sure there's agreement and\n> understanding of what the code does first.\n\nHow about this?\n\n * If GSSAPI is enabled and we can reach a credential cache,\n * set up a handle for it; if it's operating, just send a\n * GSS startup message, instead of the SSL negotiation and\n * regular startup message below.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 17:23:32 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n\n> Greetings,\n>\n> (I've added Robbie to this thread, so he can correct me if/when I go\n> wrong in my descriptions regarding the depths of GSSAPI ;)\n\nHi, appreciate the CC since I'm not subscribed anymore. Thanks for your\npatience while I was PTO.\n\n> * Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n>> I found this comment in fe-connect.c:\n>> \n>> /*\n>> * If GSSAPI is enabled and we have a credential cache, try to\n>> * set it up before sending startup messages. If it's already\n>> * operating, don't try SSL and instead just build the startup\n>> * packet.\n>> */\n>> \n>> I'm not sure I understand this correctly. Why does it say \"just\n>> build the startup\" packet about the SSL thing, when in reality the\n>> SSL block below is unrelated to the GSS logic? If I consider that\n>> SSL is just a typo for GSS, then the comment doesn't seem to describe\n>> the logic either, because what it does is go to\n>> CONNECTION_GSS_STARTUP state which *doesn't* \"build the startup\n>> packet\" in the sense of pqBuildStartupPacket2/3, but instead it just\n>> does pqPacketSend (which is what the SSL block below calls \"request\n>> SSL instead of sending the startup packet\").\n>\n> SSL there isn't a typo for GSS. The \"startup packet\" being referred to\n> in that comment is specifically the \"request GSS\" that's sent via the\n> following pqPacketSend, not the pqBuildStartupPacket one. I agree\n> that's a bit confusing and we should probably reword that, since\n> \"Startup Packet\" has a specific meaning in this area of the code.\n\nThe comment refers to the first `if`, mostly. 
The idea is that we want\nto check whether we *can* perform GSSAPI negotiation, and skip it\notherwise - which is determined by attempting to acquire credentials.\nThere will be false positives for this check, but no false negatives,\nand it's a step that GSSAPI performs as part of negotiation anyway so it\ncosts us basically nothing since we cache the result.\n\nThe \"startup packet\" the comment refers to is that just below on 2867 -\nthe pqBuildStartupPacket one. The flow is:\n\n1. Set up GSSAPI, if possible.\n2. Set up TLS, if possible.\n3. Send startup packet.\n\n>> Also, it says \"... and we have a credential cache, try to set it\n>> up...\" but I think it should say \"if we *don't* have a credential\n>> cache\".\n>\n> No, we call pg_GSS_have_cred_cache() here, which goes on to call\n> gss_acquire_cred(), which, as the comment above that function says,\n> checks to see if we can acquire credentials by making sure that we *do*\n> have a credential cache. If we *don't* have a credential cache, then we\n> fall back to SSL (and then to non-SSL).\n\nRight.\n\n>> Now that I've read this code half a dozen times, I think I'm starting\n>> to vaguely understand how it works, but I would have expected the\n>> comment to explain it so that I didn't have to do that.\n>\n> I'm concerned that you don't quite understand it though, I'm afraid.\n\nSame. I tried to model after the TLS code for this. That has the\nfollowing comment:\n\n If SSL is enabled and we haven't already got it running, request it\n instead of sending the startup message.\n\n>> Can we discuss a better wording for this comment? I wrote this, but I\n>> don't think it captures all the nuances in this code:\n>> \n>> /*\n>> * If GSSAPI is enabled, we need a credential cache; we may\n>> * already have it, or set it up if not. 
Then, if we don't\n>> * have a GSS context, request it and switch to\n>> * CONNECTION_GSS_STARTUP to wait for the response.\n>> *\n>> * Fail altogether if GSS is required but cannot be had.\n>> */\n>\n> We don't set up a credential cache at any point in this code, we only\n> check to see if one exists, and only in that case do we send a request\n> to start GSSAPI encryption (if it's allowed for us to do so).\n>\n> Maybe part of the confusion here is that there's two different things- a\n> credential cache, and then a credential *handle*. Calling\n> gss_acquire_cred() will, if a credential *cache* exists, return to us a\n> credential *handle* (in the form of conn->gcred) that we then pass to\n> gss_init_sec_context().\n\nThis is true, though we tend to play fast and loose with that\ndistinction and I'm guilty of doing so here.\n\n> There's then also a GSS *context* (conn->gctx), which gets set up when\n> we first call gss_init_sec_context(), and is then used throughout a\n> connection.\n>\n> Typically, the credential cache is actually created when you log into a\n> kerberized system, but if not, you can create one by using 'kinit'\n> manually.\n>\n> Hopefully that helps. I'm certainly happy to work with you to reword\n> the comment, of course, but let's make sure there's agreement and\n> understanding of what the code does first.\n\nHow do you feel about something like this:\n\n If GSSAPI encryption is enabled and we can acquire GSSAPI\n credentials, try to set up GSSAPI encryption instead of sending the\n startup message. If we succeed, don't set up SSL.\n\nThanks,\n--Robbie",
"msg_date": "Fri, 03 Jan 2020 14:24:00 -0500",
"msg_from": "Robbie Harwood <rharwood@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n\n> How about this?\n>\n> * If GSSAPI is enabled and we can reach a credential cache,\n> * set up a handle for it; if it's operating, just send a\n> * GSS startup message, instead of the SSL negotiation and\n> * regular startup message below.\n\nDue to the way postgres handled this historically, there are two ways\nGSSAPI can be used: for connection encryption, and for authentication\nonly. We perform the same dance of sending a \"request packet\" for\nGSSAPI encryption as we do for TLS encryption. So I'd like us to be\nprecise about which one we're talking about here (encryption).\n\nThe GSSAPI idiom I should have used is \"can acquire credentials\" (i.e.,\ninstead of \"can reach a credential cache\" in your proposal).\n\nThere's no such thing as a \"GSS startup message\". After negotiating\nGSSAPI/TLS encryption (or failing to do so), we send the same things in\nall cases, which includes negotiation of authentication mechanism if\nany. (Negotiating GSSAPI for authentication after negotiating GSSAPI\nfor encryption will short-circuit rather than establishing a second\ncontext, if I remember right.)\n\nI wonder if part of the confusion might be due to the synonyms we're\nusing here for \"in use\". Things seem to be \"got running\", \"set up\",\n\"operating\", \"negotiated\", ... - maybe that's part of the barrier to\nunderstanding?\n\nThanks,\n--Robbie",
"msg_date": "Fri, 03 Jan 2020 15:01:25 -0500",
"msg_from": "Robbie Harwood <rharwood@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "Greetings,\n\n* Robbie Harwood (rharwood@redhat.com) wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> \n> > How about this?\n> >\n> > * If GSSAPI is enabled and we can reach a credential cache,\n> > * set up a handle for it; if it's operating, just send a\n> > * GSS startup message, instead of the SSL negotiation and\n> > * regular startup message below.\n> \n> Due to the way postgres handled this historically, there are two ways\n> GSSAPI can be used: for connection encryption, and for authentication\n> only. We perform the same dance of sending a \"request packet\" for\n> GSSAPI encryption as we do for TLS encryption. So I'd like us to be\n> precise about which one we're talking about here (encryption).\n\nAlright, that's fair.\n\n> The GSSAPI idiom I should have used is \"can acquire credentials\" (i.e.,\n> instead of \"can reach a credential cache\" in your proposal).\n\nOk.\n\n> There's no such thing as a \"GSS startup message\". After negotiating\n> GSSAPI/TLS encryption (or failing to do so), we send the same things in\n> all cases, which includes negotiation of authentication mechanism if\n> any. (Negotiating GSSAPI for authentication after negotiating GSSAPI\n> for encryption will short-circuit rather than establishing a second\n> context, if I remember right.)\n\nYes, you can see that around src/backend/libpq/auth.c:538 where we skip\nstraight to pg_GSS_checkauth() if we already have encryption up and\nrunning, and if we don't then we go through pg_GSS_recvauth() (which\nwill eventually call pg_GSS_checkauth() too).\n\n> I wonder if part of the confusion might be due to the synonyms we're\n> using here for \"in use\". Things seem to be \"got running\", \"set up\",\n> \"operating\", \"negotiated\", ... 
- maybe that's part of the barrier to\n> understanding?\n\nHow about something like this?\n\n * If GSSAPI Encryption is enabled, then call pg_GSS_have_cred_cache()\n * which will return true if we can acquire credentials (and give us a\n * handle to use in conn->gcred), and then send a packet to the server\n * asking for GSSAPI Encryption (and skip past SSL negotiation and\n * regular startup below).\n\nThanks,\n\nStephen",
"msg_date": "Mon, 6 Jan 2020 16:03:22 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n\n>> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n>\n> How about something like this?\n>\n> * If GSSAPI Encryption is enabled, then call pg_GSS_have_cred_cache()\n> * which will return true if we can acquire credentials (and give us a\n> * handle to use in conn->gcred), and then send a packet to the server\n> * asking for GSSAPI Encryption (and skip past SSL negotiation and\n> * regular startup below).\n\nThis looks correct to me (and uses plenty of parentheticals, so it feels\nin keeping with something I'd write) :)\n\nThanks,\n--Robbie",
"msg_date": "Mon, 06 Jan 2020 16:21:30 -0500",
"msg_from": "Robbie Harwood <rharwood@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "On 2020-Jan-06, Stephen Frost wrote:\n\n> > I wonder if part of the confusion might be due to the synonyms we're\n> > using here for \"in use\". Things seem to be \"got running\", \"set up\",\n> > \"operating\", \"negotiated\", ... - maybe that's part of the barrier to\n> > understanding?\n> \n> How about something like this?\n> \n> * If GSSAPI Encryption is enabled, then call pg_GSS_have_cred_cache()\n> * which will return true if we can acquire credentials (and give us a\n> * handle to use in conn->gcred), and then send a packet to the server\n> * asking for GSSAPI Encryption (and skip past SSL negotiation and\n> * regular startup below).\n\nWFM. (I'm not sure why you uppercase Encryption, though.)\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jan 2020 18:38:17 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "Hello,\n\nOn 2020-Jan-06, Robbie Harwood wrote:\n\n> This looks correct to me (and uses plenty of parentheticals, so it feels\n> in keeping with something I'd write) :)\n\n(You know, long ago I used to write with a lot of parenthicals (even\nnested ones). But I read somewhere that prose is more natural for\nnormal people without them, so I mostly stopped using them.)\n\nCheers,\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jan 2020 18:41:45 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "Greetings,\n\n* Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> On 2020-Jan-06, Stephen Frost wrote:\n> \n> > > I wonder if part of the confusion might be due to the synonyms we're\n> > > using here for \"in use\". Things seem to be \"got running\", \"set up\",\n> > > \"operating\", \"negotiated\", ... - maybe that's part of the barrier to\n> > > understanding?\n> > \n> > How about something like this?\n> > \n> > * If GSSAPI Encryption is enabled, then call pg_GSS_have_cred_cache()\n> > * which will return true if we can acquire credentials (and give us a\n> > * handle to use in conn->gcred), and then send a packet to the server\n> > * asking for GSSAPI Encryption (and skip past SSL negotiation and\n> > * regular startup below).\n> \n> WFM. (I'm not sure why you uppercase Encryption, though.)\n\nOk, great, attached is an actual patch which I'll push soon if there\naren't any other comments.\n\nThanks!\n\nStephen",
"msg_date": "Mon, 6 Jan 2020 16:53:49 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: weird libpq GSSAPI comment"
},
{
"msg_contents": "Greetings,\n\n* Stephen Frost (sfrost@snowman.net) wrote:\n> * Alvaro Herrera (alvherre@2ndquadrant.com) wrote:\n> > On 2020-Jan-06, Stephen Frost wrote:\n> > > > I wonder if part of the confusion might be due to the synonyms we're\n> > > > using here for \"in use\". Things seem to be \"got running\", \"set up\",\n> > > > \"operating\", \"negotiated\", ... - maybe that's part of the barrier to\n> > > > understanding?\n> > > \n> > > How about something like this?\n> > > \n> > > * If GSSAPI Encryption is enabled, then call pg_GSS_have_cred_cache()\n> > > * which will return true if we can acquire credentials (and give us a\n> > > * handle to use in conn->gcred), and then send a packet to the server\n> > > * asking for GSSAPI Encryption (and skip past SSL negotiation and\n> > > * regular startup below).\n> > \n> > WFM. (I'm not sure why you uppercase Encryption, though.)\n> \n> Ok, great, attached is an actual patch which I'll push soon if there\n> aren't any other comments.\n\nPushed.\n\nThanks!\n\nStephen",
"msg_date": "Wed, 8 Jan 2020 10:58:09 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: weird libpq GSSAPI comment"
}
] |
[
{
"msg_contents": "Hi Hackers,\n\nI have accidentally noticed that pg_replication_slot_advance only \nchanges in-memory state of the slot when its type is physical. Its new \nvalue does not survive restart.\n\nReproduction steps:\n\n1) Create new slot and remember its restart_lsn\n\nSELECT pg_create_physical_replication_slot('slot1', true);\nSELECT * from pg_replication_slots;\n\n2) Generate some dummy WAL\n\nCHECKPOINT;\nSELECT pg_switch_wal();\nCHECKPOINT;\nSELECT pg_switch_wal();\n\n3) Advance slot to the value of pg_current_wal_insert_lsn()\n\nSELECT pg_replication_slot_advance('slot1', '0/160001A0');\n\n4) Check that restart_lsn has been updated\n\nSELECT * from pg_replication_slots;\n\n5) Restart server and check restart_lsn again. It should be the same as \nin the step 1.\n\n\nI dig into the code and it happens because of this if statement:\n\n /* Update the on disk state when lsn was updated. */\n if (XLogRecPtrIsInvalid(endlsn))\n {\n ReplicationSlotMarkDirty();\n ReplicationSlotsComputeRequiredXmin(false);\n ReplicationSlotsComputeRequiredLSN();\n ReplicationSlotSave();\n }\n\nActually, endlsn is always a valid LSN after the execution of \nreplication slot advance guts. It works for logical slots only by \nchance, since there is an implicit ReplicationSlotMarkDirty() call \ninside LogicalConfirmReceivedLocation.\n\nAttached is a small patch, which fixes this bug. I have tried to\nstick to the same logic in this 'if (XLogRecPtrIsInvalid(endlsn))'\nand now pg_logical_replication_slot_advance and\npg_physical_replication_slot_advance return InvalidXLogRecPtr if\nno-op.\n\nWhat do you think?\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\nP.S. CCed Simon and Michael as they are the last who seriously touched \npg_replication_slot_advance code.",
"msg_date": "Tue, 24 Dec 2019 20:12:32 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Physical replication slot advance is not persistent"
},
{
"msg_contents": "At Tue, 24 Dec 2019 20:12:32 +0300, Alexey Kondratov <a.kondratov@postgrespro.ru> wrote in \n> I dig into the code and it happens because of this if statement:\n> \n> /* Update the on disk state when lsn was updated. */\n> if (XLogRecPtrIsInvalid(endlsn))\n> {\n> ReplicationSlotMarkDirty();\n> ReplicationSlotsComputeRequiredXmin(false);\n> ReplicationSlotsComputeRequiredLSN();\n> ReplicationSlotSave();\n> }\n\nYes, it seems just broken.\n\n> Attached is a small patch, which fixes this bug. I have tried to\n> stick to the same logic in this 'if (XLogRecPtrIsInvalid(endlsn))'\n> and now pg_logical_replication_slot_advance and\n> pg_physical_replication_slot_advance return InvalidXLogRecPtr if\n> no-op.\n> \n> What do you think?\n\nI think we shoudn't change the definition of\npg_*_replication_slot_advance since the result is user-facing.\n\nThe functions return a invalid value only when the slot had the\ninvalid value and failed to move the position. I think that happens\nonly for uninitalized slots.\n\nAnyway what we should do there is dirtying the slot when the operation\ncan be assumed to have been succeeded.\n\nAs the result I think what is needed there is just checking if the\nreturned lsn is equal or larger than moveto. Doen't the following\nchange work?\n\n-\tif (XLogRecPtrIsInvalid(endlsn))\n+\tif (moveto <= endlsn)\n\nreagrds.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 25 Dec 2019 13:03:37 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 25.12.2019 07:03, Kyotaro Horiguchi wrote:\n> At Tue, 24 Dec 2019 20:12:32 +0300, Alexey Kondratov <a.kondratov@postgrespro.ru> wrote in\n>> I dig into the code and it happens because of this if statement:\n>>\n>> /* Update the on disk state when lsn was updated. */\n>> if (XLogRecPtrIsInvalid(endlsn))\n>> {\n>> ReplicationSlotMarkDirty();\n>> ReplicationSlotsComputeRequiredXmin(false);\n>> ReplicationSlotsComputeRequiredLSN();\n>> ReplicationSlotSave();\n>> }\n> Yes, it seems just broken.\n>\n>> Attached is a small patch, which fixes this bug. I have tried to\n>> stick to the same logic in this 'if (XLogRecPtrIsInvalid(endlsn))'\n>> and now pg_logical_replication_slot_advance and\n>> pg_physical_replication_slot_advance return InvalidXLogRecPtr if\n>> no-op.\n>>\n>> What do you think?\n> I think we shoudn't change the definition of\n> pg_*_replication_slot_advance since the result is user-facing.\n\nYes, that was my main concern too. OK.\n\n> The functions return a invalid value only when the slot had the\n> invalid value and failed to move the position. I think that happens\n> only for uninitalized slots.\n>\n> Anyway what we should do there is dirtying the slot when the operation\n> can be assumed to have been succeeded.\n>\n> As the result I think what is needed there is just checking if the\n> returned lsn is equal or larger than moveto. Doen't the following\n> change work?\n>\n> -\tif (XLogRecPtrIsInvalid(endlsn))\n> +\tif (moveto <= endlsn)\n\nYep, it helps with physical replication slot persistence after advance, \nbut the whole validation (moveto <= endlsn) does not make sense for me. \nThe value of moveto should be >= than minlsn == confirmed_flush / \nrestart_lsn, while endlsn == retlsn is also always initialized with \nconfirmed_flush / restart_lsn. 
Thus, your condition seems to be true in \nany case, even if it was no-op one, which we were intended to catch.\n\nActually, if we do not want to change pg_*_replication_slot_advance, we \ncan just add straightforward validation that either confirmed_flush, or \nrestart_lsn changed after slot advance guts execution. It will be a \nlittle bit bulky, but much more clear and will never be affected by \npg_*_replication_slot_advance logic change.\n\n\nAnother weird part I have found is this assignment inside \npg_logical_replication_slot_advance:\n\n/* Initialize our return value in case we don't do anything */\nretlsn = MyReplicationSlot->data.confirmed_flush;\n\nIt looks redundant, since later we do the same assignment, which should \nbe reachable in any case.\n\nI will recheck everything again and try to come up with something during \nthis week.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Wed, 25 Dec 2019 16:51:57 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 25.12.2019 16:51, Alexey Kondratov wrote:\n> On 25.12.2019 07:03, Kyotaro Horiguchi wrote:\n>> As the result I think what is needed there is just checking if the\n>> returned lsn is equal or larger than moveto. Doen't the following\n>> change work?\n>>\n>> - if (XLogRecPtrIsInvalid(endlsn))\n>> + if (moveto <= endlsn)\n>\n> Yep, it helps with physical replication slot persistence after \n> advance, but the whole validation (moveto <= endlsn) does not make \n> sense for me. The value of moveto should be >= than minlsn == \n> confirmed_flush / restart_lsn, while endlsn == retlsn is also always \n> initialized with confirmed_flush / restart_lsn. Thus, your condition \n> seems to be true in any case, even if it was no-op one, which we were \n> intended to catch.\n>\n> I will recheck everything again and try to come up with something \n> during this week.\n\nIf I get it correctly, then we already keep previous slot position in \nthe minlsn, so we just have to compare endlsn with minlsn and treat \nendlsn <= minlsn as a no-op without slot state flushing.\n\nAttached is a patch that does this, so it fixes the bug without \naffecting any user-facing behavior. Detailed comment section and DEBUG \noutput are also added. What do you think now?\n\nI have also forgotten to mention that all versions down to 11.0 should \nbe affected with this bug.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Wed, 25 Dec 2019 20:28:04 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "At Wed, 25 Dec 2019 20:28:04 +0300, Alexey Kondratov <a.kondratov@postgrespro.ru> wrote in \n> > Yep, it helps with physical replication slot persistence after\n> > advance, but the whole validation (moveto <= endlsn) does not make\n> > sense for me. The value of moveto should be >= than minlsn ==\n> > confirmed_flush / restart_lsn, while endlsn == retlsn is also always\n> > initialized with confirmed_flush / restart_lsn. Thus, your condition\n> > seems to be true in any case, even if it was no-op one, which we were\n> > intended to catch.\n...\n> If I get it correctly, then we already keep previous slot position in\n> the minlsn, so we just have to compare endlsn with minlsn and treat\n> endlsn <= minlsn as a no-op without slot state flushing.\n\nI think you're right about the condition. (endlsn cannot be less than\nminlsn, though) But I came to think that we shouldn't use locations in\nthat decision.\n\n> Attached is a patch that does this, so it fixes the bug without\n> affecting any user-facing behavior. Detailed comment section and DEBUG\n> output are also added. What do you think now?\n> \n> I have also forgotten to mention that all versions down to 11.0 should\n> be affected with this bug.\n\npg_replication_slot_advance is the only caller of\npg_logical/physical_replication_slot_advacne so there's no apparent\ndeterminant on who-does-what about dirtying and other housekeeping\ncalculation like *ComputeRequired*() functions, but the current shape\nseems a kind of inconsistent between logical and physical.\n\nI think pg_logaical/physical_replication_slot_advance should dirty the\nslot if they actually changed anything. And\npg_replication_slot_advance should do the housekeeping if the slots\nare dirtied. (Otherwise both the caller function should dirty the\nslot in lieu of the two.)\n\nThe attached does that.\n\nregards.\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 26 Dec 2019 17:33:49 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 26.12.2019 11:33, Kyotaro Horiguchi wrote:\n> At Wed, 25 Dec 2019 20:28:04 +0300, Alexey Kondratov <a.kondratov@postgrespro.ru> wrote in\n>>> Yep, it helps with physical replication slot persistence after\n>>> advance, but the whole validation (moveto <= endlsn) does not make\n>>> sense for me. The value of moveto should be >= than minlsn ==\n>>> confirmed_flush / restart_lsn, while endlsn == retlsn is also always\n>>> initialized with confirmed_flush / restart_lsn. Thus, your condition\n>>> seems to be true in any case, even if it was no-op one, which we were\n>>> intended to catch.\n> ...\n>> If I get it correctly, then we already keep previous slot position in\n>> the minlsn, so we just have to compare endlsn with minlsn and treat\n>> endlsn <= minlsn as a no-op without slot state flushing.\n> I think you're right about the condition. (endlsn cannot be less than\n> minlsn, though) But I came to think that we shouldn't use locations in\n> that decision.\n>\n>> Attached is a patch that does this, so it fixes the bug without\n>> affecting any user-facing behavior. Detailed comment section and DEBUG\n>> output are also added. What do you think now?\n>>\n>> I have also forgotten to mention that all versions down to 11.0 should\n>> be affected with this bug.\n> pg_replication_slot_advance is the only caller of\n> pg_logical/physical_replication_slot_advacne so there's no apparent\n> determinant on who-does-what about dirtying and other housekeeping\n> calculation like *ComputeRequired*() functions, but the current shape\n> seems a kind of inconsistent between logical and physical.\n>\n> I think pg_logaical/physical_replication_slot_advance should dirty the\n> slot if they actually changed anything. And\n> pg_replication_slot_advance should do the housekeeping if the slots\n> are dirtied. 
(Otherwise both the caller function should dirty the\n> slot in lieu of the two.)\n>\n> The attached does that.\n\nBoth approaches looks fine for me: my last patch with as minimal \nintervention as possible and yours refactoring. I think that it is a \nright direction to let everyone who modifies slot->data also mark slot \nas dirty.\n\nI found some comment section in your code as rather misleading:\n\n+��� ��� /*\n+�������� * We don't need to dirty the slot only for the above change, \nbut dirty\n+��� ��� �* this slot for the same reason with\n+��� ��� �* pg_logical_replication_slot_advance.\n+��� ��� �*/\n\nWe just modified MyReplicationSlot->data, which is \"On-Disk data of a \nreplication slot, preserved across restarts.\", so it definitely should \nbe marked as dirty, not because pg_logical_replication_slot_advance does \nthe same.\n\nAlso I think that using this transient variable in \nReplicationSlotIsDirty is not necessary. MyReplicationSlot is already a \npointer to the slot in shared memory.\n\n+��� ReplicationSlot *slot = MyReplicationSlot;\n+\n+��� Assert(MyReplicationSlot != NULL);\n+\n+��� SpinLockAcquire(&slot->mutex);\n\nOtherwise it looks fine for me, so attached is the same diff, but with \nthese proposed corrections.\n\nAnother concern is that ReplicationSlotIsDirty is added with the only \none user. It also cannot be used by SaveSlotToPath due to the \nsimultaneous usage of both flags dirty and just_dirtied there.\n\nIn that way, I hope that we should call ReplicationSlotSave \nunconditionally in the pg_replication_slot_advance, so slot will be \nsaved or not automatically based on the slot->dirty flag. In the same \ntime, ReplicationSlotsComputeRequiredXmin and \nReplicationSlotsComputeRequiredLSN should be called by anyone, who \nmodifies xmin and LSN fields in the slot. 
Otherwise, currently we are \ngetting some leaky abstractions.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 26 Dec 2019 16:35:31 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 2019-12-26 16:35, Alexey Kondratov wrote:\n> \n> Another concern is that ReplicationSlotIsDirty is added with the only\n> one user. It also cannot be used by SaveSlotToPath due to the\n> simultaneous usage of both flags dirty and just_dirtied there.\n> \n> In that way, I hope that we should call ReplicationSlotSave\n> unconditionally in the pg_replication_slot_advance, so slot will be\n> saved or not automatically based on the slot->dirty flag. In the same\n> time, ReplicationSlotsComputeRequiredXmin and\n> ReplicationSlotsComputeRequiredLSN should be called by anyone, who\n> modifies xmin and LSN fields in the slot. Otherwise, currently we are\n> getting some leaky abstractions.\n> \n\nIt seems that there was even a race in the order of actions inside \npg_replication_slot_advance, it did following:\n\n- ReplicationSlotMarkDirty();\n- ReplicationSlotsComputeRequiredXmin(false);\n- ReplicationSlotsComputeRequiredLSN();\n- ReplicationSlotSave();\n\n1) Mark slot as dirty, which actually does nothing immediately, but \nsetting dirty flag;\n2) Do compute new global required LSN;\n3) Flush slot state to disk.\n\nIf someone will utilise old WAL and after that crash will happen between \nsteps 2) and 3), then we start with old value of restart_lsn, but \nwithout required WAL. I do not know how to properly reproduce it without \ngdb and power off, so the chance is pretty low, but still it could be a \ncase.\n\nLogical slots were not affected again, since there was a proper \noperations order (with comments) and slot flushing routines inside \nLogicalConfirmReceivedLocation.\n\nThus, in the attached patch I have decided to do not perform slot \nflushing in the pg_replication_slot_advance at all and do it in the \npg_physical_replication_slot_advance instead, as it is done in the \nLogicalConfirmReceivedLocation.\n\nSince this bugfix have not moved forward during the week, I will put it \non the 01.2020 commitfest. 
Kyotaro, if you do not object I will add you \nas a reviewer, as you have already gave a lot of feedback, thank you for \nthat!\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Sun, 29 Dec 2019 15:12:16 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "Hello.\n\nAt Sun, 29 Dec 2019 15:12:16 +0300, Alexey Kondratov <a.kondratov@postgrespro.ru> wrote in \n> On 2019-12-26 16:35, Alexey Kondratov wrote:\n> > Another concern is that ReplicationSlotIsDirty is added with the only\n> > one user. It also cannot be used by SaveSlotToPath due to the\n> > simultaneous usage of both flags dirty and just_dirtied there.\n> > In that way, I hope that we should call ReplicationSlotSave\n> > unconditionally in the pg_replication_slot_advance, so slot will be\n> > saved or not automatically based on the slot->dirty flag. In the same\n> > time, ReplicationSlotsComputeRequiredXmin and\n> > ReplicationSlotsComputeRequiredLSN should be called by anyone, who\n> > modifies xmin and LSN fields in the slot. Otherwise, currently we are\n> > getting some leaky abstractions.\n\nSounds reasonable.\n\n> It seems that there was even a race in the order of actions inside\n> pg_replication_slot_advance, it did following:\n> \n> - ReplicationSlotMarkDirty();\n> - ReplicationSlotsComputeRequiredXmin(false);\n> - ReplicationSlotsComputeRequiredLSN();\n> - ReplicationSlotSave();\n> \n> 1) Mark slot as dirty, which actually does nothing immediately, but\n> setting dirty flag;\n> 2) Do compute new global required LSN;\n> 3) Flush slot state to disk.\n> \n> If someone will utilise old WAL and after that crash will happen\n> between steps 2) and 3), then we start with old value of restart_lsn,\n> but without required WAL. I do not know how to properly reproduce it\n> without gdb and power off, so the chance is pretty low, but still it\n> could be a case.\n\nIn the first place we advance required LSN for every reply message but\nsave slot data only at checkpoint on physical repliation. 
Such a\nstrict guarantee seems too much.\n\nOr we might need to save dirty slots just before the required LSN goes\ninto the next segment, but it would be a separate issue.\n\n> Logical slots were not affected again, since there was a proper\n> operations order (with comments) and slot flushing routines inside\n> LogicalConfirmReceivedLocation.\n\ncopy_replication_slot doen't follow that, but the function can go into\nthe similar situation from a bit different cause. If the required LSN\nhad been advanced by a move of the original slot before the function\nrecomputes the required LSN, there could be a case where the new slot\nis missing required WAL segment. But that is a defferent issue, too.\n\n> Thus, in the attached patch I have decided to do not perform slot\n> flushing in the pg_replication_slot_advance at all and do it in the\n> pg_physical_replication_slot_advance instead, as it is done in the\n> LogicalConfirmReceivedLocation.\n\nThat causes a logical slot not being saved when only confirmed_flush\nwas changed. (I'm not sure about that the slot would be saved twice if\nother than confirmed_flush had been changed..)\n\n> Since this bugfix have not moved forward during the week, I will put\n> it on the 01.2020 commitfest. Kyotaro, if you do not object I will add\n> you as a reviewer, as you have already gave a lot of feedback, thank\n> you for that!\n\nI'm fine with that.\n\n\n+\t\t/* Compute global required LSN if restart_lsn was changed */\n+\t\tif (updated_restart)\n+\t\t\tReplicationSlotsComputeRequiredLSN();\n..\n-\t\t\tReplicationSlotsComputeRequiredLSN();\n\n\nI seems intentional, considering performance, based on the same\nthought as the comment of PhysicalConfirmReceivedLocation.\n\nI think we shouldn't touch the paths used by replication protocol. And\ndon't we focus on how we make a change of a replication slot from SQL\ninterface persistent? 
It seems to me that generaly we don't need to\nsave dirty slots other than checkpoint, but the SQL function seems\nwanting the change to be saved immediately.\n\nAs the result, please find the attached, which is following only the\nfirst paragraph cited above.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 09 Jan 2020 15:36:07 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 09.01.2020 09:36, Kyotaro Horiguchi wrote:\n> Hello.\n>\n> At Sun, 29 Dec 2019 15:12:16 +0300, Alexey Kondratov <a.kondratov@postgrespro.ru> wrote in\n>> On 2019-12-26 16:35, Alexey Kondratov wrote:\n>>> Another concern is that ReplicationSlotIsDirty is added with the only\n>>> one user. It also cannot be used by SaveSlotToPath due to the\n>>> simultaneous usage of both flags dirty and just_dirtied there.\n>>> In that way, I hope that we should call ReplicationSlotSave\n>>> unconditionally in the pg_replication_slot_advance, so slot will be\n>>> saved or not automatically based on the slot->dirty flag. In the same\n>>> time, ReplicationSlotsComputeRequiredXmin and\n>>> ReplicationSlotsComputeRequiredLSN should be called by anyone, who\n>>> modifies xmin and LSN fields in the slot. Otherwise, currently we are\n>>> getting some leaky abstractions.\n> Sounds reasonable.\n\nGreat, so it seems that we have reached some agreement about who should \nmark slot as dirty, at least for now.\n\n>\n>> If someone will utilise old WAL and after that crash will happen\n>> between steps 2) and 3), then we start with old value of restart_lsn,\n>> but without required WAL. I do not know how to properly reproduce it\n>> without gdb and power off, so the chance is pretty low, but still it\n>> could be a case.\n> In the first place we advance required LSN for every reply message but\n> save slot data only at checkpoint on physical repliation. Such a\n> strict guarantee seems too much.\n>\n> ...\n>\n> I think we shouldn't touch the paths used by replication protocol. And\n> don't we focus on how we make a change of a replication slot from SQL\n> interface persistent? 
It seems to me that generaly we don't need to\n> save dirty slots other than checkpoint, but the SQL function seems\n> wanting the change to be saved immediately.\n>\n> As the result, please find the attached, which is following only the\n> first paragraph cited above.\n\nOK, I have definitely overthought that, thanks. This looks like a \nminimal subset of changes that actually solves the bug. I would only \nprefer to keep some additional comments (something like the attached), \notherwise after half a year it will be unclear again, why we save slot \nunconditionally here.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 16 Jan 2020 20:09:09 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 08:09:09PM +0300, Alexey Kondratov wrote:\n> OK, I have definitely overthought that, thanks. This looks like a minimal\n> subset of changes that actually solves the bug. I would only prefer to keep\n> some additional comments (something like the attached), otherwise after half\n> a year it will be unclear again, why we save slot unconditionally here.\n\nSince this email, Andres has sent an email that did not reach the\ncommunity lists, but where all the participants of this thread were in\nCC. Here is a summary of the points raised (please correct me if that\ndoes not sound right to you, Andres):\n1) The slot advancing has to mark the slot as dirty, but should we\nmake the change persistent at the end of the function or should we\nwait for a checkpoint to do the work, meaning that any update done to\nthe slot would be lost if a crash occurs in-between? Note that we\nhave this commit in slotfuncs.c for\npg_logical_replication_slot_advance():\n * Dirty the slot so it's written out at the next checkpoint.\n * We'll still lose its position on crash, as documented, but it's\n * better than always losing the position even on clean restart.\n\nThis comment refers to the documentation for the logical decoding\nsection (see logicaldecoding-replication-slots in\nlogicaldecoding.sgml), and even if nothing can be done until the slot\nadvance function reaches its hand, we ought to make the data\npersistent if we can.\n\nThe original commit that introduced slot advancing is 9c7d06d. Here\nis the thread, where this point was not really mentioned by the way:\nhttps://www.postgresql.org/message-id/5c26ff40-8452-fb13-1bea-56a0338a809a@2ndquadrant.com\n\n2) pg_replication_slot_advance() includes this code, which is broken:\n /* Update the on disk state when lsn was updated. 
*/\n if (XLogRecPtrIsInvalid(endlsn))\n {\n ReplicationSlotMarkDirty();\n ReplicationSlotsComputeRequiredXmin(false);\n ReplicationSlotsComputeRequiredLSN();\n ReplicationSlotSave();\n }\nHere the deal is that endlsn, aka the LSN where the slot has been\nadvanced (or its current position if no progress has been done) never \ngets to be set to InvalidXLogRecPtr as of f731cfa, and that this work\nshould be done only when endlsn has done some progress. It seems to\nme that this should have been the opposite to begin with in 9c7d06d,\naka do the save if endlsn is valid.\n\n3) The amount of testing related to slot advancing could be better\nwith cluster-wide operations.\n\n@@ -370,6 +370,11 @@ pg_physical_replication_slot_advance(XLogRecPtr\nmoveto)\n MyReplicationSlot->data.restart_lsn = moveto;\n\n SpinLockRelease(&MyReplicationSlot->mutex);\n retlsn = moveto;\n+\n+ ReplicationSlotMarkDirty();\n+\n+ /* We moved retart_lsn, update the global value. */\n+ ReplicationSlotsComputeRequiredLSN();\nI think that the proposed patch is missing a call to\nReplicationSlotsComputeRequiredXmin() here for physical slots.\n\nSo, I have been looking at this patch by myself, and updated it so as\nthe extra slot save is done only if any advancing has been done, on\ntop of the other computations that had better be around for\nconsistency. The patch includes TAP tests for physical and logical\nslots' durability across restarts.\n\nThoughts?\n--\nMichael",
"msg_date": "Mon, 20 Jan 2020 15:45:40 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 20 Jan 2020, 09:45 +0300, Michael Paquier <michael@paquier.xyz>, wrote:\n>\n> So, I have been looking at this patch by myself, and updated it so as\n> the extra slot save is done only if any advancing has been done, on\n> top of the other computations that had better be around for\n> consistency. The patch includes TAP tests for physical and logical\n> slots' durability across restarts.\n>\n> Thoughts?\n\n\nI still think that this extra check of whether any advance just happened or not just adds extra complexity. We could use slot dirtiness for the same purpose and slot saving routines check it automatically. Anyway, approach with adding a new flag should resolve this bug as well, of course, and maybe it will be a bit more transparent and explicit.\n\nJust eyeballed your patch and it looks fine at a first glance, excepting the logical slot advance part:\n\n retlsn = MyReplicationSlot->data.confirmed_flush;\n+ *advance_done = true;\n\n /* free context, call shutdown callback */\n FreeDecodingContext(ctx);\n\nI am not sure that this is a right place to set advance_done flag to true. Wouldn’t it be set to true even in the case of no-op, when LogicalConfirmReceivedLocation was never executed? Probably we should set the flag near the LogicalConfirmReceivedLocation call?\n\n\n--\nAlexey Kondratov\n\n\n\n\n\n\n\nOn 20 Jan 2020, 09:45 +0300, Michael Paquier <michael@paquier.xyz>, wrote:\n\nSo, I have been looking at this patch by myself, and updated it so as\nthe extra slot save is done only if any advancing has been done, on\ntop of the other computations that had better be around for\nconsistency. The patch includes TAP tests for physical and logical\nslots' durability across restarts. \n\nThoughts? \n\n\nI still think that this extra check of whether any advance just happened or not just adds extra complexity. We could use slot dirtiness for the same purpose and slot saving routines check it automatically. 
Anyway, approach with adding a new flag should resolve this bug as well, of course, and maybe it will be a bit more transparent and explicit.\n\nJust eyeballed your patch and it looks fine at a first glance, excepting the logical slot advance part:\n\n\n retlsn = MyReplicationSlot->data.confirmed_flush;\n+ *advance_done = true;\n \n /* free context, call shutdown callback */\n FreeDecodingContext(ctx);\n\nI am not sure that this is a right place to set advance_done flag to true. Wouldn’t it be set to true even in the case of no-op, when LogicalConfirmReceivedLocation was never executed? Probably we should set the flag near the LogicalConfirmReceivedLocation call?\n\n\n--\nAlexey Kondratov",
"msg_date": "Mon, 20 Jan 2020 17:50:06 +0300",
"msg_from": "a.kondratov@postgrespro.ru",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-20 15:45:40 +0900, Michael Paquier wrote:\n> On Thu, Jan 16, 2020 at 08:09:09PM +0300, Alexey Kondratov wrote:\n> > OK, I have definitely overthought that, thanks. This looks like a minimal\n> > subset of changes that actually solves the bug. I would only prefer to keep\n> > some additional comments (something like the attached), otherwise after half\n> > a year it will be unclear again, why we save slot unconditionally here.\n> \n> Since this email, Andres has sent an email that did not reach the\n> community lists, but where all the participants of this thread were in\n> CC.\n\nUgh, that was an accident.\n\n\n> Here is a summary of the points raised (please correct me if that\n> does not sound right to you, Andres):\n\n> 1) The slot advancing has to mark the slot as dirty, but should we\n> make the change persistent at the end of the function or should we\n> wait for a checkpoint to do the work, meaning that any update done to\n> the slot would be lost if a crash occurs in-between? 
Note that we\n> have this commit in slotfuncs.c for\n> pg_logical_replication_slot_advance():\n> * Dirty the slot so it's written out at the next checkpoint.\n> * We'll still lose its position on crash, as documented, but it's\n> * better than always losing the position even on clean restart.\n> \n> This comment refers to the documentation for the logical decoding\n> section (see logicaldecoding-replication-slots in\n> logicaldecoding.sgml), and even if nothing can be done until the slot\n> advance function reaches its hand, we ought to make the data\n> persistent if we can.\n\nThat doesn't really seem like a meaningful reference, because the\nconcerns between constantly streaming out changes (where we don't want\nto fsync every single transaction), and doing so in a manual advance\nthrough an sql function, seem different.\n\n\n> 3) The amount of testing related to slot advancing could be better\n> with cluster-wide operations.\n> \n> @@ -370,6 +370,11 @@ pg_physical_replication_slot_advance(XLogRecPtr\n> moveto)\n> MyReplicationSlot->data.restart_lsn = moveto;\n> \n> SpinLockRelease(&MyReplicationSlot->mutex);\n> retlsn = moveto;\n> +\n> + ReplicationSlotMarkDirty();\n> +\n> + /* We moved retart_lsn, update the global value. */\n> + ReplicationSlotsComputeRequiredLSN();\n> I think that the proposed patch is missing a call to\n> ReplicationSlotsComputeRequiredXmin() here for physical slots.\n\nHm. 
It seems ok to include, but I don't think omitting it currently has\nnegative effects?\n\n\n> So, I have been looking at this patch by myself, and updated it so as\n> the extra slot save is done only if any advancing has been done, on\n> top of the other computations that had better be around for\n> consistency.\n\nHm, I don't necessarily think that's necessary.\n\n\n> The patch includes TAP tests for physical and logical slots'\n> durability across restarts.\n\nCool!\n\n\n> diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c\n> index bb69683e2a..af3e114fc9 100644\n> --- a/src/backend/replication/slotfuncs.c\n> +++ b/src/backend/replication/slotfuncs.c\n> @@ -359,17 +359,20 @@ pg_get_replication_slots(PG_FUNCTION_ARGS)\n> * checkpoints.\n> */\n> static XLogRecPtr\n> -pg_physical_replication_slot_advance(XLogRecPtr moveto)\n> +pg_physical_replication_slot_advance(XLogRecPtr moveto, bool *advance_done)\n> {\n> \tXLogRecPtr\tstartlsn = MyReplicationSlot->data.restart_lsn;\n> \tXLogRecPtr\tretlsn = startlsn;\n> \n> +\t*advance_done = false;\n> +\n> \tif (startlsn < moveto)\n> \t{\n> \t\tSpinLockAcquire(&MyReplicationSlot->mutex);\n> \t\tMyReplicationSlot->data.restart_lsn = moveto;\n> \t\tSpinLockRelease(&MyReplicationSlot->mutex);\n> \t\tretlsn = moveto;\n> +\t\t*advance_done = true;\n> \t}\n> \n> \treturn retlsn;\n\nHm. Why did you choose not to use endlsn as before (except being\nbroken), or something? It seems quite conceivable somebody is using\nthese functions in an extension.\n\n\n\n\n> +# Test physical slot advancing and its durability. Create a new slot on\n> +# the primary, not used by any of the standbys. 
This reserves WAL at creation.\n> +my $phys_slot = 'phys_slot';\n> +$node_master->safe_psql('postgres',\n> +\t\"SELECT pg_create_physical_replication_slot('$phys_slot', true);\");\n> +$node_master->psql('postgres', \"\n> +\tCREATE TABLE tab_phys_slot (a int);\n> +\tINSERT INTO tab_phys_slot VALUES (generate_series(1,10));\");\n> +my $psql_rc = $node_master->psql('postgres',\n> +\t\"SELECT pg_replication_slot_advance('$phys_slot', 'FF/FFFFFFFF');\");\n> +is($psql_rc, '0', 'slot advancing works with physical slot');\n\nHm. I think testing this with real LSNs is a better idea. What if the\nnode actually already is past FF/FFFFFFFF at this point? Quite unlikely,\nI know, but still. I.e. why not get the current LSN after the INSERT,\nand assert that the slot, after the restart, is that?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Jan 2020 11:00:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "Thanks for looking at this.\n\nAt Mon, 20 Jan 2020 11:00:14 -0800, Andres Freund <andres@anarazel.de> wrote in \n> > Here is a summary of the points raised (please correct me if that\n> > does not sound right to you, Andres):\n> \n> > 1) The slot advancing has to mark the slot as dirty, but should we\n> > make the change persistent at the end of the function or should we\n> > wait for a checkpoint to do the work, meaning that any update done to\n> > the slot would be lost if a crash occurs in-between? Note that we\n> > have this commit in slotfuncs.c for\n> > pg_logical_replication_slot_advance():\n> > * Dirty the slot so it's written out at the next checkpoint.\n> > * We'll still lose its position on crash, as documented, but it's\n> > * better than always losing the position even on clean restart.\n> > \n> > This comment refers to the documentation for the logical decoding\n> > section (see logicaldecoding-replication-slots in\n> > logicaldecoding.sgml), and even if nothing can be done until the slot\n> > advance function reaches its hand, we ought to make the data\n> > persistent if we can.\n> \n> That doesn't really seem like a meaningful reference, because the\n> concerns between constantly streaming out changes (where we don't want\n> to fsync every single transaction), and doing so in a manual advance\n> through an sql function, seem different.\n\nYes, that is the reason I didn't suggest not to save the file there.\nI don't have a clear opinion on it but I agree that users expect that\nany changes they made from the SQL interface should survive a\ncrash-recovery.\n\n> > 3) The amount of testing related to slot advancing could be better\n> > with cluster-wide operations.\n> > \n> > @@ -370,6 +370,11 @@ pg_physical_replication_slot_advance(XLogRecPtr\n> > moveto)\n> > MyReplicationSlot->data.restart_lsn = moveto;\n> > \n> > SpinLockRelease(&MyReplicationSlot->mutex);\n> > retlsn = moveto;\n> > +\n> > + ReplicationSlotMarkDirty();\n> > +\n> > + /* We 
moved retart_lsn, update the global value. */\n> > + ReplicationSlotsComputeRequiredLSN();\n> > I think that the proposed patch is missing a call to\n> > ReplicationSlotsComputeRequiredXmin() here for physical slots.\n> \n> Hm. It seems ok to include, but I don't think omitting it currently has\n> negative effects?\n\nI think no. It is updated sooner or later when replication proceeds\nand receives a reply message.\n\n> > So, I have been looking at this patch by myself, and updated it so as\n> > the extra slot save is done only if any advancing has been done, on\n> > top of the other computations that had better be around for\n> > consistency.\n> \n> Hm, I don't necessarily think that's necessary.\n\nOn the other hand, there is no negative effect from the extra saving of\nthe file as long as the SQL function itself is not called extremely\nfrequently. If I read Andres's comment above correctly, I agree not to\nadd complexity to suppress the \"needless\" saving of the file.\n\n> > The patch includes TAP tests for physical and logical slots'\n> > durability across restarts.\n> \n> Cool!\n> \n> \n> > diff --git a/src/backend/replication/slotfuncs.c b/src/backend/replication/slotfuncs.c\n> > index bb69683e2a..af3e114fc9 100644\n> > --- a/src/backend/replication/slotfuncs.c\n> > +++ b/src/backend/replication/slotfuncs.c\n> > @@ -359,17 +359,20 @@ pg_get_replication_slots(PG_FUNCTION_ARGS)\n> > * checkpoints.\n> > */\n> > static XLogRecPtr\n> > -pg_physical_replication_slot_advance(XLogRecPtr moveto)\n> > +pg_physical_replication_slot_advance(XLogRecPtr moveto, bool *advance_done)\n> > {\n> > \tXLogRecPtr\tstartlsn = MyReplicationSlot->data.restart_lsn;\n> > \tXLogRecPtr\tretlsn = startlsn;\n> > \n> > +\t*advance_done = false;\n> > +\n> > \tif (startlsn < moveto)\n> > \t{\n> > \t\tSpinLockAcquire(&MyReplicationSlot->mutex);\n> > \t\tMyReplicationSlot->data.restart_lsn = moveto;\n> > \t\tSpinLockRelease(&MyReplicationSlot->mutex);\n> > \t\tretlsn = moveto;\n> > +\t\t*advance_done = 
true;\n> > \t}\n> > \n> > \treturn retlsn;\n> \n> Hm. Why did you choose not to use endlsn as before (except being\n> broken), or something? It seems quite conceivable somebody is using\n> these functions in an extension.\n> \n> \n> \n> \n> > +# Test physical slot advancing and its durability. Create a new slot on\n> > +# the primary, not used by any of the standbys. This reserves WAL at creation.\n> > +my $phys_slot = 'phys_slot';\n> > +$node_master->safe_psql('postgres',\n> > +\t\"SELECT pg_create_physical_replication_slot('$phys_slot', true);\");\n> > +$node_master->psql('postgres', \"\n> > +\tCREATE TABLE tab_phys_slot (a int);\n> > +\tINSERT INTO tab_phys_slot VALUES (generate_series(1,10));\");\n> > +my $psql_rc = $node_master->psql('postgres',\n> > +\t\"SELECT pg_replication_slot_advance('$phys_slot', 'FF/FFFFFFFF');\");\n> > +is($psql_rc, '0', 'slot advancing works with physical slot');\n> \n> Hm. I'm think testing this with real LSNs is a better idea. What if the\n> node actually already is past FF/FFFFFFFF at this point? Quite unlikely,\n> I know, but still. I.e. why not get the current LSN after the INSERT,\n> and assert that the slot, after the restart, is that?\n\n+1.\n\n\n(continuation of (3))\nAt Mon, 20 Jan 2020 15:45:40 +0900, Michael Paquier <michael@paquier.xyz> wrote in\n(@@ -370,6 +370,11 @@ pg_physical_replication_slot_advance(XLogRecPtr moveto))\n> + /* We moved retart_lsn, update the global value. */\n> + ReplicationSlotsComputeRequiredLSN();\n> I think that the proposed patch is missing a call to\n> ReplicationSlotsComputeRequiredXmin() here for physical slots.\n\nNo. pg_physical_replication_slot_advance doesn't make an advance of\neffective_(catalog)_xmin so it is just useless. It would be necessary\nif it were in pg_replication_slot_advance, its caller.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 21 Jan 2020 09:39:40 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Fri, 17 Jan 2020 at 01:09, Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> > I think we shouldn't touch the paths used by replication protocol. And\n> > don't we focus on how we make a change of a replication slot from SQL\n> > interface persistent? It seems to me that generaly we don't need to\n> > save dirty slots other than checkpoint, but the SQL function seems\n> > wanting the change to be saved immediately.\n\nPLEASE do not make the streaming replication interface force flushes!\n\nThe replication interface should not immediately flush changes to the\nslot replay position on advance. It should be marked dirty and left to\nbe flushed by the next checkpoint. Doing otherwise potentially\nintroduces a lot of unnecessary fsync()s and may have an unpleasant\nimpact on performance.\n\nClients of the replication protocol interface should be doing their\nown position tracking on the client side. They should not ever be\nrelying on the server side restart position for correctness, since it\ncan go backwards on crash and restart. Any that do rely on it are\nincorrect. I should propose a docs change that explains how the server\nand client restart position tracking interacts on both phy and logical\nrep since it's not really covered right now and naïve client\nimplementations will be wrong.\n\nI don't really care if the SQL interface forces an immediate flush\nsince it's never going to have good performance anyway.\n\nIt's already impossible to write a strictly correct and crash safe\nclient with the SQL interface. Adding forced flushing won't make that\nany better or worse.\n\nThe SQL interface advances the slot restart position and marks the\nslot dirty *before the client has confirmed receipt of the data and\nflushed it to disk*. So there's a data loss window. 
If the client\ndisconnects or crashes before all the data from that function call is\nsafely flushed to disk it may lose the data, then be unable to fetch\nit again from the server because of the restart_lsn position advance.\n\nReally, we should add a \"no_advance_position\" option to the SQL\ninterface, then expect the client to call a second function that\nexplicitly advances the restart_lsn and confirmed_flush_lsn. Otherwise\nno SQL interface client can be crashsafe.\n\n\n",
"msg_date": "Tue, 21 Jan 2020 09:44:12 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 09:44:12AM +0800, Craig Ringer wrote:\n> PLEASE do not make the streaming replication interface force flushes!\n\nYeah, that's a bad idea. FWIW, my understanding is that this has been\nonly proposed in v3, and this has been discarded:\nhttps://www.postgresql.org/message-id/175c2760666a78205e053207794c0f8f@postgrespro.ru\n\n> The replication interface should not immediately flush changes to the\n> slot replay position on advance. It should be marked dirty and left to\n> be flushed by the next checkpoint. Doing otherwise potentially\n> introduces a lot of unnecessary fsync()s and may have an unpleasant\n> impact on performance.\n\nSome portions of the advancing code tells a different story. It seems\nto me that the intention behind the first implementation of slot\nadvancing was to get things flushed if any advancing was done. The\ncheck doing that is actually broken from the start, but that's another\nstory. Could you check with Petr what was the intention here or drag\nhis attention to this thread? He is the original author of the\nfeature. So his output would be nice to have.\n\n> Clients of the replication protocol interface should be doing their\n> own position tracking on the client side. They should not ever be\n> relying on the server side restart position for correctness, since it\n> can go backwards on crash and restart. Any that do rely on it are\n> incorrect. I should propose a docs change that explains how the server\n> and client restart position tracking interacts on both phy and logical\n> rep since it's not really covered right now and naïve client\n> implementations will be wrong.\n>\n> I don't really care if the SQL interface forces an immediate flush\n> since it's never going to have good performance anyway.\n\nOkay, the flush could be optional as well, but that's a different\ndiscussion. The docs of logical decoding mention that slot data may\ngo backwards in the event of a crash. 
If you have improvements for\nthat, surely that's welcome.\n\n> The SQL interface advances the slot restart position and marks the\n> slot dirty *before the client has confirmed receipt of the data and\n> flushed it to disk*. So there's a data loss window. If the client\n> disconnects or crashes before all the data from that function call is\n> safely flushed to disk it may lose the data, then be unable to fetch\n> it again from the server because of the restart_lsn position advance.\n\nWell, you have the same class of problems with for example synchronous\nreplication. The only state a client can be sure of is that it\nreceived a confirmation that the operation happened and completed.\nAny other state tells that the operation may have happened. Or not.\nNow, being sure that the data of the slot has been flushed once\nthe advancing function is done, and the client got the confirmation\nthat the work is done, is a property which could be interesting to some\nclass of applications.\n\n> Really, we should add a \"no_advance_position\" option to the SQL\n> interface, then expect the client to call a second function that\n> explicitly advances the restart_lsn and confirmed_flush_lsn. Otherwise\n> no SQL interface client can be crashsafe.\n\nHm. Could you elaborate more on what you mean here? I am not sure I\nunderstand. Note that calling the advance function multiple times has\nno effect, and the result returned is the actual restart_lsn (or\nconfirmed_flush_lsn of course).\n--\nMichael",
"msg_date": "Tue, 21 Jan 2020 12:05:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 11:00:14AM -0800, Andres Freund wrote:\n> On 2020-01-20 15:45:40 +0900, Michael Paquier wrote:\n>> 1) The slot advancing has to mark the slot as dirty, but should we\n>> make the change persistent at the end of the function or should we\n>> wait for a checkpoint to do the work, meaning that any update done to\n>> the slot would be lost if a crash occurs in-between? Note that we\n>> have this commit in slotfuncs.c for\n>> pg_logical_replication_slot_advance():\n>> * Dirty the slot so it's written out at the next checkpoint.\n>> * We'll still lose its position on crash, as documented, but it's\n>> * better than always losing the position even on clean restart.\n>> \n>> This comment refers to the documentation for the logical decoding\n>> section (see logicaldecoding-replication-slots in\n>> logicaldecoding.sgml), and even if nothing can be done until the slot\n>> advance function reaches its hand, we ought to make the data\n>> persistent if we can.\n> \n> That doesn't really seem like a meaningful reference, because the\n> concerns between constantly streaming out changes (where we don't want\n> to fsync every single transaction), and doing so in a manual advance\n> through an sql function, seem different.\n\nNo disagreement with that, still it is the only reference we have in\nthe docs about that. 
I think that we should take the occasion to\nupdate the docs of the advancing functions accordingly with what we\nthink is the best choice; should the slot information be flushed at\nthe end of the function, or at the follow-up checkpoint?\n\n>> 3) The amount of testing related to slot advancing could be better\n>> with cluster-wide operations.\n>> \n>> @@ -370,6 +370,11 @@ pg_physical_replication_slot_advance(XLogRecPtr\n>> moveto)\n>> MyReplicationSlot->data.restart_lsn = moveto;\n>> \n>> SpinLockRelease(&MyReplicationSlot->mutex);\n>> retlsn = moveto;\n>> +\n>> + ReplicationSlotMarkDirty();\n>> +\n>> + /* We moved retart_lsn, update the global value. */\n>> + ReplicationSlotsComputeRequiredLSN();\n>> I think that the proposed patch is missing a call to\n>> ReplicationSlotsComputeRequiredXmin() here for physical slots.\n> \n> Hm. It seems ok to include, but I don't think omitting it currently has\n> negative effects?\n\neffective_xmin can be used by WAL senders with physical slots. It\nseems safer in the long term to include it, IMO.\n\n>> static XLogRecPtr\n>> -pg_physical_replication_slot_advance(XLogRecPtr moveto)\n>> +pg_physical_replication_slot_advance(XLogRecPtr moveto, bool *advance_done)\n>> {\n>> \tXLogRecPtr\tstartlsn = MyReplicationSlot->data.restart_lsn;\n>> \tXLogRecPtr\tretlsn = startlsn;\n>> \n>> +\t*advance_done = false;\n>> +\n>> \tif (startlsn < moveto)\n>> \t{\n>> \t\tSpinLockAcquire(&MyReplicationSlot->mutex);\n>> \t\tMyReplicationSlot->data.restart_lsn = moveto;\n>> \t\tSpinLockRelease(&MyReplicationSlot->mutex);\n>> \t\tretlsn = moveto;\n>> +\t\t*advance_done = true;\n>> \t}\n>> \n>> \treturn retlsn;\n> \n> Hm. 
Why did you choose not to use endlsn as before (except being\n> broken), or something?\n\nWhen doing repetitive calls of the advancing functions, the advancing\nhappens in the first call, and the next ones do nothing, so if no\nupdate is done there is no point in flushing the slot information.\n\n> It seems quite conceivable somebody is using these functions in an\n> extension. \n\nNot sure I get that, pg_physical_replication_slot_advance and\npg_logical_replication_slot_advance are static in slotfuncs.c.\n\n> Hm. I think testing this with real LSNs is a better idea. What if the\n> node actually already is past FF/FFFFFFFF at this point? Quite unlikely,\n> I know, but still. I.e. why not get the current LSN after the INSERT,\n> and assert that the slot, after the restart, is that?\n\nSure. If not disabling autovacuum in the test, we'd just need to make\nsure that the advancing is at least ahead of the INSERT position.\n\nAnyway, I am still not sure if we should go down the road to just\nmark the slot as dirty if any advancing is done and let the follow-up\ncheckpoint do the work, if the advancing function should do the slot\nflush, or if we choose one and make the other an optional choice (not\nfor back-branches, obviously). Based on my reading of the code, my\nguess is that a flush should happen at the end of the advancing\nfunction.\n--\nMichael",
"msg_date": "Tue, 21 Jan 2020 14:07:30 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Tue, 21 Jan 2020 at 11:06, Michael Paquier <michael@paquier.xyz> wrote:\n\n> > The replication interface should not immediately flush changes to the\n> > slot replay position on advance. It should be marked dirty and left to\n> > be flushed by the next checkpoint. Doing otherwise potentially\n> > introduces a lot of unnecessary fsync()s and may have an unpleasant\n> > impact on performance.\n>\n> Some portions of the advancing code tells a different story. It seems\n> to me that the intention behind the first implementation of slot\n> advancing was to get things flushed if any advancing was done.\n\nTaking a step back here, I have no concerns with the proposed changes for\npg_replication_slot_advance(). Disregard my comments about safety with\nthe SQL interface for the purposes of this thread; they apply only to\nlogical slots and are really unrelated to\npg_replication_slot_advance().\n\nRe your comment above: For slot advances in general the flush to disk\nis done lazily for performance reasons, but I think you meant\npg_replication_slot_advance() specifically.\n\npg_replication_slot_advance() doesn't appear to make any promises as\nto immediate durability either way. It updates the required LSN\nimmediately with ReplicationSlotsUpdateRequiredLSN() so it\ntheoretically marks WAL as removable before it's flushed. But I don't\nthink we'll ever actually remove any WAL segments until checkpoint, at\nwhich point we'll also flush any dirty slots, so it doesn't really\nmatter. For logical slots the lsn and xmin are both protected by the\neffective/actual tracking logic and can't advance until the slot is\nflushed.\n\nThe app might be surprised if the slot goes backwards after a\npg_replication_slot_advance() followed by a server crash though.\n\n> The\n> check doing that is actually broken from the start, but that's another\n> story. Could you check with Petr what was the intention here or drag\n> his attention to this thread? 
He is the original author of the\n> feature. So his output would be nice to have.\n\nI'll ask him. He's pretty bogged at the moment though, and I've done a\nlot of work in this area too. (See e.g. the catalog_xmin in hot\nstandby feedback changes).\n\n> > The SQL interface advances the slot restart position and marks the\n> > slot dirty *before the client has confirmed receipt of the data and\n> > flushed it to disk*. So there's a data loss window. If the client\n> > disconnects or crashes before all the data from that function call is\n> > safely flushed to disk it may lose the data, then be unable to fetch\n> > it again from the server because of the restart_lsn position advance.\n>\n> Well, you have the same class of problems with for example synchronous\n> replication. The only state a client can be sure of is that it\n> received a confirmation that the operation happens and completed.\n> Any other state tells that the operation may have happened. Or not.\n> Now, being sure that the data of the new slot has been flushed once\n> the advancing function is done once the client got the confirmation\n> that the work is done is a property which could be interesting to some\n> class of applications.\n\nThat's what we already provide for the streaming interface for slot access.\n\nI agree there's no need to shove a fix to the SQL interface for\nphys/logical slots into this same discussion. I'm just trying to make\nsure we don't \"fix\" a \"bug\" that's actually an important part of the\ndesign by trying to fix a perceived-missing flush in the streaming\ninterface too. 
I am not at all confident that the test coverage for\nthis is sufficient right now, since we lack a good way to make\npostgres delay various lazy internal activity to let us reliably\nexamine intermediate states in a race-free way, so I'm not sure tests\nwould catch it.\n\n> > Really, we should add a \"no_advance_position\" option to the SQL\n> > interface, then expect the client to call a second function that\n> > explicitly advances the restart_lsn and confirmed_flush_lsn. Otherwise\n> > no SQL interface client can be crashsafe.\n>\n> Hm. Could you elaborate more what you mean here? I am not sure to\n> understand. Note that calling the advance function multiple times has\n> no effects, and the result returned is the actual restart_lsn (or\n> confirmed_flush_lsn of course).\n\nI've probably confused things a bit here. I don't mind if whether or\nnot pg_replication_slot_advance() forces an immediate flush, I think\nthere are reasonable arguments in both directions.\n\nIn the above I was talking about how pg_logical_slot_get_changes()\npresently advances the slot position immediately, so if the client\nloses its connection before reading and flushing all the data it may\nbe unable to recover. And while pg_logical_slot_peek_changes() lets\nthe app read the data w/o advancing the slot, it has to then do a\nseparate pg_replication_slot_advance() which has to do the decoding\nwork again. I'd like to improve that, but I didn't intend to confuse\nor sidetrack this discussion in the process. Sorry.\n\nWe don't have a SQL-level interface for reading physical WAL so there\nare no corresponding concerns there.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Tue, 21 Jan 2020 14:57:33 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Tue, Jan 21, 2020 at 02:07:30PM +0900, Michael Paquier wrote:\n> On Mon, Jan 20, 2020 at 11:00:14AM -0800, Andres Freund wrote:\n>> Hm. I'm think testing this with real LSNs is a better idea. What if the\n>> node actually already is past FF/FFFFFFFF at this point? Quite unlikely,\n>> I know, but still. I.e. why not get the current LSN after the INSERT,\n>> and assert that the slot, after the restart, is that?\n> \n> Sure. If not disabling autovacuum in the test, we'd just need to make\n> sure if that advancing is at least ahead of the INSERT position.\n\nActually, as the advancing happens only up to this position we just\nneed to make sure that the LSN reported by the slot is the same as the\nposition advanced to. I have switched the test to just do that\ninstead of using a fake LSN.\n\n> Anyway, I am still not sure if we should got down the road to just\n> mark the slot as dirty if any advancing is done and let the follow-up\n> checkpoint to the work, if the advancing function should do the slot\n> flush, or if we choose one and make the other an optional choice (not\n> for back-branches, obviously. Based on my reading of the code, my\n> guess is that a flush should happen at the end of the advancing\n> function.\n\nI have been chewing on this point for a couple of days, and as we may\nactually crash between the moment the slot is marked as dirty and the\nmoment the slot information is made consistent, we still have a risk\nto have the slot go backwards even if the slot information is saved.\nThe window is much narrower, but well, the docs of logical decoding\nmention that this risk exists. And the patch becomes much more\nsimple without changing the actual behavior present since the feature\nhas been introduced for logical slots. 
There could be a point in\nhaving a new option to flush the slot information, or actually a\nseparate function to flush the slot information, but let's keep that\nfor a future possibility.\n\nSo attached is an updated patch which addresses the problem just by\nmarking a physical slot as dirty if any advancing is done. Some\ndocumentation is added about the fact that an advance is persistent\nonly at the follow-up checkpoint. And the tests are fixed to not use\na fake LSN but instead advance to the latest LSN position produced.\n\nAny objections?\n--\nMichael",
"msg_date": "Tue, 28 Jan 2020 17:01:14 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Tue, 28 Jan 2020 at 16:01, Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Tue, Jan 21, 2020 at 02:07:30PM +0900, Michael Paquier wrote:\n> > On Mon, Jan 20, 2020 at 11:00:14AM -0800, Andres Freund wrote:\n> >> Hm. I'm think testing this with real LSNs is a better idea. What if the\n> >> node actually already is past FF/FFFFFFFF at this point? Quite unlikely,\n> >> I know, but still. I.e. why not get the current LSN after the INSERT,\n> >> and assert that the slot, after the restart, is that?\n> >\n> > Sure. If not disabling autovacuum in the test, we'd just need to make\n> > sure if that advancing is at least ahead of the INSERT position.\n>\n> Actually, as the advancing happens only up to this position we just\n> need to make sure that the LSN reported by the slot is the same as the\n> position advanced to. I have switched the test to just do that\n> instead of using a fake LSN.\n>\n> > Anyway, I am still not sure if we should got down the road to just\n> > mark the slot as dirty if any advancing is done and let the follow-up\n> > checkpoint to the work, if the advancing function should do the slot\n> > flush, or if we choose one and make the other an optional choice (not\n> > for back-branches, obviously. Based on my reading of the code, my\n> > guess is that a flush should happen at the end of the advancing\n> > function.\n>\n> I have been chewing on this point for a couple of days, and as we may\n> actually crash between the moment the slot is marked as dirty and the\n> moment the slot information is made consistent, we still have a risk\n> to have the slot go backwards even if the slot information is saved.\n> The window is much narrower, but well, the docs of logical decoding\n> mention that this risk exists. And the patch becomes much more\n> simple without changing the actual behavior present since the feature\n> has been introduced for logical slots. 
There could be a point in\n> having a new option to flush the slot information, or actually a\n> separate function to flush the slot information, but let's keep that\n> for a future possibility.\n>\n> So attached is an updated patch which addresses the problem just by\n> marking a physical slot as dirty if any advancing is done. Some\n> documentation is added about the fact that an advance is persistent\n> only at the follow-up checkpoint. And the tests are fixed to not use\n> a fake LSN but instead advance to the latest LSN position produced.\n>\n> Any objections?\n\nLGTM. Thank you.\n\n-- \n Craig Ringer http://www.2ndQuadrant.com/\n 2ndQuadrant - PostgreSQL Solutions for the Enterprise\n\n\n",
"msg_date": "Tue, 28 Jan 2020 17:45:47 +0800",
"msg_from": "Craig Ringer <craig@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "At Tue, 28 Jan 2020 17:45:47 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in \n> On Tue, 28 Jan 2020 at 16:01, Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Tue, Jan 21, 2020 at 02:07:30PM +0900, Michael Paquier wrote:\n> > > On Mon, Jan 20, 2020 at 11:00:14AM -0800, Andres Freund wrote:\n> > >> Hm. I'm think testing this with real LSNs is a better idea. What if the\n> > >> node actually already is past FF/FFFFFFFF at this point? Quite unlikely,\n> > >> I know, but still. I.e. why not get the current LSN after the INSERT,\n> > >> and assert that the slot, after the restart, is that?\n> > >\n> > > Sure. If not disabling autovacuum in the test, we'd just need to make\n> > > sure if that advancing is at least ahead of the INSERT position.\n> >\n> > Actually, as the advancing happens only up to this position we just\n> > need to make sure that the LSN reported by the slot is the same as the\n> > position advanced to. I have switched the test to just do that\n> > instead of using a fake LSN.\n> >\n> > > Anyway, I am still not sure if we should got down the road to just\n> > > mark the slot as dirty if any advancing is done and let the follow-up\n> > > checkpoint to the work, if the advancing function should do the slot\n> > > flush, or if we choose one and make the other an optional choice (not\n> > > for back-branches, obviously. Based on my reading of the code, my\n> > > guess is that a flush should happen at the end of the advancing\n> > > function.\n> >\n> > I have been chewing on this point for a couple of days, and as we may\n> > actually crash between the moment the slot is marked as dirty and the\n> > moment the slot information is made consistent, we still have a risk\n> > to have the slot go backwards even if the slot information is saved.\n> > The window is much narrower, but well, the docs of logical decoding\n> > mention that this risk exists. 
And the patch becomes much more\n> > simple without changing the actual behavior present since the feature\n> > has been introduced for logical slots. There could be a point in\n> > having a new option to flush the slot information, or actually a\n> > separate function to flush the slot information, but let's keep that\n> > for a future possibility.\n> >\n> > So attached is an updated patch which addresses the problem just by\n> > marking a physical slot as dirty if any advancing is done. Some\n> > documentation is added about the fact that an advance is persistent\n> > only at the follow-up checkpoint. And the tests are fixed to not use\n> > a fake LSN but instead advance to the latest LSN position produced.\n> >\n> > Any objections?\n> \n> LGTM. Thankyou.\n\nI agree not to save slots immediately. The code is wrtten as described\nabove. The TAP test is correct.\n\nBut the doc part looks a bit too detailed to me. Couldn't we explain\nthat without the word 'dirty'?\n\n- and it will not be moved beyond the current insert location. Returns\n- name of the slot and real position to which it was advanced to.\n+ and it will not be moved beyond the current insert location. Returns\n+ name of the slot and real position to which it was advanced to. The\n+ updated slot is marked as dirty if any advancing is done, with its\n+ information being written out at the follow-up checkpoint. In the\n+ event of a crash, the slot may return to an earlier position.\n\nand it will not be moved beyond the current insert location. Returns\nname of the slot and real position to which it was advanced to. The\ninformation of the updated slot is scheduled to be written out at the\nfollow-up checkpoint if any advancing is done. In the event of a\ncrash, the slot may return to an earlier position.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Tue, 28 Jan 2020 21:14:34 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 28.01.2020 15:14, Kyotaro Horiguchi wrote:\n> At Tue, 28 Jan 2020 17:45:47 +0800, Craig Ringer <craig@2ndquadrant.com> wrote in\n>> On Tue, 28 Jan 2020 at 16:01, Michael Paquier <michael@paquier.xyz> wrote:\n>>> So attached is an updated patch which addresses the problem just by\n>>> marking a physical slot as dirty if any advancing is done. Some\n>>> documentation is added about the fact that an advance is persistent\n>>> only at the follow-up checkpoint. And the tests are fixed to not use\n>>> a fake LSN but instead advance to the latest LSN position produced.\n>>>\n>>> Any objections?\n>> LGTM. Thankyou.\n> I agree not to save slots immediately. The code is wrtten as described\n> above. The TAP test is correct.\n\n+1, removing this broken saving code path from \npg_replication_slot_advance and marking slot as dirty looks good to me. \nIt solves the issue and does not add any unnecessary complexity.\n\n>\n> But the doc part looks a bit too detailed to me. Couldn't we explain\n> that without the word 'dirty'?\n>\n> - and it will not be moved beyond the current insert location. Returns\n> - name of the slot and real position to which it was advanced to.\n> + and it will not be moved beyond the current insert location. Returns\n> + name of the slot and real position to which it was advanced to. The\n> + updated slot is marked as dirty if any advancing is done, with its\n> + information being written out at the follow-up checkpoint. In the\n> + event of a crash, the slot may return to an earlier position.\n>\n> and it will not be moved beyond the current insert location. Returns\n> name of the slot and real position to which it was advanced to. The\n> information of the updated slot is scheduled to be written out at the\n> follow-up checkpoint if any advancing is done. 
In the event of a\n> crash, the slot may return to an earlier position.\n\nJust searched through the *.sgml files, we already use terms 'dirty' and \n'flush' applied to writing out pages during checkpoints. Here we are \ntrying to describe the very similar process, but in relation to \nreplication slots, so it looks fine for me. In the same time, the term \n'schedule' is used for VACUUM, constraint check or checkpoint itself.\n\n\nRegards\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Tue, 28 Jan 2020 18:06:06 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 06:06:06PM +0300, Alexey Kondratov wrote:\n> On 28.01.2020 15:14, Kyotaro Horiguchi wrote:\n>> I agree not to save slots immediately. The code is wrtten as described\n>> above. The TAP test is correct.\n> \n> +1, removing this broken saving code path from pg_replication_slot_advance\n> and marking slot as dirty looks good to me. It solves the issue and does not\n> add any unnecessary complexity.\n\nOk, good. So I am seeing no objections on that part :D\n\n>> But the doc part looks a bit too detailed to me. Couldn't we explain\n>> that without the word 'dirty'?\n>> \n>> - and it will not be moved beyond the current insert location. Returns\n>> - name of the slot and real position to which it was advanced to.\n>> + and it will not be moved beyond the current insert location. Returns\n>> + name of the slot and real position to which it was advanced to. The\n>> + updated slot is marked as dirty if any advancing is done, with its\n>> + information being written out at the follow-up checkpoint. In the\n>> + event of a crash, the slot may return to an earlier position.\n>> \n>> and it will not be moved beyond the current insert location. Returns\n>> name of the slot and real position to which it was advanced to. The\n>> information of the updated slot is scheduled to be written out at the\n>> follow-up checkpoint if any advancing is done. In the event of a\n>> crash, the slot may return to an earlier position.\n> \n> Just searched through the *.sgml files, we already use terms 'dirty' and\n> 'flush' applied to writing out pages during checkpoints. Here we are trying\n> to describe the very similar process, but in relation to replication slots,\n> so it looks fine for me. 
In the same time, the term 'schedule' is used for\n> VACUUM, constraint check or checkpoint itself.\n\nHonestly, I was a bit on the fence for the term \"dirty\" when typing\nthis paragraph, so I kind of agree with Horiguchi-san's point that it\ncould be confusing when applied to replication slots, because there is\nno other reference in the docs about the link between the two\nconcepts. So, I would go for a more simplified sentence for the first\npart, keeping the second sentence intact:\n\"The information of the updated slot is written out at the follow-up\ncheckpoint if any advancing is done. In the event of a crash, the\nslot may return to an earlier position.\"\n--\nMichael",
"msg_date": "Wed, 29 Jan 2020 15:45:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "At Wed, 29 Jan 2020 15:45:56 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Jan 28, 2020 at 06:06:06PM +0300, Alexey Kondratov wrote:\n> > On 28.01.2020 15:14, Kyotaro Horiguchi wrote:\n> >> But the doc part looks a bit too detailed to me. Couldn't we explain\n> >> that without the word 'dirty'?\n..\n> >> and it will not be moved beyond the current insert location. Returns\n> >> name of the slot and real position to which it was advanced to. The\n> >> information of the updated slot is scheduled to be written out at the\n> >> follow-up checkpoint if any advancing is done. In the event of a\n> >> crash, the slot may return to an earlier position.\n> > \n> > Just searched through the *.sgml files, we already use terms 'dirty' and\n> > 'flush' applied to writing out pages during checkpoints. Here we are trying\n> > to describe the very similar process, but in relation to replication slots,\n> > so it looks fine for me. In the same time, the term 'schedule' is used for\n> > VACUUM, constraint check or checkpoint itself.\n> \n> Honestly, I was a bit on the fence for the term \"dirty\" when typing\n> this paragraph, so I kind of agree with Horiguchi-san's point that it\n> could be confusing when applied to replication slots, because there is\n> no other reference in the docs about the link between the two\n> concepts. So, I would go for a more simplified sentence for the first\n> part, keeping the second sentence intact:\n> \"The information of the updated slot is written out at the follow-up\n> checkpoint if any advancing is done. In the event of a crash, the\n> slot may return to an earlier position.\"\n\nLooks perfect.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 29 Jan 2020 17:10:20 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 05:10:20PM +0900, Kyotaro Horiguchi wrote:\n> Looks perfect.\n\nThanks Horiguchi-san and others. Applied and back-patched down to\n11.\n--\nMichael",
"msg_date": "Thu, 30 Jan 2020 11:19:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 30.01.2020 05:19, Michael Paquier wrote:\n> On Wed, Jan 29, 2020 at 05:10:20PM +0900, Kyotaro Horiguchi wrote:\n>> Looks perfect.\n> Thanks Horiguchi-san and others. Applied and back-patched down to\n> 11.\n\nGreat! Thanks for getting this done.\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n\n",
"msg_date": "Fri, 31 Jan 2020 14:14:03 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-28 17:01:14 +0900, Michael Paquier wrote:\n> So attached is an updated patch which addresses the problem just by\n> marking a physical slot as dirty if any advancing is done. Some\n> documentation is added about the fact that an advance is persistent\n> only at the follow-up checkpoint. And the tests are fixed to not use\n> a fake LSN but instead advance to the latest LSN position produced.\n\n> \n> -\t/* Update the on disk state when lsn was updated. */\n> -\tif (XLogRecPtrIsInvalid(endlsn))\n> -\t{\n> -\t\tReplicationSlotMarkDirty();\n> -\t\tReplicationSlotsComputeRequiredXmin(false);\n> -\t\tReplicationSlotsComputeRequiredLSN();\n> -\t\tReplicationSlotSave();\n> -\t}\n> -\n\nI am quite confused by the wholesale removal of these lines. That wasn't\nin previous versions of the patch. As far as I can tell not calling\nReplicationSlotsComputeRequiredLSN() for the physical slot leads to the\nglobal minimum LSN never beeing advanced, and thus WAL reserved by the\nslot not being removable. Only if there's some independent call to\nReplicationSlotsComputeRequiredLSN() XLogSetReplicationSlotMinimumLSN()\nwill be called, allowing for slots to advance.\n\nI realize this stuff has been broken since the introduction in\n9c7d06d6068 (due to the above being if (XLogRecPtrIsInvalid()) rather\nthan if (!XLogRecPtrIsInvalid()) , but this seems to make it even worse?\n\n\nI find it really depressing how much obviously untested stuff gets\nadded in this area.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 9 Jun 2020 10:19:04 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 2020-06-09 20:19, Andres Freund wrote:\n> Hi,\n> \n> On 2020-01-28 17:01:14 +0900, Michael Paquier wrote:\n>> So attached is an updated patch which addresses the problem just by\n>> marking a physical slot as dirty if any advancing is done. Some\n>> documentation is added about the fact that an advance is persistent\n>> only at the follow-up checkpoint. And the tests are fixed to not use\n>> a fake LSN but instead advance to the latest LSN position produced.\n> \n>> \n>> -\t/* Update the on disk state when lsn was updated. */\n>> -\tif (XLogRecPtrIsInvalid(endlsn))\n>> -\t{\n>> -\t\tReplicationSlotMarkDirty();\n>> -\t\tReplicationSlotsComputeRequiredXmin(false);\n>> -\t\tReplicationSlotsComputeRequiredLSN();\n>> -\t\tReplicationSlotSave();\n>> -\t}\n>> -\n> \n> I am quite confused by the wholesale removal of these lines. That \n> wasn't\n> in previous versions of the patch. As far as I can tell not calling\n> ReplicationSlotsComputeRequiredLSN() for the physical slot leads to the\n> global minimum LSN never beeing advanced, and thus WAL reserved by the\n> slot not being removable. Only if there's some independent call to\n> ReplicationSlotsComputeRequiredLSN() XLogSetReplicationSlotMinimumLSN()\n> will be called, allowing for slots to advance.\n> \n> I realize this stuff has been broken since the introduction in\n> 9c7d06d6068 (due to the above being if (XLogRecPtrIsInvalid()) rather\n> than if (!XLogRecPtrIsInvalid()) , but this seems to make it even \n> worse?\n> \n\nYes, there was a ReplicationSlotsComputeRequiredLSN() call inside \npg_physical_replication_slot_advance() in the v5 of the patch:\n\n@@ -370,6 +370,11 @@ pg_physical_replication_slot_advance(XLogRecPtr \nmoveto)\n \t\tMyReplicationSlot->data.restart_lsn = moveto;\n \t\tSpinLockRelease(&MyReplicationSlot->mutex);\n \t\tretlsn = moveto;\n+\n+\t\tReplicationSlotMarkDirty();\n+\n+\t\t/* We moved retart_lsn, update the global value. 
*/\n+\t\tReplicationSlotsComputeRequiredLSN();\n\nBut later it has been missed and we have not noticed that.\n\nI think that adding it back as per attached will be enough.\n\n> \n> I find it really depressing how much obviously untested stuff gets\n> added in this area.\n> \n\nPrior to this patch pg_replication_slot_advance was not being tested at \nall. Unfortunately, added tests appeared to be not enough to cover all \ncases. It seems that the whole machinery of WAL holding and trimming is \nworth to be tested more thoroughly.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Tue, 09 Jun 2020 21:01:13 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Tue, Jun 09, 2020 at 09:01:13PM +0300, Alexey Kondratov wrote:\n> Yes, there was a ReplicationSlotsComputeRequiredLSN() call inside\n> pg_physical_replication_slot_advance() in the v5 of the patch:\n>\n> But later it has been missed and we have not noticed that.\n>\n> I think that adding it back as per attached will be enough.\n\n[ scratches head... ]\nIndeed, this part gets wrong and we would have to likely rely on a WAL\nsender to do this calculation once a new flush location is received,\nbut that may not happen in some cases. It feels more natural to do\nthat in the location where the slot is marked as dirty, and there \nis no need to move around an extra check to see if the slot has\nactually been advanced or not. Or we could just call the routine once\nany advancing is attempted? That would be unnecessary if no advancing\nis done.\n\n> > I find it really depressing how much obviously untested stuff gets\n> > added in this area.\n>\n> Prior to this patch pg_replication_slot_advance was not being tested\n> at all.\n> Unfortunately, added tests appeared to be not enough to cover all\n> cases. It\n> seems that the whole machinery of WAL holding and trimming is worth\n> to be\n> tested more thoroughly.\n\nI think that it would be interesting if we had a SQL representation of\nthe contents of XLogCtlData (wanted that a couple of times). Now we\nare actually limited to use a checkpoint and check that past segments\nare getting recycled by looking at the contents of pg_wal. Doing that\nhere does not cause the existing tests to be much more expensive as we\nonly need one extra call to pg_switch_wal(), mostly. Please see the\nattached.\n--\nMichael",
"msg_date": "Wed, 10 Jun 2020 15:53:53 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "At Wed, 10 Jun 2020 15:53:53 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Tue, Jun 09, 2020 at 09:01:13PM +0300, Alexey Kondratov wrote:\n> > Yes, there was a ReplicationSlotsComputeRequiredLSN() call inside\n> > pg_physical_replication_slot_advance() in the v5 of the patch:\n> >\n> > But later it has been missed and we have not noticed that.\n> >\n> > I think that adding it back as per attached will be enough.\n\nSure.\n\n> [ scratches head... ]\n> Indeed, this part gets wrong and we would have to likely rely on a WAL\n> sender to do this calculation once a new flush location is received,\n> but that may not happen in some cases. It feels more natural to do\n> that in the location where the slot is marked as dirty, and there \n> is no need to move around an extra check to see if the slot has\n> actually been advanced or not. Or we could just call the routine once\n> any advancing is attempted? That would be unnecessary if no advancing\n> is done.\n\nWe don't call the function so frequently. I don't think it can be a\nproblem to update replicationSlotMinLSN every time trying advancing.\n\n> > > I find it really depressing how much obviously untested stuff gets\n> > > added in this area.\n> >\n> > Prior to this patch pg_replication_slot_advance was not being tested\n> > at all.\n> > Unfortunately, added tests appeared to be not enough to cover all\n> > cases. It\n> > seems that the whole machinery of WAL holding and trimming is worth\n> > to be\n> > tested more thoroughly.\n> \n> I think that it would be interesting if we had a SQL representation of\n> the contents of XLogCtlData (wanted that a couple of times). Now we\n> are actually limited to use a checkpoint and check that past segments\n> are getting recycled by looking at the contents of pg_wal. Doing that\n> here does not cause the existing tests to be much more expensive as we\n> only need one extra call to pg_switch_wal(), mostly. 
Please see the\n> attached.\n\nThe test in the patch looks fine to me and worked well for me.\n\nUsing smaller wal_segment_size (1(MB) worked for me) reduces the cost\nof the check, but I'm not sure it's worth doing.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 10 Jun 2020 17:38:44 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 2020-06-10 11:38, Kyotaro Horiguchi wrote:\n> At Wed, 10 Jun 2020 15:53:53 +0900, Michael Paquier\n> <michael@paquier.xyz> wrote in\n>> > > I find it really depressing how much obviously untested stuff gets\n>> > > added in this area.\n>> >\n>> > Prior to this patch pg_replication_slot_advance was not being tested\n>> > at all.\n>> > Unfortunately, added tests appeared to be not enough to cover all\n>> > cases. It\n>> > seems that the whole machinery of WAL holding and trimming is worth\n>> > to be\n>> > tested more thoroughly.\n>> \n>> I think that it would be interesting if we had a SQL representation of\n>> the contents of XLogCtlData (wanted that a couple of times). Now we\n>> are actually limited to use a checkpoint and check that past segments\n>> are getting recycled by looking at the contents of pg_wal. Doing that\n>> here does not cause the existing tests to be much more expensive as we\n>> only need one extra call to pg_switch_wal(), mostly. Please see the\n>> attached.\n> \n> The test in the patch looks fine to me and worked well for me.\n> \n> Using smaller wal_segment_size (1(MB) worked for me) reduces the cost\n> of the check, but I'm not sure it's worth doing.\n> \n\nNew test reproduces this issue well. Left it running for a couple of \nhours in repeat and it seems to be stable.\n\nJust noted that we do not need to keep $phys_restart_lsn_pre:\n\nmy $phys_restart_lsn_pre = $node_master->safe_psql('postgres',\n\t\"SELECT restart_lsn from pg_replication_slots WHERE slot_name = \n'$phys_slot';\"\n);\nchomp($phys_restart_lsn_pre);\n\nwe can safely use $current_lsn used for pg_replication_slot_advance(), \nsince reatart_lsn is set as is there. It may make the test a bit simpler \nas well.\n\n\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company\n\n\n",
"msg_date": "Wed, 10 Jun 2020 20:57:17 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Wed, Jun 10, 2020 at 08:57:17PM +0300, Alexey Kondratov wrote:\n> New test reproduces this issue well. Left it running for a couple of hours\n> in repeat and it seems to be stable.\n\nThanks for testing. I have been thinking about the minimum xmin and\nLSN computations on advancing, and actually I have switched the\nrecomputing to be called at the end of pg_replication_slot_advance().\nThis may be a waste if no advancing is done, but it could also be an\nadvantage to enforce a recalculation of the thresholds for each\nfunction call. And that's more consistent with the slot copy, drop\nand creation.\n\n> we can safely use $current_lsn used for pg_replication_slot_advance(), since\n> reatart_lsn is set as is there. It may make the test a bit simpler as well.\n\nWe could do that. Now I found cleaner the direct comparison of\npg_replication_slots.restart before and after the restart. So I have\nkept it.\n--\nMichael",
"msg_date": "Tue, 16 Jun 2020 16:27:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Tue, Jun 16, 2020 at 04:27:27PM +0900, Michael Paquier wrote:\n> We could do that. Now I found cleaner the direct comparison of\n> pg_replication_slots.restart before and after the restart. So I have\n> kept it.\n\nAnd done. There were conflicts in 001_stream_rep.pl for 11 and 12 but\nI have reworked the patch on those branches to have a minimum amount\nof diffs with the other branches. This part additionally needed to\nstop standby_1 before running the last part of the test to be able to\ndrop its physical slot on the primary.\n--\nMichael",
"msg_date": "Thu, 18 Jun 2020 16:47:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On 2020-06-16 10:27, Michael Paquier wrote:\n> On Wed, Jun 10, 2020 at 08:57:17PM +0300, Alexey Kondratov wrote:\n>> New test reproduces this issue well. Left it running for a couple of \n>> hours\n>> in repeat and it seems to be stable.\n> \n> Thanks for testing. I have been thinking about the minimum xmin and\n> LSN computations on advancing, and actually I have switched the\n> recomputing to be called at the end of pg_replication_slot_advance().\n> This may be a waste if no advancing is done, but it could also be an\n> advantage to enforce a recalculation of the thresholds for each\n> function call. And that's more consistent with the slot copy, drop\n> and creation.\n> \n\nSorry for a bit late response, but I see a couple of issues with this \nmodified version of the patch in addition to the waste call if no \nadvancing is done, mentioned by you:\n\n1. Both ReplicationSlotsComputeRequiredXmin() and \nReplicationSlotsComputeRequiredLSN() may have already been done in the \nLogicalConfirmReceivedLocation() if it was a logical slot. It may be \nfine and almost costless to do it twice, but it looks untidy for me.\n\n2. It seems that we do not need ReplicationSlotsComputeRequiredXmin() at \nall if it was a physical slot, since we do not modify xmin in the \npg_physical_replication_slot_advance(), doesn't it?\n\nThat's why I wanted (somewhere around v5 of the patch in this thread) to \nmove all dirtying and recomputing calls to the places, where xmin / lsn \nslot modifications are actually done — \npg_physical_replication_slot_advance() and \nLogicalConfirmReceivedLocation(). 
LogicalConfirmReceivedLocation() \nalready does this, so we only needed to teach \npg_physical_replication_slot_advance() to do the same.\n\nHowever, just noted that LogicalConfirmReceivedLocation() only does \nReplicationSlotsComputeRequiredLSN() if updated_xmin flag was set, which \nlooks wrong from my perspective, since updated_xmin and updated_restart \nflags are set separately.\n\nThat way, I would solve this all as per attached, which works well for \nme, but definitely worth of a better testing.\n\n\nRegards\n-- \nAlexey Kondratov\n\nPostgres Professional https://www.postgrespro.com\nRussian Postgres Company",
"msg_date": "Thu, 18 Jun 2020 21:46:28 +0300",
"msg_from": "Alexey Kondratov <a.kondratov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Fri, Jun 19, 2020 at 12:16 AM Alexey Kondratov\n<a.kondratov@postgrespro.ru> wrote:\n>\n> On 2020-06-16 10:27, Michael Paquier wrote:\n> > On Wed, Jun 10, 2020 at 08:57:17PM +0300, Alexey Kondratov wrote:\n> >> New test reproduces this issue well. Left it running for a couple of\n> >> hours\n> >> in repeat and it seems to be stable.\n> >\n> > Thanks for testing. I have been thinking about the minimum xmin and\n> > LSN computations on advancing, and actually I have switched the\n> > recomputing to be called at the end of pg_replication_slot_advance().\n> > This may be a waste if no advancing is done, but it could also be an\n> > advantage to enforce a recalculation of the thresholds for each\n> > function call. And that's more consistent with the slot copy, drop\n> > and creation.\n> >\n>\n> Sorry for a bit late response, but I see a couple of issues with this\n> modified version of the patch in addition to the waste call if no\n> advancing is done, mentioned by you:\n>\n> 1. Both ReplicationSlotsComputeRequiredXmin() and\n> ReplicationSlotsComputeRequiredLSN() may have already been done in the\n> LogicalConfirmReceivedLocation() if it was a logical slot.\n>\n\nI think it is not done in all cases, see the else part in\nLogicalConfirmReceivedLocation.\n\nLogicalConfirmReceivedLocation\n{\n..\nelse\n{\nSpinLockAcquire(&MyReplicationSlot->mutex);\nMyReplicationSlot->data.confirmed_flush = lsn;\nSpinLockRelease(&MyReplicationSlot->mutex);\n}\n..\n}\n\n>\n> However, just noted that LogicalConfirmReceivedLocation() only does\n> ReplicationSlotsComputeRequiredLSN() if updated_xmin flag was set, which\n> looks wrong from my perspective, since updated_xmin and updated_restart\n> flags are set separately.\n>\n\nI see your point but it is better to back such a change by some test case.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jul 2020 16:12:49 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
},
{
"msg_contents": "On Thu, Jul 09, 2020 at 04:12:49PM +0530, Amit Kapila wrote:\n> On Fri, Jun 19, 2020 at 12:16 AM Alexey Kondratov\n> <a.kondratov@postgrespro.ru> wrote:\n>> 1. Both ReplicationSlotsComputeRequiredXmin() and\n>> ReplicationSlotsComputeRequiredLSN() may have already been done in the\n>> LogicalConfirmReceivedLocation() if it was a logical slot.\n>>\n> \n> I think it is not done in all cases, see the else part in\n> LogicalConfirmReceivedLocation.\n> \n> LogicalConfirmReceivedLocation\n> {\n> ..\n> else\n> {\n> SpinLockAcquire(&MyReplicationSlot->mutex);\n> MyReplicationSlot->data.confirmed_flush = lsn;\n> SpinLockRelease(&MyReplicationSlot->mutex);\n> }\n> ..\n> }\n\nThanks Amit, and sorry for the late catchup. The choice of computing\nthe minimum LSN and xmin across all slots at the end of\npg_replication_slot_advance() is deliberate. That's more consistent\nwith the slot creation, copy and drop for one, and that was also the\nintention of the original code (actually a no-op as introduced by\n9c7d06d). This also brings an interesting property to the advancing\nroutines to be able to enforce a recomputation without having to wait\nfor a checkpoint or a WAL sender to do so. So, while there may be\ncases where we don't need this recomputation to happen, and there may\nbe cases where it may be a waste, the code simplicity and consistency\nare IMO reasons enough to keep this code as it is now.\n--\nMichael",
"msg_date": "Fri, 10 Jul 2020 10:45:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Physical replication slot advance is not persistent"
}
] |
[
{
"msg_contents": "Hi,\n\nAs the document explains, column defaults can be specified separately for\neach partition. But I found that INSERT via the partitioned table ignores\nthat default. Is this expected behavior or bug?\n\nCREATE TABLE test (i INT, j INT) PARTITION BY RANGE (i);\nCREATE TABLE test1 PARTITION OF test (j DEFAULT 99) FOR VALUES FROM (1) TO (10);\nINSERT INTO test VALUES (1, DEFAULT);\nINSERT INTO test1 VALUES (2, DEFAULT);\nSELECT * FROM test;\n i | j\n---+--------\n 1 | (null)\n 2 | 99\n(2 rows)\n\nIn the above example, INSERT accessing directly to the partition uses\nthe default, but INSERT via the partitioned table not.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Wed, 25 Dec 2019 12:19:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "table partition and column default"
},
{
"msg_contents": "Fujii-san,\n\nOn Wed, Dec 25, 2019 at 12:19 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> Hi,\n>\n> As the document explains, column defaults can be specified separately for\n> each partition. But I found that INSERT via the partitioned table ignores\n> that default. Is this expected behavior or bug?\n>\n> CREATE TABLE test (i INT, j INT) PARTITION BY RANGE (i);\n> CREATE TABLE test1 PARTITION OF test (j DEFAULT 99) FOR VALUES FROM (1) TO (10);\n> INSERT INTO test VALUES (1, DEFAULT);\n> INSERT INTO test1 VALUES (2, DEFAULT);\n> SELECT * FROM test;\n> i | j\n> ---+--------\n> 1 | (null)\n> 2 | 99\n> (2 rows)\n>\n> In the above example, INSERT accessing directly to the partition uses\n> the default, but INSERT via the partitioned table not.\n\nThis is as of now expected.\n\nIIRC, there was some discussion about implementing a feature whereby\npartition's default will used for an attribute if it's null even after\nconsidering the parent table's default, that is, when no default value\nis defined in the parent. The details are at toward the end of this\nthread:\n\nhttps://www.postgresql.org/message-id/flat/578398af46350effe7111895a4856b87b02e000e.camel%402ndquadrant.com\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 25 Dec 2019 13:56:15 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partition and column default"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 1:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> Fujii-san,\n>\n> On Wed, Dec 25, 2019 at 12:19 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > As the document explains, column defaults can be specified separately for\n> > each partition. But I found that INSERT via the partitioned table ignores\n> > that default. Is this expected behavior or bug?\n> >\n> > CREATE TABLE test (i INT, j INT) PARTITION BY RANGE (i);\n> > CREATE TABLE test1 PARTITION OF test (j DEFAULT 99) FOR VALUES FROM (1) TO (10);\n> > INSERT INTO test VALUES (1, DEFAULT);\n> > INSERT INTO test1 VALUES (2, DEFAULT);\n> > SELECT * FROM test;\n> > i | j\n> > ---+--------\n> > 1 | (null)\n> > 2 | 99\n> > (2 rows)\n> >\n> > In the above example, INSERT accessing directly to the partition uses\n> > the default, but INSERT via the partitioned table not.\n>\n> This is as of now expected.\n>\n> IIRC, there was some discussion about implementing a feature whereby\n> partition's default will used for an attribute if it's null even after\n> considering the parent table's default, that is, when no default value\n> is defined in the parent. The details are at toward the end of this\n> thread:\n>\n> https://www.postgresql.org/message-id/flat/578398af46350effe7111895a4856b87b02e000e.camel%402ndquadrant.com\n\nThanks for pointing that thread!\n\nAs you mentioned in that thread, I also think that this current\nbehavior (maybe restriction) should be documented.\nWhat about adding the note like \"a partition's default value is\nnot applied when inserting a tuple through a partitioned table.\"\ninto the document?\n\nPatch attached.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 25 Dec 2019 17:40:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: table partition and column default"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 5:40 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> On Wed, Dec 25, 2019 at 1:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > IIRC, there was some discussion about implementing a feature whereby\n> > partition's default will used for an attribute if it's null even after\n> > considering the parent table's default, that is, when no default value\n> > is defined in the parent. The details are at toward the end of this\n> > thread:\n> >\n> > https://www.postgresql.org/message-id/flat/578398af46350effe7111895a4856b87b02e000e.camel%402ndquadrant.com\n>\n> Thanks for pointing that thread!\n>\n> As you mentioned in that thread, I also think that this current\n> behavior (maybe restriction) should be documented.\n> What about adding the note like \"a partition's default value is\n> not applied when inserting a tuple through a partitioned table.\"\n> into the document?\n\nAgreed.\n\n> Patch attached.\n\nThanks for creating the patch, looks good to me.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Wed, 25 Dec 2019 17:47:38 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partition and column default"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 5:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Wed, Dec 25, 2019 at 5:40 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > On Wed, Dec 25, 2019 at 1:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > IIRC, there was some discussion about implementing a feature whereby\n> > > partition's default will used for an attribute if it's null even after\n> > > considering the parent table's default, that is, when no default value\n> > > is defined in the parent. The details are at toward the end of this\n> > > thread:\n> > >\n> > > https://www.postgresql.org/message-id/flat/578398af46350effe7111895a4856b87b02e000e.camel%402ndquadrant.com\n> >\n> > Thanks for pointing that thread!\n> >\n> > As you mentioned in that thread, I also think that this current\n> > behavior (maybe restriction) should be documented.\n> > What about adding the note like \"a partition's default value is\n> > not applied when inserting a tuple through a partitioned table.\"\n> > into the document?\n>\n> Agreed.\n>\n> > Patch attached.\n>\n> Thanks for creating the patch, looks good to me.\n\nThanks for reviewing the patch. Committed!\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 26 Dec 2019 15:11:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: table partition and column default"
},
{
"msg_contents": "Fuji-san,\n\nOn Thu, Dec 26, 2019 at 7:12 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Wed, Dec 25, 2019 at 5:47 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> >\n> > On Wed, Dec 25, 2019 at 5:40 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > On Wed, Dec 25, 2019 at 1:56 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > > > IIRC, there was some discussion about implementing a feature whereby\n> > > > partition's default will used for an attribute if it's null even after\n> > > > considering the parent table's default, that is, when no default value\n> > > > is defined in the parent. The details are at toward the end of this\n> > > > thread:\n> > > >\n> > > > https://www.postgresql.org/message-id/flat/578398af46350effe7111895a4856b87b02e000e.camel%402ndquadrant.com\n> > >\n> > > Thanks for pointing that thread!\n> > >\n> > > As you mentioned in that thread, I also think that this current\n> > > behavior (maybe restriction) should be documented.\n> > > What about adding the note like \"a partition's default value is\n> > > not applied when inserting a tuple through a partitioned table.\"\n> > > into the document?\n> >\n> > Agreed.\n> >\n> > > Patch attached.\n> >\n> > Thanks for creating the patch, looks good to me.\n>\n> Thanks for reviewing the patch. Committed!\n\nI saw that you only pushed it on master, shouldn't we backpatch it\ndown to pg10 as this is the declarative partitioning behavior since\nthe beginning?\n\n\n",
"msg_date": "Thu, 26 Dec 2019 10:21:45 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partition and column default"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 6:21 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> On Thu, Dec 26, 2019 at 7:12 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > Thanks for reviewing the patch. Committed!\n>\n> I saw that you only pushed it on master, shouldn't we backpatch it\n> down to pg10 as this is the declarative partitioning behavior since\n> the beginning?\n\nI had meant to reply to this but somehow forgot.\n\nI agree that it might be a good idea to back-patch this down to PG 10.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Tue, 4 Feb 2020 13:56:30 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partition and column default"
},
{
"msg_contents": "\n\nOn 2020/02/04 13:56, Amit Langote wrote:\n> On Thu, Dec 26, 2019 at 6:21 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>> On Thu, Dec 26, 2019 at 7:12 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>> Thanks for reviewing the patch. Committed!\n>>\n>> I saw that you only pushed it on master, shouldn't we backpatch it\n>> down to pg10 as this is the declarative partitioning behavior since\n>> the beginning?\n> \n> I had meant to reply to this but somehow forgot.\n> \n> I agree that it might be a good idea to back-patch this down to PG 10.\n\nBack-patched to v10. Thanks Julien and Amit for pointing out this!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Wed, 5 Feb 2020 14:09:29 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partition and column default"
}
] |
[
{
"msg_contents": "Hi,\n\nI'd like to propose to add pg_file_sync() function into contrib/adminpack.\nThis function fsyncs the specified file or directory named by its argument.\nIMO this is useful, for example, when you want to fsync the file that\npg_file_write() writes out or that COPY TO exports the data into,\nfor durability. Thought?\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Wed, 25 Dec 2019 22:00:36 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 2:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> Hi,\n>\n> I'd like to propose to add pg_file_sync() function into contrib/adminpack.\n> This function fsyncs the specified file or directory named by its argument.\n> IMO this is useful, for example, when you want to fsync the file that\n> pg_file_write() writes out or that COPY TO exports the data into,\n> for durability. Thought?\n\n+1, that seems like a useful wrapper. Looking at existing functions,\nI see that there's a pg_file_rename() in adminpack, but it doesn't use\ndurable_rename nor does it try to perform any fsync. Same for\npg_file_unlink vs. durable_unlink. It's probably worth fixing that at\nthe same time?\n\n\n",
"msg_date": "Wed, 25 Dec 2019 15:12:42 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "Hello,\n\nOn 2019/12/25 23:12, Julien Rouhaud wrote:\n> On Wed, Dec 25, 2019 at 2:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>\n>> Hi,\n>>\n>> I'd like to propose to add pg_file_sync() function into contrib/adminpack.\n>> This function fsyncs the specified file or directory named by its argument.\n>> IMO this is useful, for example, when you want to fsync the file that\n>> pg_file_write() writes out or that COPY TO exports the data into,\n>> for durability. Thought?\n\n+1 too. I have a thought, but maybe it is just a nitpicking.\n\npg_file_sync() calls fsync_fname() function from fd.c. And I think it \nmight bring problems because fsync_fname() uses data_sync_elevel() to \nget elevel. As a result if data_sync_retry GUC is false fsync_fname() \nmight raise PANIC message.\n\nIt isn't case if a file doesn't exist. But if there are no permissions \non the file:\n\nPANIC: could not open file \"testfile\": Permissions denied\nserver closed the connection unexpectedly\n\nIt could be fixed by implementing a function like \npg_file_sync_internal() or by making the function fsync_fname_ext() \nexternal.\n\n> +1, that seems like a useful wrapper. Looking at existing functions,\n> I see that there's a pg_file_rename() in adminpack, but it doesn't use\n> durable_rename nor does it try to perform any fsync. Same for\n> pg_file_unlink vs. durable_unlink. It's probably worth fixing that at\n> the same time?\n\nI think it might be a different patch.\n\n-- \nArthur\n\n\n",
"msg_date": "Mon, 6 Jan 2020 15:20:13 +0900",
"msg_from": "Arthur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
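The failure mode Arthur describes comes from a path-based helper (fsync_fname) deciding the error severity itself, where an EACCES can escalate to PANIC. The open-the-file-first approach that Michael suggests in the next message turns such failures into ordinary, catchable errors. A rough illustration in Python rather than the backend's C (`sync_path` is an invented name, not adminpack's actual implementation):

```python
import os
import tempfile

def sync_path(path: str) -> bool:
    """Fsync a file (or, on Linux, a directory) by opening it first,
    so a missing file or a permission problem surfaces as a normal
    OSError we can report, instead of failing inside a path-based
    helper that chooses its own (possibly PANIC-level) severity."""
    try:
        # O_RDONLY is enough for fsync, and works for directories on Linux
        fd = os.open(path, os.O_RDONLY)
    except OSError as e:
        # e.g. FileNotFoundError, PermissionError
        print(f"could not open {path!r}: {e.strerror}")
        return False
    try:
        os.fsync(fd)
        return True
    finally:
        os.close(fd)

# Demo: sync a freshly written temporary file, then try a missing one
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    name = f.name
assert sync_path(name)
os.unlink(name)
assert not sync_path(name)  # now missing: reported, no crash
```

This mirrors the trade-off in the thread: the caller, not the fsync helper, decides how severe an open failure is.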
{
"msg_contents": "On Mon, Jan 06, 2020 at 03:20:13PM +0900, Arthur Zakirov wrote:\n> It isn't case if a file doesn't exist. But if there are no permissions on\n> the file:\n> \n> PANIC: could not open file \"testfile\": Permissions denied\n> server closed the connection unexpectedly\n> \n> It could be fixed by implementing a function like pg_file_sync_internal() or\n> by making the function fsync_fname_ext() external.\n\nThe patch uses stat() to make sure that the file exists and has no\nissues. Though it could be a problem with any kind of TOCTOU-like\nissues (looking at you, Windows, for ENOPERM), so I agree that it\nwould make more sense to use pg_fsync() here with a fd opened first.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 15:42:39 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 3:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jan 06, 2020 at 03:20:13PM +0900, Arthur Zakirov wrote:\n> > It isn't case if a file doesn't exist. But if there are no permissions on\n> > the file:\n> >\n> > PANIC: could not open file \"testfile\": Permissions denied\n> > server closed the connection unexpectedly\n> >\n> > It could be fixed by implementing a function like pg_file_sync_internal() or\n> > by making the function fsync_fname_ext() external.\n>\n> The patch uses stat() to make sure that the file exists and has no\n> issues. Though it could be a problem with any kind of TOCTOU-like\n> issues (looking at you, Windows, for ENOPERM), so I agree that it\n> would make more sense to use pg_fsync() here with a fd opened first.\n\nI agree that it's not good for pg_file_sync() to cause a PANIC.\nI updated the patch so that pg_file_sync() uses fsync_fname_ext()\ninstead of fsync_fname() as Arthur suggested.\n\nIt's one of ideas to make pg_file_sync() open the file and directly call\npg_fsync(). But fsync_fname_ext() has already such code and\nI'd like to avoid the code duplication.\n\nAttached is the updated version of the patch.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Thu, 9 Jan 2020 15:31:47 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 11:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Wed, Dec 25, 2019 at 2:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I'd like to propose to add pg_file_sync() function into contrib/adminpack.\n> > This function fsyncs the specified file or directory named by its argument.\n> > IMO this is useful, for example, when you want to fsync the file that\n> > pg_file_write() writes out or that COPY TO exports the data into,\n> > for durability. Thought?\n>\n> +1, that seems like a useful wrapper. Looking at existing functions,\n> I see that there's a pg_file_rename() in adminpack, but it doesn't use\n> durable_rename nor does it try to perform any fsync. Same for\n> pg_file_unlink vs. durable_unlink. It's probably worth fixing that at\n> the same time?\n\nI don't think that's a bug. I'm not sure if every users of those functions\nneed durable rename and unlink at the expense of performance.\nSo IMO it's better to add new argument like \"durable\" to those functions\nand durable_rename or _unlink is used only if it's true.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 9 Jan 2020 15:43:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 7:43 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Wed, Dec 25, 2019 at 11:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > On Wed, Dec 25, 2019 at 2:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > >\n> > > Hi,\n> > >\n> > > I'd like to propose to add pg_file_sync() function into contrib/adminpack.\n> > > This function fsyncs the specified file or directory named by its argument.\n> > > IMO this is useful, for example, when you want to fsync the file that\n> > > pg_file_write() writes out or that COPY TO exports the data into,\n> > > for durability. Thought?\n> >\n> > +1, that seems like a useful wrapper. Looking at existing functions,\n> > I see that there's a pg_file_rename() in adminpack, but it doesn't use\n> > durable_rename nor does it try to perform any fsync. Same for\n> > pg_file_unlink vs. durable_unlink. It's probably worth fixing that at\n> > the same time?\n>\n> I don't think that's a bug. I'm not sure if every users of those functions\n> need durable rename and unlink at the expense of performance.\n> So IMO it's better to add new argument like \"durable\" to those functions\n> and durable_rename or _unlink is used only if it's true.\n\nIt's probably a POLA violation. I'm pretty sure that most people\nusing those functions would expect that a successful call to\npg_file_unlink() mean that the file cannot raise from the dead even\nwith certain unlikely circumstances, at least I'd expect so. If\nperformance is a problem here, I'd rather have a new wrapper with a\nsync flag that defaults to true so it's possible to disable it if\nneeded instead of calling a different function. That being said, I\nagree with Arthur, it should be handled in a different patch.\n\n\n",
"msg_date": "Thu, 9 Jan 2020 14:34:10 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
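The durable_rename() that the messages above contrast with pg_file_rename() does more than rename(): it flushes the file's data first and then fsyncs the containing directory, because a bare rename() may leave the new directory entry only in the kernel's cache. A simplified Python sketch of that pattern (PostgreSQL's real durable_rename lives in fd.c and also handles a pre-existing target; this reimplementation is only illustrative):

```python
import os
import tempfile

def durable_rename(old: str, new: str) -> None:
    """Rename old -> new so the change survives a crash: flush the
    file's contents, rename, then fsync the directory so the new
    directory entry itself reaches disk."""
    fd = os.open(old, os.O_RDONLY)
    try:
        os.fsync(fd)  # make sure the data is on disk before renaming
    finally:
        os.close(fd)
    os.rename(old, new)
    # The step a plain rename() (and pg_file_rename) skips:
    # persist the directory entry as well.
    dirfd = os.open(os.path.dirname(os.path.abspath(new)), os.O_RDONLY)
    try:
        os.fsync(dirfd)
    finally:
        os.close(dirfd)

# Demo
d = tempfile.mkdtemp()
src = os.path.join(d, "old.conf")
dst = os.path.join(d, "new.conf")
with open(src, "wb") as f:
    f.write(b"payload")
durable_rename(src, dst)
assert os.path.exists(dst) and not os.path.exists(src)
```

The extra fsync calls are exactly the performance cost the thread weighs against default durability.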
{
"msg_contents": "On Thu, Jan 9, 2020 at 7:31 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Mon, Jan 6, 2020 at 3:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> >\n> > On Mon, Jan 06, 2020 at 03:20:13PM +0900, Arthur Zakirov wrote:\n> > > It isn't case if a file doesn't exist. But if there are no permissions on\n> > > the file:\n> > >\n> > > PANIC: could not open file \"testfile\": Permissions denied\n> > > server closed the connection unexpectedly\n> > >\n> > > It could be fixed by implementing a function like pg_file_sync_internal() or\n> > > by making the function fsync_fname_ext() external.\n> >\n> > The patch uses stat() to make sure that the file exists and has no\n> > issues. Though it could be a problem with any kind of TOCTOU-like\n> > issues (looking at you, Windows, for ENOPERM), so I agree that it\n> > would make more sense to use pg_fsync() here with a fd opened first.\n>\n> I agree that it's not good for pg_file_sync() to cause a PANIC.\n> I updated the patch so that pg_file_sync() uses fsync_fname_ext()\n> instead of fsync_fname() as Arthur suggested.\n>\n> It's one of ideas to make pg_file_sync() open the file and directly call\n> pg_fsync(). But fsync_fname_ext() has already such code and\n> I'd like to avoid the code duplication.\n\nThis looks good to me.\n\n> Attached is the updated version of the patch.\n\n+ <row>\n+ <entry><function>pg_catalog.pg_file_sync(filename text)</function></entry>\n+ <entry><type>boolean</type></entry>\n+ <entry>\n+ Sync a file or directory\n+ </entry>\n+ </row>\n\n\"Flush to disk\" looks better than \"sync\" here.\n\n+/* ------------------------------------\n+ * pg_file_sync\n+ *\n+ * We REVOKE EXECUTE on the function from PUBLIC.\n+ * Users can then grant access to it based on their policies.\n+ */\n\nI think that pg_write_server_files should be allowed to call that\nfunction by default.\n\n\n",
"msg_date": "Thu, 9 Jan 2020 14:39:42 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "Greetings,\n\n* Julien Rouhaud (rjuju123@gmail.com) wrote:\n> On Thu, Jan 9, 2020 at 7:43 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > On Wed, Dec 25, 2019 at 11:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > On Wed, Dec 25, 2019 at 2:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > > I'd like to propose to add pg_file_sync() function into contrib/adminpack.\n> > > > This function fsyncs the specified file or directory named by its argument.\n> > > > IMO this is useful, for example, when you want to fsync the file that\n> > > > pg_file_write() writes out or that COPY TO exports the data into,\n> > > > for durability. Thought?\n> > >\n> > > +1, that seems like a useful wrapper. Looking at existing functions,\n> > > I see that there's a pg_file_rename() in adminpack, but it doesn't use\n> > > durable_rename nor does it try to perform any fsync. Same for\n> > > pg_file_unlink vs. durable_unlink. It's probably worth fixing that at\n> > > the same time?\n> >\n> > I don't think that's a bug. I'm not sure if every users of those functions\n> > need durable rename and unlink at the expense of performance.\n> > So IMO it's better to add new argument like \"durable\" to those functions\n> > and durable_rename or _unlink is used only if it's true.\n> \n> It's probably a POLA violation. I'm pretty sure that most people\n> using those functions would expect that a successful call to\n> pg_file_unlink() mean that the file cannot raise from the dead even\n> with certain unlikely circumstances, at least I'd expect so. If\n> performance is a problem here, I'd rather have a new wrapper with a\n> sync flag that defaults to true so it's possible to disable it if\n> needed instead of calling a different function. That being said, I\n> agree with Arthur, it should be handled in a different patch.\n\nWhy would you expect that when it isn't the case for the filesystem\nitself..? 
I agree with Fujii on this- you should have to explicitly ask\nfor us to do more than the equivilant filesystem-level operation. We\nshouldn't be forcing that on you.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Jan 2020 12:16:07 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 6:16 PM Stephen Frost <sfrost@snowman.net> wrote:\n>\n> Greetings,\n>\n> * Julien Rouhaud (rjuju123@gmail.com) wrote:\n> > On Thu, Jan 9, 2020 at 7:43 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > On Wed, Dec 25, 2019 at 11:11 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > > On Wed, Dec 25, 2019 at 2:01 PM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > > > > I'd like to propose to add pg_file_sync() function into contrib/adminpack.\n> > > > > This function fsyncs the specified file or directory named by its argument.\n> > > > > IMO this is useful, for example, when you want to fsync the file that\n> > > > > pg_file_write() writes out or that COPY TO exports the data into,\n> > > > > for durability. Thought?\n> > > >\n> > > > +1, that seems like a useful wrapper. Looking at existing functions,\n> > > > I see that there's a pg_file_rename() in adminpack, but it doesn't use\n> > > > durable_rename nor does it try to perform any fsync. Same for\n> > > > pg_file_unlink vs. durable_unlink. It's probably worth fixing that at\n> > > > the same time?\n> > >\n> > > I don't think that's a bug. I'm not sure if every users of those functions\n> > > need durable rename and unlink at the expense of performance.\n> > > So IMO it's better to add new argument like \"durable\" to those functions\n> > > and durable_rename or _unlink is used only if it's true.\n> >\n> > It's probably a POLA violation. I'm pretty sure that most people\n> > using those functions would expect that a successful call to\n> > pg_file_unlink() mean that the file cannot raise from the dead even\n> > with certain unlikely circumstances, at least I'd expect so. If\n> > performance is a problem here, I'd rather have a new wrapper with a\n> > sync flag that defaults to true so it's possible to disable it if\n> > needed instead of calling a different function. 
That being said, I\n> > agree with Arthur, it should be handled in a different patch.\n>\n> Why would you expect that when it isn't the case for the filesystem\n> itself..?\n\nJust a usual habit with durable property.\n\n> I agree with Fujii on this- you should have to explicitly ask\n> for us to do more than the equivilant filesystem-level operation. We\n> shouldn't be forcing that on you.\n\nI just checked other somehow related cases and saw that for instance\nCOPY TO doesn't flush data either, so it's indeed the expected\nbehavior.\n\n\n",
"msg_date": "Thu, 9 Jan 2020 18:31:27 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Thu, Jan 9, 2020 at 6:16 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> Why would you expect that when it isn't the case for the filesystem\n>> itself..?\n\n> Just a usual habit with durable property.\n\nI tend to agree with Stephen on this, mainly because the point of\nthese adminpack functions is to expose filesystem access. If these\nfunctions were more \"database-y\" and less \"filesystem-y\", I'd agree\nwith trying to impose database-like consistency requirements.\n\nWe don't have to expose every wart of the filesystem semantics\n--- for example, it would be reasonable to make pg_file_sync()\nDo The Right Thing when applied to a directory, even if the\nparticular platform we're on doesn't behave sanely for that.\nBut having fsync separated from write is a pretty fundamental part\nof most filesystems' semantics, so we ought not try to hide that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 12:51:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 10:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> On Thu, Jan 9, 2020 at 7:31 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> >\n> > On Mon, Jan 6, 2020 at 3:42 PM Michael Paquier <michael@paquier.xyz> wrote:\n> > >\n> > > On Mon, Jan 06, 2020 at 03:20:13PM +0900, Arthur Zakirov wrote:\n> > > > It isn't case if a file doesn't exist. But if there are no permissions on\n> > > > the file:\n> > > >\n> > > > PANIC: could not open file \"testfile\": Permissions denied\n> > > > server closed the connection unexpectedly\n> > > >\n> > > > It could be fixed by implementing a function like pg_file_sync_internal() or\n> > > > by making the function fsync_fname_ext() external.\n> > >\n> > > The patch uses stat() to make sure that the file exists and has no\n> > > issues. Though it could be a problem with any kind of TOCTOU-like\n> > > issues (looking at you, Windows, for ENOPERM), so I agree that it\n> > > would make more sense to use pg_fsync() here with a fd opened first.\n> >\n> > I agree that it's not good for pg_file_sync() to cause a PANIC.\n> > I updated the patch so that pg_file_sync() uses fsync_fname_ext()\n> > instead of fsync_fname() as Arthur suggested.\n> >\n> > It's one of ideas to make pg_file_sync() open the file and directly call\n> > pg_fsync(). But fsync_fname_ext() has already such code and\n> > I'd like to avoid the code duplication.\n>\n> This looks good to me.\n>\n> > Attached is the updated version of the patch.\n>\n> + <row>\n> + <entry><function>pg_catalog.pg_file_sync(filename text)</function></entry>\n> + <entry><type>boolean</type></entry>\n> + <entry>\n> + Sync a file or directory\n> + </entry>\n> + </row>\n>\n> \"Flush to disk\" looks better than \"sync\" here.\n\nI changed the doc that way. 
Thanks for the review!\n\n> I think that pg_write_server_files should be allowed to call that\n> function by default.\n\nBut pg_write_server_files users are not allowed to execute\nother functions like pg_file_write() by default. So doing that\nchange only for pg_file_sync() looks strange to me.\n\nRegards,\n\n-- \nFujii Masao",
"msg_date": "Fri, 10 Jan 2020 18:50:12 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 06:50:12PM +0900, Fujii Masao wrote:\n> I changed the doc that way. Thanks for the review!\n\n+ <para>\n+ <function>pg_file_sync</function> fsyncs the specified file or directory\n+ named by <parameter>filename</parameter>. Returns true on success,\n+ an error is thrown otherwise (e.g., the specified file is not present).\n+ </para>\nWhat's the point of having a function that returns a boolean if it\njust returns true all the time? Wouldn't it be better to have a set\nof semantics closer to the unlink() part, where the call of stat()\nfails with an ERROR for (errno != ENOENT) and the fsync call returns\nfalse with a WARNING?\n\n+SELECT pg_file_sync('global'); -- sync directory\n+ pg_file_sync\n+--------------\n+ t\n+(1 row)\ninstallcheck deployments may not like that.\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 20:16:20 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 10:50 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>\n> On Thu, Jan 9, 2020 at 10:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > I think that pg_write_server_files should be allowed to call that\n> > function by default.\n>\n> But pg_write_server_files users are not allowed to execute\n> other functions like pg_file_write() by default. So doing that\n> change only for pg_file_sync() looks strange to me.\n\nAh indeed. I'm wondering if that's an oversight of the original\ndefault role patch or voluntary.\n\n\n",
"msg_date": "Fri, 10 Jan 2020 12:22:53 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Fri, Jan 10, 2020 at 8:16 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 10, 2020 at 06:50:12PM +0900, Fujii Masao wrote:\n> > I changed the doc that way. Thanks for the review!\n\nThanks for the review!\n\n> + <para>\n> + <function>pg_file_sync</function> fsyncs the specified file or directory\n> + named by <parameter>filename</parameter>. Returns true on success,\n> + an error is thrown otherwise (e.g., the specified file is not present).\n> + </para>\n> What's the point of having a function that returns a boolean if it\n> just returns true all the time? Wouldn't it be better to have a set\n> of semantics closer to the unlink() part, where the call of stat()\n> fails with an ERROR for (errno != ENOENT) and the fsync call returns\n> false with a WARNING?\n\nI'm not sure if returning false with WARNING only in some error cases\nis really good idea or not. At least for me, it's more intuitive to\nreturn true on success and emit an ERROR otherwise. I'd like to hear\nmore opinions about this. Also if returning true on success is rather\nconfusing, we can change its return type to void.\n\n> +SELECT pg_file_sync('global'); -- sync directory\n> + pg_file_sync\n> +--------------\n> + t\n> +(1 row)\n> installcheck deployments may not like that.\n\nCould you elaborate why? But if it's not good to sync the existing directory\nin the regression test, we may need to give up testing the sync of directory.\nAnother idea is to add another function like pg_mkdir() into adminpack\nand use the directory that we newly created by using that function,\nfor the test. Or better idea?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Sat, 11 Jan 2020 02:12:15 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "Hello,\n\nOn Sat, Jan 11, 2020 at 2:12 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > + <para>\n> > + <function>pg_file_sync</function> fsyncs the specified file or directory\n> > + named by <parameter>filename</parameter>. Returns true on success,\n> > + an error is thrown otherwise (e.g., the specified file is not present).\n> > + </para>\n> > What's the point of having a function that returns a boolean if it\n> > just returns true all the time? Wouldn't it be better to have a set\n> > of semantics closer to the unlink() part, where the call of stat()\n> > fails with an ERROR for (errno != ENOENT) and the fsync call returns\n> > false with a WARNING?\n>\n> I'm not sure if returning false with WARNING only in some error cases\n> is really good idea or not. At least for me, it's more intuitive to\n> return true on success and emit an ERROR otherwise. I'd like to hear\n> more opinions about this. Also if returning true on success is rather\n> confusing, we can change its return type to void.\n\nI think it would be more consistent to pg_file_unlink().\n\nOther functions throw an ERROR and return a number or set of records\nexcept pg_file_rename(), which in some cases throws a WARNING and\nreturns a boolean result.\n\n-- \nArthur\n\n\n",
"msg_date": "Sat, 11 Jan 2020 19:04:50 +0900",
"msg_from": "Artur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 02:12:15AM +0900, Fujii Masao wrote:\n> I'm not sure if returning false with WARNING only in some error cases\n> is really good idea or not. At least for me, it's more intuitive to\n> return true on success and emit an ERROR otherwise. I'd like to hear\n> more opinions about this. Also if returning true on success is rather\n> confusing, we can change its return type to void.\n\nAn advantage of not issuing an ERROR if that when working on a list of\nfiles (for example a WITH RECURSIVE on the whole data directory?), you\ncan then know which files could not be synced instead of seeing one\nERROR about one file, while being unsure about the state of the\nothers.\n\n> Could you elaborate why? But if it's not good to sync the existing directory\n> in the regression test, we may need to give up testing the sync of directory.\n> Another idea is to add another function like pg_mkdir() into adminpack\n> and use the directory that we newly created by using that function,\n> for the test. Or better idea?\n\nWe should avoid potentially costly tests in any regression scenario if\nwe have a way to do so. I like your idea of having a pg_mkdir(), that\nfeels more natural to have as there is already pg_file_write().\n--\nMichael",
"msg_date": "Mon, 13 Jan 2020 22:46:00 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
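Michael's argument for WARNING-and-false semantics is that a caller iterating over many files can record which ones failed instead of aborting at the first ERROR. A sketch of that warn-and-continue loop in Python (`sync_tree` is a hypothetical helper, not something adminpack provides):

```python
import os
import tempfile

def sync_tree(root: str) -> list[str]:
    """Walk a directory tree, fsync every regular file, and collect
    the paths that failed instead of stopping at the first error --
    the behavior a non-throwing pg_file_sync() would enable."""
    failed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                fd = os.open(path, os.O_RDONLY)
            except OSError:
                failed.append(path)  # note it and keep going
                continue
            try:
                os.fsync(fd)
            except OSError:
                failed.append(path)
            finally:
                os.close(fd)
    return failed

# Demo: every file in a small fresh tree syncs cleanly
root = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(root, f"seg{i}"), "wb") as f:
        f.write(b"data")
assert sync_tree(root) == []
```

With ERROR semantics the equivalent SQL loop stops at the first bad file and leaves the state of the rest unknown, which is the drawback being discussed.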
{
"msg_contents": "On Mon, Jan 13, 2020 at 2:46 PM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Sat, Jan 11, 2020 at 02:12:15AM +0900, Fujii Masao wrote:\n> > I'm not sure if returning false with WARNING only in some error cases\n> > is really good idea or not. At least for me, it's more intuitive to\n> > return true on success and emit an ERROR otherwise. I'd like to hear\n> > more opinions about this. Also if returning true on success is rather\n> > confusing, we can change its return type to void.\n>\n> An advantage of not issuing an ERROR if that when working on a list of\n> files (for example a WITH RECURSIVE on the whole data directory?), you\n> can then know which files could not be synced instead of seeing one\n> ERROR about one file, while being unsure about the state of the\n> others.\n\nActually, can't it create a security hazard, for instance if you call\npg_file_sync() on a heap file and the calls errors out, since it's\nbypassing data_sync_retry?\n\n\n",
"msg_date": "Mon, 13 Jan 2020 15:39:32 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Mon, Jan 13, 2020 at 03:39:32PM +0100, Julien Rouhaud wrote:\n> Actually, can't it create a security hazard, for instance if you call\n> pg_file_sync() on a heap file and the calls errors out, since it's\n> bypassing data_sync_retry?\n\nAre you mistaking security with durability here? By default, the\nfunction proposed is only executable by a superuser, so that's not\nreally a security concern.. But I agree that failing to detect a\nPANIC on a fsync for a sensitive Postgres file could lead to\ncorruptions. That's why we PANIC these days. \n--\nMichael",
"msg_date": "Tue, 14 Jan 2020 15:17:56 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 7:18 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Mon, Jan 13, 2020 at 03:39:32PM +0100, Julien Rouhaud wrote:\n> > Actually, can't it create a security hazard, for instance if you call\n> > pg_file_sync() on a heap file and the calls errors out, since it's\n> > bypassing data_sync_retry?\n>\n> Are you mistaking security with durability here?\n\nYes, data durability sorry.\n\n> By default, the\n> function proposed is only executable by a superuser, so that's not\n> really a security concern.. But I agree that failing to detect a\n> PANIC on a fsync for a sensitive Postgres file could lead to\n> corruptions. That's why we PANIC these days.\n\nExactly. My concern is that some superuser may not be aware that\npg_file_sync could actually corrupt data, so there should be a big red\nwarning explaining that.\n\n\n",
"msg_date": "Tue, 14 Jan 2020 07:26:07 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "Hello,\n\n> On Sat, Jan 11, 2020 at 2:12 Fujii Masao <masao.fujii@gmail.com>:\n> I'm not sure if returning false with WARNING only in some error cases\n> is really good idea or not. At least for me, it's more intuitive to\n> return true on success and emit an ERROR otherwise. I'd like to hear\n> more opinions about this.\n\n+1.\nAs a user, I expect these adminpack functions to do similar behaviors\nto the corresponding system calls.\nSystem calls for flushing data to disk (fsync on Linux and FlushFileBuffers\non Windows) return different codes on success and failure, and when it\nfails we can get error messages. So it seems straightforward for me to\n'return true on success and emit an ERROR otherwise'.\n\n\n> > On Thu, Jan 9, 2020 at 10:39 PM Julien Rouhaud <rjuju123@gmail.com>\nwrote:\n> > >\n> > > I think that pg_write_server_files should be allowed to call that\n> > > function by default.\n> >\n> > But pg_write_server_files users are not allowed to execute\n> > other functions like pg_file_write() by default. So doing that\n> > change only for pg_file_sync() looks strange to me.\n\n> Ah indeed. I'm wondering if that's an oversight of the original\n> default role patch or voluntary.\n\nIt's not directly related to the patch, but as far as I read the\nmanual below, I expected pg_write_server_files users could execute\nadminpack functions.\n\n | Table 21.1 Default Roles\n | pg_write_server_files: Allow writing to files in any location the\ndatabase can access on the server with COPY and other file-access functions.\n\n\n--\nAtsushi Torikoshi",
"msg_date": "Wed, 15 Jan 2020 00:08:06 +0900",
"msg_from": "Atsushi Torikoshi <atorik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 4:08 PM Atsushi Torikoshi <atorik@gmail.com> wrote:\n>\n> > On Sut, Jan 11, 2020 at 2:12 Fujii Masao <masao.fujii@gmail.com>:\n> > > But pg_write_server_files users are not allowed to execute\n> > > other functions like pg_file_write() by default. So doing that\n> > > change only for pg_file_sync() looks strange to me.\n>\n> > Ah indeed. I'm wondering if that's an oversight of the original\n> > default role patch or voluntary.\n>\n> It's not directly related to the patch, but as far as I read the\n> manual below, I expected pg_write_server_files users could execute\n> adminpack functions.\n>\n> | Table 21.1 Default Roles\n> | pg_write_server_files: Allow writing to files in any location the database can access on the server with COPY and other file-access functions.\n\nActually, pg_write_server_files has enough privileges to execute those\nfunctions anywhere on the FS as far as C code is concerned, provided\nthat the user running postgres daemon is allowed to (see\nconvert_and_check_filename), but won't be allowed to do so by default\nas it won't have EXECUTE privilege on the functions.\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:50:57 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "Greetings,\n\n* Julien Rouhaud (rjuju123@gmail.com) wrote:\n> On Fri, Jan 10, 2020 at 10:50 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> > On Thu, Jan 9, 2020 at 10:39 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > I think that pg_write_server_files should be allowed to call that\n> > > function by default.\n> >\n> > But pg_write_server_files users are not allowed to execute\n> > other functions like pg_file_write() by default. So doing that\n> > change only for pg_file_sync() looks strange to me.\n> \n> Ah indeed. I'm wondering if that's an oversight of the original\n> default role patch or voluntary.\n\nIt was intentional.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 14 Jan 2020 14:57:44 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 22:57, Stephen Frost <sfrost@snowman.net> wrote:\n\n> Greetings,\n>\n> * Julien Rouhaud (rjuju123@gmail.com) wrote:\n> > On Fri, Jan 10, 2020 at 10:50 AM Fujii Masao <masao.fujii@gmail.com>\n> wrote:\n> > > On Thu, Jan 9, 2020 at 10:39 PM Julien Rouhaud <rjuju123@gmail.com>\n> wrote:\n> > > > I think that pg_write_server_files should be allowed to call that\n> > > > function by default.\n> > >\n> > > But pg_write_server_files users are not allowed to execute\n> > > other functions like pg_file_write() by default. So doing that\n> > > change only for pg_file_sync() looks strange to me.\n> >\n> > Ah indeed. I'm wondering if that's an oversight of the original\n> > default role patch or voluntary.\n>\n> It was intentional.\n>\n\nok, thanks for the clarification.",
"msg_date": "Tue, 14 Jan 2020 23:12:18 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 1:49 Julien Rouhaud <rjuju123@gmail.com>:\n\n> Actually, pg_write_server_files has enough privileges to execute those\n> functions anywhere on the FS as far as C code is concerned, provided\n> that the user running postgres daemon is allowed to (see\n> convert_and_check_filename), but won't be allowed to do so by default\n> as it won't have EXECUTE privilege on the functions.\n>\n\nI see, thanks for your explanation.\n\n--\nRegards,\nAtsushi Torikoshi",
"msg_date": "Wed, 15 Jan 2020 21:59:59 +0900",
"msg_from": "Atsushi Torikoshi <atorik@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 10:08 AM Atsushi Torikoshi <atorik@gmail.com> wrote:\n> fails we can get error messages. So it seems straightforward for me to\n> 'return true on success and emit an ERROR otherwise'.\n\nI agree with reporting errors using ERROR, but it seems to me that we\nought to then make the function return 'void' rather than 'bool'.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 09:51:24 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 09:51:24AM -0500, Robert Haas wrote:\n> On Tue, Jan 14, 2020 at 10:08 AM Atsushi Torikoshi <atorik@gmail.com> wrote:\n>> fails we can get error messages. So it seems straightforward for me to\n>> 'return true on success and emit an ERROR otherwise'.\n> \n> I agree with reporting errors using ERROR, but it seems to me that we\n> ought to then make the function return 'void' rather than 'bool'.\n\nYeah, that should be either ERROR and return a void result, or issue a\nWARNING/ERROR (depending on the code path, maybe PANIC?) with a\nboolean status returned if there is a WARNING. Mixing both concepts\nwith an ERROR all the time and always a true status is just weird,\nbecause you know that if no errors are raised then the status will be\nalways true. So there is no point to have a boolean status to begin\nwith.\n--\nMichael",
"msg_date": "Fri, 17 Jan 2020 13:36:52 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On 2020/01/17 13:36, Michael Paquier wrote:\n> On Thu, Jan 16, 2020 at 09:51:24AM -0500, Robert Haas wrote:\n>> On Tue, Jan 14, 2020 at 10:08 AM Atsushi Torikoshi <atorik@gmail.com> wrote:\n>>> fails we can get error messages. So it seems straightforward for me to\n>>> 'return true on success and emit an ERROR otherwise'.\n>>\n>> I agree with reporting errors using ERROR, but it seems to me that we\n>> ought to then make the function return 'void' rather than 'bool'.\n> \n> Yeah, that should be either ERROR and return a void result, or issue a\n> WARNING/ERROR (depending on the code path, maybe PANIC?) with a\n> boolean status returned if there is a WARNING. Mixing both concepts\n> with an ERROR all the time and always a true status is just weird,\n> because you know that if no errors are raised then the status will be\n> always true. So there is no point to have a boolean status to begin\n> with.\n\nOK, so our consensus is to return void on success and throw an error\notherwise. Attached is the updated version of the patch.\n\nRegards,\n\n--\nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Fri, 17 Jan 2020 16:05:03 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "\n\nOn 2020/01/13 22:46, Michael Paquier wrote:\n> On Sat, Jan 11, 2020 at 02:12:15AM +0900, Fujii Masao wrote:\n>> I'm not sure if returning false with WARNING only in some error cases\n>> is really good idea or not. At least for me, it's more intuitive to\n>> return true on success and emit an ERROR otherwise. I'd like to hear\n>> more opinions about this. Also if returning true on success is rather\n>> confusing, we can change its return type to void.\n> \n> An advantage of not issuing an ERROR if that when working on a list of\n> files (for example a WITH RECURSIVE on the whole data directory?), you\n> can then know which files could not be synced instead of seeing one\n> ERROR about one file, while being unsure about the state of the\n> others.\n> \n>> Could you elaborate why? But if it's not good to sync the existing directory\n>> in the regression test, we may need to give up testing the sync of directory.\n>> Another idea is to add another function like pg_mkdir() into adminpack\n>> and use the directory that we newly created by using that function,\n>> for the test. Or better idea?\n> \n> We should avoid potentially costly tests in any regression scenario if\n> we have a way to do so. I like your idea of having a pg_mkdir(), that\n> feels more natural to have as there is already pg_file_write().\n\nBTW, in the latest patch that I posted upthread, I changed\nthe directory to sync for the test from \"global\" to \"pg_stat\"\nbecause pg_stat is empty while the server is running,\nand syncing it would not be so costly.\nIntroducing pg_mkdir() (maybe pg_rmdir() would be also necessary)\nis an idea, but it's better to do that as a separate patch\nif it's really necessary.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 17 Jan 2020 16:18:21 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On 2020/01/17 16:05, Fujii Masao wrote:\n> On 2020/01/17 13:36, Michael Paquier wrote:\n>> Yeah, that should be either ERROR and return a void result, or issue a\n>> WARNING/ERROR (depending on the code path, maybe PANIC?) with a\n>> boolean status returned if there is a WARNING. Mixing both concepts\n>> with an ERROR all the time and always a true status is just weird,\n>> because you know that if no errors are raised then the status will be\n>> always true. So there is no point to have a boolean status to begin\n>> with.\n> \n> OK, so our consensus is to return void on success and throw an error\n> otherwise. Attached is the updated version of the patch.\nThank you for the new version!\n\nIt is compiled and passes the tests. There is the documentation and it \nis built too without an error.\n\nIt seems that consensus about the returned type was reached and I marked \nthe patch as \"Ready for Committer\".\n\n-- \nArthur\n\n\n",
"msg_date": "Fri, 24 Jan 2020 13:28:29 +0900",
"msg_from": "Arthur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 01:28:29PM +0900, Arthur Zakirov wrote:\n> It is compiled and passes the tests. There is the documentation and it is\n> built too without an error.\n> \n> It seems that consensus about the returned type was reached and I marked the\n> patch as \"Ready for Commiter\".\n\n+ fsync_fname_ext(filename, S_ISDIR(fst.st_mode), false, ERROR);\nOne comment here: should we warn better users in the docs that a fsync\nfailure will not trigger a PANIC here? Here, fsync failure on heap\nfile => ERROR => potential data corruption.\n--\nMichael",
"msg_date": "Fri, 24 Jan 2020 14:56:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 6:56 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 24, 2020 at 01:28:29PM +0900, Arthur Zakirov wrote:\n> > It is compiled and passes the tests. There is the documentation and it is\n> > built too without an error.\n> >\n> > It seems that consensus about the returned type was reached and I marked the\n> > patch as \"Ready for Commiter\".\n>\n> + fsync_fname_ext(filename, S_ISDIR(fst.st_mode), false, ERROR);\n> One comment here: should we warn better users in the docs that a fsync\n> failule will not trigger a PANIC here? Here, fsync failure on heap\n> file => ERROR => potential data corruption.\n\nDefinitely yes.\n\n\n",
"msg_date": "Fri, 24 Jan 2020 07:31:06 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On 2020/01/24 14:56, Michael Paquier wrote:\n> On Fri, Jan 24, 2020 at 01:28:29PM +0900, Arthur Zakirov wrote:\n>> It is compiled and passes the tests. There is the documentation and it is\n>> built too without an error.\n>>\n>> It seems that consensus about the returned type was reached and I marked the\n>> patch as \"Ready for Commiter\".\n> \n> + fsync_fname_ext(filename, S_ISDIR(fst.st_mode), false, ERROR);\n> One comment here: should we warn better users in the docs that a fsync\n> failule will not trigger a PANIC here? Here, fsync failure on heap\n> file => ERROR => potential data corruption.\n\nAh, true. It is possible to add a couple of sentences noting that pg_file_sync() \ndoesn't depend on the data_sync_retry GUC and doesn't raise a PANIC even for \ndatabase files.\n\n-- \nArthur\n\n\n",
"msg_date": "Fri, 24 Jan 2020 15:38:11 +0900",
"msg_from": "Arthur Zakirov <zaartur@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On 2020/01/24 15:38, Arthur Zakirov wrote:\n> On 2020/01/24 14:56, Michael Paquier wrote:\n>> On Fri, Jan 24, 2020 at 01:28:29PM +0900, Arthur Zakirov wrote:\n>>> It is compiled and passes the tests. There is the documentation and \n>>> it is\n>>> built too without an error.\n>>>\n>>> It seems that consensus about the returned type was reached and I \n>>> marked the\n>>> patch as \"Ready for Commiter\".\n>>\n>> + fsync_fname_ext(filename, S_ISDIR(fst.st_mode), false, ERROR);\n>> One comment here: should we warn better users in the docs that a fsync\n>> failule will not trigger a PANIC here? Here, fsync failure on heap\n>> file => ERROR => potential data corruption.\n> \n> Ah, true. It is possible to add couple sentences that pg_file_sync() \n> doesn't depend on data_sync_retry GUC and doesn't raise a PANIC even for \n> database files.\n\nThanks all for the review!\n\nSo, what about the attached patch?\nIn the patch, I added the following note to the doc.\n\n--------------------\nNote that\n<xref linkend=\"guc-data-sync-retry\"/> has no effect on this function,\nand therefore a PANIC-level error will not be raised even on failure to\nflush database files.\n--------------------\n\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Fri, 24 Jan 2020 16:19:58 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "On Fri, Jan 24, 2020 at 8:20 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>\n> On 2020/01/24 15:38, Arthur Zakirov wrote:\n> > On 2020/01/24 14:56, Michael Paquier wrote:\n> >> On Fri, Jan 24, 2020 at 01:28:29PM +0900, Arthur Zakirov wrote:\n> >>> It is compiled and passes the tests. There is the documentation and\n> >>> it is\n> >>> built too without an error.\n> >>>\n> >>> It seems that consensus about the returned type was reached and I\n> >>> marked the\n> >>> patch as \"Ready for Commiter\".\n> >>\n> >> + fsync_fname_ext(filename, S_ISDIR(fst.st_mode), false, ERROR);\n> >> One comment here: should we warn better users in the docs that a fsync\n> >> failule will not trigger a PANIC here? Here, fsync failure on heap\n> >> file => ERROR => potential data corruption.\n> >\n> > Ah, true. It is possible to add couple sentences that pg_file_sync()\n> > doesn't depend on data_sync_retry GUC and doesn't raise a PANIC even for\n> > database files.\n>\n> Thanks all for the review!\n>\n> So, what about the attached patch?\n> In the patch, I added the following note to the doc.\n>\n> --------------------\n> Note that\n> <xref linkend=\"guc-data-sync-retry\"/> has no effect on this function,\n> and therefore a PANIC-level error will not be raised even on failure to\n> flush database files.\n> --------------------\n\nWe should explicitly mention that this can cause corruption. How about:\n\n--------------------\nNote that\n<xref linkend=\"guc-data-sync-retry\"/> has no effect on this function,\nand therefore a PANIC-level error will not be raised even on failure to\nflush database files. If that happens, the underlying database\nobjects may be corrupted.\n--------------------\n\n\n",
"msg_date": "Fri, 24 Jan 2020 08:56:47 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "\n\nOn 2020/01/24 16:56, Julien Rouhaud wrote:\n> On Fri, Jan 24, 2020 at 8:20 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>\n>> On 2020/01/24 15:38, Arthur Zakirov wrote:\n>>> On 2020/01/24 14:56, Michael Paquier wrote:\n>>>> On Fri, Jan 24, 2020 at 01:28:29PM +0900, Arthur Zakirov wrote:\n>>>>> It is compiled and passes the tests. There is the documentation and\n>>>>> it is\n>>>>> built too without an error.\n>>>>>\n>>>>> It seems that consensus about the returned type was reached and I\n>>>>> marked the\n>>>>> patch as \"Ready for Commiter\".\n>>>>\n>>>> + fsync_fname_ext(filename, S_ISDIR(fst.st_mode), false, ERROR);\n>>>> One comment here: should we warn better users in the docs that a fsync\n>>>> failule will not trigger a PANIC here? Here, fsync failure on heap\n>>>> file => ERROR => potential data corruption.\n>>>\n>>> Ah, true. It is possible to add couple sentences that pg_file_sync()\n>>> doesn't depend on data_sync_retry GUC and doesn't raise a PANIC even for\n>>> database files.\n>>\n>> Thanks all for the review!\n>>\n>> So, what about the attached patch?\n>> In the patch, I added the following note to the doc.\n>>\n>> --------------------\n>> Note that\n>> <xref linkend=\"guc-data-sync-retry\"/> has no effect on this function,\n>> and therefore a PANIC-level error will not be raised even on failure to\n>> flush database files.\n>> --------------------\n> \n> We should explicitly mention that this can cause corruption. How about:\n> \n> --------------------\n> Note that\n> <xref linkend=\"guc-data-sync-retry\"/> has no effect on this function,\n> and therefore a PANIC-level error will not be raised even on failure to\n> flush database files. If that happens, the underlying database\n> objects may be corrupted.\n> --------------------\n\nIMO that's overkill. If we really need such mention for pg_file_sync(),\nwe also need to add it for other functions like pg_read_file(),\npg_stat_file(), etc. 
But, again, that looks like overkill.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 24 Jan 2020 17:08:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
},
{
"msg_contents": "\n\nOn 2020/01/24 17:08, Fujii Masao wrote:\n> \n> \n> On 2020/01/24 16:56, Julien Rouhaud wrote:\n>> On Fri, Jan 24, 2020 at 8:20 AM Fujii Masao \n>> <masao.fujii@oss.nttdata.com> wrote:\n>>>\n>>> On 2020/01/24 15:38, Arthur Zakirov wrote:\n>>>> On 2020/01/24 14:56, Michael Paquier wrote:\n>>>>> On Fri, Jan 24, 2020 at 01:28:29PM +0900, Arthur Zakirov wrote:\n>>>>>> It is compiled and passes the tests. There is the documentation and\n>>>>>> it is\n>>>>>> built too without an error.\n>>>>>>\n>>>>>> It seems that consensus about the returned type was reached and I\n>>>>>> marked the\n>>>>>> patch as \"Ready for Commiter\".\n>>>>>\n>>>>> + fsync_fname_ext(filename, S_ISDIR(fst.st_mode), false, ERROR);\n>>>>> One comment here: should we warn better users in the docs that a fsync\n>>>>> failule will not trigger a PANIC here? Here, fsync failure on heap\n>>>>> file => ERROR => potential data corruption.\n>>>>\n>>>> Ah, true. It is possible to add couple sentences that pg_file_sync()\n>>>> doesn't depend on data_sync_retry GUC and doesn't raise a PANIC even \n>>>> for\n>>>> database files.\n>>>\n>>> Thanks all for the review!\n>>>\n>>> So, what about the attached patch?\n>>> In the patch, I added the following note to the doc.\n>>>\n>>> --------------------\n>>> Note that\n>>> <xref linkend=\"guc-data-sync-retry\"/> has no effect on this function,\n>>> and therefore a PANIC-level error will not be raised even on failure to\n>>> flush database files.\n>>> --------------------\n>>\n>> We should explicitly mention that this can cause corruption. How about:\n>>\n>> --------------------\n>> Note that\n>> <xref linkend=\"guc-data-sync-retry\"/> has no effect on this function,\n>> and therefore a PANIC-level error will not be raised even on failure to\n>> flush database files. If that happens, the underlying database\n>> objects may be corrupted.\n>> --------------------\n> \n> IMO that's overkill. 
If we really need such mention for pg_file_sync(),\n> we also need to add it for other functions like pg_read_file(),\n> pg_stat_file(), etc. But, again, which looks overkill.\n\nI pushed the v5 of the patch. Thanks all for reviewing the patch!\nIf the current document is not good yet, let's keep discussing that!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 24 Jan 2020 20:50:16 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: Add pg_file_sync() to adminpack"
}
] |
[
{
"msg_contents": "Hi,\n\nI want to test GSSAPI based encryption support added via commit ID\nb0b39f72b9904bcb80f97b35837ccff1578aa4b8.\n\nI have built source --with-gssapi.\nAfter installation I added the following line in pg_hba.conf\nhostgssenc all all 172.16.214.149/24 trust\nNext I ran the server and ran psql using\n./psql 'host=172.16.214.149 port=5432 dbname=postgres user=abbas\ngssencmode=require'\nand it resulted in the following error:\npsql: error: could not connect to server: GSSAPI encryption required but\nwas impossible (possibly no credential cache, no server support, or using a\nlocal socket)\n\nWhat steps should I follow if I want to test just the encryption support?\n\nIf GSSAPI based encryption support cannot be tested without GSSAPI\n(kerberos) based authentication, then what is the purpose of having trust\nas authentication method for hostgssenc connections?\n\nBest Regards\n-- \n*Abbas*\nArchitect\n\nPh: 92.334.5100153\nSkype ID: gabbasb\nwww.enterprisedb.com <http://www.enterprisedb.com/>\n\n*Follow us on Twitter*\n@EnterpriseDB\n\nVisit EnterpriseDB for tutorials, webinars, whitepapers\n<http://www.enterprisedb.com/resources-community> and more\n<http://www.enterprisedb.com/resources-community>",
"msg_date": "Wed, 25 Dec 2019 08:02:14 -0500",
"msg_from": "Abbas Butt <abbas.butt@enterprisedb.com>",
"msg_from_op": true,
"msg_subject": "How to test GSSAPI based encryption support"
}
] |
[
{
"msg_contents": "Hello,\n\nGuillaume (in Cc) recently pointed out [1] that it's currently not\npossible to retrieve the list of parallel workers for a given backend\nat the SQL level. His use case was to develop a function in plpgsql\nto sample a given query wait event, but it's not hard to imagine other\nuseful use cases for this information, for instance doing some\nanalysis on the average number of workers per parallel query, or ratio\nof parallel queries. IIUC parallel queries is for now the only user\nof lock group, so this should work just fine.\n\nI'm attaching a trivial patch to expose the group leader pid if any\nin pg_stat_activity. Quick example of usage:\n\n=# SELECT query, leader_pid,\n array_agg(pid) filter(WHERE leader_pid != pid) AS members\nFROM pg_stat_activity\nWHERE leader_pid IS NOT NULL\nGROUP BY query, leader_pid;\n query | leader_pid | members\n-------------------+------------+---------------\n select * from t1; | 28701 | {28728,28732}\n(1 row)\n\n\n[1] https://twitter.com/g_lelarge/status/1209486212190343168",
"msg_date": "Wed, 25 Dec 2019 19:03:44 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Wed, Dec 25, 2019 at 7:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n>\n> Guillaume (in Cc) recently pointed out [1] that it's currently not\n> possible to retrieve the list of parallel workers for a given backend\n> at the SQL level. His use case was to develop a function in plpgsql\n> to sample a given query wait event, but it's not hard to imagine other\n> useful use cases for this information, for instance doing some\n> analysis on the average number of workers per parallel query, or ratio\n> of parallel queries. IIUC parallel queries is for now the only user\n> of lock group, so this should work just fine.\n>\n> I'm attaching a trivial patch to expose the group leader pid if any\n> in pg_stat_activity. Quick example of usage:\n>\n> =# SELECT query, leader_pid,\n> array_agg(pid) filter(WHERE leader_pid != pid) AS members\n> FROM pg_stat_activity\n> WHERE leader_pid IS NOT NULL\n> GROUP BY query, leader_pid;\n> query | leader_pid | members\n> -------------------+------------+---------------\n> select * from t1; | 28701 | {28728,28732}\n> (1 row)\n>\n>\n> [1] https://twitter.com/g_lelarge/status/1209486212190343168\n\nAnd I just realized that I forgot to update rule.out, sorry about\nthat. v2 attached.",
"msg_date": "Wed, 25 Dec 2019 19:32:09 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "Le mer. 25 déc. 2019 à 19:30, Julien Rouhaud <rjuju123@gmail.com> a écrit :\r\n\r\n> On Wed, Dec 25, 2019 at 7:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n> >\r\n> > Guillaume (in Cc) recently pointed out [1] that it's currently not\r\n> > possible to retrieve the list of parallel workers for a given backend\r\n> > at the SQL level. His use case was to develop a function in plpgsql\r\n> > to sample a given query wait event, but it's not hard to imagine other\r\n> > useful use cases for this information, for instance doing some\r\n> > analysis on the average number of workers per parallel query, or ratio\r\n> > of parallel queries. IIUC parallel queries is for now the only user\r\n> > of lock group, so this should work just fine.\r\n> >\r\n> > I'm attaching a trivial patch to expose the group leader pid if any\r\n> > in pg_stat_activity. Quick example of usage:\r\n> >\r\n> > =# SELECT query, leader_pid,\r\n> > array_agg(pid) filter(WHERE leader_pid != pid) AS members\r\n> > FROM pg_stat_activity\r\n> > WHERE leader_pid IS NOT NULL\r\n> > GROUP BY query, leader_pid;\r\n> > query | leader_pid | members\r\n> > -------------------+------------+---------------\r\n> > select * from t1; | 28701 | {28728,28732}\r\n> > (1 row)\r\n> >\r\n> >\r\n> > [1] https://twitter.com/g_lelarge/status/1209486212190343168\r\n>\r\n> And I just realized that I forgot to update rule.out, sorry about\r\n> that. 
v2 attached.\r\n>\r\n\r\nSo I tried your patch this morning, and it works really well.\r\n\r\nOn a SELECT count(*), I got this:\r\n\r\nSELECT leader_pid, pid, wait_event_type, wait_event, state, backend_type\r\nFROM pg_stat_activity WHERE pid=111439 or leader_pid=111439;\r\n\r\n┌────────────┬────────┬─────────────────┬──────────────┬────────┬─────────────────┐\r\n│ leader_pid │ pid │ wait_event_type │ wait_event │ state │\r\n backend_type │\r\n├────────────┼────────┼─────────────────┼──────────────┼────────┼─────────────────┤\r\n│ 111439 │ 111439 │ LWLock │ WALWriteLock │ active │ client\r\nbackend │\r\n│ 111439 │ 116887 │ LWLock │ WALWriteLock │ active │ parallel\r\nworker │\r\n│ 111439 │ 116888 │ IO │ WALSync │ active │ parallel\r\nworker │\r\n└────────────┴────────┴─────────────────┴──────────────┴────────┴─────────────────┘\r\n(3 rows)\r\n\r\nand this from a CREATE INDEX:\r\n\r\n┌────────────┬────────┬─────────────────┬────────────┬────────┬─────────────────┐\r\n│ leader_pid │ pid │ wait_event_type │ wait_event │ state │\r\n backend_type │\r\n├────────────┼────────┼─────────────────┼────────────┼────────┼─────────────────┤\r\n│ 111439 │ 111439 │ │ │ active │ client\r\nbackend │\r\n│ 111439 │ 118775 │ │ │ active │ parallel\r\nworker │\r\n└────────────┴────────┴─────────────────┴────────────┴────────┴─────────────────┘\r\n(2 rows)\r\n\r\nAnyway, it applies cleanly, it compiles, and it works. Documentation is\r\navailable. So it looks to me it's good to go :)\r\n\r\n\r\n-- \r\nGuillaume.",
"msg_date": "Thu, 26 Dec 2019 09:08:08 +0100",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 9:08 AM Guillaume Lelarge\r\n<guillaume@lelarge.info> wrote:\r\n>\r\n> Le mer. 25 déc. 2019 à 19:30, Julien Rouhaud <rjuju123@gmail.com> a écrit :\r\n>>\r\n>> On Wed, Dec 25, 2019 at 7:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n>> >\r\n>> > Guillaume (in Cc) recently pointed out [1] that it's currently not\r\n>> > possible to retrieve the list of parallel workers for a given backend\r\n>> > at the SQL level. His use case was to develop a function in plpgsql\r\n>> > to sample a given query wait event, but it's not hard to imagine other\r\n>> > useful use cases for this information, for instance doing some\r\n>> > analysis on the average number of workers per parallel query, or ratio\r\n>> > of parallel queries. IIUC parallel queries is for now the only user\r\n>> > of lock group, so this should work just fine.\r\n>> >\r\n>> > I'm attaching a trivial patch to expose the group leader pid if any\r\n>> > in pg_stat_activity. Quick example of usage:\r\n>> >\r\n>> > =# SELECT query, leader_pid,\r\n>> > array_agg(pid) filter(WHERE leader_pid != pid) AS members\r\n>> > FROM pg_stat_activity\r\n>> > WHERE leader_pid IS NOT NULL\r\n>> > GROUP BY query, leader_pid;\r\n>> > query | leader_pid | members\r\n>> > -------------------+------------+---------------\r\n>> > select * from t1; | 28701 | {28728,28732}\r\n>> > (1 row)\r\n>> >\r\n>> >\r\n>> > [1] https://twitter.com/g_lelarge/status/1209486212190343168\r\n>>\r\n>> And I just realized that I forgot to update rule.out, sorry about\r\n>> that. 
v2 attached.\r\n>\r\n>\r\n> So I tried your patch this morning, and it works really well.\r\n>\r\n> On a SELECT count(*), I got this:\r\n>\r\n> SELECT leader_pid, pid, wait_event_type, wait_event, state, backend_type FROM pg_stat_activity WHERE pid=111439 or leader_pid=111439;\r\n>\r\n> ┌────────────┬────────┬─────────────────┬──────────────┬────────┬─────────────────┐\r\n> │ leader_pid │ pid │ wait_event_type │ wait_event │ state │ backend_type │\r\n> ├────────────┼────────┼─────────────────┼──────────────┼────────┼─────────────────┤\r\n> │ 111439 │ 111439 │ LWLock │ WALWriteLock │ active │ client backend │\r\n> │ 111439 │ 116887 │ LWLock │ WALWriteLock │ active │ parallel worker │\r\n> │ 111439 │ 116888 │ IO │ WALSync │ active │ parallel worker │\r\n> └────────────┴────────┴─────────────────┴──────────────┴────────┴─────────────────┘\r\n> (3 rows)\r\n>\r\n> and this from a CREATE INDEX:\r\n>\r\n> ┌────────────┬────────┬─────────────────┬────────────┬────────┬─────────────────┐\r\n> │ leader_pid │ pid │ wait_event_type │ wait_event │ state │ backend_type │\r\n> ├────────────┼────────┼─────────────────┼────────────┼────────┼─────────────────┤\r\n> │ 111439 │ 111439 │ │ │ active │ client backend │\r\n> │ 111439 │ 118775 │ │ │ active │ parallel worker │\r\n> └────────────┴────────┴─────────────────┴────────────┴────────┴─────────────────┘\r\n> (2 rows)\r\n>\r\n> Anyway, it applies cleanly, it compiles, and it works. Documentation is available. So it looks to me it's good to go :)\r\n\r\nThanks for the review Guillaume. Double checking the doc, I see that\r\nI made a copy/pasto mistake in the new field name. Attached v3 should\r\nbe all good.",
"msg_date": "Thu, 26 Dec 2019 09:51:23 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "Le jeu. 26 déc. 2019 à 09:49, Julien Rouhaud <rjuju123@gmail.com> a écrit :\r\n\r\n> On Thu, Dec 26, 2019 at 9:08 AM Guillaume Lelarge\r\n> <guillaume@lelarge.info> wrote:\r\n> >\r\n> > Le mer. 25 déc. 2019 à 19:30, Julien Rouhaud <rjuju123@gmail.com> a\r\n> écrit :\r\n> >>\r\n> >> On Wed, Dec 25, 2019 at 7:03 PM Julien Rouhaud <rjuju123@gmail.com>\r\n> wrote:\r\n> >> >\r\n> >> > Guillaume (in Cc) recently pointed out [1] that it's currently not\r\n> >> > possible to retrieve the list of parallel workers for a given backend\r\n> >> > at the SQL level. His use case was to develop a function in plpgsql\r\n> >> > to sample a given query wait event, but it's not hard to imagine other\r\n> >> > useful use cases for this information, for instance doing some\r\n> >> > analysis on the average number of workers per parallel query, or ratio\r\n> >> > of parallel queries. IIUC parallel queries is for now the only user\r\n> >> > of lock group, so this should work just fine.\r\n> >> >\r\n> >> > I'm attaching a trivial patch to expose the group leader pid if any\r\n> >> > in pg_stat_activity. Quick example of usage:\r\n> >> >\r\n> >> > =# SELECT query, leader_pid,\r\n> >> > array_agg(pid) filter(WHERE leader_pid != pid) AS members\r\n> >> > FROM pg_stat_activity\r\n> >> > WHERE leader_pid IS NOT NULL\r\n> >> > GROUP BY query, leader_pid;\r\n> >> > query | leader_pid | members\r\n> >> > -------------------+------------+---------------\r\n> >> > select * from t1; | 28701 | {28728,28732}\r\n> >> > (1 row)\r\n> >> >\r\n> >> >\r\n> >> > [1] https://twitter.com/g_lelarge/status/1209486212190343168\r\n> >>\r\n> >> And I just realized that I forgot to update rule.out, sorry about\r\n> >> that. 
v2 attached.\r\n> >\r\n> >\r\n> > So I tried your patch this morning, and it works really well.\r\n> >\r\n> > On a SELECT count(*), I got this:\r\n> >\r\n> > SELECT leader_pid, pid, wait_event_type, wait_event, state, backend_type\r\n> FROM pg_stat_activity WHERE pid=111439 or leader_pid=111439;\r\n> >\r\n> >\r\n> ┌────────────┬────────┬─────────────────┬──────────────┬────────┬─────────────────┐\r\n> > │ leader_pid │ pid │ wait_event_type │ wait_event │ state │\r\n> backend_type │\r\n> >\r\n> ├────────────┼────────┼─────────────────┼──────────────┼────────┼─────────────────┤\r\n> > │ 111439 │ 111439 │ LWLock │ WALWriteLock │ active │ client\r\n> backend │\r\n> > │ 111439 │ 116887 │ LWLock │ WALWriteLock │ active │\r\n> parallel worker │\r\n> > │ 111439 │ 116888 │ IO │ WALSync │ active │\r\n> parallel worker │\r\n> >\r\n> └────────────┴────────┴─────────────────┴──────────────┴────────┴─────────────────┘\r\n> > (3 rows)\r\n> >\r\n> > and this from a CREATE INDEX:\r\n> >\r\n> >\r\n> ┌────────────┬────────┬─────────────────┬────────────┬────────┬─────────────────┐\r\n> > │ leader_pid │ pid │ wait_event_type │ wait_event │ state │\r\n> backend_type │\r\n> >\r\n> ├────────────┼────────┼─────────────────┼────────────┼────────┼─────────────────┤\r\n> > │ 111439 │ 111439 │ │ │ active │ client\r\n> backend │\r\n> > │ 111439 │ 118775 │ │ │ active │ parallel\r\n> worker │\r\n> >\r\n> └────────────┴────────┴─────────────────┴────────────┴────────┴─────────────────┘\r\n> > (2 rows)\r\n> >\r\n> > Anyway, it applies cleanly, it compiles, and it works. Documentation is\r\n> available. So it looks to me it's good to go :)\r\n>\r\n> Thanks for the review Guillaume. Double checking the doc, I see that\r\n> I made a copy/pasto mistake in the new field name. Attached v3 should\r\n> be all good.\r\n>\r\n\r\nFeeling bad I missed this. But, yeah, it's much better with the right\r\ncolumn's name.\r\n\r\nFor me, it's looking good to be ready for commiter. 
Should I set it this\r\nway in the Commit Fest app?\r\n\r\n\r\n-- \r\nGuillaume.",
"msg_date": "Thu, 26 Dec 2019 10:19:53 +0100",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 10:20 AM Guillaume Lelarge\r\n<guillaume@lelarge.info> wrote:\r\n>\r\n> Le jeu. 26 déc. 2019 à 09:49, Julien Rouhaud <rjuju123@gmail.com> a écrit :\r\n>>\r\n>> On Thu, Dec 26, 2019 at 9:08 AM Guillaume Lelarge\r\n>> <guillaume@lelarge.info> wrote:\r\n>> >\r\n>> > Le mer. 25 déc. 2019 à 19:30, Julien Rouhaud <rjuju123@gmail.com> a écrit :\r\n>> >>\r\n>> >> On Wed, Dec 25, 2019 at 7:03 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\r\n>> >> >\r\n>> >> > Guillaume (in Cc) recently pointed out [1] that it's currently not\r\n>> >> > possible to retrieve the list of parallel workers for a given backend\r\n>> >> > at the SQL level. His use case was to develop a function in plpgsql\r\n>> >> > to sample a given query wait event, but it's not hard to imagine other\r\n>> >> > useful use cases for this information, for instance doing some\r\n>> >> > analysis on the average number of workers per parallel query, or ratio\r\n>> >> > of parallel queries. IIUC parallel queries is for now the only user\r\n>> >> > of lock group, so this should work just fine.\r\n>> >> >\r\n>> >> > I'm attaching a trivial patch to expose the group leader pid if any\r\n>> >> > in pg_stat_activity. Quick example of usage:\r\n>> >> >\r\n>> >> > =# SELECT query, leader_pid,\r\n>> >> > array_agg(pid) filter(WHERE leader_pid != pid) AS members\r\n>> >> > FROM pg_stat_activity\r\n>> >> > WHERE leader_pid IS NOT NULL\r\n>> >> > GROUP BY query, leader_pid;\r\n>> >> > query | leader_pid | members\r\n>> >> > -------------------+------------+---------------\r\n>> >> > select * from t1; | 28701 | {28728,28732}\r\n>> >> > (1 row)\r\n>> >> >\r\n>> >> >\r\n>> >> > [1] https://twitter.com/g_lelarge/status/1209486212190343168\r\n>> >>\r\n>> >> And I just realized that I forgot to update rule.out, sorry about\r\n>> >> that. 
v2 attached.\r\n>> >\r\n>> >\r\n>> > So I tried your patch this morning, and it works really well.\r\n>> >\r\n>> > On a SELECT count(*), I got this:\r\n>> >\r\n>> > SELECT leader_pid, pid, wait_event_type, wait_event, state, backend_type FROM pg_stat_activity WHERE pid=111439 or leader_pid=111439;\r\n>> >\r\n>> > ┌────────────┬────────┬─────────────────┬──────────────┬────────┬─────────────────┐\r\n>> > │ leader_pid │ pid │ wait_event_type │ wait_event │ state │ backend_type │\r\n>> > ├────────────┼────────┼─────────────────┼──────────────┼────────┼─────────────────┤\r\n>> > │ 111439 │ 111439 │ LWLock │ WALWriteLock │ active │ client backend │\r\n>> > │ 111439 │ 116887 │ LWLock │ WALWriteLock │ active │ parallel worker │\r\n>> > │ 111439 │ 116888 │ IO │ WALSync │ active │ parallel worker │\r\n>> > └────────────┴────────┴─────────────────┴──────────────┴────────┴─────────────────┘\r\n>> > (3 rows)\r\n>> >\r\n>> > and this from a CREATE INDEX:\r\n>> >\r\n>> > ┌────────────┬────────┬─────────────────┬────────────┬────────┬─────────────────┐\r\n>> > │ leader_pid │ pid │ wait_event_type │ wait_event │ state │ backend_type │\r\n>> > ├────────────┼────────┼─────────────────┼────────────┼────────┼─────────────────┤\r\n>> > │ 111439 │ 111439 │ │ │ active │ client backend │\r\n>> > │ 111439 │ 118775 │ │ │ active │ parallel worker │\r\n>> > └────────────┴────────┴─────────────────┴────────────┴────────┴─────────────────┘\r\n>> > (2 rows)\r\n>> >\r\n>> > Anyway, it applies cleanly, it compiles, and it works. Documentation is available. So it looks to me it's good to go :)\r\n>>\r\n>> Thanks for the review Guillaume. Double checking the doc, I see that\r\n>> I made a copy/pasto mistake in the new field name. Attached v3 should\r\n>> be all good.\r\n>\r\n>\r\n> Feeling bad I missed this. But, yeah, it's much better with the right column's name.\r\n>\r\n> For me, it's looking good to be ready for commiter. 
Should I set it this way in the Commit Fest app?\r\n\r\nIf you don't see any other issue with the patch, I'd say yes. A\r\ncommitter can still put it back to waiting on author if needed.\r\n",
"msg_date": "Thu, 26 Dec 2019 10:26:16 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "Le jeu. 26 déc. 2019 à 10:26, Julien Rouhaud <rjuju123@gmail.com> a écrit :\r\n\r\n> On Thu, Dec 26, 2019 at 10:20 AM Guillaume Lelarge\r\n> <guillaume@lelarge.info> wrote:\r\n> >\r\n> > Le jeu. 26 déc. 2019 à 09:49, Julien Rouhaud <rjuju123@gmail.com> a\r\n> écrit :\r\n> >>\r\n> >> On Thu, Dec 26, 2019 at 9:08 AM Guillaume Lelarge\r\n> >> <guillaume@lelarge.info> wrote:\r\n> >> >\r\n> >> > Le mer. 25 déc. 2019 à 19:30, Julien Rouhaud <rjuju123@gmail.com> a\r\n> écrit :\r\n> >> >>\r\n> >> >> On Wed, Dec 25, 2019 at 7:03 PM Julien Rouhaud <rjuju123@gmail.com>\r\n> wrote:\r\n> >> >> >\r\n> >> >> > Guillaume (in Cc) recently pointed out [1] that it's currently not\r\n> >> >> > possible to retrieve the list of parallel workers for a given\r\n> backend\r\n> >> >> > at the SQL level. His use case was to develop a function in\r\n> plpgsql\r\n> >> >> > to sample a given query wait event, but it's not hard to imagine\r\n> other\r\n> >> >> > useful use cases for this information, for instance doing some\r\n> >> >> > analysis on the average number of workers per parallel query, or\r\n> ratio\r\n> >> >> > of parallel queries. IIUC parallel queries is for now the only\r\n> user\r\n> >> >> > of lock group, so this should work just fine.\r\n> >> >> >\r\n> >> >> > I'm attaching a trivial patch to expose the group leader pid if any\r\n> >> >> > in pg_stat_activity. 
Quick example of usage:\r\n> >> >> >\r\n> >> >> > =# SELECT query, leader_pid,\r\n> >> >> > array_agg(pid) filter(WHERE leader_pid != pid) AS members\r\n> >> >> > FROM pg_stat_activity\r\n> >> >> > WHERE leader_pid IS NOT NULL\r\n> >> >> > GROUP BY query, leader_pid;\r\n> >> >> > query | leader_pid | members\r\n> >> >> > -------------------+------------+---------------\r\n> >> >> > select * from t1; | 28701 | {28728,28732}\r\n> >> >> > (1 row)\r\n> >> >> >\r\n> >> >> >\r\n> >> >> > [1] https://twitter.com/g_lelarge/status/1209486212190343168\r\n> >> >>\r\n> >> >> And I just realized that I forgot to update rule.out, sorry about\r\n> >> >> that. v2 attached.\r\n> >> >\r\n> >> >\r\n> >> > So I tried your patch this morning, and it works really well.\r\n> >> >\r\n> >> > On a SELECT count(*), I got this:\r\n> >> >\r\n> >> > SELECT leader_pid, pid, wait_event_type, wait_event, state,\r\n> backend_type FROM pg_stat_activity WHERE pid=111439 or leader_pid=111439;\r\n> >> >\r\n> >> >\r\n> ┌────────────┬────────┬─────────────────┬──────────────┬────────┬─────────────────┐\r\n> >> > │ leader_pid │ pid │ wait_event_type │ wait_event │ state │\r\n> backend_type │\r\n> >> >\r\n> ├────────────┼────────┼─────────────────┼──────────────┼────────┼─────────────────┤\r\n> >> > │ 111439 │ 111439 │ LWLock │ WALWriteLock │ active │\r\n> client backend │\r\n> >> > │ 111439 │ 116887 │ LWLock │ WALWriteLock │ active │\r\n> parallel worker │\r\n> >> > │ 111439 │ 116888 │ IO │ WALSync │ active │\r\n> parallel worker │\r\n> >> >\r\n> └────────────┴────────┴─────────────────┴──────────────┴────────┴─────────────────┘\r\n> >> > (3 rows)\r\n> >> >\r\n> >> > and this from a CREATE INDEX:\r\n> >> >\r\n> >> >\r\n> ┌────────────┬────────┬─────────────────┬────────────┬────────┬─────────────────┐\r\n> >> > │ leader_pid │ pid │ wait_event_type │ wait_event │ state │\r\n> backend_type │\r\n> >> >\r\n> ├────────────┼────────┼─────────────────┼────────────┼────────┼─────────────────┤\r\n> >> > │ 111439 │ 
111439 │ │ │ active │\r\n> client backend │\r\n> >> > │ 111439 │ 118775 │ │ │ active │\r\n> parallel worker │\r\n> >> >\r\n> └────────────┴────────┴─────────────────┴────────────┴────────┴─────────────────┘\r\n> >> > (2 rows)\r\n> >> >\r\n> >> > Anyway, it applies cleanly, it compiles, and it works. Documentation\r\n> is available. So it looks to me it's good to go :)\r\n> >>\r\n> >> Thanks for the review Guillaume. Double checking the doc, I see that\r\n> >> I made a copy/pasto mistake in the new field name. Attached v3 should\r\n> >> be all good.\r\n> >\r\n> >\r\n> > Feeling bad I missed this. But, yeah, it's much better with the right\r\n> column's name.\r\n> >\r\n> > For me, it's looking good to be ready for commiter. Should I set it this\r\n> way in the Commit Fest app?\r\n>\r\n> If you don't see any other issue with the patch, I'd say yes. A\r\n> committer can still put it back to waiting on author if needed.\r\n>\r\n\r\nThat's also what I thought, but as I was the only one commenting on this...\r\nAnyway, done.\r\n\r\n\r\n-- \r\nGuillaume.",
"msg_date": "Thu, 26 Dec 2019 10:31:06 +0100",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "Hello\n\nI doubt that \"Process ID of the lock group leader\" is enough for the user documentation. I think we need a note:\n- this field is related to parallel query execution\n- leader_pid = pid if the process is a parallel leader\n- leader_pid points to the pid of the leader if the process is a parallel worker\n- leader_pid will be NULL for non-parallel queries or idle sessions\n\nAlso, the patch has no tests. Possibly this is normal; I'm not sure how to write a reliable test for this feature.\nThe patch applies, compiles, and passes tests.\n\nregards, Sergei\n\n\n",
"msg_date": "Thu, 26 Dec 2019 14:18:52 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "Hello,\n\nOn Thu, Dec 26, 2019 at 12:18 PM Sergei Kornilov <sk@zsrv.org> wrote:\n>\n> I doubt that \"Process ID of the lock group leader\" is enough for user documentation. I think we need note:\n> - this field is related to parallel query execution\n> - leader_pid = pid if process is parallel leader\n> - leader_pid would point to pid of the leader if process is parallel worker\n> - leader_pid will be NULL for non-parallel queries or idle sessions\n\nAs I understand it, lock group is some infrastructure that is used by\nparallel queries, but could be used for something else too. So if\nmore documentation is needed, we should say something like \"For now,\nonly parallel queries can have a lock group\" or something like that.\n\nThe fact that leader_pid == pid for the leader and different for the\nother members should be obvious; I'm not sure that it's worth\ndocumenting that.\n\n> Also patch has no tests. Possible this is normal, not sure how to write a reliable test for this feature.\n\nYes, I was unsure if some extra testing was required. We could set\nforce_parallel_mode to on and query \"select leader_pid is not null\nfrom pg_stat_activity where pid = pg_backend_pid()\", and then the\nopposite test, which should do the trick.\n\n\n",
"msg_date": "Thu, 26 Dec 2019 13:11:53 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
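A sketch of the test Julien describes might look like the following (hypothetical: it assumes the patch's leader_pid column is present and requires a live server, so it is shown for illustration only; note that, as discussed downthread, leader_pid stays set for the rest of the session once a backend has led a parallel query):

```sql
-- Hypothetical sketch of the proposed test (assumes the patch's
-- leader_pid column).
SET force_parallel_mode = on;
SELECT leader_pid IS NOT NULL AS became_leader
  FROM pg_stat_activity
 WHERE pid = pg_backend_pid();
-- The "opposite" check (leader_pid IS NULL) would have to run in a
-- fresh session that has never executed a parallel query, since
-- lockGroupLeader is only cleared at process exit.
```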
{
"msg_contents": "Hello\n\n> As I understand it, lock group is some infrastructure that is used by\n> parallel queries, but could be used for something else too. So if\n> more documentation is needed, we should say something like \"For now,\n> only parallel queries can have a lock group\" or something like that.\n\nIf lockGroupLeader will be used in some way for non-parallel query, then the name leader_pid could be confusing. No?\nI treat pg_stat_activity as a view for the user. Do we document somewhere what a \"lock group leader\" is (except the README in the source tree)? I mean a user going to read the documentation: \"ok, this field is process ID of the lock group leader, but what is it?\". Exposing a leader pid for a parallel worker will be a clear improvement for users. And it seems lockGroupLeader->pid is exactly this stuff. Therefore, I would like to see such a description and meaning of the field.\n\n> The fact that leader_pid == pid for the leader and different for the\n> other members should be obvious, I'm not sure that it's worth\n> documenting that.\n\nIt may not be obvious that leader_pid is not null in this case. But ok, no objections.\n\nregards, Sergei\n\n\n",
"msg_date": "Fri, 27 Dec 2019 12:01:21 +0300",
"msg_from": "Sergei Kornilov <sk@zsrv.org>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 10:01 AM Sergei Kornilov <sk@zsrv.org> wrote:\n>\n> Hello\n>\n> > As I understand it, lock group is some infrastructure that is used by\n> > parallel queries, but could be used for something else too. So if\n> > more documentation is needed, we should say something like \"For now,\n> > only parallel queries can have a lock group\" or something like that.\n>\n> If lockGroupLeader will be used in some way for non-parallel query, then the name leader_pid could be confusing. No?\n> I treat pg_stat_activity as a view for the user. Do we document somewhere what a \"lock group leader\" is (except the README in the source tree)? I mean a user going to read the documentation: \"ok, this field is process ID of the lock group leader, but what is it?\". Exposing a leader pid for a parallel worker will be a clear improvement for users. And it seems lockGroupLeader->pid is exactly this stuff. Therefore, I would like to see such a description and meaning of the field.\n\nI think that not using \"parallel\" to name this field will help to\navoid confusion if the lock group infrastructure is eventually used\nfor something else, but that's only true if indeed we explain what a\nlock group is.\n\n> > The fact that leader_pid == pid for the leader and different for the\n> > other members should be obvious, I'm not sure that it's worth\n> > documenting that.\n>\n> It may not be obvious that leader_pid is not null in this case. But ok, no objections.\n\nIf we adapt lmgr/README to document the group locking, it also\naddresses this. What do you think of:\n\nThe leader_pid is NULL for processes not involved in parallel query.\nWhen a process wants to cooperate with parallel workers, it becomes a\nlock group leader, which means that this field will be valued to its\nown pid. When a parallel worker starts up, this field will be valued\nwith the leader pid.\n\n\n",
"msg_date": "Fri, 27 Dec 2019 10:15:33 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
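Under the semantics discussed above, a monitoring query could classify each backend as follows (a hypothetical sketch against the patched pg_stat_activity; it requires a live server with the proposed leader_pid column):

```sql
-- Hypothetical: interpret leader_pid per the proposed documentation.
SELECT pid,
       leader_pid,
       CASE
           WHEN leader_pid IS NULL THEN 'not involved in parallel query'
           WHEN leader_pid = pid   THEN 'parallel group leader'
           ELSE                         'parallel worker'
       END AS role
  FROM pg_stat_activity;
```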
{
"msg_contents": "On Fri, Dec 27, 2019 at 10:15:33AM +0100, Julien Rouhaud wrote:\n> I think that not using \"parallel\" to name this field will help to\n> avoid confusion if the lock group infrastructure is eventually used\n> for something else, but that's only true if indeed we explain what a\n> lock group is.\n\nAs you already pointed out, src/backend/storage/lmgr/README includes a\nfull description of this stuff under the section \"Group Locking\". So\nI agree that the patch ought to document your new field in a much\nbetter way, without mentioning the term \"group locking\" that's even\nbetter to not confuse the reader because this term is not present in the\ndocs at all.\n\n> The leader_pid is NULL for processes not involved in parallel query.\n> When a process wants to cooperate with parallel workers, it becomes a\n> lock group leader, which means that this field will be valued to its\n> own pid. When a parallel worker starts up, this field will be valued\n> with the leader pid.\n\nThe first sentence is good to have. Now instead of \"lock group\nleader\", I think that we had better use \"parallel group leader\" as in\nother parts of the docs (see wait events for example). Then we just\nneed to say that if leader_pid has the same value as\npg_stat_activity.pid, then we have a group leader. If not, then it is\na parallel worker process initially spawned by the leader whose PID is\nleader_pid (when executing Gather, Gather Merge, soon-to-be parallel\nvacuum or for a parallel btree build, but this does not need a mention\nin the docs). There could be an argument as well to have leader_pid\nset to NULL for a leader, but that would be inconsistent with what\nthe PGPROC entry reports for the backend.\n\nWhile looking at the code, I think that we could refactor things a bit\nfor raw_wait_event, wait_event_type and wait_event which has some\nduplicated code for backend and auxiliary processes. What about\nfilling in the wait event data after fetching the PGPROC entry, and\nalso fill in leader_pid for auxiliary processes. This does not matter\nnow, perhaps it will never matter (or not), but that would make the\ncode much more consistent.\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 16:27:27 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 04:27:27PM +0900, Michael Paquier wrote:\n> While looking at the code, I think that we could refactor things a bit\n> for raw_wait_event, wait_event_type and wait_event which has some\n> duplicated code for backend and auxiliary processes. What about\n> filling in the wait event data after fetching the PGPROC entry, and\n> also fill in leader_pid for auxiliary processes. This does not matter\n> now, perhaps it will never matter (or not), but that would make the\n> code much more consistent.\n\nAnd actually, the way you are looking at the leader's PID is visibly\nincorrect and inconsistent because the patch takes no shared LWLock on\nthe leader using LockHashPartitionLockByProc() followed by\nLWLockAcquire(), no? That's incorrect because it could be perfectly\npossible to crash with this code between the moment you check if \nlockGroupLeader is NULL and the moment you look at\nlockGroupLeader->pid if a process is being stopped in-between and\nremoves itself from a lock group in ProcKill(). That's also\ninconsistent because it could be perfectly possible to finish with an \nincorrect view of the data while scanning for all the backend entries,\nlike a leader set to NULL with workers pointing to the leader for\nexample, or even workers marked incorrectly as NULL. The second one\nmay not be a problem, but the first one could be confusing.\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 16:49:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 8:28 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Dec 27, 2019 at 10:15:33AM +0100, Julien Rouhaud wrote:\n> > I think that not using \"parallel\" to name this field will help to\n> > avoid confusion if the lock group infrastructure is eventually used\n> > for something else, but that's only true if indeed we explain what a\n> > lock group is.\n>\n> As you already pointed out, src/backend/storage/lmgr/README includes a\n> full description of this stuff under the section \"Group Locking\". So\n> I agree that the patch ought to document your new field in a much\n> better way, without mentioning the term \"group locking\" that's even\n> better to not confuse the reader because this term is not present in the\n> docs at all.\n>\n> > The leader_pid is NULL for processes not involved in parallel query.\n> > When a process wants to cooperate with parallel workers, it becomes a\n> > lock group leader, which means that this field will be valued to its\n> > own pid. When a parallel worker starts up, this field will be valued\n> > with the leader pid.\n>\n> The first sentence is good to have. Now instead of \"lock group\n> leader\", I think that we had better use \"parallel group leader\" as in\n> other parts of the docs (see wait events for example).\n\nOk, I'll change it this way.\n\n> Then we just\n> need to say that if leader_pid has the same value as\n> pg_stat_activity.pid, then we have a group leader. If not, then it is\n> a parallel worker process initially spawned by the leader whose PID is\n> leader_pid (when executing Gather, Gather Merge, soon-to-be parallel\n> vacuum or for a parallel btree build, but this does not need a mention\n> in the docs). There could be an argument as well to have leader_pid\n> set to NULL for a leader, but that would be inconsistent with what\n> the PGPROC entry reports for the backend.\n\nIt would also slightly complicate things to get the full set of\nbackends involved in a parallel query, while excluding the leader is\nentirely trivial.\n\n> While looking at the code, I think that we could refactor things a bit\n> for raw_wait_event, wait_event_type and wait_event which has some\n> duplicated code for backend and auxiliary processes. What about\n> filling in the wait event data after fetching the PGPROC entry, and\n> also fill in leader_pid for auxiliary processes. This does not matter\n> now, perhaps it will never matter (or not), but that would make the\n> code much more consistent.\n\nYeah, I didn't think that auxiliary processes would be involved any time soon\nbut I can include this refactoring.\n\n\n",
"msg_date": "Fri, 17 Jan 2020 16:48:53 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 8:49 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Thu, Jan 16, 2020 at 04:27:27PM +0900, Michael Paquier wrote:\n> > While looking at the code, I think that we could refactor things a bit\n> > for raw_wait_event, wait_event_type and wait_event which has some\n> > duplicated code for backend and auxiliary processes. What about\n> > filling in the wait event data after fetching the PGPROC entry, and\n> > also fill in leader_pid for auxiliary processes. This does not matter\n> > now, perhaps it will never matter (or not), but that would make the\n> > code much more consistent.\n>\n> And actually, the way you are looking at the leader's PID is visibly\n> incorrect and inconsistent because the patch takes no shared LWLock on\n> the leader using LockHashPartitionLockByProc() followed by\n> LWLockAcquire(), no? That's incorrect because it could be perfectly\n> possible to crash with this code between the moment you check if\n> lockGroupLeader is NULL and the moment you look at\n> lockGroupLeader->pid if a process is being stopped in-between and\n> removes itself from a lock group in ProcKill(). That's also\n> inconsistent because it could be perfectly possible to finish with an\n> incorrect view of the data while scanning for all the backend entries,\n> like a leader set to NULL with workers pointing to the leader for\n> example, or even workers marked incorrectly as NULL. The second one\n> may not be a problem, but the first one could be confusing.\n\nOh indeed. But unless we hold some LWLock during the whole function\nexecution, we cannot guarantee a consistent view right? And isn't it\nalready possible to e.g. see a parallel worker in pg_stat_activity\nwhile all other queries are shown as idle, if you're unlucky enough?\n\nAlso, LockHashPartitionLockByProc requires the leader PGPROC, and\nthere's no guarantee that we'll see the leader before any of the\nworkers, so I'm unsure how to implement what you said. Wouldn't it be\nbetter to simply fetch the leader PGPROC after acquiring a shared\nProcArrayLock, and using that copy to display the pid, after checking\nthat we retrieved a non-null PGPROC?\n\n\n",
"msg_date": "Fri, 17 Jan 2020 17:07:55 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Fri, Jan 17, 2020 at 05:07:55PM +0100, Julien Rouhaud wrote:\n> Oh indeed. But unless we hold some LWLock during the whole function\n> execution, we cannot guarantee a consistent view right?\n\nYep. That's possible.\n\n> And isn't it already possible to e.g. see a parallel worker in\n> pg_stat_activity while all other queries are shown are idle, if\n> you're unlucky enough?\n\nYep. That's possible.\n\n> Also, LockHashPartitionLockByProc requires the leader PGPROC, and\n> there's no guarantee that we'll see the leader before any of the\n> workers, so I'm unsure how to implement what you said. Wouldn't it be\n> better to simply fetch the leader PGPROC after acquiring a shared\n> ProcArrayLock, and using that copy to display the pid, after checking\n> that we retrieved a non-null PGPROC?\n\nAnother idea would be to check if the current PGPROC entry is a leader\nand print in an int[] the list of PIDs of all the workers while\nholding a shared LWLock to avoid anything to be unregistered. Less\nhandy, but a bit more consistent. One problem with doing that is\nthat you may have in your list of PIDs some worker processes that are\nalready gone when going through all the other backend entries. An\nadvantage is that an empty array could mean \"I am the leader, but\nnothing has been registered yet to my group lock.\" (note that the\nleader adds itself to lockGroupMembers).\n--\nMichael",
"msg_date": "Sat, 18 Jan 2020 11:51:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Sat, Jan 18, 2020 at 3:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>\n> On Fri, Jan 17, 2020 at 05:07:55PM +0100, Julien Rouhaud wrote:\n> > Oh indeed. But unless we hold some LWLock during the whole function\n> > execution, we cannot guarantee a consistent view right?\n>\n> Yep. That's possible.\n>\n> > And isn't it already possible to e.g. see a parallel worker in\n> > pg_stat_activity while all other queries are shown as idle, if\n> > you're unlucky enough?\n>\n> Yep. That's possible.\n>\n> > Also, LockHashPartitionLockByProc requires the leader PGPROC, and\n> > there's no guarantee that we'll see the leader before any of the\n> > workers, so I'm unsure how to implement what you said. Wouldn't it be\n> > better to simply fetch the leader PGPROC after acquiring a shared\n> > ProcArrayLock, and using that copy to display the pid, after checking\n> > that we retrieved a non-null PGPROC?\n>\n> Another idea would be to check if the current PGPROC entry is a leader\n> and print in an int[] the list of PIDs of all the workers while\n> holding a shared LWLock to avoid anything to be unregistered. Less\n> handy, but a bit more consistent. One problem with doing that is\n> that you may have in your list of PIDs some worker processes that are\n> already gone when going through all the other backend entries. An\n> advantage is that an empty array could mean \"I am the leader, but\n> nothing has been registered yet to my group lock.\" (note that the\n> leader adds itself to lockGroupMembers).\n\nSo, AFAICT the LockHashPartitionLockByProc is required when\niterating/modifying lockGroupMembers or lockGroupLink, but just\ngetting the leader pid should be safe. Since we'll never be able to\nget a totally consistent view of data here, I'm in favor of avoiding\ntaking extra locks here. I agree that outputting an array of the pid\nwould be more consistent for the leader, but will have its own set of\ncorner cases. It seems to me that a new leader_pid column is easier\nto handle at SQL level, so I kept that approach in attached v4. If\nyou have strong objections to it, I can still change it.",
"msg_date": "Tue, 28 Jan 2020 12:36:41 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
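The point about SQL-level convenience can be illustrated with a sketch like this: with a scalar leader_pid column, grouping the members of each parallel query needs no unnesting (hypothetical, assumes the v4 column on a live server):

```sql
-- Hypothetical: one row per parallel group, leader included, since the
-- leader's own leader_pid equals its pid.
SELECT leader_pid,
       array_agg(pid ORDER BY pid) AS member_pids
  FROM pg_stat_activity
 WHERE leader_pid IS NOT NULL
 GROUP BY leader_pid;
```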
{
"msg_contents": "On Tue, Jan 28, 2020 at 12:36:41PM +0100, Julien Rouhaud wrote:\n>On Sat, Jan 18, 2020 at 3:51 AM Michael Paquier <michael@paquier.xyz> wrote:\n>>\n>> On Fri, Jan 17, 2020 at 05:07:55PM +0100, Julien Rouhaud wrote:\n>> > Oh indeed. But unless we hold some LWLock during the whole function\n>> > execution, we cannot guarantee a consistent view right?\n>>\n>> Yep. That's possible.\n>>\n>> > And isn't it already possible to e.g. see a parallel worker in\n>> > pg_stat_activity while all other queries are shown as idle, if\n>> > you're unlucky enough?\n>>\n>> Yep. That's possible.\n>>\n>> > Also, LockHashPartitionLockByProc requires the leader PGPROC, and\n>> > there's no guarantee that we'll see the leader before any of the\n>> > workers, so I'm unsure how to implement what you said. Wouldn't it be\n>> > better to simply fetch the leader PGPROC after acquiring a shared\n>> > ProcArrayLock, and using that copy to display the pid, after checking\n>> > that we retrieved a non-null PGPROC?\n>>\n>> Another idea would be to check if the current PGPROC entry is a leader\n>> and print in an int[] the list of PIDs of all the workers while\n>> holding a shared LWLock to avoid anything to be unregistered. Less\n>> handy, but a bit more consistent. One problem with doing that is\n>> that you may have in your list of PIDs some worker processes that are\n>> already gone when going through all the other backend entries. An\n>> advantage is that an empty array could mean \"I am the leader, but\n>> nothing has been registered yet to my group lock.\" (note that the\n>> leader adds itself to lockGroupMembers).\n>\n>So, AFAICT the LockHashPartitionLockByProc is required when\n>iterating/modifying lockGroupMembers or lockGroupLink, but just\n>getting the leader pid should be safe. Since we'll never be able to\n>get a totally consistent view of data here, I'm in favor of avoiding\n>taking extra locks here. I agree that outputting an array of the pid\n>would be more consistent for the leader, but will have its own set of\n>corner cases. It seems to me that a new leader_pid column is easier\n>to handle at SQL level, so I kept that approach in attached v4. If\n>you have strong objections to it, I can still change it.\n\nI agree a separate \"leader_id\" column is easier to work with, as it does\nnot require unnesting and so on.\n\nAs for the consistency, I agree we probably can't make this perfect, as\nwe're fetching and processing the PGPROC records one by one. Fixing that\nwould require acquiring a much stronger lock on PGPROC, and perhaps some\nother locks. That's pre-existing behavior, of course, it's just not very\nobvious as we don't have any dependencies between the rows, I think.\nAdding the leader_id will change that, of course. But I think it's\nstill mostly OK, even with the possible inconsistency.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 28 Jan 2020 14:09:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 2:09 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> I agree a separate \"leader_id\" column is easier to work with, as it does\n> not require unnesting and so on.\n>\n> As for the consistency, I agree we probably can't make this perfect, as\n> we're fetching and processing the PGPROC records one by one. Fixing that\n> would require acquiring a much stronger lock on PGPROC, and perhaps some\n> other locks. That's pre-existing behavior, of course, it's just not very\n> obvious as we don't have any dependencies between the rows, I think.\n> Adding the leader_id will change, that, of course. But I think it's\n> still mostly OK, even with the possible inconsistency.\n\nThere were already some dependencies between the rows since parallel\nqueries were added, as you could see eg. a parallel worker while no\nquery is currently active. This patch will make those corner cases\nmore obvious. Should I document the possible inconsistencies?\n\n\n",
"msg_date": "Tue, 28 Jan 2020 14:26:34 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 02:26:34PM +0100, Julien Rouhaud wrote:\n>On Tue, Jan 28, 2020 at 2:09 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>>\n>> I agree a separate \"leader_id\" column is easier to work with, as it does\n>> not require unnesting and so on.\n>>\n>> As for the consistency, I agree we probably can't make this perfect, as\n>> we're fetching and processing the PGPROC records one by one. Fixing that\n>> would require acquiring a much stronger lock on PGPROC, and perhaps some\n>> other locks. That's pre-existing behavior, of course, it's just not very\n>> obvious as we don't have any dependencies between the rows, I think.\n>> Adding the leader_id will change, that, of course. But I think it's\n>> still mostly OK, even with the possible inconsistency.\n>\n>There were already some dependencies between the rows since parallel\n>queries were added, as you could see eg. a parallel worker while no\n>query is currently active. This patch will make those corner cases\n>more obvious.\n\nYeah, sure. I mean explicit dependencies, e.g. a column referencing\nvalues from another row, like leader_id does.\n\n>Should I document the possible inconsistencies?\n\nI think it's worth mentioning that as a comment in the code, say before\nthe pg_stat_get_activity function. IMO we don't need to document all\npossible inconsistencies, a generic explanation is enough.\n\nNot sure about the user docs. Does it currently say anything about this\ntopic - consistency with stat catalogs?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 28 Jan 2020 14:52:08 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 02:52:08PM +0100, Tomas Vondra wrote:\n> On Tue, Jan 28, 2020 at 02:26:34PM +0100, Julien Rouhaud wrote:\n> >> There were already some dependencies between the rows since parallel\n> >> queries were added, as you could see eg. a parallel worker while no\n> >> query is currently active. This patch will make those corner cases\n> >> more obvious.\n\nI was reviewing the code and one thing that I was wondering is if it\nwould be better to make the code more defensive and return NULL when\nthe PID of the referenced leader is 0 or InvalidPid. However that\nwould mean that we have a dummy 2PC entry from PGPROC or something not\nyet started, which makes no sense. So your simpler version is\nactually fine. What you have here is that in the worst case you could\nfinish with an incorrect reference to shared memory if a PGPROC is\nrecycled between the moment you look for the leader field and the\nmoment the PID value is fetched. That's very unlikely to happen, and\nI agree that it does not really justify the cost of taking extra locks\nevery time we scan pg_stat_activity.\n\n> Yeah, sure. I mean explicit dependencies, e.g. a column referencing\n> values from another row, like leader_id does.\n\n+ The leader_pid is NULL for processes not involved in parallel query.\nThis is missing two markups, one for \"NULL\" and a second for\n\"leader_pid\". The documentation does not match the surroundings\neither, so I would suggest a reformulation for the beginning:\nPID of the leader process if this process is involved in parallel query.\n\nAnd actually this paragraph is not completely true, because leader_pid\nremains set even after one parallel query run has been finished for a\nsession when leader_pid = pid as lockGroupLeader is set to NULL only\nonce the process is stopped in ProcKill().\n\n>> Should I document the possible inconsistencies?\n> \n> I think it's worth mentioning that as a comment in the code, say before\n> the pg_stat_get_activity function. IMO we don't need to document all\n> possible inconsistencies, a generic explanation is enough.\n\nAgreed that adding some information in the area when we look at the\nPGPROC entries for wait events and such would be nice.\n\n> Not sure about the user docs. Does it currently say anything about this\n> topic - consistency with stat catalogs?\n\nNot sure that it is the job of this patch to do that. Do you have\nsomething specific in mind?\n--\nMichael",
"msg_date": "Thu, 30 Jan 2020 22:03:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Jan 30, 2020 at 10:03:01PM +0900, Michael Paquier wrote:\n> On Tue, Jan 28, 2020 at 02:52:08PM +0100, Tomas Vondra wrote:\n> > On Tue, Jan 28, 2020 at 02:26:34PM +0100, Julien Rouhaud wrote:\n> >> There were already some dependencies between the rows since parallel\n> >> queries were added, as you could see eg. a parallel worker while no\n> >> query is currently active. This patch will make those corner cases\n> >> more obvious.\n>\n> I was reviewing the code and one thing that I was wondering is if it\n> would be better to make the code more defensive and return NULL when\n> the PID of the referenced leader is 0 or InvalidPid. However that\n> would mean that we have a dummy 2PC entry from PGPROC or something not\n> yet started, which makes no sense. So your simpler version is\n> actually fine. What you have here is that in the worst case you could\n> finish with an incorrect reference to shared memory if a PGPROC is\n> recycled between the moment you look for the leader field and the\n> moment the PID value is fetched. That's very unlikely to happen, and\n> I agree that it does not really justify the cost of taking extra locks\n> every time we scan pg_stat_activity.\n\nOk.\n\n>\n> > Yeah, sure. I mean explicit dependencies, e.g. a column referencing\n> > values from another row, like leader_id does.\n>\n> + The leader_pid is NULL for processes not involved in parallel query.\n> This is missing two markups, one for \"NULL\" and a second for\n> \"leader_pid\".\n\nThe extra \"leader_pid\" disappeared when I reworked the doc. I'm not sure what\nyou meant here for NULL as I don't see any extra markup used in nearby\ndocumentation, so I hope this version is ok.\n\n> The documentation does not match the surroundings\n> either, so I would suggest a reformulation for the beginning:\n> PID of the leader process if this process is involved in parallel query.\n\n> And actually this paragraph is not completely true, because leader_pid\n> remains set even after one parallel query run has been finished for a\n> session when leader_pid = pid as lockGroupLeader is set to NULL only\n> once the process is stopped in ProcKill().\n\nOh good point, that's unfortunately not a super friendly behavior. I tried to\nadapt the documentation to address all of that. It's maybe slightly too\nverbose, but I guess that extra clarity is welcome here.\n\n> >> Should I document the possible inconsistencies?\n> >\n> > I think it's worth mentioning that as a comment in the code, say before\n> > the pg_stat_get_activity function. IMO we don't need to document all\n> > possible inconsistencies, a generic explanation is enough.\n>\n> Agreed that adding some information in the area when we look at the\n> PGPROC entries for wait events and such would be nice.\n\nI added some code comments to remind that we don't guarantee any consistency\nhere.",
"msg_date": "Tue, 4 Feb 2020 15:27:25 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Tue, Feb 04, 2020 at 03:27:25PM +0100, Julien Rouhaud wrote:\n> I added some code comments to remind that we don't guarantee any consistency\n> here.\n\nThat's mostly fine. I have moved the comment related to\nAuxiliaryPidGetProc() within the inner part of its \"if\" (or the\ncomment should be changed to be conditional). An extra thing is that\nnulls[29] was not set to true for a user without the proper permission\nrights.\n\nDoes that look fine to you?\n--\nMichael",
"msg_date": "Wed, 5 Feb 2020 10:48:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Wed, Feb 05, 2020 at 10:48:31AM +0900, Michael Paquier wrote:\n> On Tue, Feb 04, 2020 at 03:27:25PM +0100, Julien Rouhaud wrote:\n> > I added some code comments to remind that we don't guarantee any consistency\n> > here.\n> \n> That's mostly fine. I have moved the comment related to\n> AuxiliaryPidGetProc() within the inner part of its \"if\" (or the\n> comment should be changed to be conditional). An extra thing is that\n> nulls[29] was not set to true for a user without the proper permission\n> rights.\n\nOh, oops indeed.\n\n> Does that look fine to you?\n\nThis looks good, thanks a lot!\n\n\n",
"msg_date": "Wed, 5 Feb 2020 07:57:20 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Wed, Feb 05, 2020 at 07:57:20AM +0100, Julien Rouhaud wrote:\n> This looks good, thanks a lot!\n\nThanks for double-checking. And done.\n--\nMichael",
"msg_date": "Thu, 6 Feb 2020 09:24:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Feb 06, 2020 at 09:24:16AM +0900, Michael Paquier wrote:\n> On Wed, Feb 05, 2020 at 07:57:20AM +0100, Julien Rouhaud wrote:\n> > This looks good, thanks a lot!\n> \n> Thanks for double-checking. And done.\n\nThanks!\n\nWhile on the topic, is there any reason why the backend stays a group leader\nfor the rest of its lifetime, and should we change that?\n\nAlso, while reading ProcKill, I noticed a typo in a comment:\n\n /*\n * Detach from any lock group of which we are a member. If the leader\n- * exist before all other group members, it's PGPROC will remain allocated\n+ * exist before all other group members, its PGPROC will remain allocated\n * until the last group process exits; that process must return the\n * leader's PGPROC to the appropriate list.\n */",
"msg_date": "Thu, 6 Feb 2020 09:23:33 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Thu, Feb 06, 2020 at 09:23:33AM +0100, Julien Rouhaud wrote:\n> While on the topic, is there any reason why the backend stays a group leader\n> for the rest of its lifetime, and should we change that?\n\nNothing happens without a reason. a1c1af2 is the original commit, and\nthe thread is here:\nhttps://www.postgresql.org/message-id/CA+TgmoapgKdy_Z0W9mHqZcGSo2t_t-4_V36DXaKim+X_fYp0oQ@mail.gmail.com\n\nBy looking at the surroundings, there are a couple of assumptions\nbehind the timing of the shutdown for the workers and the leader. \nI have not studied much the details on that, but my guess is that it\nmakes the handling of the leader shutting down before its workers\neasier. Robert or Amit likely know all the details here.\n\n> Also, while reading ProcKill, I noticed a typo in a comment:\n> \n> /*\n> * Detach from any lock group of which we are a member. If the leader\n> - * exist before all other group members, it's PGPROC will remain allocated\n> + * exist before all other group members, its PGPROC will remain allocated\n> * until the last group process exits; that process must return the\n> * leader's PGPROC to the appropriate list.\n> */\n\nThanks, fixed.\n--\nMichael",
"msg_date": "Fri, 7 Feb 2020 12:47:12 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Tue, Jan 28, 2020 at 12:36:41PM +0100, Julien Rouhaud wrote:\n> So, AFAICT the LockHashPartitionLockByProc is required when\n> iterating/modifying lockGroupMembers or lockGroupLink, but just\n> getting the leader pid should be safe.\n\nThis still seems unsafe:\n\ngit show -U11 -w --patience b025f32e0b src/backend/utils/adt/pgstatfuncs.c\n+ /*\n+ * If a PGPROC entry was retrieved, display wait events and lock\n+ * group leader information if any. To avoid extra overhead, no\n+ * extra lock is being held, so there is no guarantee of\n+ * consistency across multiple rows.\n+ */\n...\n+ PGPROC *leader;\n...\n+ leader = proc->lockGroupLeader;\n+ if (leader)\n+ {\n# does something guarantee that leader doesn't change ?\n+ values[29] = Int32GetDatum(leader->pid);\n+ nulls[29] = false;\n }\n\nMichael seems to have raised the issue:\n\nOn Thu, Jan 16, 2020 at 04:49:12PM +0900, Michael Paquier wrote:\n> And actually, the way you are looking at the leader's PID is visibly\n> incorrect and inconsistent because the patch takes no shared LWLock on\n> the leader using LockHashPartitionLockByProc() followed by\n> LWLockAcquire(), no? That's incorrect because it could be perfectly\n> possible to crash with this code between the moment you check if \n> lockGroupLeader is NULL and the moment you look at\n> lockGroupLeader->pid if a process is being stopped in-between and\n> removes itself from a lock group in ProcKill(). \n\nBut I don't see how it was addressed ?\n\nI read this:\n\nsrc/backend/storage/lmgr/lock.c: * completely valid. We cannot safely dereference another backend's\nsrc/backend/storage/lmgr/lock.c- * lockGroupLeader field without holding all lock partition locks, and\nsrc/backend/storage/lmgr/lock.c- * it's not worth that.)\n\nI think if you do:\n|LWLockAcquire(&proc->backendLock, LW_SHARED);\n..then it's at least *safe* to access leader->pid, but it may be inconsistent\nunless you also call LockHashPartitionLockByProc.\n\nI wasn't able to produce a crash, so maybe I missed something.\n\n-- \nJustin\n\n\n",
"msg_date": "Sun, 15 Mar 2020 23:27:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Sun, Mar 15, 2020 at 11:27:52PM -0500, Justin Pryzby wrote:\n> On Tue, Jan 28, 2020 at 12:36:41PM +0100, Julien Rouhaud wrote:\n> > So, AFAICT the LockHashPartitionLockByProc is required when\n> > iterating/modifying lockGroupMembers or lockGroupLink, but just\n> > getting the leader pid should be safe.\n> \n> This still seems unsafe:\n> \n> git show -U11 -w --patience b025f32e0b src/backend/utils/adt/pgstatfuncs.c\n> + /*\n> + * If a PGPROC entry was retrieved, display wait events and lock\n> + * group leader information if any. To avoid extra overhead, no\n> + * extra lock is being held, so there is no guarantee of\n> + * consistency across multiple rows.\n> + */\n> ...\n> + PGPROC *leader;\n> ...\n> + leader = proc->lockGroupLeader;\n> + if (leader)\n> + {\n> # does something guarantee that leader doesn't change ?\n> + values[29] = Int32GetDatum(leader->pid);\n> + nulls[29] = false;\n> }\n> \n> Michael seems to have raised the issue:\n> \n> On Thu, Jan 16, 2020 at 04:49:12PM +0900, Michael Paquier wrote:\n> > And actually, the way you are looking at the leader's PID is visibly\n> > incorrect and inconsistent because the patch takes no shared LWLock on\n> > the leader using LockHashPartitionLockByProc() followed by\n> > LWLockAcquire(), no? That's incorrect because it could be perfectly\n> > possible to crash with this code between the moment you check if \n> > lockGroupLeader is NULL and the moment you look at\n> > lockGroupLeader->pid if a process is being stopped in-between and\n> > removes itself from a lock group in ProcKill(). \n> \n> But I don't see how it was addressed ?\n> \n> I read this:\n> \n> src/backend/storage/lmgr/lock.c: * completely valid. We cannot safely dereference another backend's\n> src/backend/storage/lmgr/lock.c- * lockGroupLeader field without holding all lock partition locks, and\n> src/backend/storage/lmgr/lock.c- * it's not worth that.)\n> \n> I think if you do:\n> |LWLockAcquire(&proc->backendLock, LW_SHARED);\n> ..then it's at least *safe* to access leader->pid, but it may be inconsistent\n> unless you also call LockHashPartitionLockByProc.\n> \n> I wasn't able to produce a crash, so maybe I missed something.\n\nI think I see. Julien's v3 patch did this:\nhttps://www.postgresql.org/message-id/attachment/106429/pgsa_leader_pid-v3.diff\n+\t\t\t\tif (proc->lockGroupLeader)\n+\t\t\t\t\tvalues[29] = Int32GetDatum(proc->lockGroupLeader->pid);\n\n..which is racy because a proc with a leader might die and be replaced by\nanother proc without a leader between 1 and 2.\n\nBut the code since v4 does:\n\n+\t\t\t\tleader = proc->lockGroupLeader;\n+\t\t\t\tif (leader)\n+ values[29] = Int32GetDatum(leader->pid);\n\n..which is safe because PROCs are allocated in shared memory, so leader is for\nsure a non-NULL pointer to a PROC. leader->pid may be read inconsistently,\nwhich is what the comment says: \"no extra lock is being held\".\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 16 Mar 2020 00:43:41 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 12:43:41AM -0500, Justin Pryzby wrote:\n> I think I see. Julien's v3 patch did this:\n> https://www.postgresql.org/message-id/attachment/106429/pgsa_leader_pid-v3.diff\n> +\t\t\t\tif (proc->lockGroupLeader)\n> +\t\t\t\t\tvalues[29] = Int32GetDatum(proc->lockGroupLeader->pid);\n> \n> ..which is racy because a proc with a leader might die and be replaced by\n> another proc without a leader between 1 and 2.\n> \n> But the code since v4 does:\n> \n> +\t\t\t\tleader = proc->lockGroupLeader;\n> +\t\t\t\tif (leader)\n> + values[29] = Int32GetDatum(leader->pid);\n> \n> ..which is safe because PROCs are allocated in shared memory, so leader is for\n> sure a non-NULL pointer to a PROC. leader->pid may be read inconsistently,\n> which is what the comment says: \"no extra lock is being held\".\n\nYes, you have the correct answer here. As shaped, the code relies on\nthe state of a PGPROC entry in shared memory.\n--\nMichael",
"msg_date": "Mon, 16 Mar 2020 15:02:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Expose lock group leader pid in pg_stat_activity"
}
]
[
{
"msg_contents": "Hi,\n\nMy customer reported me that the queries through a partitioned table\nignore each partition's SELECT, INSERT, UPDATE, and DELETE privileges,\non the other hand, only TRUNCATE privilege specified for each partition\nis applied. I'm not sure if this behavior is expected or not. But anyway\nis it better to document that? For example,\n\n Access privileges may be defined and removed separately for each partition.\n But note that queries through a partitioned table ignore each partition's\n SELECT, INSERT, UPDATE and DELETE privileges, and apply only TRUNCATE one.\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Thu, 26 Dec 2019 15:37:52 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "table partitioning and access privileges"
},
{
"msg_contents": "Fujii Masao <masao.fujii@gmail.com> writes:\n> My customer reported me that the queries through a partitioned table\n> ignore each partition's SELECT, INSERT, UPDATE, and DELETE privileges,\n> on the other hand, only TRUNCATE privilege specified for each partition\n> is applied. I'm not sure if this behavior is expected or not. But anyway\n> is it better to document that? For example,\n\n> Access privileges may be defined and removed separately for each partition.\n> But note that queries through a partitioned table ignore each partition's\n> SELECT, INSERT, UPDATE and DELETE privileges, and apply only TRUNCATE one.\n\nI believe it's intentional that we only check access privileges on\nthe table explicitly named in the query. So I'd say SELECT etc\nare doing the right thing, and if TRUNCATE isn't in step with them\nthat's a bug to fix, not something to document.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Dec 2019 14:25:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 4:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Fujii Masao <masao.fujii@gmail.com> writes:\n> > My customer reported me that the queries through a partitioned table\n> > ignore each partition's SELECT, INSERT, UPDATE, and DELETE privileges,\n> > on the other hand, only TRUNCATE privilege specified for each partition\n> > is applied. I'm not sure if this behavior is expected or not. But anyway\n> > is it better to document that? For example,\n>\n> > Access privileges may be defined and removed separately for each partition.\n> > But note that queries through a partitioned table ignore each partition's\n> > SELECT, INSERT, UPDATE and DELETE privileges, and apply only TRUNCATE one.\n>\n> I believe it's intentional that we only check access privileges on\n> the table explicitly named in the query. So I'd say SELECT etc\n> are doing the right thing, and if TRUNCATE isn't in step with them\n> that's a bug to fix, not something to document.\n\nI tend to agree that TRUNCATE's permission model for inheritance\nshould be consistent with that for the other commands. How about the\nattached patch toward that end?\n\nThanks,\nAmit",
"msg_date": "Tue, 7 Jan 2020 17:15:40 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 5:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>\n> On Fri, Dec 27, 2019 at 4:26 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Fujii Masao <masao.fujii@gmail.com> writes:\n> > > My customer reported me that the queries through a partitioned table\n> > > ignore each partition's SELECT, INSERT, UPDATE, and DELETE privileges,\n> > > on the other hand, only TRUNCATE privilege specified for each partition\n> > > is applied. I'm not sure if this behavior is expected or not. But anyway\n> > > is it better to document that? For example,\n> >\n> > > Access privileges may be defined and removed separately for each partition.\n> > > But note that queries through a partitioned table ignore each partition's\n> > > SELECT, INSERT, UPDATE and DELETE privileges, and apply only TRUNCATE one.\n> >\n> > I believe it's intentional that we only check access privileges on\n> > the table explicitly named in the query. So I'd say SELECT etc\n> > are doing the right thing, and if TRUNCATE isn't in step with them\n> > that's a bug to fix, not something to document.\n>\n> I tend to agree that TRUNCATE's permission model for inheritance\n> should be consistent with that for the other commands. How about the\n> attached patch toward that end?\n\nThanks for the patch!\n\nThe patch basically looks good to me.\n\n+GRANT SELECT (f1, fz), UPDATE (fz) ON atestc TO regress_priv_user2;\n+REVOKE TRUNCATE ON atestc FROM regress_priv_user2;\n\nThese seem not to be necessary for the test.\n\nBTW, I found that LOCK TABLE on the parent table checks the permission\nof its child tables. This also needs to be fixed (as a separate patch)?\n\nRegards,\n\n-- \nFujii Masao\n\n\n",
"msg_date": "Fri, 10 Jan 2020 10:29:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "Fujii-san,\n\nThanks for taking a look.\n\nOn Fri, Jan 10, 2020 at 10:29 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n> On Tue, Jan 7, 2020 at 5:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n> > I tend to agree that TRUNCATE's permission model for inheritance\n> > should be consistent with that for the other commands. How about the\n> > attached patch toward that end?\n>\n> Thanks for the patch!\n>\n> The patch basically looks good to me.\n>\n> +GRANT SELECT (f1, fz), UPDATE (fz) ON atestc TO regress_priv_user2;\n> +REVOKE TRUNCATE ON atestc FROM regress_priv_user2;\n>\n> These seem not to be necessary for the test.\n\nYou're right. Removed in the attached updated patch.\n\n> BTW, I found that LOCK TABLE on the parent table checks the permission\n> of its child tables. This also needs to be fixed (as a separate patch)?\n\nCommit ac33c7e2c13 and a past discussion ([1], [2], resp.) appear to\ndisagree with that position, but I would like to agree with you\nbecause the behavior you suggest would be consistent with other\ncommands. So, I'm attaching a patch for that too, although it would\nbe better to hear more opinions before accepting it.\n\nThanks,\nAmit\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ac33c7e2c13\n[2] https://www.postgresql.org/message-id/flat/34d269d40905121340h535ef652kbf8f054811e42e39%40mail.gmail.com",
"msg_date": "Wed, 22 Jan 2020 16:54:42 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/01/22 16:54, Amit Langote wrote:\n> Fujii-san,\n> \n> Thanks for taking a look.\n> \n> On Fri, Jan 10, 2020 at 10:29 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>> On Tue, Jan 7, 2020 at 5:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>> I tend to agree that TRUNCATE's permission model for inheritance\n>>> should be consistent with that for the other commands. How about the\n>>> attached patch toward that end?\n>>\n>> Thanks for the patch!\n>>\n>> The patch basically looks good to me.\n>>\n>> +GRANT SELECT (f1, fz), UPDATE (fz) ON atestc TO regress_priv_user2;\n>> +REVOKE TRUNCATE ON atestc FROM regress_priv_user2;\n>>\n>> These seem not to be necessary for the test.\n> \n> You're right. Removed in the attached updated patch.\n\nThanks for updating the patch! Barring any objection,\nI will commit this fix and backport it to all supported versions.\n\n>> BTW, I found that LOCK TABLE on the parent table checks the permission\n>> of its child tables. This also needs to be fixed (as a separate patch)?\n> \n> Commit ac33c7e2c13 and a past discussion ([1], [2], resp.) appear to\n> disagree with that position, but I would like to agree with you\n> because the behavior you suggest would be consistent with other\n> commands. So, I'm attaching a patch for that too, although it would\n> be better to hear more opinions before accepting it.\n\nYes. I'd like to hear more opinion about this. But\nsince the document explains \"Inherited queries perform access\npermission checks on the parent table only.\" in ddl.sgml,\nthat also seems a bug to fix...\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 23 Jan 2020 22:14:48 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On 2020/01/23 22:14, Fujii Masao wrote:\n> \n> \n> On 2020/01/22 16:54, Amit Langote wrote:\n>> Fujii-san,\n>>\n>> Thanks for taking a look.\n>>\n>> On Fri, Jan 10, 2020 at 10:29 AM Fujii Masao <masao.fujii@gmail.com> wrote:\n>>> On Tue, Jan 7, 2020 at 5:15 PM Amit Langote <amitlangote09@gmail.com> wrote:\n>>>> I tend to agree that TRUNCATE's permission model for inheritance\n>>>> should be consistent with that for the other commands. How about the\n>>>> attached patch toward that end?\n>>>\n>>> Thanks for the patch!\n>>>\n>>> The patch basically looks good to me.\n>>>\n>>> +GRANT SELECT (f1, fz), UPDATE (fz) ON atestc TO regress_priv_user2;\n>>> +REVOKE TRUNCATE ON atestc FROM regress_priv_user2;\n>>>\n>>> These seem not to be necessary for the test.\n>>\n>> You're right. Removed in the attached updated patch.\n> \n> Thanks for updating the patch! Barring any objection,\n> I will commit this fix and backport it to all supported versions.\n\nAttached are the back-port versions of the patches.\n\n- patch for master and v12\n \n0001-Don-t-check-child-s-TRUNCATE-privilege-when-truncate-fujii-pg12-13.patch\n\n- patch for v11\n \n0001-Don-t-check-child-s-TRUNCATE-privilege-when-truncate-fujii-pg11.patch\n\n- patch for v10\n \n0001-Don-t-check-child-s-TRUNCATE-privilege-when-truncate-fujii-pg10.patch\n\n- patch for v9.6\n \n0001-Don-t-check-child-s-TRUNCATE-privilege-when-truncate-fujii-pg96.patch\n\n- patch for v9.5 and v9.4\n \n0001-Don-t-check-child-s-TRUNCATE-privilege-when-truncate-fujii-pg94-95.patch\n\nThe patch for master branch separates truncate_check_activity() into two\nfunctions, but in v11 or before, truncate_check_activity() didn't exist and\nits code was in truncate_check_rel(). So I had to write the back-port \nversion\nof the patch for the previous versions and separate truncate_check_rel()\ninto three functions, i.e., truncate_check_rel(), \ntruncate_check_activity() and\ntruncate_check_perms().\n\nAlso the names of users that the regression test for privileges use were\ndifferent between PostgreSQL versions. This is another reason\nwhy I had to write several back-port versions of the patches.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Mon, 27 Jan 2020 11:19:04 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "Fujii-san,\n\nOn Mon, Jan 27, 2020 at 11:19 AM Fujii Masao\n<masao.fujii@oss.nttdata.com> wrote:\n> On 2020/01/23 22:14, Fujii Masao wrote:\n> > Thanks for updating the patch! Barring any objection,\n> > I will commit this fix and backport it to all supported versions.\n>\n> Attached are the back-port versions of the patches.\n>\n>\n> The patch for master branch separates truncate_check_activity() into two\n> functions, but in v11 or before, truncate_check_activity() didn't exist and\n> its code was in truncate_check_rel(). So I had to write the back-port\n> version\n> of the patch for the previous versions and separate truncate_check_rel()\n> into three functions, i.e., truncate_check_rel(),\n> truncate_check_activity() and\n> truncate_check_perms().\n\nThank you for creating the back-port versions. I agree with making\nthe code look similar in all supported branches for the ease of future\nmaintenance.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 27 Jan 2020 14:02:18 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/01/27 14:02, Amit Langote wrote:\n> Fujii-san,\n> \n> On Mon, Jan 27, 2020 at 11:19 AM Fujii Masao\n> <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/01/23 22:14, Fujii Masao wrote:\n>>> Thanks for updating the patch! Barring any objection,\n>>> I will commit this fix and backport it to all supported versions.\n>>\n>> Attached are the back-port versions of the patches.\n>>\n>>\n>> The patch for master branch separates truncate_check_activity() into two\n>> functions, but in v11 or before, truncate_check_activity() didn't exist and\n>> its code was in truncate_check_rel(). So I had to write the back-port\n>> version\n>> of the patch for the previous versions and separate truncate_check_rel()\n>> into three functions, i.e., truncate_check_rel(),\n>> truncate_check_activity() and\n>> truncate_check_perms().\n> \n> Thank you for creating the back-port versions. I agree with making\n> the code look similar in all supported branches for the ease of future\n> maintenance.\n\nThanks for the check! I pushed the patches.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 00:54:40 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> Thanks for updating the patch! Barring any objection,\n> I will commit this fix and backport it to all supported versions.\n\nSorry for not having paid closer attention to this thread, but ...\nis back-patching this behavioral change really a good idea?\n\nIt's not that hard to imagine that somebody is expecting the old\nbehavior and will complain that we broke their application's security.\nSo I'd have thought it better to fix only in HEAD, with a\ncompatibility warning in the v13 release notes.\n\nI'm afraid it's much more likely that people will complain about\nmaking such a change in a minor release than that they will be\nhappy about it. It's particularly risky to be making it in what\nwill be the last 9.4.x release, because we will not have any\nopportunity to undo it in that branch if there is pushback.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 11:02:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/01/31 1:02, Tom Lane wrote:\n> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>> Thanks for updating the patch! Barring any objection,\n>> I will commit this fix and backport it to all supported versions.\n> \n> Sorry for not having paid closer attention to this thread, but ...\n> is back-patching this behavioral change really a good idea?\n> \n> It's not that hard to imagine that somebody is expecting the old\n> behavior and will complain that we broke their application's security.\n> So I'd have thought it better to fix only in HEAD, with a\n> compatibility warning in the v13 release notes.\n> \n> I'm afraid it's much more likely that people will complain about\n> making such a change in a minor release than that they will be\n> happy about it. It's particularly risky to be making it in what\n> will be the last 9.4.x release, because we will not have any\n> opportunity to undo it in that branch if there is pushback.\n\nFair enough. I finally did back-patch because the behavior is clearly\ndocumented and I failed to hear the opinions to object the back-patch.\nBut I should have heard and discussed such risks more.\n\nI'm OK to revert all those back-patch. Instead, probably the document\nshould be updated in old branches.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 01:28:38 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/01/31 1:28, Fujii Masao wrote:\n> \n> \n> On 2020/01/31 1:02, Tom Lane wrote:\n>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>> Thanks for updating the patch! Barring any objection,\n>>> I will commit this fix and backport it to all supported versions.\n>>\n>> Sorry for not having paid closer attention to this thread, but ...\n>> is back-patching this behavioral change really a good idea?\n>>\n>> It's not that hard to imagine that somebody is expecting the old\n>> behavior and will complain that we broke their application's security.\n>> So I'd have thought it better to fix only in HEAD, with a\n>> compatibility warning in the v13 release notes.\n>>\n>> I'm afraid it's much more likely that people will complain about\n>> making such a change in a minor release than that they will be\n>> happy about it. It's particularly risky to be making it in what\n>> will be the last 9.4.x release, because we will not have any\n>> opportunity to undo it in that branch if there is pushback.\n> \n> Fair enough. I finally did back-patch because the behavior is clearly\n> documented and I failed to hear the opinions to object the back-patch.\n> But I should have heard and discussed such risks more.\n> \n> I'm OK to revert all those back-patch. Instead, probably the document\n> should be updated in old branches.\n\nI'm thinking to wait at least half a day before reverting\nthe back-patch just in case someone can give opinion\nduring that period.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 02:45:13 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> On 2020/01/31 1:28, Fujii Masao wrote:\n>> On 2020/01/31 1:02, Tom Lane wrote:\n>>> Sorry for not having paid closer attention to this thread, but ...\n>>> is back-patching this behavioral change really a good idea?\n\n> I'm thinking to wait at least half a day before reverting\n> the back-patch just in case someone can give opinion\n> during that period.\n\nSure, other opinions welcome. We still have a week before the\nback-branch releases.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 30 Jan 2020 12:57:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 1:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/01/31 1:02, Tom Lane wrote:\n> > Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n> >> Thanks for updating the patch! Barring any objection,\n> >> I will commit this fix and backport it to all supported versions.\n> >\n> > Sorry for not having paid closer attention to this thread, but ...\n> > is back-patching this behavioral change really a good idea?\n> >\n> > It's not that hard to imagine that somebody is expecting the old\n> > behavior and will complain that we broke their application's security.\n> > So I'd have thought it better to fix only in HEAD, with a\n> > compatibility warning in the v13 release notes.\n> >\n> > I'm afraid it's much more likely that people will complain about\n> > making such a change in a minor release than that they will be\n> > happy about it. It's particularly risky to be making it in what\n> > will be the last 9.4.x release, because we will not have any\n> > opportunity to undo it in that branch if there is pushback.\n>\n> Fair enough. I finally did back-patch because the behavior is clearly\n> documented and I failed to hear the opinions to object the back-patch.\n> But I should have heard and discussed such risks more.\n>\n> I'm OK to revert all those back-patch. Instead, probably the document\n> should be updated in old branches.\n\nI could find only this paragraph in the section on inheritance that\ntalks about how access permissions work:\n\n9.4:\n\n\"Note how table access permissions are handled. Querying a parent\ntable can automatically access data in child tables without further\naccess privilege checking. This preserves the appearance that the data\nis (also) in the parent table. Accessing the child tables directly is,\nhowever, not automatically allowed and would require further\nprivileges to be granted.\"\n\n9.5-12:\n\n\"Inherited queries perform access permission checks on the parent\ntable only. Thus, for example, granting UPDATE permission on the\ncities table implies permission to update rows in the capitals table\nas well, when they are accessed through cities. This preserves the\nappearance that the data is (also) in the parent table. But the\ncapitals table could not be updated directly without an additional\ngrant. In a similar way, the parent table's row security policies (see\nSection 5.7) are applied to rows coming from child tables during an\ninherited query. A child table's policies, if any, are applied only\nwhen it is the table explicitly named in the query; and in that case,\nany policies attached to its parent(s) are ignored.\"\n\nDo you mean that the TRUNCATE exception should be noted here?\n\nThanks,\nAmit\n\n\n",
"msg_date": "Fri, 31 Jan 2020 13:38:41 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/01/31 13:38, Amit Langote wrote:\n> On Fri, Jan 31, 2020 at 1:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/01/31 1:02, Tom Lane wrote:\n>>> Fujii Masao <masao.fujii@oss.nttdata.com> writes:\n>>>> Thanks for updating the patch! Barring any objection,\n>>>> I will commit this fix and backport it to all supported versions.\n>>>\n>>> Sorry for not having paid closer attention to this thread, but ...\n>>> is back-patching this behavioral change really a good idea?\n>>>\n>>> It's not that hard to imagine that somebody is expecting the old\n>>> behavior and will complain that we broke their application's security.\n>>> So I'd have thought it better to fix only in HEAD, with a\n>>> compatibility warning in the v13 release notes.\n>>>\n>>> I'm afraid it's much more likely that people will complain about\n>>> making such a change in a minor release than that they will be\n>>> happy about it. It's particularly risky to be making it in what\n>>> will be the last 9.4.x release, because we will not have any\n>>> opportunity to undo it in that branch if there is pushback.\n>>\n>> Fair enough. I finally did back-patch because the behavior is clearly\n>> documented and I failed to hear the opinions to object the back-patch.\n>> But I should have heard and discussed such risks more.\n>>\n>> I'm OK to revert all those back-patch. Instead, probably the document\n>> should be updated in old branches.\n> \n> I could find only this paragraph in the section on inheritance that\n> talks about how access permissions work:\n> \n> 9.4:\n> \n> \"Note how table access permissions are handled. Querying a parent\n> table can automatically access data in child tables without further\n> access privilege checking. This preserves the appearance that the data\n> is (also) in the parent table. Accessing the child tables directly is,\n> however, not automatically allowed and would require further\n> privileges to be granted.\"\n> \n> 9.5-12:\n> \n> \"Inherited queries perform access permission checks on the parent\n> table only. Thus, for example, granting UPDATE permission on the\n> cities table implies permission to update rows in the capitals table\n> as well, when they are accessed through cities. This preserves the\n> appearance that the data is (also) in the parent table. But the\n> capitals table could not be updated directly without an additional\n> grant. In a similar way, the parent table's row security policies (see\n> Section 5.7) are applied to rows coming from child tables during an\n> inherited query. A child table's policies, if any, are applied only\n> when it is the table explicitly named in the query; and in that case,\n> any policies attached to its parent(s) are ignored.\"\n> \n> Do you mean that the TRUNCATE exception should be noted here?\n\nYes, that's what I was thinking.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 31 Jan 2020 21:39:01 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 9:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/01/31 13:38, Amit Langote wrote:\n> > On Fri, Jan 31, 2020 at 1:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> Fair enough. I finally did back-patch because the behavior is clearly\n> >> documented and I failed to hear the opinions to object the back-patch.\n> >> But I should have heard and discussed such risks more.\n> >>\n> >> I'm OK to revert all those back-patch. Instead, probably the document\n> >> should be updated in old branches.\n> >\n> > I could find only this paragraph in the section on inheritance that\n> > talks about how access permissions work:\n> >\n> > 9.4:\n> >\n> > \"Note how table access permissions are handled. Querying a parent\n> > table can automatically access data in child tables without further\n> > access privilege checking. This preserves the appearance that the data\n> > is (also) in the parent table. Accessing the child tables directly is,\n> > however, not automatically allowed and would require further\n> > privileges to be granted.\"\n> >\n> > 9.5-12:\n> >\n> > \"Inherited queries perform access permission checks on the parent\n> > table only. Thus, for example, granting UPDATE permission on the\n> > cities table implies permission to update rows in the capitals table\n> > as well, when they are accessed through cities. This preserves the\n> > appearance that the data is (also) in the parent table. But the\n> > capitals table could not be updated directly without an additional\n> > grant. In a similar way, the parent table's row security policies (see\n> > Section 5.7) are applied to rows coming from child tables during an\n> > inherited query. A child table's policies, if any, are applied only\n> > when it is the table explicitly named in the query; and in that case,\n> > any policies attached to its parent(s) are ignored.\"\n> >\n> > Do you mean that the TRUNCATE exception should be noted here?\n>\n> Yes, that's what I was thinking.\n\nOkay. How about the attached?\n\nMaybe, we should also note the LOCK TABLE exception?\n\nRegards,\nAmit",
"msg_date": "Mon, 3 Feb 2020 11:05:46 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On 2020/02/03 11:05, Amit Langote wrote:\n> On Fri, Jan 31, 2020 at 9:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/01/31 13:38, Amit Langote wrote:\n>>> On Fri, Jan 31, 2020 at 1:28 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> Fair enough. I finally did back-patch because the behavior is clearly\n>>>> documented and I failed to hear the opinions to object the back-patch.\n>>>> But I should have heard and discussed such risks more.\n>>>>\n>>>> I'm OK to revert all those back-patch. Instead, probably the document\n>>>> should be updated in old branches.\n>>>\n>>> I could find only this paragraph in the section on inheritance that\n>>> talks about how access permissions work:\n>>>\n>>> 9.4:\n>>>\n>>> \"Note how table access permissions are handled. Querying a parent\n>>> table can automatically access data in child tables without further\n>>> access privilege checking. This preserves the appearance that the data\n>>> is (also) in the parent table. Accessing the child tables directly is,\n>>> however, not automatically allowed and would require further\n>>> privileges to be granted.\"\n>>>\n>>> 9.5-12:\n>>>\n>>> \"Inherited queries perform access permission checks on the parent\n>>> table only. Thus, for example, granting UPDATE permission on the\n>>> cities table implies permission to update rows in the capitals table\n>>> as well, when they are accessed through cities. This preserves the\n>>> appearance that the data is (also) in the parent table. But the\n>>> capitals table could not be updated directly without an additional\n>>> grant. In a similar way, the parent table's row security policies (see\n>>> Section 5.7) are applied to rows coming from child tables during an\n>>> inherited query. A child table's policies, if any, are applied only\n>>> when it is the table explicitly named in the query; and in that case,\n>>> any policies attached to its parent(s) are ignored.\"\n>>>\n>>> Do you mean that the TRUNCATE exception should be noted here?\n>>\n>> Yes, that's what I was thinking.\n> \n> Okay. How about the attached?\n\nThanks for the patches! You added the note just after the description\nabout row level security on inherited table, but isn't it better to\nadd it before that? Attached patch does that. Thought?\n\n> Maybe, we should also note the LOCK TABLE exception?\n\nYes.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters",
"msg_date": "Mon, 3 Feb 2020 14:07:35 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Mon, Feb 3, 2020 at 2:07 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/03 11:05, Amit Langote wrote:\n> > Okay. How about the attached?\n>\n> Thanks for the patches! You added the note just after the description\n> about row level security on inherited table, but isn't it better to\n> add it before that? Attached patch does that. Thought?\n\nYeah, that might be a better flow for that paragraph.\n\n> > Maybe, we should also note the LOCK TABLE exception?\n>\n> Yes.\n\nNote that, unlike TRUNCATE, LOCK TABLE exception exists in HEAD too,\nbut maybe you're aware of that.\n\nThanks,\nAmit\n\n\n",
"msg_date": "Mon, 3 Feb 2020 14:26:55 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/02/03 14:26, Amit Langote wrote:\n> On Mon, Feb 3, 2020 at 2:07 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/03 11:05, Amit Langote wrote:\n>>> Okay. How about the attached?\n>>\n>> Thanks for the patches! You added the note just after the description\n>> about row level security on inherited table, but isn't it better to\n>> add it before that? Attached patch does that. Thought?\n> \n> Yeah, that might be a better flow for that paragraph.\n\nPushed! Thanks!\n\n>>> Maybe, we should also note the LOCK TABLE exception?\n>>\n>> Yes.\n> \n> Note that, unlike TRUNCATE, LOCK TABLE exception exists in HEAD too,\n> but maybe you're aware of that.\n\nYes, so I will review your patch getting rid of\nLOCK TABLE exception.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Fri, 7 Feb 2020 01:16:31 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Fri, Feb 7, 2020 at 1:16 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/03 14:26, Amit Langote wrote:\n> > On Mon, Feb 3, 2020 at 2:07 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> On 2020/02/03 11:05, Amit Langote wrote:\n> >>> Okay. How about the attached?\n> >>\n> >> Thanks for the patches! You added the note just after the description\n> >> about row level security on inherited table, but isn't it better to\n> >> add it before that? Attached patch does that. Thought?\n> >\n> > Yeah, that might be a better flow for that paragraph.\n>\n> Pushed! Thanks!\n\nThank you.\n\n> >>> Maybe, we should also note the LOCK TABLE exception?\n> >>\n> >> Yes.\n> >\n> > Note that, unlike TRUNCATE, LOCK TABLE exception exists in HEAD too,\n> > but maybe you're aware of that.\n>\n> Yes, so I will review your patch getting rid of\n> LOCK TABLE exception.\n\nAttached updated patch.\n\nRegards,\nAmit",
"msg_date": "Fri, 7 Feb 2020 10:39:52 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/02/07 10:39, Amit Langote wrote:\n> On Fri, Feb 7, 2020 at 1:16 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/03 14:26, Amit Langote wrote:\n>>> On Mon, Feb 3, 2020 at 2:07 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> On 2020/02/03 11:05, Amit Langote wrote:\n>>>>> Okay. How about the attached?\n>>>>\n>>>> Thanks for the patches! You added the note just after the description\n>>>> about row level security on inherited table, but isn't it better to\n>>>> add it before that? Attached patch does that. Thought?\n>>>\n>>> Yeah, that might be a better flow for that paragraph.\n>>\n>> Pushed! Thanks!\n> \n> Thank you.\n> \n>>>>> Maybe, we should also note the LOCK TABLE exception?\n>>>>\n>>>> Yes.\n>>>\n>>> Note that, unlike TRUNCATE, LOCK TABLE exception exists in HEAD too,\n>>> but maybe you're aware of that.\n>>\n>> Yes, so I will review your patch getting rid of\n>> LOCK TABLE exception.\n> \n> Attached updated patch.\n\nThanks! This patch basically looks good to me except\nthe following minor comment.\n\n ROLLBACK;\n-BEGIN;\n-LOCK TABLE ONLY lock_tbl1;\n-ROLLBACK;\n RESET ROLE;\n\nI think that there is no strong reason why these SQLs need to be\nremoved. We can verify that even \"LOCK TABLE ONLY\" command works\nexpectedly on the inherited tables by keeping those SQLs in the\nregression test. So what about not removing these SQLs?\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Thu, 13 Feb 2020 20:39:54 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Thu, Feb 13, 2020 at 8:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/07 10:39, Amit Langote wrote:\n> > On Fri, Feb 7, 2020 at 1:16 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> Yes, so I will review your patch getting rid of\n> >> LOCK TABLE exception.\n> >\n> > Attached updated patch.\n>\n> Thanks! This patch basically looks good to me except\n> the following minor comment.\n>\n> ROLLBACK;\n> -BEGIN;\n> -LOCK TABLE ONLY lock_tbl1;\n> -ROLLBACK;\n> RESET ROLE;\n>\n> I think that there is no strong reason why these SQLs need to be\n> removed. We can verify that even \"LOCK TABLE ONLY\" command works\n> expectedly on the inherited tables by keeping those SQLs in the\n> regression test. So what about not removing these SQLs?\n\nHmm, that test becomes meaningless with the behavior change we are\nintroducing, but I am okay with not removing it.\n\nHowever, I added a test showing that locking child table directly doesn't work.\n\nAttached updated patch.\n\nThanks,\nAmit",
"msg_date": "Fri, 14 Feb 2020 10:28:35 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/02/14 10:28, Amit Langote wrote:\n> On Thu, Feb 13, 2020 at 8:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/07 10:39, Amit Langote wrote:\n>>> On Fri, Feb 7, 2020 at 1:16 AM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> Yes, so I will review your patch getting rid of\n>>>> LOCK TABLE exception.\n>>>\n>>> Attached updated patch.\n>>\n>> Thanks! This patch basically looks good to me except\n>> the following minor comment.\n>>\n>> ROLLBACK;\n>> -BEGIN;\n>> -LOCK TABLE ONLY lock_tbl1;\n>> -ROLLBACK;\n>> RESET ROLE;\n>>\n>> I think that there is no strong reason why these SQLs need to be\n>> removed. We can verify that even \"LOCK TABLE ONLY\" command works\n>> expectedly on the inherited tables by keeping those SQLs in the\n>> regression test. So what about not removing these SQLs?\n> \n> Hmm, that test becomes meaningless with the behavior change we are\n> introducing, but I am okay with not removing it.\n\nOnly this regression test seems to verify LOCK TABLE ONLY command.\nSo if we remove this, I'm afraid that the test coverage would be reduced.\n\n> However, I added a test showing that locking child table directly doesn't work.\n> \n> Attached updated patch.\n\nThanks for updating the patch!\nBarring any objection, I will commit the patch.\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Mon, 17 Feb 2020 16:59:46 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "On Mon, Feb 17, 2020 at 4:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> On 2020/02/14 10:28, Amit Langote wrote:\n> > On Thu, Feb 13, 2020 at 8:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n> >> We can verify that even \"LOCK TABLE ONLY\" command works\n> >> expectedly on the inherited tables by keeping those SQLs in the\n> >> regression test. So what about not removing these SQLs?\n> >\n> > Hmm, that test becomes meaningless with the behavior change we are\n> > introducing, but I am okay with not removing it.\n>\n> Only this regression test seems to verify LOCK TABLE ONLY command.\n> So if we remove this, I'm afraid that the test coverage would be reduced.\n\nOh, I didn't notice that this is the only instance of testing LOCK\nTABLE ONLY. I would've expected that the test for:\n\n1. checking that ONLY works correctly with LOCK TABLE, and\n2. checking permission works correctly with ONLY\n\nare separate. Anyway, we can leave that as is.\n\n> > However, I added a test showing that locking child table directly doesn't work.\n> >\n> > Attached updated patch.\n>\n> Thanks for updating the patch!\n> Barring any objection, I will commit the patch.\n\nThank you.\n\nRegards,\nAmit\n\n\n",
"msg_date": "Mon, 17 Feb 2020 17:13:47 +0900",
"msg_from": "Amit Langote <amitlangote09@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
},
{
"msg_contents": "\n\nOn 2020/02/17 17:13, Amit Langote wrote:\n> On Mon, Feb 17, 2020 at 4:59 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>> On 2020/02/14 10:28, Amit Langote wrote:\n>>> On Thu, Feb 13, 2020 at 8:39 PM Fujii Masao <masao.fujii@oss.nttdata.com> wrote:\n>>>> We can verify that even \"LOCK TABLE ONLY\" command works\n>>>> expectedly on the inherited tables by keeping those SQLs in the\n>>>> regression test. So what about not removing these SQLs?\n>>>\n>>> Hmm, that test becomes meaningless with the behavior change we are\n>>> introducing, but I am okay with not removing it.\n>>\n>> Only this regression test seems to verify LOCK TABLE ONLY command.\n>> So if we remove this, I'm afraid that the test coverage would be reduced.\n> \n> Oh, I didn't notice that this is the only instance of testing LOCK\n> TABLE ONLY. I would've expected that the test for:\n> \n> 1. checking that ONLY works correctly with LOCK TABLE, and\n> 2. checking permission works correctly with ONLY\n> \n> are separate. Anyway, we can leave that as is.\n> \n>>> However, I added a test showing that locking child table directly doesn't work.\n>>>\n>>> Attached updated patch.\n>>\n>> Thanks for updating the patch!\n>> Barring any objection, I will commit the patch.\n> \n> Thank you.\n\nPushed. Thanks!\n\nRegards,\n\n-- \nFujii Masao\nNTT DATA CORPORATION\nAdvanced Platform Technology Group\nResearch and Development Headquarters\n\n\n",
"msg_date": "Tue, 18 Feb 2020 13:16:56 +0900",
"msg_from": "Fujii Masao <masao.fujii@oss.nttdata.com>",
"msg_from_op": false,
"msg_subject": "Re: table partitioning and access privileges"
}
] |
[
{
"msg_contents": "Hello.\n\nI found that confirmed_flush in ReplicationSlot (PersistentData) is\npointed with a wrong name in the comments in slotfuncs.c\n\nThe attached fixes that.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center",
"msg_date": "Thu, 26 Dec 2019 17:59:19 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": true,
"msg_subject": "Fix comment typos."
},
{
"msg_contents": "On Thu, Dec 26, 2019 at 05:59:19PM +0900, Kyotaro Horiguchi wrote:\n> I found that confirmed_flush in ReplicationSlot (PersistentData) is\n> pointed with a wrong name in the comments in slotfuncs.c\n\nCommitted, thanks!\n--\nMichael",
"msg_date": "Thu, 26 Dec 2019 22:27:29 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Fix comment typos."
}
] |
[
{
"msg_contents": "Hi All,\n\nWhen the following test-case is executed on master, it fails with an\nerror: \"ERROR: could not open relation with OID ...\"\n\n-- create a test table:\ncreate table tab1(a int, b text);\n\n-- create a test function:\ncreate or replace function f1() returns void as\n$$\ndeclare\n var1 tab1;\nbegin\n select * into var1 from tab1;\nend\n$$ language plpgsql;\n\n-- call the test function:\nselect f1();\n\n-- drop the test table and re-create it:\ndrop table tab1;\ncreate table tab1(a int, b text);\n\n-- call the test function:\nselect f1();\n\n-- call the test function once again:\nselect f1(); -- this fails with an error \"ERROR: could not open\nrelation with OID ..\"\n\nI'm trying to investigate this issue and will try to share my findings soon...\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Dec 2019 15:57:18 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Calling PLpgSQL function with composite type fails with an error:\n \"ERROR: could not open relation with OID ...\""
},
{
"msg_contents": "The issue here is that PLpgSQL_rec structure being updated by\nrevalidate_rectypeid() is actually a local/duplicate copy of the\nPLpgSQL_rec structure available in plpgsql_HashTable (refer to\ncopy_plpgsql_datums() where you would notice that if datum type is\nPLPGSQL_DTYPE_REC we actually memcpy() the PLpgSQL_rec structure\navailable in func->datums[] array). This basically means that the\nrectypeid field updated post typcache entry validation in\nrevalidate_rectypeid() is actually a field in duplicate copy of\nPLpgSQL_rec structure, not the original copy of it available in\nfunc->datums[]. Hence, when the same function is executed for the\nsecond time, the rectypeid field of PLpgSQL_rec structure being\nreloaded from the func->datums[] actually contains the stale value\nhowever the typcache entry in it is up-to-date which means\nrevalidate_rectypeid() returns immediately leaving a stale value in\nrectypeid. This causes the function make_expanded_record_from_typeid()\nto use the outdated value in rec->rectypeid resulting in the given\nerror.\n\nTo fix this, I think instead of using rec->rectypeid field we should\ntry using rec->datatype->typoid when calling\nmake_expanded_record_from_typeid(). Here is the change that I'm\nsuggesting:\n\n--- a/src/pl/plpgsql/src/pl_exec.c\n+++ b/src/pl/plpgsql/src/pl_exec.c\n@@ -6942,7 +6942,7 @@ make_expanded_record_for_rec(PLpgSQL_execstate *estate,\n newerh = make_expanded_record_from_exprecord(srcerh,\n mcontext);\n else\n- newerh = make_expanded_record_from_typeid(rec->rectypeid, -1,\n+ newerh = make_expanded_record_from_typeid(rec->datatype->typoid, -1,\n\nThoughts ?\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\nOn Thu, Dec 26, 2019 at 3:57 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>\n> Hi All,\n>\n> When the following test-case is executed on master, it fails with an\n> error: \"ERROR: could not open relation with OID ...\"\n>\n> -- create a test table:\n> create table tab1(a int, b text);\n>\n> -- create a test function:\n> create or replace function f1() returns void as\n> $$\n> declare\n> var1 tab1;\n> begin\n> select * into var1 from tab1;\n> end\n> $$ language plpgsql;\n>\n> -- call the test function:\n> select f1();\n>\n> -- drop the test table and re-create it:\n> drop table tab1;\n> create table tab1(a int, b text);\n>\n> -- call the test function:\n> select f1();\n>\n> -- call the test function once again:\n> select f1(); -- this fails with an error \"ERROR: could not open\n> relation with OID ..\"\n>\n> I'm trying to investigate this issue and will try to share my findings soon...\n>\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 26 Dec 2019 17:02:05 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling PLpgSQL function with composite type fails with an error:\n \"ERROR: could not open relation with OID ...\""
},
{
"msg_contents": "Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> The issue here is that PLpgSQL_rec structure being updated by\n> revalidate_rectypeid() is actually a local/duplicate copy of the\n> PLpgSQL_rec structure available in plpgsql_HashTable (refer to\n> copy_plpgsql_datums() where you would notice that if datum type is\n> PLPGSQL_DTYPE_REC we actually memcpy() the PLpgSQL_rec structure\n> available in func->datums[] array). This basically means that the\n> rectypeid field updated post typcache entry validation in\n> revalidate_rectypeid() is actually a field in duplicate copy of\n> PLpgSQL_rec structure, not the original copy of it available in\n> func->datums[]. Hence, when the same function is executed for the\n> second time, the rectypeid field of PLpgSQL_rec structure being\n> reloaded from the func->datums[] actually contains the stale value\n> however the typcache entry in it is up-to-date which means\n> revalidate_rectypeid() returns immediately leaving a stale value in\n> rectypeid. This causes the function make_expanded_record_from_typeid()\n> to use the outdated value in rec->rectypeid resulting in the given\n> error.\n\nGood catch!\n\n> To fix this, I think instead of using rec->rectypeid field we should\n> try using rec->datatype->typoid when calling\n> make_expanded_record_from_typeid().\n\nThis is a crummy fix, though. In the first place, if we did it like this\nwe'd have to fix every other caller of revalidate_rectypeid() likewise.\nBasically the issue here is that revalidate_rectypeid() is failing to do\nwhat it says on the tin, and you're proposing to make the callers work\naround that instead of fixing revalidate_rectypeid(). That seems like\nan odd choice from here.\n\nMore generally, the reason for the separation between PLpgSQL_rec and\nPLpgSQL_type in this part of the code is that PLpgSQL_rec.rectypeid is\nsupposed to record the actual type ID currently instantiated in that\nvariable (in the current function execution), whereas PLpgSQL_type is a\ncache for the last type lookup we did; that's why it's okay to share the\nlatter but not the former across function executions. So failing to\nupdate rec->rectypeid is almost certainly going to lead to problems\nlater on.\n\nI pushed a fix that makes revalidate_rectypeid() deal with this case.\nThanks for the report and debugging!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Dec 2019 15:29:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PLpgSQL function with composite type fails with an error:\n \"ERROR: could not open relation with OID ...\""
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 1:59 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> > The issue here is that PLpgSQL_rec structure being updated by\n> > revalidate_rectypeid() is actually a local/duplicate copy of the\n> > PLpgSQL_rec structure available in plpgsql_HashTable (refer to\n> > copy_plpgsql_datums() where you would notice that if datum type is\n> > PLPGSQL_DTYPE_REC we actually memcpy() the PLpgSQL_rec structure\n> > available in func->datums[] array). This basically means that the\n> > rectypeid field updated post typcache entry validation in\n> > revalidate_rectypeid() is actually a field in duplicate copy of\n> > PLpgSQL_rec structure, not the original copy of it available in\n> > func->datums[]. Hence, when the same function is executed for the\n> > second time, the rectypeid field of PLpgSQL_rec structure being\n> > reloaded from the func->datums[] actually contains the stale value\n> > however the typcache entry in it is up-to-date which means\n> > revalidate_rectypeid() returns immediately leaving a stale value in\n> > rectypeid. This causes the function make_expanded_record_from_typeid()\n> > to use the outdated value in rec->rectypeid resulting in the given\n> > error.\n>\n> Good catch!\n>\n> > To fix this, I think instead of using rec->rectypeid field we should\n> > try using rec->datatype->typoid when calling\n> > make_expanded_record_from_typeid().\n>\n> This is a crummy fix, though. In the first place, if we did it like this\n> we'd have to fix every other caller of revalidate_rectypeid() likewise.\n> Basically the issue here is that revalidate_rectypeid() is failing to do\n> what it says on the tin, and you're proposing to make the callers work\n> around that instead of fixing revalidate_rectypeid(). That seems like\n> an odd choice from here.\n>\n> More generally, the reason for the separation between PLpgSQL_rec and\n> PLpgSQL_type in this part of the code is that PLpgSQL_rec.rectypeid is\n> supposed to record the actual type ID currently instantiated in that\n> variable (in the current function execution), whereas PLpgSQL_type is a\n> cache for the last type lookup we did; that's why it's okay to share the\n> latter but not the former across function executions. So failing to\n> update rec->rectypeid is almost certainly going to lead to problems\n> later on.\n>\n> I pushed a fix that makes revalidate_rectypeid() deal with this case.\n> Thanks for the report and debugging!\n>\n\nOkay. Thanks for that fix. You've basically forced\nrevalidate_rectypeid() to update the PLpgSQL_rec's rectypeid\nirrespective of whether the typcache entry requires re-validation or not.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Dec 2019 08:02:03 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling PLpgSQL function with composite type fails with an error:\n \"ERROR: could not open relation with OID ...\""
},
{
"msg_contents": "Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> Okay. Thanks for that fix. You've basically forced\n> revalidate_rectypeid() to update the PLpgSQL_rec's rectypeid\n> irrespective of whether the typcache entry requires re-validation or not.\n\nRight. The assignment is cheap enough that it hardly seems\nworth avoiding.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 26 Dec 2019 22:50:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Calling PLpgSQL function with composite type fails with an error:\n \"ERROR: could not open relation with OID ...\""
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 9:20 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Ashutosh Sharma <ashu.coek88@gmail.com> writes:\n> > Okay. Thanks for that fix. You've basically forced\n> > revalidate_rectypeid() to update the PLpgSQL_rec's rectypeid\n> > irrespective of typcache entry requires re-validation or not.\n>\n> Right. The assignment is cheap enough that it hardly seems\n> worth avoiding.\n>\n\nAgreed.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 27 Dec 2019 16:45:18 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Calling PLpgSQL function with composite type fails with an error:\n \"ERROR: could not open relation with OID ...\""
}
] |
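The stale-rectypeid report in the thread above comes down to C shallow-copy semantics: memcpy() of a struct produces an independent copy, so an update made through the copy never propagates back to the template kept in func->datums[], and vice versa. The sketch below illustrates only that mechanism; `Rec`, `copy_datum` and `revalidate` are hypothetical stand-in names for this illustration, not the actual PostgreSQL structs or functions.

```c
#include <assert.h>
#include <string.h>

/*
 * Hypothetical stand-in for PLpgSQL_rec: only the field relevant to
 * the discussion is modeled.
 */
typedef struct Rec
{
    int         rectypeid;      /* cached composite type OID */
} Rec;

/* copy_plpgsql_datums()-style setup: each execution gets a shallow copy */
static void
copy_datum(Rec *dst, const Rec *src)
{
    memcpy(dst, src, sizeof(Rec));
}

/*
 * A revalidation step that updates only the copy it is handed leaves
 * the template untouched -- the heart of the stale-rectypeid problem.
 */
static void
revalidate(Rec *rec, int current_typeid)
{
    rec->rectypeid = current_typeid;
}
```

Running the copy/revalidate cycle twice shows why the second execution of the function still saw the dropped table's OID: the first execution fixed its private copy, while the hashed template kept the stale value.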
[
{
"msg_contents": "Hello\n\nOur current setup uses logical replication to build a BI replication server \nalong our primary clusters (running PG 10.10 so far). This implies having one \nlogical replication slot per database. After some analysis, we identified two \nhot spots behind this issue. Fixing them gave us a 10 fold performance \nimprovement in decoding speed.\n\n\nWe noticed our load had gotten quite bigger on our primary since the \nintroduction of this replication, seeing spikes in system time when a lot of \nwal were being written (for instance when creating GIN indexes).\n\nThe first hot spot is PostmasterIsAlive. The implementation reads on a pipe to \nknow if the postmaster is still alive, but this is very expensive kernel-wise. \nWhen switching the implementation to a much more primitive (and probably \nwrong):\n\tbool PostmasterIsAliveInternal() {\n\t\treturn getppid() == PostmasterPid;\n\t}\nwe stopped seeing spikes in system time.\nBut after doing that, the CPU time used by our walsenders increased. We \nreached a second hot spot, this time in XLogSendLogical, where each walsender \nwas using 100% of user CPU for minutes. After checking with perf, it appears \nall the decoders are fighting on GetFlushRecPtr.\n\nOn PostgreSQL 12, the call to PostmasterIsAlive is no longer present in \nWalSndLoop (thanks to commit cfdf4dc4f), so only the second hot spot is still \npresent, with the same effects.\n\nAttached to this email are two patches. \n\nThe first one, specific to PG 10 for our use-case, simplifies the \nPostmasterIsAlive function, as described above. I don't know if this \nimplementation is valid, but it was needed to uncover the performance issue in \nXLogSendLogical. Would it be possible to remove the systematic call to \nPostmasterIsAlive in WalSndLoop? We are not certain of the behaviour.\n\nThe second one was tested on PG 10 and PG 12 (with 48 lines offset). It has on \nPG12 the same effect it has on a PG10+isAlive patch. Instead of calling each \ntime GetFlushRecPtr, we call it only if we notice we have reached the value of \nthe previous call. This way, when the senders are busy decoding, we are no \nlonger fighting for a spinlock to read the FlushRecPtr.\n\nHere are some benchmark results.\nOn PG 10, to decode our replication stream, we went from 3m 43s to over 5 \nminutes after removing the first hot spot, and then down to 22 seconds.\nOn PG 12, we had to change the benchmark (due to GIN indexes creation being \nmore optimized) so we can not compare directly with our previous bench. We \nwent from 15m 11s down to 59 seconds.\nIf needed, we can provide scripts to reproduce this situation. It is quite \nsimple: add ~20 walsenders doing logical replication in database A, and then \ngenerate a lot of data in database B. The walsenders will be woken up by the \nactivity on database B, but not sending it thus keeping hitting the same \nlocks.\n\nRegards",
"msg_date": "Thu, 26 Dec 2019 17:42:51 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "[PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "Hello Pierre,\n\nOn Thu, Dec 26, 2019 at 5:43 PM Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>\n> The second one was tested on PG 10 and PG 12 (with 48 lines offset). It has on\n> PG12 the same effect it has on a PG10+isAlive patch. Instead of calling each\n> time GetFlushRecPtr, we call it only if we notice we have reached the value of\n> the previous call. This way, when the senders are busy decoding, we are no\n> longer fighting for a spinlock to read the FlushRecPtr.\n\nThe patch is quite straightforward and looks good to me.\n\n- XLogRecPtr flushPtr;\n+ static XLogRecPtr flushPtr = 0;\n\nYou should use InvalidXLogRecPtr instead though, and maybe adding some\ncomments to explain why the static variable is a life changer here.\n\n> Here are some benchmark results.\n> On PG 10, to decode our replication stream, we went from 3m 43s to over 5\n> minutes after removing the first hot spot, and then down to 22 seconds.\n> On PG 12, we had to change the benchmark (due to GIN indexes creation being\n> more optimized) so we can not compare directly with our previous bench. We\n> went from 15m 11s down to 59 seconds.\n> If needed, we can provide scripts to reproduce this situation. It is quite\n> simple: add ~20 walsenders doing logical replication in database A, and then\n> generate a lot of data in database B. The walsenders will be woken up by the\n> activity on database B, but not sending it thus keeping hitting the same\n> locks.\n\nQuite impressive speedup!\n\n\n",
"msg_date": "Thu, 26 Dec 2019 20:18:46 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "On Thursday, December 26, 2019 8:18:46 PM CET Julien Rouhaud wrote:\n> Hello Pierre,\n> \n> On Thu, Dec 26, 2019 at 5:43 PM Pierre Ducroquet <p.psql@pinaraf.info> \nwrote:\n> > The second one was tested on PG 10 and PG 12 (with 48 lines offset). It\n> > has on PG12 the same effect it has on a PG10+isAlive patch. Instead of\n> > calling each time GetFlushRecPtr, we call it only if we notice we have\n> > reached the value of the previous call. This way, when the senders are\n> > busy decoding, we are no longer fighting for a spinlock to read the\n> > FlushRecPtr.\n> \n> The patch is quite straightforward and looks good to me.\n> \n> - XLogRecPtr flushPtr;\n> + static XLogRecPtr flushPtr = 0;\n> \n> You should use InvalidXLogRecPtr instead though, and maybe adding some\n> comments to explain why the static variable is a life changer here.\n> \n> > Here are some benchmark results.\n> > On PG 10, to decode our replication stream, we went from 3m 43s to over 5\n> > minutes after removing the first hot spot, and then down to 22 seconds.\n> > On PG 12, we had to change the benchmark (due to GIN indexes creation\n> > being\n> > more optimized) so we can not compare directly with our previous bench. We\n> > went from 15m 11s down to 59 seconds.\n> > If needed, we can provide scripts to reproduce this situation. It is quite\n> > simple: add ~20 walsenders doing logical replication in database A, and\n> > then generate a lot of data in database B. The walsenders will be woken\n> > up by the activity on database B, but not sending it thus keeping hitting\n> > the same locks.\n> \n> Quite impressive speedup!\n\n\n\nHi\n\nThank you for your comments.\nAttached to this email is a patch with better comments regarding the \nXLogSendLogical change.\nWe've spent quite some time yesterday benching it again, this time with \nchanges that must be fully processed by the decoder. The speed-up is obviously \nmuch smaller, we are only ~5% faster than without the patch.\n\nRegards",
"msg_date": "Sat, 28 Dec 2019 13:55:59 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 1:56 PM Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>\n> Thank you for your comments.\n> Attached to this email is a patch with better comments regarding the\n> XLogSendLogical change.\n\nArguably the first test to compare to InvalidXLogRecPtr is unneeded,\nas any value of EndRecPtr is greater or equal than that value. It\nwill only save at best 1 GetFlushRecPtr() per walsender process\nlifetime, so I'm not sure it's worth arguing about it. Other than\nthat I still think that it's a straightforward optimization that\nbrings nice speedup, and I don't see any problem with this patch. I\nthink that given the time of the year you should create a commitfest\nentry for this patch to make sure it won't be forgotten (and obviously\nI'll mark it as RFC, unless someone objects by then).\n\n> We've spent quite some time yesterday benching it again, this time with\n> changes that must be fully processed by the decoder. The speed-up is obviously\n> much smaller, we are only ~5% faster than without the patch.\n\nI'm assuming that it's benchmarking done with multiple logical slots?\nAnyway, a 5% speedup in the case that this patch is not aimed to\noptimize is quite nice!\n\n\n",
"msg_date": "Sun, 29 Dec 2019 13:32:31 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "On Sunday, December 29, 2019 1:32:31 PM CET Julien Rouhaud wrote:\n> On Sat, Dec 28, 2019 at 1:56 PM Pierre Ducroquet <p.psql@pinaraf.info> \nwrote:\n> > Thank you for your comments.\n> > Attached to this email is a patch with better comments regarding the\n> > XLogSendLogical change.\n> \n> Arguably the first test to compare to InvalidXLogRecPtr is unneeded,\n> as any value of EndRecPtr is greater or equal than that value. It\n> will only save at best 1 GetFlushRecPtr() per walsender process\n> lifetime, so I'm not sure it's worth arguing about it. Other than\n> that I still think that it's a straightforward optimization that\n> brings nice speedup, and I don't see any problem with this patch. I\n> think that given the time of the year you should create a commitfest\n> entry for this patch to make sure it won't be forgotten (and obviously\n> I'll mark it as RFC, unless someone objects by then).\n> \n> > We've spent quite some time yesterday benching it again, this time with\n> > changes that must be fully processed by the decoder. The speed-up is\n> > obviously much smaller, we are only ~5% faster than without the patch.\n> \n> I'm assuming that it's benchmarking done with multiple logical slots?\n> Anyway, a 5% speedup in the case that this patch is not aimed to\n> optimize is quite nice!\n\n\nI've created a commitfest entry for this patch.\nhttps://commitfest.postgresql.org/26/2403/\nI would like to know if it would be acceptable to backport this to PostgreSQL \n12. I have to write a clean benchmark for that (our previous benchs are either \nPG10 or PG12 specific), but the change from Thomas Munro that removed the \ncalls to PostmasterIsAlive is very likely to have the same side-effect we \nobserved in PG10 when patching IsAlive, aka. 
moving the pressure from the pipe \nreads to the PostgreSQL locks between processes, and this made the whole \nprocess slower: the benchmark showed a serious regression, going from 3m45s to \n5m15s to decode the test transactions.\n\nRegards\n\n Pierre\n\n\n\n\n",
"msg_date": "Mon, 30 Dec 2019 12:13:58 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "Pierre Ducroquet <p.psql@pinaraf.info> writes:\n> Attached to this email is a patch with better comments regarding the \n> XLogSendLogical change.\n\nHi,\n This patch entirely fails to apply for me (and for the cfbot too).\nIt looks like (a) it's missing a final newline and (b) all the tabs\nhave been mangled into spaces, and not correctly mangled either.\nI could probably reconstruct a workable patch if I had to, but\nit seems likely that it'd be easier for you to resend it with a\nlittle more care about attaching an unmodified attachment.\n\nAs for the question of back-patching, it seems to me that it'd\nlikely be reasonable to put this into v12, but probably not\nfurther back. There will be no interest in back-patching\ncommit cfdf4dc4f, and it seems like the argument for this\npatch is relatively weak without that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 12:57:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "On Monday, January 6, 2020 6:57:33 PM CET Tom Lane wrote:\n> Pierre Ducroquet <p.psql@pinaraf.info> writes:\n> > Attached to this email is a patch with better comments regarding the\n> > XLogSendLogical change.\n> \n> Hi,\n> This patch entirely fails to apply for me (and for the cfbot too).\n> It looks like (a) it's missing a final newline and (b) all the tabs\n> have been mangled into spaces, and not correctly mangled either.\n> I could probably reconstruct a workable patch if I had to, but\n> it seems likely that it'd be easier for you to resend it with a\n> little more care about attaching an unmodified attachment.\n> \n> As for the question of back-patching, it seems to me that it'd\n> likely be reasonable to put this into v12, but probably not\n> further back. There will be no interest in back-patching\n> commit cfdf4dc4f, and it seems like the argument for this\n> patch is relatively weak without that.\n> \n> \t\t\tregards, tom lane\n\nHi\n\nMy deepest apologies for the patch being broken, I messed up when transferring \nit between my computers after altering the comments. The verbatim one attached \nto this email applies with no issue on current HEAD.\nThe patch regarding PostmasterIsAlive is completely pointless since v12 where \nthe function was rewritten, and was included only to help reproduce the issue \non older versions. Back-patching the walsender patch further than v12 would \nimply back-patching all the machinery introduced for PostmasterIsAlive \n(9f09529952) or another intrusive change there, a too big risk indeed.\n\nRegards \n\n Pierre",
"msg_date": "Mon, 06 Jan 2020 19:57:40 +0100",
"msg_from": "Pierre Ducroquet <p.psql@pinaraf.info>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 7:57 PM Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>\n> On Monday, January 6, 2020 6:57:33 PM CET Tom Lane wrote:\n> > Pierre Ducroquet <p.psql@pinaraf.info> writes:\n> > > Attached to this email is a patch with better comments regarding the\n> > > XLogSendLogical change.\n> >\n> > Hi,\n> > This patch entirely fails to apply for me (and for the cfbot too).\n> > It looks like (a) it's missing a final newline and (b) all the tabs\n> > have been mangled into spaces, and not correctly mangled either.\n> > I could probably reconstruct a workable patch if I had to, but\n> > it seems likely that it'd be easier for you to resend it with a\n> > little more care about attaching an unmodified attachment.\n> >\n> > As for the question of back-patching, it seems to me that it'd\n> > likely be reasonable to put this into v12, but probably not\n> > further back. There will be no interest in back-patching\n> > commit cfdf4dc4f, and it seems like the argument for this\n> > patch is relatively weak without that.\n> >\n> > regards, tom lane\n>\n> Hi\n>\n> My deepest apologies for the patch being broken, I messed up when transferring\n> it between my computers after altering the comments. The verbatim one attached\n> to this email applies with no issue on current HEAD.\n> The patch regarding PostmasterIsAlive is completely pointless since v12 where\n> the function was rewritten, and was included only to help reproduce the issue\n> on older versions. Back-patching the walsender patch further than v12 would\n> imply back-patching all the machinery introduced for PostmasterIsAlive\n> (9f09529952) or another intrusive change there, a too big risk indeed.\n\n+1, backpatch to v12 looks sensible.\n\n\n",
"msg_date": "Mon, 6 Jan 2020 20:16:08 +0100",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "Hi,\n\nI spent a little bit of time trying to explain the problem we are facing clearly, and provide a reproducible benchmark.\n\nSo here it is.\n\nWhat triggered our investigation is that we have a PostgreSQL cluster containing about 15 databases, most of them being used as sources for logical replication. This means we have about as many WAL senders active on the cluster at the same time.\n\nWhat we saw was that we had very high spikes of CPU activity, with very high level of SYS (up to 50% of the whole system was SYS, with 100% load on all CPUs) on the server, which is a dual Xeon Silver 4110, so 16 cores, 32 threads. That seemed insane as our usual load isn't that high on average (like 20% total cpu use, with ~ 1% of SYS), and mostly on 2 of those 15 databases. This mostly occurred when creating an index, or doing batch updates, COPYs, etc. And all WAL senders consumed about the same amount, even those connected to databases where nothing happened. And while testing, we noticed things were worse with PG 12 than with PG 10 (that was the cause for the first patch Pierre posted, which was more of a way to get PostmasterIsAlive out of the way for PG 10 to get the same behaviour on both databases and confirm what was different between the two versions).\n\nSo that was what the first benchmark we did, what Pierre posted a few days ago. With the second patch (reducing calls to GetFlushRecPtr), on PostgreSQL 12, with statements affecting lots of records at once, we managed to reduce the WAL senders' consumption by a factor of 15 (if the patch is correct of course). SYS was down to more sensible (near 0) values. WAL senders for databases which had no decoding to do didn't consume that much anymore, only the one connected to the database doing the work used a lot of CPU, but that's expected. This solves the problem we are facing. 
Without this, we won't be able to upgrade to PG 12, as the impact of GetFlushRecPtr is even worse than with PG 10.\n\n\nI've now tried to measure the impact of the patch on a more evenly balanced activity on several databases, where the contention on GetFlushRecPtr is less severe, to see if there are wins in all cases. Simple scripts to test this are provided as an attachment.\n\nJust set the \"max\" environment variable to the amount of databases/WAL senders you want, and then run create_logical.sh (creates the databases and the slots), then connect_logical.sh (connects pg_recvlogical processes to these slots), and run_stress.sh (runs a pgbench on each database, each doing 100000 transactions, all in parallel). drop_logical.sh does the cleanup. Sorry, they are very basic...\n\nHere are the results on patched/unpatched PG 12. You have the messages from all pgbenchs, then the consumption of all WAL senders at the end of a run.\n\nAs you can see, pgbench runs are ~ 20% faster. That's because with unpatched, we are CPU starved (the server is 100% CPU), which we aren't with patched. 
WAL senders needed about half the CPU in the patched case as shown by pidstat.\n\nUNPATCHED:\n\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.943 ms\ntps = 514.796454 (including connections establishing)\ntps = 514.805610 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.949 ms\ntps = 513.130790 (including connections establishing)\ntps = 513.168135 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.950 ms\ntps = 512.946425 (including connections establishing)\ntps = 512.961746 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.951 ms\ntps = 512.643065 (including connections establishing)\ntps = 512.678765 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.953 ms\ntps = 512.159794 (including connections establishing)\ntps = 512.178075 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 
1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.953 ms\ntps = 512.024962 (including connections establishing)\ntps = 512.034590 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.953 ms\ntps = 512.016413 (including connections establishing)\ntps = 512.034438 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.954 ms\ntps = 511.728080 (including connections establishing)\ntps = 511.760138 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.954 ms\ntps = 511.655046 (including connections establishing)\ntps = 511.678533 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.955 ms\ntps = 511.604767 (including connections establishing)\ntps = 511.617562 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 
100000/100000\nlatency average = 1.955 ms\ntps = 511.525593 (including connections establishing)\ntps = 511.558150 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.955 ms\ntps = 511.496498 (including connections establishing)\ntps = 511.505871 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.956 ms\ntps = 511.334434 (including connections establishing)\ntps = 511.363141 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.956 ms\ntps = 511.256908 (including connections establishing)\ntps = 511.284577 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.957 ms\ntps = 511.021219 (including connections establishing)\ntps = 511.041905 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.957 ms\ntps = 510.977176 (including connections establishing)\ntps = 511.004730 (excluding connections 
establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.958 ms\ntps = 510.818341 (including connections establishing)\ntps = 510.870735 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.958 ms\ntps = 510.719611 (including connections establishing)\ntps = 510.741600 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.959 ms\ntps = 510.460165 (including connections establishing)\ntps = 510.504649 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.961 ms\ntps = 509.962481 (including connections establishing)\ntps = 509.978043 (excluding connections establishing)\n\n~$ pidstat -C postgres -l -u -T CHILD\nLinux 4.9.0-11-amd64 (hardy) 01/06/2020 _x86_64_ (32 CPU)\n\n02:39:30 PM UID PID usr-ms system-ms guest-ms Command\n02:39:30 PM 111 78644 2232190 284870 0 /tmp/pg-12-unpatched/bin/postgres\n02:39:30 PM 111 78646 400 340 0 postgres: checkpointer\n02:39:30 PM 111 78647 530 1330 0 postgres: background writer\n02:39:30 PM 111 78648 300 150 0 postgres: walwriter\n02:39:30 PM 111 78649 30 40 0 postgres: autovacuum launcher\n02:39:30 PM 111 78650 130 440 0 postgres: stats 
collector\n02:39:30 PM 111 78790 86340 20560 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78792 89170 23270 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78794 86990 20740 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78795 91900 22540 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78797 92000 23190 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78799 94060 22520 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78801 95300 21500 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78803 93120 21360 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78805 95420 21920 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78807 94400 21350 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78809 88850 20390 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78811 90030 20690 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78812 94310 22660 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78813 94080 22470 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78814 95370 21520 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78815 94780 21470 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78816 92440 21080 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78817 94360 20700 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78818 92230 20760 0 postgres: walsender postgres [local] idle\n02:39:30 PM 111 78819 90280 20780 0 postgres: walsender postgres [local] idle\n\nPATCHED:\n\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.680 ms\ntps = 595.090858 (including connections establishing)\ntps = 595.131449 (excluding connections establishing)\ntransaction type: 
<builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.683 ms\ntps = 594.156492 (including connections establishing)\ntps = 594.198624 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.684 ms\ntps = 593.927387 (including connections establishing)\ntps = 593.946829 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.686 ms\ntps = 593.209506 (including connections establishing)\ntps = 593.257556 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.686 ms\ntps = 593.144977 (including connections establishing)\ntps = 593.162018 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.686 ms\ntps = 593.084403 (including connections establishing)\ntps = 593.122178 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 
100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.686 ms\ntps = 593.134657 (including connections establishing)\ntps = 593.199432 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.687 ms\ntps = 592.908760 (including connections establishing)\ntps = 592.923386 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.687 ms\ntps = 592.802027 (including connections establishing)\ntps = 592.814300 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.687 ms\ntps = 592.678874 (including connections establishing)\ntps = 592.769759 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.687 ms\ntps = 592.642501 (including connections establishing)\ntps = 592.723261 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.688 ms\ntps = 592.249597 (including connections establishing)\ntps = 
592.262962 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.690 ms\ntps = 591.867795 (including connections establishing)\ntps = 591.958672 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.690 ms\ntps = 591.835950 (including connections establishing)\ntps = 591.908940 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.690 ms\ntps = 591.799816 (including connections establishing)\ntps = 591.824497 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.690 ms\ntps = 591.738978 (including connections establishing)\ntps = 591.780258 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.691 ms\ntps = 591.530490 (including connections establishing)\ntps = 591.570876 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 
1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.691 ms\ntps = 591.452142 (including connections establishing)\ntps = 591.498424 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.693 ms\ntps = 590.674305 (including connections establishing)\ntps = 590.708951 (excluding connections establishing)\ntransaction type: <builtin: TPC-B (sort of)>\nscaling factor: 1\nquery mode: simple\nnumber of clients: 1\nnumber of threads: 1\nnumber of transactions per client: 100000\nnumber of transactions actually processed: 100000/100000\nlatency average = 1.693 ms\ntps = 590.517240 (including connections establishing)\ntps = 590.563531 (excluding connections establishing)\n\n\n~$ pidstat -C postgres -l -u -T CHILD\nLinux 4.9.0-11-amd64 (hardy) 01/06/2020 _x86_64_ (32 CPU)\n\n02:29:02 PM UID PID usr-ms system-ms guest-ms Command\n02:29:02 PM 111 75810 2185430 294190 0 /tmp/pg-12-patched/bin/postgres\n02:29:02 PM 111 75812 410 320 0 postgres: checkpointer\n02:29:02 PM 111 75813 410 1230 0 postgres: background writer\n02:29:02 PM 111 75814 270 80 0 postgres: walwriter\n02:29:02 PM 111 75815 30 20 0 postgres: autovacuum launcher\n02:29:02 PM 111 75816 130 360 0 postgres: stats collector\n02:29:02 PM 111 75961 35890 27240 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75963 37390 27950 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75965 38360 28110 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75966 38350 28160 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75968 38370 28160 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75970 37820 28110 0 postgres: walsender postgres [local] 
idle\n02:29:02 PM 111 75972 38250 27330 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75974 36870 27640 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75976 36890 26850 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75979 36920 26330 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75980 37090 27240 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75981 38040 28210 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75982 36530 27460 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75983 37560 27330 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75984 36660 27170 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75985 36370 27020 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75986 36960 27000 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75987 36460 26820 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75988 36290 27140 0 postgres: walsender postgres [local] idle\n02:29:02 PM 111 75989 36320 26750 0 postgres: walsender postgres [local] idle\n\n\nRegard,\n\nMarc",
"msg_date": "Mon, 6 Jan 2020 20:21:01 +0100",
"msg_from": "Marc Cousin <cousinmarc@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
},
{
"msg_contents": "Julien Rouhaud <rjuju123@gmail.com> writes:\n> On Mon, Jan 6, 2020 at 7:57 PM Pierre Ducroquet <p.psql@pinaraf.info> wrote:\n>> My deepest apologies for the patch being broken, I messed up when transferring\n>> it between my computers after altering the comments. The verbatim one attached\n>> to this email applies with no issue on current HEAD.\n>> The patch regarding PostmasterIsAlive is completely pointless since v12 where\n>> the function was rewritten, and was included only to help reproduce the issue\n>> on older versions. Back-patching the walsender patch further than v12 would\n>> imply back-patching all the machinery introduced for PostmasterIsAlive\n>> (9f09529952) or another intrusive change there, a too big risk indeed.\n\n> +1, backpatch to v12 looks sensible.\n\nPushed with some minor cosmetic fiddling.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 16:43:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] fix a performance issue with multiple logical-decoding\n walsenders"
}
] |
[
{
"msg_contents": "Hi all,\n\nIn commit ab5b4e2f9ed, we optimized AllocSetFreeIndex() using a lookup\ntable. At the time, using CLZ was rejected because compiler/platform\nsupport was not widespread enough to justify it. For other reasons, we\nrecently added bitutils.h which uses __builtin_clz() where available,\nso it makes sense to revisit this. I modified the test in [1] (C files\nattached), using two separate functions to test CLZ versus the\nopen-coded algorithm of pg_leftmost_one_pos32().\n\nThese are typical results on a recent Intel platform:\n\nHEAD 5.55s\nclz 4.51s\nopen-coded 9.67s\n\nCLZ gives a nearly 20% speed boost on this microbenchmark. I suspect\nthat this micro-benchmark is actually biased towards the lookup table\nmore than real-world workloads, because it can monopolize the L1\ncache. Sparing cache is possibly the more interesting reason to use\nCLZ. The open-coded version is nearly twice as slow, so it makes sense\nto keep the current implementation as the default one, and not use\npg_leftmost_one_pos32() directly. However, with a small tweak, we can\nreuse the lookup table in bitutils.c instead of the bespoke one used\nsolely by AllocSetFreeIndex(), saving a couple cache lines here also.\nThis is done in the attached patch.\n\n[1] https://www.postgresql.org/message-id/407d949e0907201811i13c73e18x58295566d27aadcc%40mail.gmail.com\n\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 26 Dec 2019 18:49:46 -0500",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "On 2019-Dec-26, John Naylor wrote:\n\n> In commit ab5b4e2f9ed, we optimized AllocSetFreeIndex() using a lookup\n> table. At the time, using CLZ was rejected because compiler/platform\n> support was not widespread enough to justify it. For other reasons, we\n> recently added bitutils.h which uses __builtin_clz() where available,\n> so it makes sense to revisit this. I modified the test in [1] (C files\n> attached), using two separate functions to test CLZ versus the\n> open-coded algorithm of pg_leftmost_one_pos32().\n> \n> These are typical results on a recent Intel platform:\n> \n> HEAD 5.55s\n> clz 4.51s\n> open-coded 9.67s\n\nI can confirm these results on my Intel laptop. I ran it with a\nrepetition of 20, averages of 4 runs:\n\nclz\t\t1,614\nbitutils\t3,714\ncurrent\t\t2,088\n(stddevs are under 0,031).\n\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 10:21:50 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Dec-26, John Naylor wrote:\n>> In commit ab5b4e2f9ed, we optimized AllocSetFreeIndex() using a lookup\n>> table. At the time, using CLZ was rejected because compiler/platform\n>> support was not widespread enough to justify it. For other reasons, we\n>> recently added bitutils.h which uses __builtin_clz() where available,\n>> so it makes sense to revisit this. I modified the test in [1] (C files\n>> attached), using two separate functions to test CLZ versus the\n>> open-coded algorithm of pg_leftmost_one_pos32().\n\n> I can confirm these results on my Intel laptop. I ran it with a\n> repetition of 20, averages of 4 runs:\n\nI tried this on a few other architectures --- ppc32, aarch64, and x86\n(not 64). The general contours of the result are the same on all,\neg here's the results on aarch64 (Fedora 28):\n\n$ ./a.out 100\n...\n clz 22.713s\n bitutils func 59.462s\n current 30.630s\n\nThis kind of leads me to wonder if we don't need to expend more\neffort on the non-CLZ version of pg_leftmost_one_pos32; it seems\nlike it shouldn't be losing this badly to the only-slightly-\nimproved logic that's currently in AllocSetFreeIndex. On the\nother hand, the buildfarm thinks that __builtin_clz is essentially\nuniversal these days --- the only active non-MSVC critter that\nreports not having it is anole. So maybe it's not worth sweating\nover that. Perhaps what we really ought to be working on is\nfinding MSVC equivalents for __builtin_clz and friends.\n\nAnyway, getting back to the presented patch, I find myself a bit\ndissatisfied with it because it seems like it's leaving something\non the table. Specifically, looking at the generated assembly\ncode on a couple of architectures, the setup logic generated by\n\n\t\ttsize = (size - 1) >> ALLOC_MINBITS;\n\nlooks like it costs as much or more as the clz proper. I'm not\nsure we can get rid of the subtract-one step, but couldn't the\nright shift be elided in favor of changing the constant we\nsubtract clz's result from? Shifting off those bits separately\nmade sense in the old implementation, but assuming that CLZ is\nmore or less constant-time, it doesn't with CLZ.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Dec 2019 09:54:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
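The non-CLZ path benchmarked in the message above is the byte-at-a-time lookup-table technique. A rough sketch of that approach follows; it is illustrative only — the real pg_bitutils code uses a precomputed const array named pg_leftmost_one_pos and differs in details (here the table is built lazily just to keep the sketch short):

```c
#include <stdint.h>

/* Lazily built 256-entry table: pos_of_high_bit[b] = floor(log2(b)) */
static uint8_t pos_of_high_bit[256];
static int table_ready = 0;

/*
 * Sketch of the open-coded pg_leftmost_one_pos32() technique: find the
 * highest nonzero byte by shifting, then look up the top bit position
 * within that byte.  word must be nonzero.
 */
static int leftmost_one_pos32(uint32_t word)
{
    int shift = 24;

    if (!table_ready)
    {
        for (int b = 1; b < 256; b++)
        {
            int pos = 0;
            for (int t = b; t > 1; t >>= 1)
                pos++;
            pos_of_high_bit[b] = (uint8_t) pos;
        }
        table_ready = 1;
    }

    while ((word >> shift) == 0)    /* branchy scan for the top byte */
        shift -= 8;
    return shift + pos_of_high_bit[word >> shift];
}
```

The while loop over bytes is the branchy, cache-touching part that a single bsr/clz instruction replaces, which is consistent with the timings reported in this thread.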
{
"msg_contents": "On Fri, Dec 27, 2019 at 9:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Anyway, getting back to the presented patch, I find myself a bit\n> dissatisfied with it because it seems like it's leaving something\n> on the table. Specifically, looking at the generated assembly\n> code on a couple of architectures, the setup logic generated by\n>\n> tsize = (size - 1) >> ALLOC_MINBITS;\n>\n> looks like it costs as much or more as the clz proper. I'm not\n> sure we can get rid of the subtract-one step,\n\nAs I understand it, the desired outcome is ceil(log2(size)), which can\nbe computed by clz(size - 1) + 1.\n\n> but couldn't the\n> right shift be elided in favor of changing the constant we\n> subtract clz's result from? Shifting off those bits separately\n> made sense in the old implementation, but assuming that CLZ is\n> more or less constant-time, it doesn't with CLZ.\n\nThat makes sense -- I'll look into doing that.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 10:47:01 -0500",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
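The identity mentioned here — ceil(log2(size)) equals clz(size - 1) + 1 for size > 1 — is what lets the right shift be folded into the constant. A minimal sketch of a CLZ-based free-list index along those lines (simplified relative to the actual allocset.c code, and assuming a compiler that provides __builtin_clz):

```c
#include <stdint.h>

#define ALLOC_MINBITS 3 /* smallest chunk size is 8 bytes */

/*
 * Map a request size to the index of the free list whose chunks are the
 * smallest power of two that can hold it.  For size > 8 the index is
 * ceil(log2(size)) - ALLOC_MINBITS, and ceil(log2(size)) for size > 1
 * is 32 - clz(size - 1) -- no separate right shift needed.
 */
static int alloc_free_index(uint32_t size)
{
    if (size > (1u << ALLOC_MINBITS))
        return 32 - __builtin_clz(size - 1) - ALLOC_MINBITS;
    return 0;               /* sizes 1..8 all share the 8-byte list */
}
```

For example, sizes 9 through 16 map to index 1 (the 16-byte list) and size 17 maps to index 2 (the 32-byte list), matching what the separate shift-then-search code computes.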
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> On Fri, Dec 27, 2019 at 9:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> ... but couldn't the\n>> right shift be elided in favor of changing the constant we\n>> subtract clz's result from? Shifting off those bits separately\n>> made sense in the old implementation, but assuming that CLZ is\n>> more or less constant-time, it doesn't with CLZ.\n\n> That makes sense -- I'll look into doing that.\n\nActually, we could apply that insight to both code paths.\nIn the existing path, that requires assuming \nALLOCSET_NUM_FREELISTS+ALLOC_MINBITS <= 17, but that's OK.\n(Nowadays I'd probably add a StaticAssert about that.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Dec 2019 11:05:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "On 2019-Dec-27, Tom Lane wrote:\n\n> This kind of leads me to wonder if we don't need to expend more\n> effort on the non-CLZ version of pg_leftmost_one_pos32; it seems\n> like it shouldn't be losing this badly to the only-slightly-\n> improved logic that's currently in AllocSetFreeIndex. On the\n> other hand, the buildfarm thinks that __builtin_clz is essentially\n> universal these days --- the only active non-MSVC critter that\n> reports not having it is anole. So maybe it's not worth sweating\n> over that. Perhaps what we really ought to be working on is\n> finding MSVC equivalents for __builtin_clz and friends.\n\nApparently clz() can be written using _BitScanReverse(), per\nhttps://stackoverflow.com/a/20468180\nhttps://docs.microsoft.com/en-us/cpp/intrinsics/bitscanreverse-bitscanreverse64?view=vs-2015\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 27 Dec 2019 13:29:47 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
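A sketch of the wrapper shape the links above suggest — the function name is illustrative, not PostgreSQL's actual API, and the MSVC branch is an untested assumption based on the documented _BitScanReverse signature:

```c
#include <stdint.h>

/*
 * Hypothetical portable count-leading-zeros wrapper: on MSVC, use the
 * _BitScanReverse intrinsic (which reports the index of the highest set
 * bit); elsewhere, fall back to __builtin_clz.  Input must be nonzero.
 */
#ifdef _MSC_VER
#include <intrin.h>
#pragma intrinsic(_BitScanReverse)
static int my_clz32(uint32_t word)
{
    unsigned long pos;

    _BitScanReverse(&pos, word);    /* pos = index of highest set bit */
    return 31 - (int) pos;
}
#else
static int my_clz32(uint32_t word)
{
    return __builtin_clz(word);
}
#endif
```

Either branch yields the same result, e.g. my_clz32(1) == 31 and my_clz32(0x80000000) == 0, so callers need not care which path was compiled.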
{
"msg_contents": "On Fri, Dec 27, 2019 at 01:29:47PM -0300, Alvaro Herrera wrote:\n> On 2019-Dec-27, Tom Lane wrote:\n> \n> > This kind of leads me to wonder if we don't need to expend more\n> > effort on the non-CLZ version of pg_leftmost_one_pos32; it seems\n> > like it shouldn't be losing this badly to the only-slightly-\n> > improved logic that's currently in AllocSetFreeIndex. On the\n> > other hand, the buildfarm thinks that __builtin_clz is essentially\n> > universal these days --- the only active non-MSVC critter that\n> > reports not having it is anole. So maybe it's not worth sweating\n> > over that. Perhaps what we really ought to be working on is\n> > finding MSVC equivalents for __builtin_clz and friends.\n> \n> Apparently clz() can be written using _BitScanReverse(), per\n> https://stackoverflow.com/a/20468180\n> https://docs.microsoft.com/en-us/cpp/intrinsics/bitscanreverse-bitscanreverse64?view=vs-2015\n\nThere are also various flavors of lzcntN for N in (16,32,64) on\n(relatively) modern architectures.\n\nhttps://docs.microsoft.com/en-us/cpp/intrinsics/lzcnt16-lzcnt-lzcnt64?redirectedfrom=MSDN&view=vs-2019\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Fri, 27 Dec 2019 17:37:28 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Dec-27, Tom Lane wrote:\n>> ... Perhaps what we really ought to be working on is\n>> finding MSVC equivalents for __builtin_clz and friends.\n\n> Apparently clz() can be written using _BitScanReverse(), per\n> https://stackoverflow.com/a/20468180\n> https://docs.microsoft.com/en-us/cpp/intrinsics/bitscanreverse-bitscanreverse64?view=vs-2015\n\nYeah, I found that too. It looks promising, but we need to look into\n* portability to different MSVC versions? (I guess the buildfarm would\n tell us)\n* performance, does it actually generate comparable code?\n* intrinsics for the other bit instructions we use?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 27 Dec 2019 11:43:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> John Naylor <john.naylor@2ndquadrant.com> writes:\n> > On Fri, Dec 27, 2019 at 9:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> ... but couldn't the\n> >> right shift be elided in favor of changing the constant we\n> >> subtract clz's result from? Shifting off those bits separately\n> >> made sense in the old implementation, but assuming that CLZ is\n> >> more or less constant-time, it doesn't with CLZ.\n>\n> > That makes sense -- I'll look into doing that.\n>\n> Actually, we could apply that insight to both code paths.\n> In the existing path, that requires assuming\n> ALLOCSET_NUM_FREELISTS+ALLOC_MINBITS <= 17, but that's OK.\n> (Nowadays I'd probably add a StaticAssert about that.)\n\nI tried that in the attached files and got these results:\n\n current 6.14s\n clz 4.52s\n clz no right shift 3.15s\n lookup table 5.56s\nlookup table no right shift 7.34s\n\nHere, \"lookup table\" refers to using the pg_leftmost_one_pos[] array\nand incrementing the result. Removing the shift operation from the CLZ\ncase is clearly an improvement, and the main body goes from\n\nmovabsq $34359738367, %rax\naddq %rax, %rdi\nshrq $3, %rdi\nbsrl %edi, %eax\nxorl $-32, %eax\naddl $33, %eax\n\nto\n\ndecl %edi\nbsrl %edi, %eax\nxorl $-32, %eax\naddl $30, %eax\n\nThe lookup table case is less clear. Removing the shift results in\nassembly that looks more like the C code and is slower for me. The\nstandard lookup table code uses some magic constants and does its own\nconstant folding by shifting 11 (8 + 3). In the absence of testing on\nplatforms that will actually exercise this path, it seems the\nopen-coded path should keep the shift for now. Thoughts?\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Dec 2019 19:02:02 -0500",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 07:02:02PM -0500, John Naylor wrote:\n> On Fri, Dec 27, 2019 at 11:05 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >\n> > John Naylor <john.naylor@2ndquadrant.com> writes:\n> > > On Fri, Dec 27, 2019 at 9:54 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > >> ... but couldn't the\n> > >> right shift be elided in favor of changing the constant we\n> > >> subtract clz's result from? Shifting off those bits separately\n> > >> made sense in the old implementation, but assuming that CLZ is\n> > >> more or less constant-time, it doesn't with CLZ.\n> >\n> > > That makes sense -- I'll look into doing that.\n> >\n> > Actually, we could apply that insight to both code paths.\n> > In the existing path, that requires assuming\n> > ALLOCSET_NUM_FREELISTS+ALLOC_MINBITS <= 17, but that's OK.\n> > (Nowadays I'd probably add a StaticAssert about that.)\n> \n> I tried that in the attached files and got these results:\n> \n> current 6.14s\n> clz 4.52s\n> clz no right shift 3.15s\n> lookup table 5.56s\n> lookup table no right shift 7.34s\n> \n> Here, \"lookup table\" refers to using the pg_leftmost_one_pos[] array\n> and incrementing the result. Removing the shift operation from the CLZ\n> case is clearly an improvement, and the main body goes from\n> \n> movabsq $34359738367, %rax\n> addq %rax, %rdi\n> shrq $3, %rdi\n> bsrl %edi, %eax\n> xorl $-32, %eax\n> addl $33, %eax\n> \n> to\n> \n> decl %edi\n> bsrl %edi, %eax\n> xorl $-32, %eax\n> addl $30, %eax\n> \n> The lookup table case is less clear. Removing the shift results in\n> assembly that looks more like the C code and is slower for me. The\n> standard lookup table code uses some magic constants and does its own\n> constant folding by shifting 11 (8 + 3). In the absence of testing on\n> platforms that will actually exercise this path, it seems the\n> open-coded path should keep the shift for now. Thoughts?\n\nIt's probably worth doing the things you've found unambiguous gains\nfor as a patch, putting it on the next commitfest, and seeing what the\ncommitfest.cputube.org machinery has to say about it.\n\nMaybe it'd be worth trying out a patch that enables CLZ for Windows,\nbut that seems like a separate issue.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 28 Dec 2019 03:16:48 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 9:16 PM David Fetter <david@fetter.org> wrote:\n> On Fri, Dec 27, 2019 at 07:02:02PM -0500, John Naylor wrote:\n> > The lookup table case is less clear. Removing the shift results in\n> > assembly that looks more like the C code and is slower for me. The\n> > standard lookup table code uses some magic constants and does its own\n> > constant folding by shifting 11 (8 + 3). In the absence of testing on\n> > platforms that will actually exercise this path, it seems the\n> > open-coded path should keep the shift for now. Thoughts?\n>\n> It's probably worth doing the things you've found unambiguous gains\n> for as a patch, putting it on the next commitfest, and seeing what the\n> commitfest.cputube.org machinery has to say about it.\n\nDone in the attached.\n\n> Maybe it'd be worth trying out a patch that enables CLZ for Windows,\n> but that seems like a separate issue.\n\nAgreed.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Dec 2019 22:44:14 -0500",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "v2 had an Assert that was only correct while experimenting with\neliding right shift. Fixed in v3.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 28 Dec 2019 09:50:24 -0500",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
},
{
"msg_contents": "John Naylor <john.naylor@2ndquadrant.com> writes:\n> v2 had an Assert that was only correct while experimenting with\n> eliding right shift. Fixed in v3.\n\nI think there must have been something wrong with your test that\nsaid that eliminating the right shift from the non-CLZ code made\nit slower. It should be an unconditional win, just as it is for\nthe CLZ code path. (Maybe some odd cache-line-boundary effect?)\n\nAlso, I think it's just weird to account for ALLOC_MINBITS one\nway in the CLZ path and the other way in the other path.\n\nI decided that it might be a good idea to do performance testing\nin-place rather than in a standalone test program. I whipped up\nthe attached that just does a bunch of palloc/pfree cycles.\nI got the following results on a non-cassert build (medians of\na number of tests; the times are repeatable to ~ 0.1% for me):\n\nHEAD:\t\t2429.431 ms\nv3 CLZ:\t\t2131.735 ms\nv3 non-CLZ:\t2477.835 ms\nremove shift:\t2266.755 ms\n\nI didn't bother to try this on non-x86_64 architectures, as\nprevious testing convinces me the outcome should be about the\nsame.\n\nHence, pushed that way, with a bit of additional cosmetic foolery:\nthe static assertion made more sense to me in relation to the\ndocumented assumption that size <= ALLOC_CHUNK_LIMIT, and I\nthought the comment could use some work.\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 28 Dec 2019 17:33:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: use CLZ instruction in AllocSetFreeIndex()"
}
] |
[
{
"msg_contents": "I started writing this patch to avoid the possibly-misleading phrase: \"with no\nextra space\" (since it's expected to typically take ~2x space, or 1x \"extra\"\nspace).\n\nBut the original phrase \"with no extra space\" seems to be wrong anyway, since\nit actually follows fillfactor, so say that. Possibly should be backpatched.\n\ndiff --git a/doc/src/sgml/ref/vacuum.sgml b/doc/src/sgml/ref/vacuum.sgml\nindex ec2503d..9757352 100644\n--- a/doc/src/sgml/ref/vacuum.sgml\n+++ b/doc/src/sgml/ref/vacuum.sgml\n@@ -75,10 +75,16 @@ VACUUM [ FULL ] [ FREEZE ] [ VERBOSE ] [ ANALYZE ] [ <replaceable class=\"paramet\n with normal reading and writing of the table, as an exclusive lock\n is not obtained. However, extra space is not returned to the operating\n system (in most cases); it's just kept available for re-use within the\n- same table. <command>VACUUM FULL</command> rewrites the entire contents\n- of the table into a new disk file with no extra space, allowing unused\n- space to be returned to the operating system. This form is much slower and\n- requires an exclusive lock on each table while it is being processed.\n+ same table.\n+ </para>\n+\n+ <para>\n+ <command>VACUUM FULL</command> rewrites the entire contents of the table\n+ into a new file on disk with internal space left available as determined by\n+ <literal>fillfactor</literal>. If the table includes many dead tuples from\n+ updates/deletes, this allows unused space to be returned to the operating\n+ system. This form is much slower and requires an exclusive lock on each\n+ table while it is being processed.\n </para>\n \n <para>\n-- \n2.7.4",
"msg_date": "Thu, 26 Dec 2019 20:31:34 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "\nHello Justin,\n\n> I started writing this patch to avoid the possibly-misleading phrase: \"with no\n> extra space\" (since it's expected to typically take ~2x space, or 1x \"extra\"\n> space).\n>\n> But the original phrase \"with no extra space\" seems to be wrong anyway, since\n> it actually follows fillfactor, so say that. Possibly should be backpatched.\n\nPatch applies and compiles.\n\nGiven that the paragraph begins with \"Plain VACUUM (without FULL)\", it is \nbetter to have the VACUUM FULL explanations on a separate paragraph, and \nthe fillfactor precision makes it explicit about what it does, although it \ncould also be material for the NOTES section below.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 27 Dec 2019 11:58:18 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 11:58:18AM +0100, Fabien COELHO wrote:\n>> I started writing this patch to avoid the possibly-misleading phrase: \"with no\n>> extra space\" (since it's expected to typically take ~2x space, or 1x \"extra\"\n>> space).\n>> \n>> But the original phrase \"with no extra space\" seems to be wrong anyway, since\n>> it actually follows fillfactor, so say that. Possibly should be backpatched.\n> \n> Patch applies and compiles.\n> \n> Given that the paragraph begins with \"Plain VACUUM (without FULL)\", it is\n> better to have the VACUUM FULL explanations on a separate paragraph, and the\n\nThe original patch does that (Fabien agreed when I asked off list)\n\n\n",
"msg_date": "Tue, 14 Jan 2020 13:12:41 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "\n>> Patch applies and compiles.\n>>\n>> Given that the paragraph begins with \"Plain VACUUM (without FULL)\", it is\n>> better to have the VACUUM FULL explanations on a separate paragraph, and the\n>\n> The original patch does that (Fabien agreed when I asked off list)\n\nIndeed. I may have looked at it in reverse, dunno.\n\nI switched it to ready.\n\n-- \nFabien.\n\n\n",
"msg_date": "Tue, 14 Jan 2020 20:49:34 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "Rebased against 40d964ec997f64227bc0ff5e058dc4a5770a70a9",
"msg_date": "Sun, 19 Jan 2020 23:30:14 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "On 2020-01-20 06:30, Justin Pryzby wrote:\n> Rebased against 40d964ec997f64227bc0ff5e058dc4a5770a70a9\n\nI'm not sure that description of parallel vacuum in the middle of \nnon-full vs. full vacuum is actually that good. I think those sentences \nshould be moved to a separate paragraph.\n\nAbout your patch, I don't think this is clearer. The fillfactor stuff \nis valid to be mentioned, but the way it's being proposed makes it sound \nlike the main purpose of VACUUM FULL is to bloat the table to make \nfillfactor room. The \"no extra space\" wording made sense to me, with \nthe fillfactor business perhaps worth being put into a parenthetical note.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 29 Jan 2020 16:40:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 9:10 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-01-20 06:30, Justin Pryzby wrote:\n> > Rebased against 40d964ec997f64227bc0ff5e058dc4a5770a70a9\n>\n> I'm not sure that description of parallel vacuum in the middle of\n> non-full vs. full vacuum is actually that good.\n>\n\nI have done like that because parallel vacuum is the default. I mean\nwhen the user runs vacuum command, it will invoke workers to perform\nindex cleanup based on some conditions.\n\n> I think those sentences\n> should be moved to a separate paragraph.\n>\n\nIt seems more natural to me to add immediately after vacuum\nexplanation, but I might be wrong. After the above explanation, if\nyou still think it is better to move into a separate paragraph, I can\ndo that.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jan 2020 17:24:48 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "On 1/30/20 6:54 AM, Amit Kapila wrote:\n> On Wed, Jan 29, 2020 at 9:10 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>>\n>> On 2020-01-20 06:30, Justin Pryzby wrote:\n>>> Rebased against 40d964ec997f64227bc0ff5e058dc4a5770a70a9\n>>\n>> I'm not sure that description of parallel vacuum in the middle of\n>> non-full vs. full vacuum is actually that good.\n> \n> I have done like that because parallel vacuum is the default. I mean\n> when the user runs vacuum command, it will invoke workers to perform\n> index cleanup based on some conditions.\n> \n>> I think those sentences\n>> should be moved to a separate paragraph.\n> \n> It seems more natural to me to add immediately after vacuum\n> explanation, but I might be wrong. After the above explanation, if\n> you still think it is better to move into a separate paragraph, I can\n> do that.\nPeter, do you still think this should be moved into a separate paragraph?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 2 Mar 2020 08:40:30 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 9:10 PM Peter Eisentraut\n<peter.eisentraut@2ndquadrant.com> wrote:\n>\n> On 2020-01-20 06:30, Justin Pryzby wrote:\n>\n> About your patch, I don't think this is clearer. The fillfactor stuff\n> is valid to be mentioned, but the way it's being proposed makes it sound\n> like the main purpose of VACUUM FULL is to bloat the table to make\n> fillfactor room. The \"no extra space\" wording made sense to me, with\n> the fillfactor business perhaps worth being put into a parenthetical note.\n>\n\nJustin, would you like to address this comment of Peter E.?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 28 Mar 2020 15:53:11 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "> On 28 Mar 2020, at 11:23, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> \n> On Wed, Jan 29, 2020 at 9:10 PM Peter Eisentraut\n> <peter.eisentraut@2ndquadrant.com> wrote:\n>> \n>> On 2020-01-20 06:30, Justin Pryzby wrote:\n>> \n>> About your patch, I don't think this is clearer. The fillfactor stuff\n>> is valid to be mentioned, but the way it's being proposed makes it sound\n>> like the main purpose of VACUUM FULL is to bloat the table to make\n>> fillfactor room. The \"no extra space\" wording made sense to me, with\n>> the fillfactor business perhaps worth being put into a parenthetical note.\n> \n> Justin, would you like to address this comment of Peter E.?\n\nThis patch has been Waiting on Author since April, will you have time to\naddress the questions during this commitfest, or should it be moved to Returned\nwith Feedback?\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 5 Jul 2020 13:35:33 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
},
{
"msg_contents": "> On 5 Jul 2020, at 13:35, Daniel Gustafsson <daniel@yesql.se> wrote:\n\n> This patch has been Waiting on Author since April, will you have time to\n> address the questions during this commitfest, or should it be moved to Returned\n> with Feedback?\n\nThis has been closed as Returned with Feedback, please feel free to open a new\nentry if you return to this work.\n\ncheers ./daniel\n\n",
"msg_date": "Sun, 2 Aug 2020 23:54:59 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: doc: vacuum full, fillfactor, and \"extra space\""
}
] |
[
{
"msg_contents": "I was wondering why we have a separate libpq.rc for libpq and use \nwin32ver.rc for all other components. I suspect this is also a leftover \nfrom the now-removed client-only Windows build. With a bit of tweaking \nwe can use win32ver.rc for libpq as well and remove a bit of duplicative \ncode.\n\nI have tested this patch with MSVC and MinGW.\n\nI've also added some comments and a documentation link to be able to \nunderstand this business a bit better.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 27 Dec 2019 17:25:58 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Remove libpq.rc, use win32ver.rc for libpq"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 05:25:58PM +0100, Peter Eisentraut wrote:\n> I was wondering why we have a separate libpq.rc for libpq and use\n> win32ver.rc for all other components. I suspect this is also a leftover\n> from the now-removed client-only Windows build. With a bit of tweaking we\n> can use win32ver.rc for libpq as well and remove a bit of duplicative code.\n> \n> I have tested this patch with MSVC and MinGW.\n\nThe patch does not apply anymore because of two conflicts with the\ncopyright dates, could you rebase it? Reading through it, the change\nlooks sensible. However I have not looked at it yet in details.\n\n- FILEFLAGSMASK 0x17L\n+ FILEFLAGSMASK VS_FFI_FILEFLAGSMASK\nAre you sure with the mapping here? I would have thought that\nVS_FF_DEBUG is not necessary when using release-quality builds, which\nis something that can be configured with build.pl, and that it would\nbe better to not enforce VS_FF_PRERELEASE all the time.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 17:02:02 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove libpq.rc, use win32ver.rc for libpq"
},
{
"msg_contents": "On 2020-01-06 09:02, Michael Paquier wrote:\n> - FILEFLAGSMASK 0x17L\n> + FILEFLAGSMASK VS_FFI_FILEFLAGSMASK\n> Are you sure with the mapping here? I would have thought that\n> VS_FF_DEBUG is not necessary when using release-quality builds, which\n> is something that can be configured with build.pl, and that it would\n> be better to not enforce VS_FF_PRERELEASE all the time.\n\nNote that there is FILEFLAGSMASK and FILEFLAGS. The first is just a \nmask that says which bits in the second are valid. Since both libpq.rc \nand win32ver.rc use FILEFLAGS 0, it doesn't matter what we set \nFILEFLAGSMASK to. But currently libpq.rc uses 0x3fL and win32ver.rc \nuses 0x17L, so in order to unify this sensibly I looked for a \nwell-recognized standard value, which led to VS_FFI_FILEFLAGSMASK.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 10:56:32 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove libpq.rc, use win32ver.rc for libpq"
},
{
"msg_contents": "On 2020-01-09 10:56, Peter Eisentraut wrote:\n> On 2020-01-06 09:02, Michael Paquier wrote:\n>> - FILEFLAGSMASK 0x17L\n>> + FILEFLAGSMASK VS_FFI_FILEFLAGSMASK\n>> Are you sure with the mapping here? I would have thought that\n>> VS_FF_DEBUG is not necessary when using release-quality builds, which\n>> is something that can be configured with build.pl, and that it would\n>> be better to not enforce VS_FF_PRERELEASE all the time.\n> \n> Note that there is FILEFLAGSMASK and FILEFLAGS. The first is just a\n> mask that says which bits in the second are valid. Since both libpq.rc\n> and win32ver.rc use FILEFLAGS 0, it doesn't matter what we set\n> FILEFLAGSMASK to. But currently libpq.rc uses 0x3fL and win32ver.rc\n> uses 0x17L, so in order to unify this sensibly I looked for a\n> well-recognized standard value, which led to VS_FFI_FILEFLAGSMASK.\n\nHere is a rebased patch.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 14 Jan 2020 22:34:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove libpq.rc, use win32ver.rc for libpq"
},
{
"msg_contents": "At Tue, 14 Jan 2020 22:34:10 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n> On 2020-01-09 10:56, Peter Eisentraut wrote:\n> > On 2020-01-06 09:02, Michael Paquier wrote:\n> >> - FILEFLAGSMASK 0x17L\n> >> + FILEFLAGSMASK VS_FFI_FILEFLAGSMASK\n> >> Are you sure with the mapping here? I would have thought that\n> >> VS_FF_DEBUG is not necessary when using release-quality builds, which\n> >> is something that can be configured with build.pl, and that it would\n> >> be better to not enforce VS_FF_PRERELEASE all the time.\n> > Note that there is FILEFLAGSMASK and FILEFLAGS. The first is just a\n> > mask that says which bits in the second are valid. Since both\n> > libpq.rc\n> > and win32ver.rc use FILEFLAGS 0, it doesn't matter what we set\n> > FILEFLAGSMASK to. But currently libpq.rc uses 0x3fL and win32ver.rc\n> > uses 0x17L, so in order to unify this sensibly I looked for a\n> > well-recognized standard value, which led to VS_FFI_FILEFLAGSMASK.\n\nI agree to the direction of the patch and the point above sounds\nsensible to me.\n\n> Here is a rebased patch.\n\nIt applied on 4d8a8d0c73 cleanly and built successfully by VS2019.\n\nregares.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 15 Jan 2020 14:22:45 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Remove libpq.rc, use win32ver.rc for libpq"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 02:22:45PM +0900, Kyotaro Horiguchi wrote:\n> At Tue, 14 Jan 2020 22:34:10 +0100, Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote in \n>> On 2020-01-09 10:56, Peter Eisentraut wrote:\n>>> Note that there is FILEFLAGSMASK and FILEFLAGS. The first is just a\n>>> mask that says which bits in the second are valid. Since both\n>>> libpq.rc\n>>> and win32ver.rc use FILEFLAGS 0, it doesn't matter what we set\n>>> FILEFLAGSMASK to. But currently libpq.rc uses 0x3fL and win32ver.rc\n>>> uses 0x17L, so in order to unify this sensibly I looked for a\n>>> well-recognized standard value, which led to VS_FFI_FILEFLAGSMASK.\n\nHmm. I agree that what you have here is sensible. I am wondering if\nit would be better to have VS_FF_DEBUG set dynamically in FILEFLAGS in\nthe future though. But that's no material for this patch.\n\n>> Here is a rebased patch.\n> \n> It applied on 4d8a8d0c73 cleanly and built successfully by VS2019.\n\nI have been testing and checking the patch a bit more seriously, and\nthe information gets generated correctly for dlls and exe files. The\nrest of the changes look fine to me. For src/makefiles/Makefile.win32,\nI don't have a MinGW environment at hand so I have not directly\ntested but the logic looks fine.\n--\nMichael",
"msg_date": "Wed, 15 Jan 2020 15:44:38 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: Remove libpq.rc, use win32ver.rc for libpq"
},
{
"msg_contents": "On 2020-01-15 07:44, Michael Paquier wrote:\n> I have been testing and checking the patch a bit more seriously, and\n> the information gets generated correctly for dlls and exe files. The\n> rest of the changes look fine to me. For src/makefiles/Makefile.win32,\n> I don't have a MinGW environment at hand so I have not directly\n> tested but the logic looks fine.\n\nI have tested MinGW.\n\nPatch committed.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 15 Jan 2020 15:40:56 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Remove libpq.rc, use win32ver.rc for libpq"
}
] |
[
{
"msg_contents": "Re-added -hackers.\n\nThanks for reviewing.\n\nOn Fri, Dec 27, 2019 at 05:22:47PM +0100, Fabien COELHO wrote:\n> The implementation simply extends an existing functions with a boolean to\n> allow for sub-directories. However, the function does not seem to show\n> subdir contents recursively. Should it be the case?\n\n> STM that \"//\"-comments are not project policy.\n\nSure, but the patch is less important than the design, which needs to be\naddressed first. The goal is to somehow show tmpfiles (or at least dirs) used\nby parallel workers. I mentioned a few possible ways, of which this was the\nsimplest to implement. Showing files beneath the dir is probably good, but\nneed to decide how to present it. Should there be a column for the dir (null\nif not a shared filesets)? Or some other presentation, like a boolean column\n\"is_shared_fileset\".\n\n> I'm unconvinced by the skipping condition:\n> \n> + if (!S_ISREG(attrib.st_mode) &&\n> + (!dir_ok && S_ISDIR(attrib.st_mode)))\n> continue;\n> \n> which is pretty hard to read. ISTM you meant \"not A and not (B and C)\"\n> instead?\n\nI can write it as two ifs. And, it's probably better to say:\n\tif (!ISDIR() || !dir_ok)\n\n..which is same as: !(B && C), as you said.\n\n> Catalog update should be a date + number? Maybe this is best left to the\n> committer?\n\nYes, but I preferred to include it, causing a deliberate conflict, to ensure\nit's not forgotten.\n\nThanks,\nJustin\n\n\n",
"msg_date": "Fri, 27 Dec 2019 11:02:20 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "\n> Re-added -hackers.\n\nIndeed, I left it out by accident. I tried to bounce the original mail but \nit did not work.\n\n> Thanks for reviewing.\n>\n> On Fri, Dec 27, 2019 at 05:22:47PM +0100, Fabien COELHO wrote:\n>> The implementation simply extends an existing functions with a boolean to\n>> allow for sub-directories. However, the function does not seem to show\n>> subdir contents recursively. Should it be the case?\n>\n>> STM that \"//\"-comments are not project policy.\n>\n> Sure, but the patch is less important than the design, which needs to be\n> addressed first. The goal is to somehow show tmpfiles (or at least dirs) used\n> by parallel workers. I mentioned a few possible ways, of which this was the\n> simplest to implement. Showing files beneath the dir is probably good, but\n> need to decide how to present it. Should there be a column for the dir (null\n> if not a shared filesets)? Or some other presentation, like a boolean column\n> \"is_shared_fileset\".\n\nWhy not simply showing the files underneath their directories?\n\n /path/to/tmp/file1\n /path/to/tmp/subdir1/file2\n\nIn which case probably showing the directory itself is not useful,\nand the is_dir column could be dropped?\n\n>> I'm unconvinced by the skipping condition:\n>>\n>> + if (!S_ISREG(attrib.st_mode) &&\n>> + (!dir_ok && S_ISDIR(attrib.st_mode)))\n>> continue;\n>>\n>> which is pretty hard to read. ISTM you meant \"not A and not (B and C)\"\n>> instead?\n>\n> I can write it as two ifs.\n\nHmmm. Not sure it would help that much. At least the condition must be \nright. Also, the comment should be updated.\n\n> And, it's probably better to say:\n> if (!ISDIR() || !dir_ok)\n\nI cannot say I like it. dir_ok is cheaper to test so could come first.\n\n> ..which is same as: !(B && C), as you said.\n\nOk, so you confirm that the condition was wrong.\n\n>> Catalog update should be a date + number? 
Maybe this is best left to \n>> the committer?\n>\n> Yes, but I preferred to include it, causing a deliberate conflict, to ensure\n> it's not forgotten.\n\nOk.\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 27 Dec 2019 18:50:24 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 06:50:24PM +0100, Fabien COELHO wrote:\n> >On Fri, Dec 27, 2019 at 05:22:47PM +0100, Fabien COELHO wrote:\n> >>The implementation simply extends an existing functions with a boolean to\n> >>allow for sub-directories. However, the function does not seem to show\n> >>subdir contents recursively. Should it be the case?\n> >\n> >>STM that \"//\"-comments are not project policy.\n> >\n> >Sure, but the patch is less important than the design, which needs to be\n> >addressed first. The goal is to somehow show tmpfiles (or at least dirs) used\n> >by parallel workers. I mentioned a few possible ways, of which this was the\n> >simplest to implement. Showing files beneath the dir is probably good, but\n> >need to decide how to present it. Should there be a column for the dir (null\n> >if not a shared filesets)? Or some other presentation, like a boolean column\n> >\"is_shared_fileset\".\n> \n> Why not simply showing the files underneath their directories?\n> \n> /path/to/tmp/file1\n> /path/to/tmp/subdir1/file2\n> \n> In which case probably showing the directory itself is not useful,\n> and the is_dir column could be dropped?\n\nThe names are expected to look like this:\n\n$ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls\n142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 27 13:51 /var/lib/pgsql/12/data/base/pgsql_tmp\n169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset\n169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0\n\nI think we'd have to show sudbdir/file1, subdir/file2, not just file1, file2.\nIt doesn't seem useful or nice to show a bunch of files called 0.0 or 1.0.\nActually the results should be unique, either on filename or 
(dir,file).\n\n\"ls\" wouldn't list same name twice, unless you list multiple dirs, like:\n|ls a/b c/d.\n\nIt's worth thinking if subdir should be a separate column.\n\nI'm interested to hear back from others.\n\nJustin\n\n\n",
"msg_date": "Fri, 27 Dec 2019 13:59:18 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "Hello Justin,\n\n>> Why not simply showing the files underneath their directories?\n>>\n>> /path/to/tmp/file1\n>> /path/to/tmp/subdir1/file2\n>>\n>> In which case probably showing the directory itself is not useful,\n>> and the is_dir column could be dropped?\n>\n> The names are expected to look like this:\n>\n> $ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls\n> 142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 27 13:51 /var/lib/pgsql/12/data/base/pgsql_tmp\n> 169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset\n> 169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n> 169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0\n>\n> I think we'd have to show sudbdir/file1, subdir/file2, not just file1, file2.\n> It doesn't seem useful or nice to show a bunch of files called 0.0 or 1.0.\n> Actually the results should be unique, either on filename or (dir,file).\n\nOk, so this suggests recursing into subdirs, which requires to make a \nseparate function of the inner loop.\n\n> It's worth thinking if subdir should be a separate column.\n\nMy 0.02ᅵᅵ: I would rather simply keep the full path and just add subdir \ncontents, so that the function output does not change at all.\n\n-- \nFabien.",
"msg_date": "Sat, 28 Dec 2019 07:52:55 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 07:52:55AM +0100, Fabien COELHO wrote:\n> >>Why not simply showing the files underneath their directories?\n> >>\n> >> /path/to/tmp/file1\n> >> /path/to/tmp/subdir1/file2\n> >>\n> >>In which case probably showing the directory itself is not useful,\n> >>and the is_dir column could be dropped?\n> >\n> >The names are expected to look like this:\n> >\n> >$ sudo find /var/lib/pgsql/12/data/base/pgsql_tmp -ls\n> >142977 4 drwxr-x--- 3 postgres postgres 4096 Dec 27 13:51 /var/lib/pgsql/12/data/base/pgsql_tmp\n> >169868 4 drwxr-x--- 2 postgres postgres 4096 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset\n> >169347 5492 -rw-r----- 1 postgres postgres 5619712 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/0.0\n> >169346 5380 -rw-r----- 1 postgres postgres 5505024 Dec 7 01:35 /var/lib/pgsql/12/data/base/pgsql_tmp/pgsql_tmp11025.0.sharedfileset/1.0\n> >\n> >I think we'd have to show sudbdir/file1, subdir/file2, not just file1, file2.\n> >It doesn't seem useful or nice to show a bunch of files called 0.0 or 1.0.\n> >Actually the results should be unique, either on filename or (dir,file).\n> \n> Ok, so this suggests recursing into subdirs, which requires to make a\n> separate function of the inner loop.\n\nYea, it suggests that; but, SRF_RETURN_NEXT doesn't make that very easy.\nIt'd need to accept the fcinfo argument, and pg_ls_dir_files would call it once\nfor every tuple to be returned. So it's recursive and saves its state...\n\nThe attached is pretty ugly, but I can't see how to do better.\nThe alternative seems to be to build up a full list of pathnames in the SRF\ninitial branch, and stat them all later. 
Or stat them all in the INITIAL case,\nand keep a list of stat structures to be emited during future calls.\n\nBTW, it seems to me this error message should be changed:\n\n snprintf(path, sizeof(path), \"%s/%s\", fctx->location, de->d_name);\n if (stat(path, &attrib) < 0)\n ereport(ERROR,\n (errcode_for_file_access(),\n- errmsg(\"could not stat directory \\\"%s\\\": %m\", dir)));\n+ errmsg(\"could not stat file \\\"%s\\\": %m\", path)));",
"msg_date": "Sat, 28 Dec 2019 04:16:50 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "Hello Justin,\n\n>> Ok, so this suggests recursing into subdirs, which requires to make a\n>> separate function of the inner loop.\n>\n> Yea, it suggests that; but, SRF_RETURN_NEXT doesn't make that very easy.\n> It'd need to accept the fcinfo argument, and pg_ls_dir_files would call it once\n> for every tuple to be returned. So it's recursive and saves its state...\n>\n> The attached is pretty ugly, but I can't see how to do better.\n\nHmmm… I do agree with pretty ugly:-)\n\nIf it really neads to be in the structure, why not save the open directory \nfield in the caller and restore it after the recursive call, and pass the \nrest of the structure as a pointer? Something like:\n\n ... root_function(...)\n {\n struct status_t status = initialization();\n ...\n call rec_function(&status, path)\n ...\n cleanup();\n }\n\n ... rec_function(struct *status, path)\n {\n status->dir = opendir(path);\n for (dir contents)\n {\n if (it_is_a_dir)\n {\n /* some comment about why we do that… */\n dir_t saved = status->dir;\n status->dir = NULL;\n rec_function(status, subpath);\n status->dir = saved;\n }\n }\n closedir(status->dir), status->dir = NULL;\n }\n\n> The alternative seems to be to build up a full list of pathnames in the SRF\n> initial branch, and stat them all later. Or stat them all in the INITIAL case,\n> and keep a list of stat structures to be emited during future calls.\n\n-- \nFabien.",
"msg_date": "Sat, 28 Dec 2019 16:02:12 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "Here's a version which uses an array of directory_fctx, rather than of DIR and\nlocation. That avoids changing the data structure and collatoral implications\nto pg_ls_dir().\n\nCurrently, this *shows* subdirs of subdirs, but doesn't decend into them.\nSo I think maybe toplevel subdirs should be shown, too.\nAnd maybe the is_dir flag should be re-introduced (although someone could call\npg_stat_file if needed).\nI'm interested to hear feedback on that, although this patch still isn't great.",
"msg_date": "Sat, 28 Dec 2019 15:09:53 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "\nHello Justin,\n\nAbout this v4.\n\nIt applies cleanly.\n\nI'm trying to think about how to get rid of the strange structure and \nhacks, and the arbitrary looking size 2 array.\n\nAlso the recursion is one step, but I'm not sure why, ISTM it could/should \ngo on always?\n\nMaybe the recursive implementation was not such a good idea, if I \nsuggested it is because I did not noticed the \"return next\" re-entrant \nstuff, shame on me.\n\nLooking at the code, ISTM that relying on a stack/list would be much \ncleaner and easier to understand. The code could look like:\n\n ls_func()\n if (first_time)\n initialize description\n set next directory to visit\n while (1)\n if next directory\n init/push next directory to visit as current\n read current directory\n if emty\n pop/close current directory\n elif is_a_dir and recursion allowed\n set next directory to visit\n else ...\n return next tuple\n cleanup\n\nThe point is to avoid a hack around the directory_fctx array, to have only \none place to push/init a new directory (i.e. call AllocateDir and play \naround with the memory context) instead of two, and to allow deeper \nrecursion if useful.\n\nDetails : snprintf return is not checked. Maybe it should say why an \noverflow cannot occur.\n\n\"bool nulls[3] = { 0,};\" -> \"bool nulls[3} = { false, false, false };\"?\n\n-- \nFabien.\n\n\n",
"msg_date": "Wed, 15 Jan 2020 11:21:36 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 11:21:36AM +0100, Fabien COELHO wrote:\n> I'm trying to think about how to get rid of the strange structure and hacks,\n> and the arbitrary looking size 2 array.\n> \n> Also the recursion is one step, but I'm not sure why, ISTM it could/should\n> go on always?\n\nBecause tmpfiles only go one level deep.\n\n> Looking at the code, ISTM that relying on a stack/list would be much cleaner\n> and easier to understand. The code could look like:\n\nI'm willing to change the implementation, but only after there's an agreement\nabout the desired behavior (extra column, one level, etc).\n\nJustin\n\n\n",
"msg_date": "Wed, 15 Jan 2020 18:39:24 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "\nHello Justin,\n\n>> I'm trying to think about how to get rid of the strange structure and hacks,\n>> and the arbitrary looking size 2 array.\n>>\n>> Also the recursion is one step, but I'm not sure why, ISTM it could/should\n>> go on always?\n>\n> Because tmpfiles only go one level deep.\n\nI'm not sure it is a general rule. ISTM that extensions can use tmp files, \nand we would have no control about what they would do there.\n\n>> Looking at the code, ISTM that relying on a stack/list would be much cleaner\n>> and easier to understand. The code could look like:\n>\n> I'm willing to change the implementation, but only after there's an agreement\n> about the desired behavior (extra column, one level, etc).\n\nFor the level, ISTM that the implementation should not make this \nassumption. If in practice there is just one level, then the function will \nnot recurse deep, no problem.\n\nFor the column, I'm not sure that \"isdir\" is necessary.\n\nYou could put it implicitely in the file name by ending it with \"/\", \nand/or showing the directory contents is enough a hint that there is a \ndirectory?\n\nAlso, I'm not fully sure why \".*\" files should be skipped, maybe it should \nbe an option? Or the user can filter it with SQL if it does not want them?\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 16 Jan 2020 09:34:32 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 09:34:32AM +0100, Fabien COELHO wrote:\n> Also, I'm not fully sure why \".*\" files should be skipped, maybe it should\n> be an option? Or the user can filter it with SQL if it does not want them?\n\nI think if someone wants the full generality, they can do this:\n\npostgres=# SELECT name, s.size, s.modification, s.isdir FROM (SELECT 'base/pgsql_tmp'p)p, pg_ls_dir(p)name, pg_stat_file(p||'/'||name)s;\n name | size | modification | isdir \n------+------+------------------------+-------\n .foo | 4096 | 2020-01-16 08:57:04-05 | t\n\nIn my mind, pg_ls_tmpdir() is for showing tmpfiles, not just a shortcut to\nSELECT pg_ls_dir((SELECT 'base/pgsql_tmp'p)); -- or, for all tablespaces:\nWITH x AS (SELECT format('/PG_%s_%s', split_part(current_setting('server_version'), '.', 1), catalog_version_no) suffix FROM pg_control_system()), y AS (SELECT a, pg_ls_dir(a) AS d FROM (SELECT DISTINCT COALESCE(NULLIF(pg_tablespace_location(oid),'')||suffix, 'base') a FROM pg_tablespace,x)a) SELECT a, pg_ls_dir(a||'/pgsql_tmp') FROM y WHERE d='pgsql_tmp';\n\nI think changing dotfiles is topic for another patch.\nThat would also affect pg_ls_dir, and everything else that uses the backing\nfunction pg_ls_dir_files_recurse. I'd have to ask why not also show . and .. ?\n\n(In fact, if I were to change anything, I would propose to limit pg_ls_tmpdir()\nto files matching PG_TEMP_FILE_PREFIX).\n\nJustin\n\n\n",
"msg_date": "Thu, 16 Jan 2020 08:38:46 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "Hi Fabien,\n\nOn 1/16/20 9:38 AM, Justin Pryzby wrote:\n> On Thu, Jan 16, 2020 at 09:34:32AM +0100, Fabien COELHO wrote:\n>> Also, I'm not fully sure why \".*\" files should be skipped, maybe it should\n>> be an option? Or the user can filter it with SQL if it does not want them?\n> \n> I think if someone wants the full generality, they can do this:\n> \n> postgres=# SELECT name, s.size, s.modification, s.isdir FROM (SELECT 'base/pgsql_tmp'p)p, pg_ls_dir(p)name, pg_stat_file(p||'/'||name)s;\n> name | size | modification | isdir\n> ------+------+------------------------+-------\n> .foo | 4096 | 2020-01-16 08:57:04-05 | t\n> \n> In my mind, pg_ls_tmpdir() is for showing tmpfiles, not just a shortcut to\n> SELECT pg_ls_dir((SELECT 'base/pgsql_tmp'p)); -- or, for all tablespaces:\n> WITH x AS (SELECT format('/PG_%s_%s', split_part(current_setting('server_version'), '.', 1), catalog_version_no) suffix FROM pg_control_system()), y AS (SELECT a, pg_ls_dir(a) AS d FROM (SELECT DISTINCT COALESCE(NULLIF(pg_tablespace_location(oid),'')||suffix, 'base') a FROM pg_tablespace,x)a) SELECT a, pg_ls_dir(a||'/pgsql_tmp') FROM y WHERE d='pgsql_tmp';\n> \n> I think changing dotfiles is topic for another patch.\n> That would also affect pg_ls_dir, and everything else that uses the backing\n> function pg_ls_dir_files_recurse. I'd have to ask why not also show . and .. ?\n> \n> (In fact, if I were to change anything, I would propose to limit pg_ls_tmpdir()\n> to files matching PG_TEMP_FILE_PREFIX).\n\nWe seem to be at an impasse on this patch. What do you think of \nJustin's comments here?\n\nDo you still believe a different implementation is required?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Tue, 3 Mar 2020 14:51:54 -0500",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "On Tue, Mar 03, 2020 at 02:51:54PM -0500, David Steele wrote:\n> Hi Fabien,\n> \n> On 1/16/20 9:38 AM, Justin Pryzby wrote:\n> >On Thu, Jan 16, 2020 at 09:34:32AM +0100, Fabien COELHO wrote:\n> >>Also, I'm not fully sure why \".*\" files should be skipped, maybe it should\n> >>be an option? Or the user can filter it with SQL if it does not want them?\n> >\n> >I think if someone wants the full generality, they can do this:\n> >\n> >postgres=# SELECT name, s.size, s.modification, s.isdir FROM (SELECT 'base/pgsql_tmp'p)p, pg_ls_dir(p)name, pg_stat_file(p||'/'||name)s;\n> > name | size | modification | isdir\n> >------+------+------------------------+-------\n> > .foo | 4096 | 2020-01-16 08:57:04-05 | t\n> >\n> >In my mind, pg_ls_tmpdir() is for showing tmpfiles, not just a shortcut to\n> >SELECT pg_ls_dir((SELECT 'base/pgsql_tmp'p)); -- or, for all tablespaces:\n> >WITH x AS (SELECT format('/PG_%s_%s', split_part(current_setting('server_version'), '.', 1), catalog_version_no) suffix FROM pg_control_system()), y AS (SELECT a, pg_ls_dir(a) AS d FROM (SELECT DISTINCT COALESCE(NULLIF(pg_tablespace_location(oid),'')||suffix, 'base') a FROM pg_tablespace,x)a) SELECT a, pg_ls_dir(a||'/pgsql_tmp') FROM y WHERE d='pgsql_tmp';\n> >\n> >I think changing dotfiles is topic for another patch.\n> >That would also affect pg_ls_dir, and everything else that uses the backing\n> >function pg_ls_dir_files_recurse. I'd have to ask why not also show . and .. ?\n> >\n> >(In fact, if I were to change anything, I would propose to limit pg_ls_tmpdir()\n> >to files matching PG_TEMP_FILE_PREFIX).\n> \n> We seem to be at an impasse on this patch. What do you think of Justin's\n> comments here?\n\nActually, I found Fabien's comment regarding extensions use of tmp dir to be\nconvincing. 
And I'm willing to update the patch to use a stack to show\narbitrarily-deep files/dirs rather than just one level deep (as used for shared\nfilesets in core postgres).\n\nBut I don't think it makes sense to go through more implementation/review\ncycles without some agreement from a larger group regarding the\ndesired/intended interface. Should there be a column for \"parent dir\" ? Or a\ncolumn for \"is_dir\" ? Should dirs be shown at all, or only files ?\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 3 Mar 2020 14:01:17 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "On 2020-Mar-03, Justin Pryzby wrote:\n\n> But I don't think it makes sense to go through more implementation/review\n> cycles without some agreement from a larger group regarding the\n> desired/intended interface. Should there be a column for \"parent dir\" ? Or a\n> column for \"is_dir\" ? Should dirs be shown at all, or only files ?\n\nIMO: is_dir should be there (and subdirs should be listed), but\nparent_dir should not appear. Also, the \"path\" should show the complete\npathname, including containing dirs, starting from whatever the \"root\"\nis for the operation.\n\nSo for the example in the initial email, it would look like\n\npath\t\t\t\t\tisdir\npgsql_tmp11025.0.sharedfileset/\t\tt\npgsql_tmp11025.0.sharedfileset/0.0\tf\npgsql_tmp11025.0.sharedfileset/1.0\tf\n\nplus additional columns, same as pg_ls_waldir et al.\n\nI'd rather not have the code assume that there's a single level of\nsubdirs, or assuming that an entry in the subdir cannot itself be a dir;\nthat might end up hiding files for no good reason.\n\nI don't understand what purpose is served by having pg_ls_waldir() hide\ndirectories.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 3 Mar 2020 17:23:13 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH v1] pg_ls_tmpdir to show directories"
},
{
"msg_contents": "On Tue, Mar 03, 2020 at 05:23:13PM -0300, Alvaro Herrera wrote:\n> On 2020-Mar-03, Justin Pryzby wrote:\n> \n> > But I don't think it makes sense to go through more implementation/review\n> > cycles without some agreement from a larger group regarding the\n> > desired/intended interface. Should there be a column for \"parent dir\" ? Or a\n> > column for \"is_dir\" ? Should dirs be shown at all, or only files ?\n> \n> IMO: is_dir should be there (and subdirs should be listed), but\n> parent_dir should not appear. Also, the \"path\" should show the complete\n> pathname, including containing dirs, starting from whatever the \"root\"\n> is for the operation.\n> \n> So for the example in the initial email, it would look like\n> \n> path\t\t\t\t\tisdir\n> pgsql_tmp11025.0.sharedfileset/\t\tt\n> pgsql_tmp11025.0.sharedfileset/0.0\tf\n> pgsql_tmp11025.0.sharedfileset/1.0\tf\n> \n> plus additional columns, same as pg_ls_waldir et al.\n> \n> I'd rather not have the code assume that there's a single level of\n> subdirs, or assuming that an entry in the subdir cannot itself be a dir;\n> that might end up hiding files for no good reason.\n> \n\nThanks for your input, see attached.\n\nI'm not sure if prefer the 0002 patch alone (which recurses into dirs all at\nonce during the initial call), or 0002+3+4, which incrementally reads the dirs\non each call (but requires keeping dirs opened).\n\n> I don't understand what purpose is served by having pg_ls_waldir() hide\n> directories.\n\nWe could talk about whether the other functions should show dirs, if it's worth\nbreaking their return type. Or if they should show hidden or special files,\nwhich doesn't require breaking the return. But until then I am to leave the\nbehavior alone.\n\n-- \nJustin",
"msg_date": "Thu, 5 Mar 2020 10:18:38 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets"
},
{
"msg_contents": "On Thu, Mar 05, 2020 at 10:18:38AM -0600, Justin Pryzby wrote:\n> I'm not sure if prefer the 0002 patch alone (which recurses into dirs all at\n> once during the initial call), or 0002+3+4, which incrementally reads the dirs\n> on each call (but requires keeping dirs opened).\n\nI fixed an issue that leading dirs were being shown which should not have been,\nwhich was easier in the 0004 patch, so squished. And fixed a bug that\n\"special\" files weren't excluded, and \"missing_ok\" wasn't effective.\n\n> > I don't understand what purpose is served by having pg_ls_waldir() hide\n> > directories.\n> \n> We could talk about whether the other functions should show dirs, if it's worth\n> breaking their return type. Or if they should show hidden or special files,\n> which doesn't require breaking the return. But until then I am to leave the\n> behavior alone.\n\nI don't see why any of the functions would exclude dirs, but ls_tmpdir deserves\nto be fixed since core postgres dynamically creates dirs there.\n\nAlso ... I accidentally changed the behavior: master not only doesn't decend\ninto dirs, it hides them - that was my original complaint. I propose to *also*\nchange at least tmpdir and logdir to show dirs, but don't decend. I left\nwaldir alone for now.\n\nSince v12 ls_tmpdir and since v10 logdir and waldir exclude dirs, I think we\nshould backpatch documentation to say so.\n\nISTM pg_ls_tmpdir and ls_logdir should be called with missing_ok=true, since\nthey're not created until they're used.\n\n-- \nJustin",
"msg_date": "Fri, 6 Mar 2020 17:35:07 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets"
},
{
"msg_contents": "\nHello Justin,\n\nSome feedback about the v7 patch set.\n\nAbout v7.1, seems ok.\n\nAbout v7.2 & v7.3 seems ok, although the two could be merged.\n\nAbout v7.4:\n\nThe documentation sentences could probably be improved \"for for\", \"used \n... used\". Maybe:\n\n For the temporary directory for <parameter>tablespace</parameter>, ...\n->\n For <parameter>tablespace</parameter> temporary directory, ...\n\n Directories are used for temporary files used by parallel\n processes, and are shown recursively.\n->\n Directories holding temporary files used by parallel\n processes are shown recursively.\n\nIt seems that lists are used as FIFO structures by appending, fetching & \ndeleting last, all of which are O(n). ISTM it would be better to use the \nhead of the list by inserting, getting and deleting first, which are O(1).\n\nISTM that several instances of: \"pg_ls_dir_files(..., true, false);\" \nshould be \"pg_ls_dir_files(..., true, DIR_HIDE);\".\n\nAbout v7.5 looks like a doc update which should be merged with v7.4.\n\nAlas, ISTM that there are no tests on any of these functions:-(\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 7 Mar 2020 15:14:37 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets"
},
{
"msg_contents": "On Sat, Mar 07, 2020 at 03:14:37PM +0100, Fabien COELHO wrote:\n> Some feedback about the v7 patch set.\n\nThanks for looking again\n\n> About v7.1, seems ok.\n> \n> About v7.2 & v7.3 seems ok, altought the two could be merged.\n\nThese are separate since I propose that one should be backpatched to v12 and\nthe other to v10.\n\n> About v7.4:\n...\n> It seems that lists are used as FIFO structures by appending, fetching &\n> deleting last, all of which are O(n). ISTM it would be better to use the\n> head of the list by inserting, getting and deleting first, which are O(1).\n\nI think you're referring to linked lists, but pglists are now arrays, for which\nthat's backwards. See 1cff1b95a and d97b714a2. For example, list_delete_last\nsays:\n * This is the opposite of list_delete_first(), but is noticeably cheaper\n * with a long list, since no data need be moved.\n\n> ISTM that several instances of: \"pg_ls_dir_files(..., true, false);\" should\n> be \"pg_ls_dir_files(..., true, DIR_HIDE);\".\n\nOops, that affects an intermediate commit and maybe due to merge conflict.\nThanks.\n\n> About v7.5 looks like a doc update which should be merged with v7.4.\n\nNo, v7.5 updates pg_proc.dat and changes the return type of two functions.\nIt's a short commit since all the infrastructure is implemented to make the\nfunctions do whatever we want. But it's deliberately separate since I'm\nproposing a breaking change, and one that hasn't been discussed until now.\n\n> Alas, ISTM that there are no tests on any of these functions:-(\n\nYeah. Everything that includes any output is going to include timestamps;\nthose could be filtered out. waldir is going to have random filenames, and a\ndiffering number of rows. 
But we should exercise pg_ls_dir_files at least\nonce..\n\nMy previous version had a bug with ignore_missing with pg_ls_tmpdir, which\nwould've been caught by a test like:\nSELECT FROM pg_ls_tmpdir() WHERE name='Does not exist'; -- Never true, so the function runs to completion but returns zero rows.\n\nThe 0006 commit changes that for logdir, too. Without 0006, that will ERROR if\nthe dir doesn't exist (which I think would be the default during regression\ntests).\n\nIt'd be nice to run pg_ls_tmpdir before the tmpdir exists, and again\nafterwards. But I'm having trouble finding a single place to put it. The\nclosest I can find is dbsize.sql. Any ideas ?\n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 7 Mar 2020 11:10:36 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets"
},
{
"msg_contents": ">> It seems that lists are used as FIFO structures by appending, fetching &\n>> deleting last, all of which are O(n). ISTM it would be better to use the\n>> head of the list by inserting, getting and deleting first, which are O(1).\n>\n> I think you're referring to linked lists, but pglists are now arrays,\n\nOk… I forgot about this change, so my point is void, you took the right \none.\n\n-- \nFabien.",
"msg_date": "Sat, 7 Mar 2020 18:40:08 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets"
},
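The array-vs-linked-list point settled above (PostgreSQL's post-1cff1b95a Lists are arrays, so `list_delete_last` is cheap while deleting from the head moves data) can be illustrated outside of C. This is a Python sketch using plain lists as a stand-in, not the backend's `pg_list` code:

```python
from collections import deque

# Array-backed lists shrink cheaply from the tail: pop() just decrements the
# length, while pop(0) must shift every remaining element left, which is O(n).
stack = ["dir_a", "dir_b", "dir_c"]
assert stack.pop() == "dir_c"   # O(1): delete last, like list_delete_last()

queue = ["dir_a", "dir_b", "dir_c"]
assert queue.pop(0) == "dir_a"  # O(n): delete first, everything shifts

# If FIFO order is genuinely needed, a deque gives O(1) at both ends.
fifo = deque(["dir_a", "dir_b", "dir_c"])
assert fifo.popleft() == "dir_a"
```

So for an array-backed list, treating the tail as the working end (a stack) is the cheap choice, which is why the patch pushes and pops pending directories at the end.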
{
"msg_contents": "On Sat, Mar 07, 2020 at 03:14:37PM +0100, Fabien COELHO wrote:\n> The documentation sentences could probably be improved \"for for\", \"used ...\n> used\". Maybe:\n\n> ISTM that several instances of: \"pg_ls_dir_files(..., true, false);\" should\n> be \"pg_ls_dir_files(..., true, DIR_HIDE);\".\n\n> Alas, ISTM that there are no tests on any of these functions:-(\n\nAddressed these.\n\nAnd reordered the last two commits to demonstrate and exercise the behavior\nchange in regress test.\n\n-- \nJustin",
"msg_date": "Sat, 7 Mar 2020 15:40:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nHello Justin,\n\nPatch series applies cleanly. The last status compiles and passes \"make \ncheck\". A few more comments:\n\n* v8.[123] ok.\n\n* v8.4\n\nAvoid using the type name as a field name? \"enum dir_action dir_action;\" \n-> \"enum dir_action action\", or maybe rename \"dir_action\" enum \n\"dir_action_t\".\n\nAbout pg_ls_dir:\n\n\"if (!fctx->dirdesc)\" I do not think that is true even if AllocateDir \nfailed, the list exists anyway. ISTM it should be linitial which is NULL \nin that case.\n\nGiven the overlap between pg_ls_dir and pg_ls_dir_files, ISTM that the \nformer should call the slightly extended latter with appropriate flags.\n\nAbout populate_paths:\n\nfunction is a little bit strange to me, ISTM it would deserve more \ncomments.\n\nI'm not sure the name reflects what it does. For instance, ISTM that it \ndoes one thing, but the name is plural. Maybe \"move_to_next_path\" or \n\"update_current_path\" or something?\n\nIt returns an int which can only be 0 or 1, which smells like a bool. \nWhat this int/bool means is not told in the function head comment. I guess it \nis whether the path was updated. When it returns false, the list length is \ndown to one.\n\nShouldn't AllocateDir be tested for bad result? Maybe it is a dir but you \ndo not have perms to open it? Or give a comment about why it cannot \nhappen?\n\nlater, good, at least the function is called, even if it is only for an \nerror case. Maybe some non empty coverage tests could be added with a \n\"count(*) > 0\" on not is_dir or maybe \"count(*) = 0\" on is_dir, for \ninstance?\n\n (SELECT oid FROM pg_tablespace b WHERE b.spcname='regress_tblspace'\n UNION SELECT 0 ORDER BY 1 DESC LIMIT 1)b\n\nThe 'b' glued to the ')' looks pretty strange. I'd suggest \") AS b\". 
\nReusing the same alias twice could be avoided for clarity, maybe.\n\n* v8.[56]\n\nI'd say that a behavior change which adds a column and possibly a few \nrows is ok, especially as the tmpdir contains subdirs now.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 8 Mar 2020 09:02:19 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "I took a step back, and I wondered whether we should add a generic function for\nlisting a dir with metadata, possibly instead of changing the existing\nfunctions. Then one could do pg_ls_dir_metadata('pg_wal',false,false);\n\nSince pg8.1, we have pg_ls_dir() to show a list of files. Since pg10, we've\nhad pg_ls_logdir and pg_ls_waldir, which show not only file names but also\n(some) metadata (size, mtime). And since pg12, we've had pg_ls_tmpfile and\npg_ls_archive_statusdir, which also show metadata.\n\n...but there's no function which lists the metadata of a directory other\nthan tmp, wal, log.\n\nOne can do this:\n|SELECT b.*, c.* FROM (SELECT 'base' a)a, LATERAL (SELECT a||'/'||pg_ls_dir(a.a)b)b, pg_stat_file(b)c;\n..but that's not as helpful as allowing:\n|SELECT * FROM pg_ls_dir_metadata('.',true,true);\n\nThere's also no function which recurses into an arbitrary directory, so it\nseems shortsighted to provide a function to recursively list a tmpdir.\n\nAlso, since pg_ls_dir_metadata indicates whether the path is a dir, one can\nwrite a SQL function to show the dir recursively. It'd be trivial to plug in\nwal/log/tmp (it seems like tmpdirs of other tablespaces are not entirely\ntrivial).\n|SELECT * FROM pg_ls_dir_recurse('base/pgsql_tmp');\n\nAlso, on a neighboring thread[1], Tom indicated that the pg_ls_* functions\nshould enumerate all files during the initial call, which sounds like a bad\nidea when recursively showing directories. If we add a function recursing into\na directory, we'd need to discuss all the flags to expose to it, like recurse,\nignore_errors, one_filesystem?, show_dotfiles (and eventually bikeshed all the\nrest of the flags in find(1)).\n\nMy initial patch [2] changed ls_tmpdir to show metadata columns including\nis_dir, but not descend. 
It's pretty unfortunate if a function called\npg_ls_tmpdir hides shared filesets, so maybe it really is best to change that\n(it's new in v12).\n\nI'm interested in feedback on the alternative approach, as attached. The\nfinal patch to include all the rest of columns shown by pg_stat_file() is more\nof an idea/proposal and not sure if it'll be desirable. But pg_ls_tmpdir() is\nessentially the same as my v1 patch.\n\nThis is intended to be mostly independent of any fix to the WARNING I reported\n[1]. Since my patch collapses pg_ls_dir into pg_ls_dir_files, we'd only need\nto fix one place. I'm planning to eventually look into Tom's suggestion of\nreturning tuplestore to fix that, and maybe rebase this patchset on top of\nthat.\n\n-- \nJustin\n\n[1] https://www.postgresql.org/message-id/flat/20200308173103.GC1357%40telsasoft.com\n[2] https://www.postgresql.org/message-id/20191214224735.GA28433%40telsasoft.com",
"msg_date": "Tue, 10 Mar 2020 13:30:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
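The per-entry output proposed above for `pg_ls_dir_metadata` (name, size, mtime, isdir) can be approximated in a few lines of Python. This is only a sketch of the intended columns, not the patch's C implementation, and `ls_dir_metadata` is a hypothetical helper name:

```python
import datetime
import os

def ls_dir_metadata(path):
    """Rough analogue of the proposed pg_ls_dir_metadata(): one row per
    directory entry with name, size, modification time, and isdir."""
    rows = []
    for entry in os.scandir(path):
        # follow_symlinks=False reports on the entry itself (lstat-style),
        # so a symlink to a directory is not counted as a directory.
        st = entry.stat(follow_symlinks=False)
        rows.append({
            "name": entry.name,
            "size": st.st_size,
            "modification": datetime.datetime.fromtimestamp(st.st_mtime),
            "isdir": entry.is_dir(follow_symlinks=False),
        })
    return rows

# Example: directories first, the way a tmpdir listing would surface filesets.
for row in sorted(ls_dir_metadata("."), key=lambda r: not r["isdir"]):
    print(row["name"], row["size"], row["isdir"])
```

Since the rows carry `isdir`, a caller can recurse in a wrapper, which mirrors the thread's idea of building `pg_ls_dir_recurse` on top of the metadata function.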
{
"msg_contents": "@cfbot: rebased onto 085b6b6679e73b9b386f209b4d625c7bc60597c0\n\nThe merge conflict presents another opportunity to solicit comments on the new\napproach. Rather than making \"recurse into tmpdir\" the end goal:\n\n - add a function to show metadata of an arbitrary dir;\n - add isdir arguments to pg_ls_* functions (including pg_ls_tmpdir but not\n pg_ls_dir).\n - maybe add pg_ls_dir_recurse, which satisfies the original need;\n - retire pg_ls_dir (does this work with tuplestore?)\n - profit\n\nThe alternative seems to be to go back to Alvaro's earlier proposal:\n - not only add \"isdir\", but also recurse;\n\nI think I would insist on adding a general function to recurse into any dir.\nAnd *optionally* change pg_ls_* to recurse (either by accepting an argument, or\nby making that a separate patch to debate). tuplestore is certainly better\nthan keeping a stack/List of DIRs for this.\n\nOn Tue, Mar 10, 2020 at 01:30:37PM -0500, Justin Pryzby wrote:\n> I took a step back, and I wondered whether we should add a generic function for\n> listing a dir with metadata, possibly instead of changing the existing\n> functions. Then one could do pg_ls_dir_metadata('pg_wal',false,false);\n> \n> Since pg8.1, we have pg_ls_dir() to show a list of files. Since pg10, we've\n> had pg_ls_logdir and pg_ls_waldir, which show not only file names but also\n> (some) metadata (size, mtime). 
And since pg12, we've had pg_ls_tmpfile and\n> pg_ls_archive_statusdir, which also show metadata.\n> \n> ...but there's no a function which lists the metadata of an directory other\n> than tmp, wal, log.\n> \n> One can do this:\n> |SELECT b.*, c.* FROM (SELECT 'base' a)a, LATERAL (SELECT a||'/'||pg_ls_dir(a.a)b)b, pg_stat_file(b)c;\n> ..but that's not as helpful as allowing:\n> |SELECT * FROM pg_ls_dir_metadata('.',true,true);\n> \n> There's also no function which recurses into an arbitrary directory, so it\n> seems shortsighted to provide a function to recursively list a tmpdir.\n> \n> Also, since pg_ls_dir_metadata indicates whether the path is a dir, one can\n> write a SQL function to show the dir recursively. It'd be trivial to plug in\n> wal/log/tmp (it seems like tmpdirs of other tablespace's are not entirely\n> trivial).\n> |SELECT * FROM pg_ls_dir_recurse('base/pgsql_tmp');\n> \n> Also, on a neighboring thread[1], Tom indicated that the pg_ls_* functions\n> should enumerate all files during the initial call, which sounds like a bad\n> idea when recursively showing directories. If we add a function recursing into\n> a directory, we'd need to discuss all the flags to expose to it, like recurse,\n> ignore_errors, one_filesystem?, show_dotfiles (and eventually bikeshed all the\n> rest of the flags in find(1)).\n> \n> My initial patch [2] changed ls_tmpdir to show metadata columns including\n> is_dir, but not decend. It's pretty unfortunate if a function called\n> pg_ls_tmpdir hides shared filesets, so maybe it really is best to change that\n> (it's new in v12).\n> \n> I'm interested to in feedback on the alternative approach, as attached. The\n> final patch to include all the rest of columns shown by pg_stat_file() is more\n> of an idea/proposal and not sure if it'll be desirable. But pg_ls_tmpdir() is\n> essentially the same as my v1 patch.\n> \n> This is intended to be mostly independent of any fix to the WARNING I reported\n> [1]. 
Since my patch collapses pg_ls_dir into pg_ls_dir_files, we'd only need\n> to fix one place. I'm planning to eventually look into Tom's suggestion of\n> returning tuplestore to fix that, and maybe rebase this patchset on top of\n> that.\n> \n> -- \n> Justin\n> \n> [1] https://www.postgresql.org/message-id/flat/20200308173103.GC1357%40telsasoft.com\n> [2] https://www.postgresql.org/message-id/20191214224735.GA28433%40telsasoft.com",
"msg_date": "Fri, 13 Mar 2020 08:12:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Hello Justin,\n\nSome feedback on v10:\n\nAll patches apply cleanly, one on top of the previous. I really wish there \nwould be less than 9 patches…\n\n* v10.1 doc change: ok\n\n* v10.2 doc change: ok, not sure why it is not merged with previous\n\n* v10.3 test add: could be merged with both previous\n\nTests seem a little contrived. I'm wondering whether something more \nstraightforward could be proposed. For instance, once the tablespace is \njust created but not used yet, probably we do know that the tmp file \nexists and is empty?\n\n* v10.4 at least, some code!\n\nCompiles, make check ok.\n\npg_ls_dir_files: I'm fine with the flag approach given the number of \nswitches and the internal nature of the function.\n\nI'm not sure of the \"FLAG_\" prefix which seems too generic, even if it is \nlocal. I'd suggest \"LS_DIR_*\", maybe, as a more specific prefix.\n\nISTM that Pg style requires spaces around operators. I'd suggest some \nparenthesis would help as well, eg: \"flags&FLAG_MISSING_OK\" -> \"(flags & \nFLAG_MISSING_OK)\" and other instances.\n\nAbout:\n\n if (S_ISDIR(attrib.st_mode)) {\n if (flags&FLAG_SKIP_DIRS)\n continue;\n }\n\nand similar ones, why not the simpler:\n\n if (S_ISDIR(attrib.st_mode) && (flags & FLAG_SKIP_DIRS))\n continue;\n\nEspecially that it is done like that in previous cases.\n\nMaybe I'd create defines for long and common flag specs, eg:\n\n #define ..._LS_SIMPLE (FLAG_SKIP_DIRS|FLAG_SKIP_HIDDEN|FLAG_SKIP_SPECIAL|FLAG_METADATA)\n\nNo attempt at recursing.\n\nI'm not sure about these asserts:\n\n /* isdir depends on metadata */\n Assert(!(flags&FLAG_ISDIR) || (flags&FLAG_METADATA));\n\nHmmm. Why?\n\n /* Unreasonable to show isdir and skip dirs */\n Assert(!(flags&FLAG_ISDIR) || !(flags&FLAG_SKIP_DIRS));\n\nHmmm. Why would I prevent that, even if it has little sense, it should \nwork. 
I do not see having false on the isdir column as an actual issue.\n\n* v10.5 add is_dir column, a few tests & doc.\n\nOk.\n\n* v10.6 behavior change for existing functions, always show isdir column,\nand removal of IS_DIR flag.\n\nI'm unsure why the features are removed, some use case may benefit from \nthe more complete function?\n\nMaybe flags defs should not be changed anyway?\n\nI do not like much the \"if (...) /* empty */;\" code. Maybe it could be \ncaught more cleanly later in the conditional structure.\n\n* v10.7 adds \"pg_ls_dir_recurse\" function\n\nUsing SQL recursion to implement the feature is pretty elegant\nand limits open directories to one at a time, which is pretty neat.\n\nDoc looks incomplete and the example is very contrived and badly indented.\n\nThe function definition does not follow the style around: uppercase \nwhereas all others are lowercase, \"\" instead of '', no \"as\"…\n\nI do not understand why oid 8511 is given to the new function.\n\nI do not understand why UNION ALL and not UNION.\n\nI would have put the definition after \"pg_ls_dir_metadata\" definition.\n\npg_ls_dir_metadata seems defined as (text,bool,bool) but called as \n(text,bool,bool,bool).\n\nMaybe a better alias could be given instead of x?\n\nThere are no tests for the new function. I'm not sure it would work.\n\n* v10.8 change flags & add test on pg_ls_logdir().\n\nI'm unsure why it is done at this stage.\n\n* v10.9 change some ls functions and fix patch 10.7 issue\n\nI'm unsure why it is done at this stage. \"make check\" ok.\n\n-- \nFabien.",
"msg_date": "Sun, 15 Mar 2020 18:15:02 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
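The bitmask being reviewed above can be modeled compactly to show how composite flag specs and parenthesized tests read. The flag names and the `LS_SIMPLE` composite below are assumptions reconstructed from the review discussion, not the patch's actual C definitions:

```python
from enum import IntFlag

class LsDirFlag(IntFlag):
    """Hypothetical mirror of the patch's LS_DIR_* bitmask flags."""
    SKIP_DIRS = 1
    SKIP_HIDDEN = 2
    SKIP_SPECIAL = 4
    METADATA = 8
    MISSING_OK = 16

# A composite define for a common flag spec, as suggested in the review.
LS_SIMPLE = (LsDirFlag.SKIP_DIRS | LsDirFlag.SKIP_HIDDEN
             | LsDirFlag.SKIP_SPECIAL | LsDirFlag.METADATA)

def should_skip(name, is_dir, flags):
    # Parenthesize each bitwise test, per the style point raised above.
    if is_dir and (flags & LsDirFlag.SKIP_DIRS):
        return True
    if name.startswith(".") and (flags & LsDirFlag.SKIP_HIDDEN):
        return True
    return False

assert should_skip("pgsql_tmp", True, LS_SIMPLE)   # dir skipped
assert should_skip(".hidden", False, LS_SIMPLE)    # dotfile skipped
assert not should_skip("0.0", False, LS_SIMPLE)    # plain file kept
```

A named composite keeps each caller to one readable flag expression instead of repeating four ORed constants, which is the point of Fabien's `#define ..._LS_SIMPLE` suggestion.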
{
"msg_contents": "On Sun, Mar 15, 2020 at 06:15:02PM +0100, Fabien COELHO wrote:\n> Some feedback on v10:\n\nThanks for looking. I'm hoping to hear from Alvaro what he thinks of this\napproach (all functions to show isdir, rather than one function which lists\nrecursively).\n\n> All patches apply cleanly, one on top of the previous. I really wish there\n> would be less than 9 patches…\n\nI kept them separate to allow the earlier patches to be applied.\nAnd intended to make it easier to review, even if it's more work for me..\n\nIf you mean that it's a pain to apply 9 patches, I will suggest to use:\n|git am -3 ./mailbox\nwhere ./mailbox is either a copy of the mail you received, or retrieved from\nthe web interface.\n\nTo test that each one works (compiles, passes tests, etc), I use git rebase -i\nHEAD~11 and \"e\"edit the target (set of) patches.\n\n> * v10.1 doc change: ok\n> \n> * v10.2 doc change: ok, not sure why it is not merged with previous\n\nAs I mentioned, separate since I'm proposing that they're backpatched to\ndifferent releases. Those could be applied now (and Tom already applied a\npatch identical to 0001 in a prior patchset).\n\n> * v10.3 test add: could be merge with both previous\n\n> Tests seem a little contrived. I'm wondering whether something more\n> straightforward could be proposed. For instance, once the tablespace is just\n> created but not used yet, probably we do know that the tmp file exists and\n> is empty?\n\nThe tmpdir *doesn't* exist until someone creates tmpfiles there.\nAs it mentions:\n+-- This tests the missing_ok parameter, which causes pg_ls_tmpdir to succeed even if the tmpdir doesn't exist yet\n\n> * v10.4 at least, some code!\n> I'm not sure of the \"FLAG_\" prefix which seems too generic, even if it is\n> local. I'd suggest \"LS_DIR_*\", maybe, as a more specific prefix.\n\nDone.\n\n> ISTM that Pg style requires spaces around operators. 
I'd suggest some\n> parenthesis would help as well, eg: \"flags&FLAG_MISSING_OK\" -> \"(flags &\n> FLAG_MISSING_OK)\" and other instances.\n\nPartially took your suggestion.\n\n> About:\n> \n> if (S_ISDIR(attrib.st_mode)) {\n> if (flags&FLAG_SKIP_DIRS)\n> continue;\n> }\n> \n> and similars, why not the simpler:\n> \n> if (S_ISDIR(attrib.st_mode) && (flags & FLAG_SKIP_DIRS))\n> continue;\n\nThat's not the same - if SKIP_DIRS isn't set, your way would fail that test for\ndirs, and then hit the !ISREG test, and skip them anyway.\n|else if (!S_ISREG(attrib.st_mode))\n|\tcontinue\n\n> Maybe I'd create defines for long and common flag specs, eg:\n> \n> #define ..._LS_SIMPLE (FLAG_SKIP_DIRS|FLAG_SKIP_HIDDEN|FLAG_SKIP_SPECIAL|FLAG_METADATA)\n\nDone\n\n> I'm not sure about these asserts:\n> \n> /* isdir depends on metadata */\n> Assert(!(flags&FLAG_ISDIR) || (flags&FLAG_METADATA));\n> \n> Hmmm. Why?\n\nIt's not supported to show isdir without showing metadata (because that case\nisn't needed to support the old and the new behaviors).\n\n+ if (flags & FLAG_METADATA)\n+ {\n+ values[1] = Int64GetDatum((int64) attrib.st_size);\n+ values[2] = TimestampTzGetDatum(time_t_to_timestamptz(attrib.st_mtime));\n+ if (flags & FLAG_ISDIR)\n+ values[3] = BoolGetDatum(S_ISDIR(attrib.st_mode));\n+ }\n\n> /* Unreasonable to show isdir and skip dirs */\n> Assert(!(flags&FLAG_ISDIR) || !(flags&FLAG_SKIP_DIRS));\n> \n> Hmmm. Why would I prevent that, even if it has little sense, it should work.\n> I do not see having false on the isdir column as an actual issue.\n\nIt's unimportant, but testing for intended use of flags during development.\n\n> * v10.6 behavior change for existing functions, always show isdir column,\n> and removal of IS_DIR flag.\n> \n> I'm unsure why the features are removed, some use case may benefit from the\n> more complete function?\n> Maybe flags defs should not be changed anyway?\n\nMaybe. 
I put them back...but it means they're not being exercised by any\n*used* case.\n\n> I do not like much the \"if (...) /* empty */;\" code. Maybe it could be\n> caught more cleanly later in the conditional structure.\n\nThis went away when I put back the SKIP_DIRS flag.\n\n> * v10.7 adds \"pg_ls_dir_recurse\" function\n\n> Doc looks incomplete and the example is very contrived and badly indented.\n\nWhy do you think it's contrived? Listing a tmpdir recursively is the initial\nmotivation of this patch. Maybe you think I should list just the tmpdir for\none tablespace ? Note that for temp_tablespaces parameter:\n\n|When there is more than one name in the list, PostgreSQL chooses a random member of the list each time a temporary object is to be created; except that within a transaction, successively created temporary objects are placed in successive tablespaces from the list.\n\n> The function definition does not follow the style around: uppercase whereas\n> all others are lowercase, \"\" instead of '', no \"as\"…\n\nI used \"\" because of this:\n| x.name||'/'||a.name\nI don't know if there's a better way to join paths in SQL, or if that suggests\nthis is a bad way to do it.\n\n> I do not understand why oid 8511 is given to the new function.\n\nI used: ./src/include/catalog/unused_oids (maybe not correctly).\n\n> I do not understand why UNION ALL and not UNION.\n\nIn general, union ALL can avoid a \"distinct\" plan node, but it doesn't seem to\nhave any effect here.\n\n> I would have put the definition after \"pg_ls_dir_metadata\" definition.\n\nDone\n\n> pg_ls_dir_metadata seems defined as (text,bool,bool) but called as\n> (text,bool,bool,bool).\n\nfixed, thanks.\n\n> Maybe a better alias could be given instead of x?\n> \n> There are no tests for the new function. 
I'm not sure it would work.\n\nI added something which would've caught the issue with number of arguments.\n\n> * v10.8 change flags & add test on pg_ls_logdir().\n> \n> I'm unsure why it is done at this stage.\n\nI think it makes sense to allow ls_logdir to succeed even if ./log doesn't\nexist, since it isn't created by initdb or during postmaster start, and since\nwe're already using MISSING_OK for tmpdir.\n\nBut a separate patch since we didn't previously discuss changing logdir. \n\n> * v10.9 change some ls functions and fix patch 10.7 issue\n> I'm unsure why it is done at this stage. \"make check\" ok.\n\nThis is the last patch in the series, since I think it's least likely to be\nagreed on.\n\n-- \nJustin",
"msg_date": "Sun, 15 Mar 2020 16:27:29 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nAbout v11, ISTM that the recursive function should check for symbolic \nlinks and possibly avoid them:\n\n sh> cd data/base\n sh> ln -s .. foo\n\n psql> SELECT * FROM pg_ls_dir_recurse('.');\n ERROR: could not stat file \"./base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo\": Too many levels of symbolic links\n CONTEXT: SQL function \"pg_ls_dir_recurse\" statement 1\n\nThis probably means using lstat instead of (in supplement to?) stat, and \nprobably tell if something is a link, and if so not recurse in them.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 16 Mar 2020 16:20:21 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
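The failure above comes from stat() following the symlink back into a parent directory, so the walk never terminates. A recursion that checks each entry with an lstat-style test and refuses to descend through links stays finite even with such a cycle. This Python sketch is illustrative only, not the proposed C code:

```python
import os

def walk_no_symlinks(root):
    """Recursively list entries under root, refusing to follow symlinks so a
    link back to an ancestor directory cannot cause an infinite loop."""
    stack = [root]
    while stack:
        d = stack.pop()
        for entry in os.scandir(d):
            path = os.path.join(d, entry.name)
            # is_symlink() is an lstat-style check: it reports on the link
            # itself rather than on its target.
            if entry.is_symlink():
                yield path, "link"
            elif entry.is_dir(follow_symlinks=False):
                yield path, "dir"
                stack.append(path)   # descend only into real directories
            else:
                yield path, "file"
```

Run against a directory containing `ln -s .. foo`, this reports `foo` once as a link and stops, instead of stat-looping on `foo/base/foo/...` until ELOOP.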
{
"msg_contents": "On Mon, Mar 16, 2020 at 04:20:21PM +0100, Fabien COELHO wrote:\n> \n> About v11, ISTM that the recursive function should check for symbolic links\n> and possibly avoid them:\n> \n> sh> cd data/base\n> sh> ln -s .. foo\n> \n> psql> SELECT * FROM pg_ls_dir_recurse('.');\n> ERROR: could not stat file \"./base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo\": Too many levels of symbolic links\n> CONTEXT: SQL function \"pg_ls_dir_recurse\" statement 1\n> \n> This probably means using lstat instead of (in supplement to?) stat, and\n> probably tell if something is a link, and if so not recurse in them.\n\nThanks for looking.\n\nI think that opens up a can of worms. I don't want to go into the business of\nre-implementing all of find(1) - I count ~128 flags (most of which take\narguments). You're referring to find -L vs find -P, and some people would want\none and some would want another. And don't forget about find -H...\n\npg_stat_file doesn't expose the file type (I guess because it's not portable?),\nand I think it's outside the scope of this patch to change that. Maybe it\nsuggests that the pg_ls_dir_recurse patch should be excluded.\n\nISTM if someone wants to recursively list a directory, they should avoid\nputting cycles there, or permission errors, or similar. Or they should write\ntheir own C extension that borrows from pg_ls_dir_files but handles more\narguments.\n\n-- \nJustin\n\n\n",
"msg_date": "Mon, 16 Mar 2020 10:41:36 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nHello Justin,\n\n>> psql> SELECT * FROM pg_ls_dir_recurse('.');\n>> ERROR: could not stat file \"./base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo/base/foo\": Too many levels of symbolic links\n>> CONTEXT: SQL function \"pg_ls_dir_recurse\" statement 1\n>>\n>> This probably means using lstat instead of (in supplement to?) stat, and\n>> probably tell if something is a link, and if so not recurse in them.\n>\n> Thanks for looking.\n>\n> I think that opens up a can of worms. I don't want to go into the business of\n> re-implementing all of find(1) - I count ~128 flags (most of which take\n> arguments). You're referring to find -L vs find -P, and some people would want\n> one and some would want another. And don't forget about find -H...\n\nThis is not the point. The point is that a link can change a finite tree \ninto cyclic graph, and you do not want to delve into that, ever.\n\nThe \"find\" command, by default, does not recurse into a link because of \nsaid problem, and the user *must* ask for it and assume the infinite loop \nif any.\n\nSo if you implement one behavior, it should be not recursing into links. \nFranckly, I would not provide the recurse into link alternative, but it \ncould be implemented if someone wants it, and the problem that come with \nit.\n\n> pg_stat_file doesn't expose the file type (I guess because it's not portable?),\n\nYou are right that Un*x and Windows are not the same wrt link. 
It seems \nthat there is already something about that in port:\n\n \"./src/port/dirmod.c:pgwin32_is_junction(const char *path)\"\n\nSo most of the details are already hidden.\n\n> and I think it's outside the scope of this patch to change that. Maybe it\n> suggests that the pg_ls_dir_recurse patch should be excluded.\n\nIMHO, I really think that it should be included. Dealing with links is no \nbig deal, but you need an additional column in _metadata to tell it is a \nlink, and there is a ifdef because testing is a little different between \nunix and windows. I'd guess around 10-20 lines of code added.\n\n> ISTM if someone wants to recursively list a directory, they should avoid\n> putting cycles there, or permission errors, or similar.\n\nHmmm. I'd say the user should like to be able to call the function and \nnever have a bad experience with it such as a failure on an infinite loop.\n\n> Or they should write their own C extension that borrows from \n> pg_ls_dir_files but handles more arguments.\n\nISTM that the point of your patch is to provide the basic tool needed to \nlist directories contents, and handling links somehow is a necessary part \nof that.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 16 Mar 2020 19:21:06 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 04:20:21PM +0100, Fabien COELHO wrote:\n> This probably means using lstat instead of (in supplement to?) stat, and\n> probably tell if something is a link, and if so not recurse in them.\n\nOn Mon, Mar 16, 2020 at 07:21:06PM +0100, Fabien COELHO wrote:\n> IMHO, I really think that it should be included. Dealing with links is no\n> big deal, but you need an additional column in _metadata to tell it is a\n> link\n\nInstead of showing another column, I changed to show links with isdir=false.\nAt the cost of two more patches, to allow backpatching docs and maybe separate\ncommit to make the subtle change obvious in commit history, at least.\n\nI see a few places in the backend and a few more in the frontend using the same\nlogic that I used for islink(), but I'm not sure if there's a good place to put\nthat to allow factoring out at least the other backend ones.\n\n-- \nJustin",
"msg_date": "Mon, 16 Mar 2020 16:48:06 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "I pushed 0001 and 0003 (as a single commit). archive_statusdir didn't\nget here until 12, so your commit message was mistaken. Also, pg10 is\nslightly different so it didn't apply there, so I left it alone.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 16 Mar 2020 19:17:36 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Mar 16, 2020 at 07:17:36PM -0300, Alvaro Herrera wrote:\n> I pushed 0001 and 0003 (as a single commit). archive_statusdir didn't\n> get here until 12, so your commit message was mistaken. Also, pg10 is\n> slightly different so it didn't apply there, so I left it alone.\n\nThanks, I appreciate it (and I'm sure Fabien will appreciate having two fewer\npatches...).\n\n@cfbot: rebased onto b4570d33aa045df330bb325ba8a2cbf02266a555\n\nI realized that if I lstat() a file to make sure links to dirs show as\nisdir=false, it's odd to then show size and timestamps of the dir. So changed\nto use lstat ... and squished.\n\n-- \nJustin",
"msg_date": "Mon, 16 Mar 2020 22:14:27 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "About v13, seen as one patch:\n\nFunction \"pg_ls_dir_metadata\" documentation suggests a variable number of \narguments with brackets, but parameters are really mandatory.\n\n postgres=# SELECT pg_ls_dir_metadata('.');\n ERROR: function pg_ls_dir_metadata(unknown) does not exist\n LINE 1: SELECT pg_ls_dir_metadata('.');\n ^\n HINT: No function matches the given name and argument types. You might need to add explicit type casts.\n postgres=# SELECT pg_ls_dir_metadata('.', true, true);\n …\n\nThe example in the documentation could be better indented. Also, ISTM that \nthere are two implicit laterals (format & pg_ls_dir_recurse) that I would \nmake explicit. I'd use the pcs alias explicitly. I'd use meaningful \naliases (eg ts instead of b, …).\n\nOn reflection, I think that a boolean \"isdir\" column is a bad idea because \nit is not extensible. I'd propose to switch to the standard \"ls\" approach \nof providing the type as one character: '-' for regular, 'd' for \ndirectory, 'l' for link, 's' for socket, 'c' for character special…\n\nISTM that \"lstat\" is not available on windows, which suggests to call \n\"stat\" always, and then \"lstat\" on un*x and pg ports stuff on win.\n\nI'm wondering about the restriction on directories only. Why should it not \nwork on a file? Can it be easily extended to work on a simple file? If so, \nit could be just \"pg_ls\".\n\n-- \nFabien.",
"msg_date": "Tue, 17 Mar 2020 10:21:48 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 10:21:48AM +0100, Fabien COELHO wrote:\n> \n> About v13, seen as one patch:\n> \n> Function \"pg_ls_dir_metadata\" documentation suggests a variable number of\n> arguments with brackets, but parameters are really mandatory.\n\nFixed, and added tests on 1 and 3 arg versions of both pg_ls_dir() and\npg_ls_dir_metadata().\n\nIt seems like the only way to make variable number of arguments is with\nmultiple entries in pg_proc.dat, one for each \"number of\" arguments. Is that\nright ?\n\n> The example in the documentation could be better indented. Also, ISTM that\n> there are two implicit laterals (format & pg_ls_dir_recurse) that I would\n> make explicit. I'd use the pcs alias explicitly. I'd use meaningful aliases\n> (eg ts instead of b, …).\n\n> On reflection, I think that a boolean \"isdir\" column is a bad idea because\n> it is not extensible. I'd propose to switch to the standard \"ls\" approach of\n> providing the type as one character: '-' for regular, 'd' for directory, 'l'\n> for link, 's' for socket, 'c' for character special…\n\nI think that's outside the scope of the patch, since I'd want to change\npg_stat_file; that's where I borrowed \"isdir\" from, for consistency.\n\nNote that both LS_DIR_HISTORIC and LS_DIR_MODERN include LS_DIR_SKIP_SPECIAL,\nso only pg_ls_dir itself shows specials, so the way to do it would be to 1)\nchange pg_stat_file to expose the file's \"type\", 2) use pg_ls_dir() AS a,\nlateral pg_stat_file(a) AS b, 3) then consider also changing LS_DIR_MODERN and\nall the existing pg_ls_*.\n\n> ISTM that \"lstat\" is not available on windows, which suggests to call \"stat\"\n> always, and then \"lstat\" on un*x and pg ports stuff on win.\n\nI believe that's handled here.\nsrc/include/port/win32_port.h:#define lstat(path, sb) stat(path, sb)\n\n> I'm wondering about the restriction on directories only. Why should it not\n> work on a file? Can it be easily extended to work on a simple file? If so,\n> it could be just \"pg_ls\".\n\nI think that's a good idea, except it doesn't fit with what the code does:\nAllocDir() and ReadDir(). Instead, use pg_stat_file() for that.\n\nHm, I realized that the existing pg_ls_dir_metadata was skipping links to dirs,\nsince !ISREG(). So changed to use both stat() and lstat().\n\n-- \nJustin",
"msg_date": "Tue, 17 Mar 2020 14:04:01 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> It seems like the only way to make variable number of arguments is with\n> multiple entries in pg_proc.dat, one for each \"number of\" arguments. Is that\n> right ?\n\nAnother way to do it is to have one entry, put the full set of arguments\ninto the initial pg_proc.dat data, and then use CREATE OR REPLACE FUNCTION\nlater during initdb to install some defaults. See existing cases in\nsystem_views.sql, starting about line 1180. Neither way is especially\npretty, so take your choice.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 17 Mar 2020 15:11:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Tue, Mar 17, 2020 at 02:04:01PM -0500, Justin Pryzby wrote:\n> > The example in the documentation could be better indented. Also, ISTM that\n> > there are two implicit laterals (format & pg_ls_dir_recurse) that I would\n> > make explicit. I'd use the pcs alias explicitly. I'd use meaningful aliases\n> > (eg ts instead of b, …).\n> \n> > On reflection, I think that a boolean \"isdir\" column is a bad idea because\n> > it is not extensible. I'd propose to switch to the standard \"ls\" approach of\n> > providing the type as one character: '-' for regular, 'd' for directory, 'l'\n> > for link, 's' for socket, 'c' for character special…\n> \n> I think that's outside the scope of the patch, since I'd want to change\n> pg_stat_file; that's where I borrowed \"isdir\" from, for consistency.\n> \n> Note that both LS_DIR_HISTORIC and LS_DIR_MODERN include LS_DIR_SKIP_SPECIAL,\n> so only pg_ls_dir itself shows specials, so the way to do it would be to 1)\n> change pg_stat_file to expose the file's \"type\", 2) use pg_ls_dir() AS a,\n> lateral pg_stat_file(a) AS b, 3) then consider also changing LS_DIR_MODERN and\n> all the existing pg_ls_*.\n\nThe patch intends to fix the issue of \"failing to show failed filesets\"\n(because dirs are skipped) while also generalizing existing functions (to show\ndirectories and \"isdir\" column) and providing some more flexible ones (to list\nfile and metadata of a dir, which is currently possible [only] for \"special\"\ndirectories, or by recursively calling pg_stat_file).\n\nI'm still of the opinion that supporting arbitrary file types is out of scope,\nbut I changed the \"isdir\" to show \"type\". I'm only supporting '[-dl]'. I\ndon't want to have to check #ifdef S_ISDOOR or whatever other vendors have. I\ninsist that it is a separate patch, since it depends on everything else, and I\nhave no feedback from anybody else as to whether any of that is desired.\n\ntemplate1=# SELECT * FROM pg_ls_waldir();\n name | size | access | modification | change | creation | type \n--------------------------+----------+------------------------+------------------------+------------------------+----------+------\n barr | 0 | 2020-03-31 14:43:11-05 | 2020-03-31 14:43:11-05 | 2020-03-31 14:43:11-05 | | ?\n baz | 4096 | 2020-03-31 14:39:18-05 | 2020-03-31 14:39:18-05 | 2020-03-31 14:39:18-05 | | d\n foo | 0 | 2020-03-31 14:39:37-05 | 2020-03-31 14:39:37-05 | 2020-03-31 14:39:37-05 | | -\n archive_status | 4096 | 2020-03-31 14:38:20-05 | 2020-03-31 14:38:18-05 | 2020-03-31 14:38:18-05 | | d\n 000000010000000000000001 | 16777216 | 2020-03-31 14:42:53-05 | 2020-03-31 14:43:08-05 | 2020-03-31 14:43:08-05 | | -\n bar | 3 | 2020-03-31 14:39:16-05 | 2020-03-31 14:39:01-05 | 2020-03-31 14:39:01-05 | | l",
"msg_date": "Tue, 31 Mar 2020 15:08:12 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nHello Justin,\n\nAbout v15, seen as one patch.\n\nPatch series applies cleanly, compiles, \"make check\" ok.\n\nDocumentation:\n - indent documentation text around 80 cols, as done around?\n - indent SQL example for readability and capitalize keywords\n (pg_ls_dir_metadata)\n - \"For each file in a directory, list the file and its metadata.\"\n maybe: \"List files and their metadata in a directory\"?\n\nCode:\n - Most pg_ls_*dir functions call pg_ls_dir_files(), which looks like\n reasonable refactoring, ISTM that the code is actually smaller.\n - please follow pg style, eg not \"} else {\"\n - there is a \"XXX\" (meaning fixme?) tag remaining in a comment.\n - file types: why not do block & character devices, fifo and socket\n as well, before the unknown case?\n - I'm wondering whether could pg_stat_file call pg_ls_dir_files without\n too much effort? ISTM that the output structure is nearly the same. I do\n not like much having one function specialized for files and one for\n directories.\n\nTests:\n - good, there are some!\n - indent SQL code, eg by starting a new line on new clauses?\n - put comments on separate lines (I'm not against it on principle, I do\n that, but I do not think that it is done much in test files).\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 12 Apr 2020 13:53:40 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Sun, Apr 12, 2020 at 01:53:40PM +0200, Fabien COELHO wrote:\n> About v15, seen as one patch.\n\nThanks for looking.\n\n> - I'm wondering whether could pg_stat_file call pg_ls_dir_files without\n> too much effort? ISTM that the output structure is nearly the same. I do\n> not like much having one function specialized for files and one for\n> directories.\n\nI refactored but not like that. As I mentioned in the commit message, I don't\nsee a good way to make a function operate on a file when the function's primary\ndata structure is a DIR*. Do you ? I don't think it should call stat() and\nthen conditionally branch off to pg_stat_file().\n\nThere are two functions because they wrap two separate syscalls, which I see as\na good, transparent goal. If we want a function that does what \"ls -al\" does,\nthat would also be a good example to follow, except that we already didn't\nfollow it.\n\n/bin/ls first stat()s the path, and then either outputs its metadata (if it's a\nfile or if -d was specified) or lists a dir. It's essentially a wrapper around\n*two* system calls (stat and readdir/getdents).\n\nMaybe we could invent a new pg_ls() which does that, and then refactor existing\ncode. Or, maybe it would be a SQL function which calls stat() and then\nconditionally calls pg_ls_dir if isdir=True (or type='d'). That would be easy\nif we merge the commit which outputs all stat fields.\n\nI'm still hoping for confirmation from a committer if this approach is worth\npursuing:\n\nhttps://www.postgresql.org/message-id/20200310183037.GA29065%40telsasoft.com\nhttps://www.postgresql.org/message-id/20200313131232.GO29065%40telsasoft.com\n|Rather than making \"recurse into tmpdir\" the end goal:\n|\n| - add a function to show metadata of an arbitrary dir;\n| - add isdir arguments to pg_ls_* functions (including pg_ls_tmpdir but not\n|   pg_ls_dir).\n| - maybe add pg_ls_dir_recurse, which satisfies the original need;\n| - retire pg_ls_dir (does this work with tuplestore?)\n| - profit\n\n-- \nJustin",
"msg_date": "Sat, 2 May 2020 21:42:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Rebased onto 1ad23335f36b07f4574906a8dc66a3d62af7c40c",
"msg_date": "Thu, 7 May 2020 10:08:35 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Rebased onto 7b48f1b490978a8abca61e9a9380f8de2a56f266 and renumbered OIDs.",
"msg_date": "Mon, 25 May 2020 21:10:03 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nHello Justin,\n\n> Rebased onto 7b48f1b490978a8abca61e9a9380f8de2a56f266 and renumbered OIDs.\n\nSome feedback about v18, seen as one patch.\n\nPatch applies cleanly & compiles. \"make check\" is okay.\n\npg_stat_file() and pg_stat_dir_files() now return a char type, as well as \nthe functions that call them, but the documentation does not seem to say \nthat it is the case.\n\nI must admit that I'm not a fan of the argument management of \npg_ls_dir_metadata and pg_ls_dir_metadata_1arg and others. I understand \nthat it saves a few lines though, so maybe let it be.\n\nThere is a comment in pg_ls_dir_files which talks about pg_ls_dir.\n\nCould pg_ls_*dir functions C implementations be dropped in favor of a pure \nSQL implementation, like you did with recurse?\n\nIf so, ISTM that pg_ls_dir_files() could be significantly simplified by \nmoving its filtering flag to SQL conditions on \"type\" and others. That \ncould allow not to change the existing function output and keep the \"isdir\" \ncolumn defined as \"type = 'd'\" where it was used previously, if someone \ncomplains, but still have the full capability of \"ls\". That would also \nallow to drop the \"*_1arg\" hacks. Basically I'm advocating having 1 or 2 \nactual C functions, and all other variants managed at the SQL level.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 7 Jun 2020 10:07:19 +0200 (CEST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Sun, Jun 07, 2020 at 10:07:19AM +0200, Fabien COELHO wrote:\n> Hello Justin,\n> > Rebased onto 7b48f1b490978a8abca61e9a9380f8de2a56f266 and renumbered OIDs.\n\nRebased again on whatever broke func.sgml.\n\n> pg_stat_file() and pg_stat_dir_files() now return a char type, as well as\n> the functions that call them, but the documentation does not seem to say\n> that it is the case.\n\nFixed, thanks\n\n> I must admit that I'm not a fan of the argument management of\n> pg_ls_dir_metadata and pg_ls_dir_metadata_1arg and others. I understand that\n> it saves a few lines though, so maybe let it be.\n\nI think you're saying that you don't like the _1arg functions, but they're\nneeded to allow the regression tests to pass:\n\n| * note: this wrapper is necessary to pass the sanity check in opr_sanity,\n| * which checks that all built-in functions that share the implementing C\n| * function take the same number of arguments\n\n> There is a comment in pg_ls_dir_files which talks about pg_ls_dir.\n> \n> Could pg_ls_*dir functions C implementations be dropped in favor of a pure\n> SQL implementation, like you did with recurse?\n\nI'm still waiting to hear feedback from a committer if this is a good idea to\nput this into the system catalog. Right now, ts_debug is the only nontrivial\nfunction.\n\n> If so, ISTM that pg_ls_dir_files() could be significantly simplified by\n> moving its filtering flag to SQL conditions on \"type\" and others. That could\n> allow not to change the existing function output and keep the \"isdir\" column\n> defined as \"type = 'd'\" where it was used previously, if someone complains,\n> but still have the full capability of \"ls\". That would also allow to drop\n> the \"*_1arg\" hacks. Basically I'm advocating having 1 or 2 actual C\n> functions, and all other variants managed at the SQL level.\n\nYou want to get rid of the 1arg stuff and just have one function.\n\nI see your point, but I guess the C function would still need to accept a\n\"missing_ok\" argument, so we need two functions, so there's not much utility in\ngetting rid of the \"include_dot_dirs\" argument, which is there for consistency\nwith pg_ls_dir.\n\nConceivably we could 1) get rid of pg_ls_dir, and 2) get rid of the\ninclude_dot_dirs argument and 3) maybe make \"missing_ok\" a required argument;\nand, 4) get rid of the C wrapper functions, and replace with a bunch of stuff\nlike this:\n\nSELECT name, size, access, modification, change, creation, type='d' AS isdir\nFROM pg_ls_dir_metadata('pg_wal') WHERE substring(name,1,1)!='.' AND type!='d';\n\nWhere the defaults I changed in this patchset still remain to be discussed:\nwith or without metadata, hidden files, dotdirs.\n\nAs I'm still waiting for committer feedback on the first 10 patches, I'm not\nintending to add more.\n\n-- \nJustin",
"msg_date": "Sun, 21 Jun 2020 20:53:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Sun, Jun 21, 2020 at 08:53:25PM -0500, Justin Pryzby wrote:\n> I'm still waiting to hear feedback from a committer if this is a good idea to\n> put this into the system catalog. Right now, ts_debug is the only nontrivial\n> function.\n\nI'm still missing feedback from committers about the foundation of this\napproach.\n\nBut I finally looked into the pg_rewind test failure.\n\nThat led me to keep the \"dir\" as a separate column, since that's what's needed\nthere, and it's more convenient to have a separate column than to provide a\ncolumn needing to be parsed.\n\n-- \nJustin",
"msg_date": "Tue, 14 Jul 2020 22:08:39 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Tue, Jul 14, 2020 at 10:08:39PM -0500, Justin Pryzby wrote:\n> I'm still missing feedback from committers about the foundation of this\n> approach.\n\nNow rebased on top of the fix for my own bug report (1d09fb1f).\n\nI also changed argument handling for pg_ls_dir_recurse().\n\nPassing '.' gave an initial path of . (of course) but then every other path\nbegins with './' which I didn't like, since it's ambiguous with empty path, or\n.// or ././ ... And one could pass './' which gives different output (like\n././). \n\nSo I specially handled the input of '.'. Maybe the special value should be\nNULL instead of ''. But it looks like no other system functions are currently\nnon-strict.\n\nFor pg_rewind testcase, getting the output path+filename uses a coalesce, since\nthe rest of the test does stuff like strcmp(\"pg_wal\").\n\nStill waiting for feedback from a committer.\n\n-- \nJustin",
"msg_date": "Sat, 18 Jul 2020 15:15:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Sat, Jul 18, 2020 at 03:15:32PM -0500, Justin Pryzby wrote:\n> Still waiting for feedback from a committer.\n\nThis patch has been waiting for input from a committer on the approach I've\ntaken with the patches since March 10, so I'm planning to set to \"Ready\" - at\nleast ready for senior review.\n\n-- \nJustin\n\n\n",
"msg_date": "Tue, 8 Sep 2020 14:51:26 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Tue, Sep 08, 2020 at 02:51:26PM -0500, Justin Pryzby wrote:\n> On Sat, Jul 18, 2020 at 03:15:32PM -0500, Justin Pryzby wrote:\n> > Still waiting for feedback from a committer.\n> \n> This patch has been waiting for input from a committer on the approach I've\n> taken with the patches since March 10, so I'm planning to set to \"Ready\" - at\n> least ready for senior review.\n\n@cfbot: rebased",
"msg_date": "Wed, 28 Oct 2020 14:34:02 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Wed, Oct 28, 2020 at 02:34:02PM -0500, Justin Pryzby wrote:\n> On Tue, Sep 08, 2020 at 02:51:26PM -0500, Justin Pryzby wrote:\n> > On Sat, Jul 18, 2020 at 03:15:32PM -0500, Justin Pryzby wrote:\n> > > Still waiting for feedback from a committer.\n> > \n> > This patch has been waiting for input from a committer on the approach I've\n> > taken with the patches since March 10, so I'm planning to set to \"Ready\" - at\n> > least ready for senior review.\n> \n> @cfbot: rebased\n\nRebased on e152506adef4bc503ea7b8ebb4fedc0b8eebda81\n\n-- \nJustin",
"msg_date": "Thu, 5 Nov 2020 07:51:57 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n>> This patch has been waiting for input from a committer on the approach I've\n>> taken with the patches since March 10, so I'm planning to set to \"Ready\" - at\n>> least ready for senior review.\n\nI took a quick look through this. This is just MHO, of course:\n\n* I don't think it's okay to change the existing signatures of\npg_ls_logdir() et al. Even if you can make an argument that it's\nnot too harmful to add more output columns, replacing pg_stat_file's\nisdir output with something of a different name and datatype is most\nsurely not OK --- there is no possible way that doesn't break existing\nuser queries.\n\nI think possibly a more acceptable approach is to leave these functions\nalone but add documentation explaining how to get the additional info.\nYou could say things along the lines of \"pg_ls_waldir() is the same as\npg_ls_dir_metadata('pg_wal') except for showing fewer columns.\"\n\n* I'm not very much on board with implementing pg_ls_dir_recurse()\nas a SQL function that depends on a WITH RECURSIVE construct.\nI do not think that's okay from either performance or security\nstandpoints. Surely it can't be hard to build a recursion capability\ninto the C code? We could then also debate whether this ought to be\na separate function at all, instead of something you invoke via an\nadditional boolean flag parameter to pg_ls_dir_metadata().\n\n* I'm fairly unimpressed with the testing approach, because it doesn't\nseem like you're getting very much coverage. It's hard to do better while\nstill having the totally-fixed output expected by our regular regression\ntest framework, but to me that just says not to test these functions in\nthat framework. I'd consider ripping all of that out in favor of a\nTAP test.\n\nWhile I didn't read the C code in any detail, a couple of things stood\nout to me:\n\n* I noticed that you did s/stat/lstat/. That's fine on Unix systems,\nbut it won't have any effect on Windows systems (cf bed90759f),\nwhich means that we'll have to document a platform-specific behavioral\ndifference. Do we want to go there? Maybe this patch needs to wait\non somebody fixing our lack of real lstat() on Windows. (I assume BTW\nthat this means the WIN32 code in get_file_type() is unreachable.)\n\n* This bit:\n\n+\t\t/* Skip dot dirs? */\n+\t\tif (flags & LS_DIR_SKIP_DOT_DIRS &&\n+\t\t\t(strcmp(de->d_name, \".\") == 0 ||\n+\t\t\t strcmp(de->d_name, \"..\") == 0))\n+\t\t\tcontinue;\n+\n+\t\t/* Skip hidden files? */\n+\t\tif (flags & LS_DIR_SKIP_HIDDEN &&\n+\t\t\tde->d_name[0] == '.')\n \t\t\tcontinue;\n\ndoesn't seem to have thought very carefully about the interaction\nof those two flags, ie it seems like LS_DIR_SKIP_HIDDEN effectively\nimplies LS_DIR_SKIP_DOT_DIRS. Do we want that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 16:14:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> >> This patch has been waiting for input from a committer on the approach I've\n> >> taken with the patches since March 10, so I'm planning to set to \"Ready\" - at\n> >> least ready for senior review.\n> \n> I took a quick look through this. This is just MHO, of course:\n> \n> * I don't think it's okay to change the existing signatures of\n> pg_ls_logdir() et al. Even if you can make an argument that it's\n> not too harmful to add more output columns, replacing pg_stat_file's\n> isdir output with something of a different name and datatype is most\n> surely not OK --- there is no possible way that doesn't break existing\n> user queries.\n\nI disagree that we need to stress over this- we pretty routinely change\nthe signature of various catalogs and functions and anyone using these\nis already of the understanding that we are free to make such changes\nbetween major versions. If anything, we should be strongly discouraging\nthe notion of \"don't break user queries\" when it comes to administrative\nand monitoring functions like these because, otherwise, we end up with\nthings like the mess that is pg_start/stop_backup() (and just contrast\nthat to what we did to recovery.conf when thinking about \"well, do we\nneed to 'deprecate' or keep around the old stuff so we don't break\nthings for users who use these functions?\" or the changes made in v10,\nneither of which have produced much in the way of complaints).\n\nLet's focus on working towards cleaner APIs and functions, accepting a\nbreak when it makes sense to, which seems to be the case with this patch\n(though I agree about using a TAP test suite and about performing the\ndirectory recursion in C instead), and not pull forward cruft that we\nthen are telling ourselves we have to maintain compatibility of\nindefinitely and at the expense of sensible APIs.\n\nThanks,\n\nStephen",
"msg_date": "Mon, 23 Nov 2020 18:00:31 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n>> I took a quick look through this. This is just MHO, of course:\n>> \n>> * I don't think it's okay to change the existing signatures of\n>> pg_ls_logdir() et al.\n\n> I disagree that we need to stress over this- we pretty routinely change\n> the signature of various catalogs and functions and anyone using these\n> is already of the understanding that we are free to make such changes\n> between major versions.\n\nWell, like I said, just MHO. Anybody else want to weigh in?\n\nI'm mostly concerned about removing the isdir output of pg_stat_file().\nMaybe we could compromise to the extent of keeping that, allowing it\nto be partially duplicative of a file-type-code output column.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 23 Nov 2020 18:06:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> >> I took a quick look through this. This is just MHO, of course:\n> >> \n> >> * I don't think it's okay to change the existing signatures of\n> >> pg_ls_logdir() et al.\n> \n> > I disagree that we need to stress over this- we pretty routinely change\n> > the signature of various catalogs and functions and anyone using these\n> > is already of the understanding that we are free to make such changes\n> > between major versions.\n> \n> Well, like I said, just MHO. Anybody else want to weigh in?\n> \n> I'm mostly concerned about removing the isdir output of pg_stat_file().\n> Maybe we could compromise to the extent of keeping that, allowing it\n> to be partially duplicative of a file-type-code output column.\n\nI don't have any particular issue with keeping isdir as a convenience\ncolumn. I agree it'll now be a bit duplicative but that seems alright.\n\nThanks,\n\nStephen",
"msg_date": "Tue, 24 Nov 2020 11:53:22 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 04:14:18PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> >> This patch has been waiting for input from a committer on the approach I've\n> >> taken with the patches since March 10, so I'm planning to set to \"Ready\" - at\n> >> least ready for senior review.\n> \n> I took a quick look through this. This is just MHO, of course:\n> \n> * I don't think it's okay to change the existing signatures of\n> pg_ls_logdir() et al. Even if you can make an argument that it's\n> not too harmful to add more output columns, replacing pg_stat_file's\n> isdir output with something of a different name and datatype is most\n> surely not OK --- there is no possible way that doesn't break existing\n> user queries.\n> \n> I think possibly a more acceptable approach is to leave these functions\n> alone but add documentation explaining how to get the additional info.\n> You could say things along the lines of \"pg_ls_waldir() is the same as\n> pg_ls_dir_metadata('pg_wal') except for showing fewer columns.\"\n> \n> * I'm not very much on board with implementing pg_ls_dir_recurse()\n> as a SQL function that depends on a WITH RECURSIVE construct.\n> I do not think that's okay from either performance or security\n> standpoints. Surely it can't be hard to build a recursion capability\n\nThanks. WITH RECURSIVE was the \"new approach\" I took early this year. Of\ncourse we can recurse in C, now that I know (how) to use the tuplestore.\nWorking on that patch was how I ran into the \"LIMIT 1\" SRF bug.\n\nI don't see how security is relevant, though, since someone can run the\nWITH query directly. The function just needs to be restricted to\nsuperusers, same as pg_ls_dir().\n\nAnyway, I've re-ordered commits so this is the last patch, since earlier commits\ndon't need to depend on it. I don't think it's even essential to provide a\nrecursive function (anyone could write the CTE), so long as we don't hide dirs\nand show isdir or type.\n\nI implemented it first as a separate function and then as an optional argument\nto pg_ls_dir_files(). If it's implemented as an optional \"mode\" of an existing\nfunction, there's the constraint that returning a \"path\" argument has to be\nafter all other arguments (the ones that are useful without recursion) or else\nit messes up other functions (like pg_ls_waldir()) that also call\npg_ls_dir_files().\n\n> doesn't seem to have thought very carefully about the interaction\n> of those two flags, ie it seems like LS_DIR_SKIP_HIDDEN effectively\n> implies LS_DIR_SKIP_DOT_DIRS. Do we want that?\n\nYes it's implied. Those options exist to support the pre-existing behavior.\npg_ls_dir can optionally show dotdirs, but pg_ls_*dir skip all hidden files\n(which is documented since 8b6d94cf6). I'm happy to implement something else\nif a different behavior is desirable.\n\n-- \nJustin",
"msg_date": "Sun, 29 Nov 2020 11:21:15 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n[ v24-0001-Document-historic-behavior-of-links-to-directori.patch ]\n\nThe cfbot is unhappy with one of the test cases you added:\n\n6245@@ -259,9 +259,11 @@\n6246 select path, filename, type from pg_ls_dir_metadata('.', true, false, true) where path!~'[0-9]|pg_internal.init|global.tmp' order by 1;\n6247 path | filename | type \n6248 ----------------------------------+-----------------------+------\n6249+ PG_VERSION | PG_VERSION | -\n6250 base | base | d\n6251 base/pgsql_tmp | pgsql_tmp | d\n6252 global | global | d\n6253+ global/config_exec_params | config_exec_params | -\n6254 global/pg_control | pg_control | -\n6255 global/pg_filenode.map | pg_filenode.map | -\n6256 pg_commit_ts | pg_commit_ts | d\n6257@@ -285,7 +287,6 @@\n6258 pg_subtrans | pg_subtrans | d\n6259 pg_tblspc | pg_tblspc | d\n6260 pg_twophase | pg_twophase | d\n6261- PG_VERSION | PG_VERSION | -\n6262 pg_wal | pg_wal | d\n6263 pg_wal/archive_status | archive_status | d\n6264 pg_xact | pg_xact | d\n6265@@ -293,7 +294,7 @@\n6266 postgresql.conf | postgresql.conf | -\n6267 postmaster.opts | postmaster.opts | -\n6268 postmaster.pid | postmaster.pid | -\n6269-(34 rows)\n6270+(35 rows)\n\nThis shows that (a) the test is sensitive to prevailing collation and\n(b) it's not filtering out enough temporary files. Even if those things\nwere fixed, though, the test would break every time we added/removed\nsome PGDATA substructure. Worse, it'd also break if say somebody had\nedited postgresql.conf and left an editor backup file behind, or when\nrunning in an installation where the configuration files are someplace\nelse. I think this is way too fragile to be acceptable.\n\nMaybe it could be salvaged by reversing the sense of the WHERE condition\nso that instead of trying to blacklist stuff, you whitelist just a small\nnumber of files that should certainly be there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 04 Dec 2020 12:23:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Fri, Dec 04, 2020 at 12:23:23PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> [ v24-0001-Document-historic-behavior-of-links-to-directori.patch ]\n> \n> The cfbot is unhappy with one of the test cases you added:\n\n> Maybe it could be salvaged by reversing the sense of the WHERE condition\n> so that instead of trying to blacklist stuff, you whitelist just a small\n> number of files that should certainly be there.\n\nYes, I had noticed this one.\n\n-- \nJustin",
"msg_date": "Wed, 9 Dec 2020 10:37:43 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Nov 23, 2020 at 04:14:18PM -0500, Tom Lane wrote:\n> * I don't think it's okay to change the existing signatures of\n> pg_ls_logdir() et al. Even if you can make an argument that it's\n> not too harmful to add more output columns, replacing pg_stat_file's\n> isdir output with something of a different name and datatype is most\n> surely not OK --- there is no possible way that doesn't break existing\n> user queries.\n> \n> I think possibly a more acceptable approach is to leave these functions\n> alone but add documentation explaining how to get the additional info.\n> You could say things along the lines of \"pg_ls_waldir() is the same as\n> pg_ls_dir_metadata('pg_wal') except for showing fewer columns.\"\n\nOn Mon, Nov 23, 2020 at 06:06:19PM -0500, Tom Lane wrote:\n> I'm mostly concerned about removing the isdir output of pg_stat_file().\n> Maybe we could compromise to the extent of keeping that, allowing it\n> to be partially duplicative of a file-type-code output column.\n\nOn Tue, Nov 24, 2020 at 11:53:22AM -0500, Stephen Frost wrote:\n> I don't have any particular issue with keeping isdir as a convenience\n> column. 
I agree it'll now be a bit duplicative but that seems alright.\n\nMaybe we should do what Tom said, and leave pg_ls_* unchanged, but also mark\nthem as deprecated in favour of:\n| pg_ls_dir_metadata(dir), dir={'pg_wal/archive_status', 'log', 'pg_wal', 'base/pgsql_tmp'}\n\nHowever, pg_ls_tmpdir is special since it handles tablespace tmpdirs, which it\nseems is not trivial to get from sql:\n\n+SELECT * FROM (SELECT DISTINCT COALESCE(NULLIF(pg_tablespace_location(b.oid),'')||suffix, 'base/pgsql_tmp') AS dir\n+FROM pg_tablespace b, pg_control_system() pcs,\n+LATERAL format('/PG_%s_%s', left(current_setting('server_version_num'), 2), pcs.catalog_version_no) AS suffix) AS dir,\n+LATERAL pg_ls_dir_recurse(dir) AS a;\n\nFor context, the line of reasoning that led me to this patch series was\nsomething like this:\n\n0) Why can't I list shared tempfiles (dirs) using pg_ls_tmpdir() ?\n1) Implement recursion for pg_ls_tmpdir();\n2) Eventually realize that it's silly to implement a function to recurse into\n one particular directory when no general feature exists;\n3) Implement generic facility;\n\n> * I noticed that you did s/stat/lstat/. That's fine on Unix systems,\n> but it won't have any effect on Windows systems (cf bed90759f),\n> which means that we'll have to document a platform-specific behavioral\n> difference. Do we want to go there?\n>\n> Maybe this patch needs to wait on somebody fixing our lack of real lstat() on Windows.\n\nI think only the \"top\" patches depend on lstat (for the \"type\" column and\nrecursion, to avoid loops). The initial patches are independently useful, and\nresolve the original issue of hiding tmpdirs. I've rebased and re-arranged the\npatches to reflect this.\n\n-- \nJustin",
"msg_date": "Wed, 23 Dec 2020 13:17:10 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Greetings,\n\n* Justin Pryzby (pryzby@telsasoft.com) wrote:\n> On Mon, Nov 23, 2020 at 04:14:18PM -0500, Tom Lane wrote:\n> > * I don't think it's okay to change the existing signatures of\n> > pg_ls_logdir() et al. Even if you can make an argument that it's\n> > not too harmful to add more output columns, replacing pg_stat_file's\n> > isdir output with something of a different name and datatype is most\n> > surely not OK --- there is no possible way that doesn't break existing\n> > user queries.\n> > \n> > I think possibly a more acceptable approach is to leave these functions\n> > alone but add documentation explaining how to get the additional info.\n> > You could say things along the lines of \"pg_ls_waldir() is the same as\n> > pg_ls_dir_metadata('pg_wal') except for showing fewer columns.\"\n> \n> On Mon, Nov 23, 2020 at 06:06:19PM -0500, Tom Lane wrote:\n> > I'm mostly concerned about removing the isdir output of pg_stat_file().\n> > Maybe we could compromise to the extent of keeping that, allowing it\n> > to be partially duplicative of a file-type-code output column.\n> \n> On Tue, Nov 24, 2020 at 11:53:22AM -0500, Stephen Frost wrote:\n> > I don't have any particular issue with keeping isdir as a convenience\n> > column. I agree it'll now be a bit duplicative but that seems alright.\n> \n> Maybe we should do what Tom said, and leave pg_ls_* unchanged, but also mark\n> them as deprecated in favour of:\n> | pg_ls_dir_metadata(dir), dir={'pg_wal/archive_status', 'log', 'pg_wal', 'base/pgsql_tmp'}\n\nHaven't really time to review the patches here in detail right now\n(maybe next month), but in general, I dislike marking things as\ndeprecated. If we don't want to change them and we're happy to continue\nsupporting them as-is (which is what 'deprecated' really means), then we\ncan just do so- nothing stops us from that. 
If we don't think the\ncurrent API makes sense, for whatever reason, we can just change that-\nthere's no need for a 'deprecation period', as we already have major\nversions and support each major version for 5 years.\n\nI haven't particularly strong feelings one way or the other regarding\nthese particular functions. If you asked which way I leaned, I'd say\nthat I'd rather redefine the functions to make more sense and to be easy\nto use for people who would like to use them. I wouldn't object to new\nfunctions to provide that either though. I don't think there's all that\nmuch code or that it's changed often enough to be a big burden to keep\nboth, but that's more feeling than anything based in actual research at\nthis point.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 23 Dec 2020 14:27:32 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On 12/23/20 2:27 PM, Stephen Frost wrote:\n> * Justin Pryzby (pryzby@telsasoft.com) wrote:\n>> On Mon, Nov 23, 2020 at 04:14:18PM -0500, Tom Lane wrote:\n>>> * I don't think it's okay to change the existing signatures of\n>>> pg_ls_logdir() et al. Even if you can make an argument that it's\n>>> not too harmful to add more output columns, replacing pg_stat_file's\n>>> isdir output with something of a different name and datatype is most\n>>> surely not OK --- there is no possible way that doesn't break existing\n>>> user queries.\n>>>\n>>> I think possibly a more acceptable approach is to leave these functions\n>>> alone but add documentation explaining how to get the additional info.\n>>> You could say things along the lines of \"pg_ls_waldir() is the same as\n>>> pg_ls_dir_metadata('pg_wal') except for showing fewer columns.\"\n>>\n>> On Mon, Nov 23, 2020 at 06:06:19PM -0500, Tom Lane wrote:\n>>> I'm mostly concerned about removing the isdir output of pg_stat_file().\n>>> Maybe we could compromise to the extent of keeping that, allowing it\n>>> to be partially duplicative of a file-type-code output column.\n>>\n>> On Tue, Nov 24, 2020 at 11:53:22AM -0500, Stephen Frost wrote:\n>>> I don't have any particular issue with keeping isdir as a convenience\n>>> column. I agree it'll now be a bit duplicative but that seems alright.\n>>\n>> Maybe we should do what Tom said, and leave pg_ls_* unchanged, but also mark\n>> them as deprecated in favour of:\n>> | pg_ls_dir_metadata(dir), dir={'pg_wal/archive_status', 'log', 'pg_wal', 'base/pgsql_tmp'}\n> \n> Haven't really time to review the patches here in detail right now\n> (maybe next month), but in general, I dislike marking things as\n> deprecated. \nStephen, are you still planning to review these patches?\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Mon, 15 Mar 2021 08:47:17 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Wed, Dec 23, 2020 at 01:17:10PM -0600, Justin Pryzby wrote:\n> On Mon, Nov 23, 2020 at 04:14:18PM -0500, Tom Lane wrote:\n> > * I noticed that you did s/stat/lstat/. That's fine on Unix systems,\n> > but it won't have any effect on Windows systems (cf bed90759f),\n> > which means that we'll have to document a platform-specific behavioral\n> > difference. Do we want to go there?\n> >\n> > Maybe this patch needs to wait on somebody fixing our lack of real lstat() on Windows.\n> \n> I think only the \"top\" patches depend on lstat (for the \"type\" column and\n> recursion, to avoid loops). The initial patches are independently useful, and\n> resolve the original issue of hiding tmpdirs. I've rebased and re-arranged the\n> patches to reflect this.\n\nI said that, but then failed to attach the re-arranged patches.\nNow I also renumbered OIDs following best practice.\n\nThe first handful of patches address the original issue, and I think could be\n\"ready\":\n\n$ git log --oneline origin..pg-ls-dir-new |tac\n... Document historic behavior of links to directories..\n... Add tests on pg_ls_dir before changing it\n... Add pg_ls_dir_metadata to list a dir with file metadata..\n... pg_ls_tmpdir to show directories and \"isdir\" argument..\n... pg_ls_*dir to show directories and \"isdir\" column..\n\nThese others are optional:\n... pg_ls_logdir to ignore error if initial/top dir is missing..\n... pg_ls_*dir to return all the metadata from pg_stat_file..\n\n..and these maybe requires more work for lstat on windows:\n... pg_stat_file and pg_ls_dir_* to use lstat()..\n... pg_ls_*/pg_stat_file to show file *type*..\n... Preserve pg_stat_file() isdir..\n... Add recursion option in pg_ls_dir_files..\n\n-- \nJustin",
"msg_date": "Tue, 6 Apr 2021 11:01:31 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Breaking with tradition, the previous patch included one too *few* changes, and\nfailed to resolve the OID collisions.",
"msg_date": "Thu, 8 Apr 2021 23:14:32 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Tue, Apr 06, 2021 at 11:01:31AM -0500, Justin Pryzby wrote:\n> On Wed, Dec 23, 2020 at 01:17:10PM -0600, Justin Pryzby wrote:\n> > On Mon, Nov 23, 2020 at 04:14:18PM -0500, Tom Lane wrote:\n> > > * I noticed that you did s/stat/lstat/. That's fine on Unix systems,\n> > > but it won't have any effect on Windows systems (cf bed90759f),\n> > > which means that we'll have to document a platform-specific behavioral\n> > > difference. Do we want to go there?\n> > >\n> > > Maybe this patch needs to wait on somebody fixing our lack of real lstat() on Windows.\n> > \n> > I think only the \"top\" patches depend on lstat (for the \"type\" column and\n> > recursion, to avoid loops). The initial patches are independently useful, and\n> > resolve the original issue of hiding tmpdirs. I've rebased and re-arranged the\n> > patches to reflect this.\n> \n> I said that, but then failed to attach the re-arranged patches.\n> Now I also renumbered OIDs following best practice.\n> \n> The first handful of patches address the original issue, and I think could be\n> \"ready\":\n> \n> $ git log --oneline origin..pg-ls-dir-new |tac\n> ... Document historic behavior of links to directories..\n> ... Add tests on pg_ls_dir before changing it\n> ... Add pg_ls_dir_metadata to list a dir with file metadata..\n> ... pg_ls_tmpdir to show directories and \"isdir\" argument..\n> ... pg_ls_*dir to show directories and \"isdir\" column..\n> \n> These others are optional:\n> ... pg_ls_logdir to ignore error if initial/top dir is missing..\n> ... pg_ls_*dir to return all the metadata from pg_stat_file..\n> \n> ..and these maybe requires more work for lstat on windows:\n> ... pg_stat_file and pg_ls_dir_* to use lstat()..\n> ... pg_ls_*/pg_stat_file to show file *type*..\n> ... Preserve pg_stat_file() isdir..\n> ... Add recursion option in pg_ls_dir_files..\n\n@cfbot: rebased",
"msg_date": "Fri, 2 Jul 2021 14:16:37 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "In an attempt to get this patch set off the ground again, I took a\r\nlook at the first 5 patches.\r\n\r\n0001: This one is a very small documentation update for pg_stat_file\r\nto point out that isdir will be true for symbolic links to\r\ndirectories. Given this is true, I think the patch looks good.\r\n\r\n0002: This patch adds some very basic testing for pg_ls_dir(). The\r\nonly comment that I have for this one is that I might also check\r\nwhether '..' is included in the results of the include_dot_dirs tests.\r\nThe docs specifically note that include_dot_dirs indicates whether\r\nboth '.' and '..' are included, so IMO we might as well verify that.\r\n\r\n0003: This one didn't apply cleanly until I used 'git apply -3', so it\r\nlikely needs a rebase. This patch introduces the pg_ls_dir_metadata()\r\nfunction, which appears to just be pg_ls_dir() with some additional\r\ncolumns for the size and modification time. My initial reaction to\r\nthis one is that we should just add those columns to pg_ls_dir() to\r\nmatch all the other pg_ls_* functions (and not bother attempting to\r\nmaintain historic behavior for things like hidden and special files).\r\nI believe there is some existing discussion on this point upthread, so\r\nperhaps there is a good reason to make a new function. In any case, I\r\nlike the idea of having pg_ls_dir() use pg_ls_dir_files() internally\r\nlike the rest of the pg_ls_* functions.\r\n\r\n0004: This one changes pg_ls_tmpdir to show directories as well. I\r\nthink this is a reasonable change. On it's own, the patch looks\r\nalright, although it might look different if my suggestions for 0003\r\nwere followed.\r\n\r\n0005: This one adjusts the rest of the pg_ls_* functions to show\r\ndirectories. Again, I think this is a reasonable change. As noted in\r\n0003, I think it'd be alright just to have all the pg_ls_* functions\r\nshow special and hidden files as well. 
It's simple enough already to\r\nfilter out files that start with '.' if necessary, and I'm not sure\r\nthere's any strong reason for leaving out special files. If special\r\nfiles are included, perhaps isdir should be changed to indicate the\r\nfile type instead of just whether it is a directory. (Reading ahead,\r\nit looks like this is what 0009 might do.)\r\n\r\nI haven't looked at the following patches too much, but I'm getting\r\nthe idea that they might address a lot of the feedback above and that\r\nthe first bunch of patches are more like staging patches that add the\r\nabilities without changing the behavior. I wonder if just going\r\nstraight to the end goal behavior might simplify the patch set a bit.\r\nI can't say I feel too strongly about this, but I figure I'd at least\r\nshare my thoughts.\r\n\r\nNathan\r\n\r\n",
"msg_date": "Mon, 22 Nov 2021 19:17:01 +0000",
"msg_from": "\"Bossart, Nathan\" <bossartn@amazon.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Nov 22, 2021 at 07:17:01PM +0000, Bossart, Nathan wrote:\n> In an attempt to get this patch set off the ground again, I took a\n> look at the first 5 patches.\n\n> I haven't looked at the following patches too much, but I'm getting\n> the idea that they might address a lot of the feedback above and that\n> the first bunch of patches are more like staging patches that add the\n> abilities without changing the behavior. I wonder if just going\n> straight to the end goal behavior might simplify the patch set a bit.\n> I can't say I feel too strongly about this, but I figure I'd at least\n> share my thoughts.\n\nThanks for looking.\n\nThe patches are separate since the early patches are the most necessary, least\ndisputable parts, to allow the possibility of (say) chaging pg_ls_tmpdir() without\nchanging other functions, since pg_ls_tmpdir was was original motivation behind\nthis whole thread.\n\nIn a recent thread, Bharath Rupireddy added pg_ls functions for the logical\ndirs, but expressed a preference not to add the metadata columns. I still\nthink that at least \"isdir\" should be added to all the \"ls\" functions, since\nit's easy to SELECT the columns you want, and a bit of a pain to write the\ncorresponding LATERAL query: 'dir' AS dir, pg_ls_dir(dir) AS ls,\npg_stat_file(ls) AS st. 
I think it would be strange if pg_ls_tmpdir() were to\nreturn a different set of columns than the other functions, even though admins\nor extensions might have created dirs or other files in those directories.\n\nTom pointed out that we don't have a working lstat() for windows, so then it\nseems like we're not yet ready to show file \"types\" (we'd show the type of the\nlink target, which is sometimes what's wanted, but not usually what \"ls\" would\nshow), nor ready to implement recurse.\n\nAs before:\n\nOn Tue, Apr 06, 2021 at 11:01:31AM -0500, Justin Pryzby wrote:\n> The first handful of patches address the original issue, and I think could be\n> \"ready\":\n> \n> $ git log --oneline origin..pg-ls-dir-new |tac\n> ... Document historic behavior of links to directories..\n> ... Add tests on pg_ls_dir before changing it\n> ... Add pg_ls_dir_metadata to list a dir with file metadata..\n> ... pg_ls_tmpdir to show directories and \"isdir\" argument..\n> ... pg_ls_*dir to show directories and \"isdir\" column..\n> \n> These others are optional:\n> ... pg_ls_logdir to ignore error if initial/top dir is missing..\n> ... pg_ls_*dir to return all the metadata from pg_stat_file..\n> \n> ..and these maybe requires more work for lstat on windows:\n> ... pg_stat_file and pg_ls_dir_* to use lstat()..\n> ... pg_ls_*/pg_stat_file to show file *type*..\n> ... Preserve pg_stat_file() isdir..\n> ... Add recursion option in pg_ls_dir_files..\n\nrebased on 1922d7c6e1a74178bd2f1d5aa5a6ab921b3fcd34\n\n-- \nJustin",
"msg_date": "Tue, 23 Nov 2021 18:04:44 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nHello Justin,\n\nIt seems that the v31 patch does not apply anymore:\n\n postgresql> git apply ~/v31-0001-Document-historic-behavior-of-links-to-directori.patch\n error: patch failed: doc/src/sgml/func.sgml:27410\n error: doc/src/sgml/func.sgml: patch does not apply\n\n-- \nFabien.\n\n\n",
"msg_date": "Thu, 23 Dec 2021 09:14:18 -0400 (AST)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Thu, Dec 23, 2021 at 09:14:18AM -0400, Fabien COELHO wrote:\n> It seems that the v31 patch does not apply anymore:\n> \n> postgresql> git apply ~/v31-0001-Document-historic-behavior-of-links-to-directori.patch\n> error: patch failed: doc/src/sgml/func.sgml:27410\n> error: doc/src/sgml/func.sgml: patch does not apply\n\nThanks for continuing to follow this patch ;)\n\nI fixed a conflict with output/tablespace from d1029bb5a et seq.\nI'm not sure why you got a conflict with 0001, though.\n\nI think the 2nd half of the patches are still waiting for fixes to lstat() on\nwindows.\n\nYou complained before that there were too many patches, and I can see how it\nmight be a pain depending on your workflow. But the division is deliberate.\nDealing with patchsets is easy for me: I use \"mutt\" and for each patch\nattachment, I type \"|git am\" (or |git am -3).",
"msg_date": "Thu, 23 Dec 2021 11:36:31 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Hello Justin,\n\nHappy new year!\n\n> I think the 2nd half of the patches are still waiting for fixes to lstat() on\n> windows.\n\nNot your problem?\n\nHere is my review about v32:\n\nAll patches apply cleanly.\n\n# part 01\n\nOne liner doc improvement to tell that creation time is only available on windows.\nIt is indeed not available on Linux.\n\nOK.\n\n# part 02\n\nAdd tests for various options on pg_ls_dir, and for pg_stat_file, which were not\nexercised before. \"make check\" is ok.\n\nOK.\n\n# part 03\n\nThis patch adds a new pg_ls_dir_metadata. Internally, this is an extension of\npg_ls_dir_files function which is used by other pg_ls functions. Doc ok.\n\nAbout the code:\n\nISTM that the \"if (1) { if (2) continue; } else if(3) { if (4) continue; }\" structure\"\nmay be simplified to \"if (1 && 2) continue; if (3 && 4) continue;\", at least if\nIS_DIR and IS_REG are incompatible? Otherwise, at least \"else if (3 & 4) continue\"?\n\nThe ifdef WIN32 (which probably detects windows 64 bits…) overwrites values[3]. ISTM\nit could be reordered so that there is no overwrite, and simpler single assignements.\n\n #ifndef WIN32\n v = ...;\n #else\n v = ... ? ... : ...;\n #endif\n\nNew tests are added which check that the result columns are configured as required,\nincluding error cases.\n\n\"make check\" is ok.\n\nOK.\n\n# part 04\n\nAdd a new \"isdir\" column to \"pg_ls_tmpdir\" output. This is a small behavioral\nchange. I'm ok with it, however I'm unsure why we would not jump directly to\nthe \"type\" char column done later in the patch series. ISTM all such functions\nshould be extended the same way for better homogeneity? That would also impact\n\"waldir\", \"archive_status\", \"logical_*\", \"replslot\" variants. \"make check\" ok.\n\nOK.\n\n# part 05\n\nThis patch applies my previous advice:-) ISTM that parts 4 and 5 should be one\nsingle patch. The test changes show that only waldir has a test. 
Would it be\npossible to add minimal tests to other variants as well? \"make check\" ok.\n\nI'd consider adding such tests with part 02.\n\nOK.\n\n# part 06\n\nThis part extends and adds a test for pg_ls_logdir. ISTM that it should\nbe merged with the previous patches. \"make check\" is ok.\n\nOK.\n\n# part 07\n\nThis part extends pg_stat_file with more date information.\n\nISTM that the documentation should be clear about windows vs unix/cygwin specific\ndata provided (change/creation).\n\nThe code adds a new value_from_stat function to avoid code duplication.\nFine with me.\n\nAll pg_ls_*dir functions are impacted. Fine with me.\n\n\"make check\" is ok.\n\nOK.\n\n# part 08\n\nThis part substitutes lstat for stat. Fine with me. \"make check\" is ok.\nI guess that lstat does something under windows even if the concept of\na link is somehow different there. Maybe the doc should say so somewhere?\n\nOK.\n\n# part 09\n\nThis part switches the added \"isdir\" to a \"type\" column. \"make check\" is ok.\nThis is a definite improvement.\n\nOK.\n\n# part 10\n\nThis part adds a redundant \"isdir\" column. I do not see the point.\n\"make check\" is ok.\n\nNOT OK.\n\n# part 11\n\nThis part adds a recurse option. Why not. However, the true value does not\nseem to be tested? \"make check\" is ok.\n\nMy opinion is unclear.\n\nOverall, ignoring part 10, this makes a definite improvement to postgres ls\ncapabilities. I do not see any reason not to add those.\n\n-- \nFabien.",
"msg_date": "Sun, 2 Jan 2022 13:07:29 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\n> Here is my review about v32:\n\nI forgot to tell that doc generation for the cumulated changes also works.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 2 Jan 2022 14:55:04 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Hi,\n\nOn Sun, Jan 02, 2022 at 02:55:04PM +0100, Fabien COELHO wrote:\n> \n> > Here is my review about v32:\n> \n> I forgot to tell that doc generation for the cumulated changes also works.\n\nUnfortunately the patchset doesn't apply anymore:\n\nhttp://cfbot.cputube.org/patch_36_2377.log\n=== Applying patches on top of PostgreSQL commit ID 4483b2cf29bfe8091b721756928ccbe31c5c8e14 ===\n=== applying patch ./v32-0003-Add-pg_ls_dir_metadata-to-list-a-dir-with-file-m.patch\n[...]\npatching file src/test/regress/expected/misc_functions.out\nHunk #1 succeeded at 274 (offset 7 lines).\npatching file src/test/regress/expected/tablespace.out\nHunk #1 FAILED at 16.\n1 out of 1 hunk FAILED -- saving rejects to file src/test/regress/expected/tablespace.out.rej\n\nJustin, could you send a rebased version?\n\n\n",
"msg_date": "Tue, 18 Jan 2022 15:00:47 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Sun, Jan 02, 2022 at 01:07:29PM +0100, Fabien COELHO wrote:\n> One liner doc improvement to tell that creation time is only available on windows.\n> It is indeed not available on Linux.\n\nThe change is about the \"isflag\" flag, not creation time.\n\n Returns a record containing the file's size, last access time stamp,\n last modification time stamp, last file status change time stamp (Unix\n platforms only), file creation time stamp (Windows only), and a flag\n- indicating if it is a directory.\n+ indicating if it is a directory (or a symbolic link to a directory).\n\n> # part 03\n> ISTM that the \"if (1) { if (2) continue; } else if(3) { if (4) continue; }\" structure\"\n> may be simplified to \"if (1 && 2) continue; if (3 && 4) continue;\", at least if\n> IS_DIR and IS_REG are incompatible?\n\nNo, what you suggested is not the same;\n\nWe talked about this before:\nhttps://www.postgresql.org/message-id/20200315212729.GC26184@telsasoft.com\n\n> Otherwise, at least \"else if (3 & 4) continue\"?\n\nI could write the *final* \"else if\" like that, but then it would be different\nfrom the previous case. Which would be confusing and prone to mistakes.\n\nIf I wrote it like this, I think it'd just provoke suggestions from someone\nelse to change it differently:\n\n /* Skip dirs or special files? */\n if (S_ISDIR(attrib.st_mode) && !(flags & LS_DIR_SKIP_DIRS))\n continue;\n if (!S_ISDIR(attrib.st_mode) && !S_ISREG(attrib.st_mode) && !(flags & LS_DIR_SKIP_SPECIAL)\n continue;\n\n...\n<< Why don't you use \"else if\" instead of \"if (a){} if (!a && b){}\" >>\n\nI'm going to leave it up to a committer.\n\n> The ifdef WIN32 (which probably detects windows 64 bits…) overwrites values[3]. ISTM\n> it could be reordered so that there is no overwrite, and simpler single assignements.\n> \n> #ifndef WIN32\n> v = ...;\n> #else\n> v = ... ? ... 
: ...;\n> #endif\n\nI changed this, but without using nested conditionals.\n\n> Add a new \"isdir\" column to \"pg_ls_tmpdir\" output. This is a small behavioral\n> change. I'm ok with it, however I'm unsure why we would not jump directly to\n> the \"type\" char column done later in the patch series.\n\nBecause that depends on lstat().\n\n> ISTM all such functions\n> should be extended the same way for better homogeneity? That would also impact\n> \"waldir\", \"archive_status\", \"logical_*\", \"replslot\" variants. \"make check\" ok.\n\nI agree that makes sense, however others have expressed the opposite opinion.\nhttps://www.postgresql.org/message-id/CALj2ACWtrt5EkHrY4WAZ4Cv42SidXAwpeQJU021bxaKpjmbGfA%40mail.gmail.com\n\nThe original motive for the patch was that pg_ls_tmpdir doesn't show shared\nfilesets. This fixes that essential problem without immediately dragging\neverything else along. I think it's more likely that a committer would merge\nthem both. But I don't know, and it's easy to combine patches if desired.\n\n> This patch applies my previous advice:-) ISTM that parts 4 and 5 should be one\n> single patch. The test changes show that only waldir has a test. Would it be\n> possible to add minimal tests to other variants as well? \"make check\" ok.\n\nI have added tests, although some are duplicative.\n\n> This part extends and adds a test for pg_ls_logdir. ISTM that it should\n> be merged with the previous patches. \"make check\" is ok.\n\nIt's separate to allow writing a separate commit message, since it does\nsomething unrelated to the other patches. 
What other patch would it be\nmerged with?\n| v32-0006-pg_ls_logdir-to-ignore-error-if-initial-top-dir-.patch \n\n> ISTM that the documentation should be clear about windows vs unix/cygwin specific\n> data provided (change/creation).\n\nI preferred to refer to pg_stat_file rather than repeat it for all 7 functions\ncurrently in v15 (and future functions added for new, toplevel dirs).\n\n> # part 11\n> \n> This part adds a recurse option. Why not. However, the true value does not\n> seem to be tested? \"make check\" is ok.\n\nWDYM the true value? It's tested like:\n\n+-- Exercise recursion\n+select path, filename, type from pg_ls_dir_metadata('.', true, false, true) where\n+path in ('base', 'base/pgsql_tmp', 'global', 'global/pg_control', 'global/pg_filenode.map', 'PG_VERSION', 'pg_multixact', 'pg_multixact/members', 'pg_multixact/offsets', 'pg_wal', 'pg_wal/archive_status')\n+-- (type='d' or path~'^(global/.*|PG_VERSION|postmaster\\.opts|postmaster\\.pid|pg_logical/replorigin_checkpoint)$') and filename!~'[0-9]'\n+order by path collate \"C\", filename collate \"C\";\n+          path          |    filename     | type \n+------------------------+-----------------+------\n+ PG_VERSION             | PG_VERSION      | -\n+ base                   | base            | d\n+ base/pgsql_tmp         | pgsql_tmp       | d\n...\n\n-- \nJustin",
"msg_date": "Tue, 25 Jan 2022 13:27:55 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Rebased over 9e9858389 (Michael may want to look at the tuplestore part?).\n\nFixing a comment typo.\n\nI also changed pg_ls_dir_recurse() to handle concurrent removal of a dir, which\nI noticed caused an infrequent failure on CI. However I'm not including that\nhere, since the 2nd half of the patch set isn't ready due to lstat() on\nwindows.",
"msg_date": "Wed, 9 Mar 2022 10:50:45 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nHello Justin,\n\nI hope to look at it over the week-end.\n\n-- \nFabien Coelho - CRI, MINES ParisTech\n\n\n",
"msg_date": "Thu, 10 Mar 2022 09:45:28 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "\nHello Justin,\n\nReview about v34, up from v32 which I reviewed in January. v34 is an \nupdated version of v32, without the part about lstat at the end of the \nseries.\n\nAll 7 patches apply cleanly.\n\n# part 01\n\nOne liner doc improvement about the directory flag.\n\nOK.\n\n# part 02\n\nAdd tests for various options on pg_ls_dir, and for pg_stat_file, which were not\nexercised before. \"make check\" is ok.\n\nOK.\n\n# part 03\n\nThis patch adds a new pg_ls_dir_metadata. Internally, this is an extension of\npg_ls_dir_files function which is used by other pg_ls functions. Doc ok.\n\nNew tests are added which check that the result columns are configured as required,\nincluding error cases.\n\n\"make check\" is ok.\n\nOK.\n\n# part 04\n\nAdd a new \"isdir\" column to \"pg_ls_tmpdir\" output. This is a small behavioral\nchange.\n\nI'm ok with that, however I must say that I'm still unsure why we would \nnot jump directly to a \"type\" char column. What is wrong with outputting \n'd' or '-' instead of true or false? This way, the interface need not \nchange if \"lstat\" is used later? ISTM that the interface issue should be \nsomehow independent of the implementation issue, and we should choose \ndirectly the right/best interface?\n\nIndependently, the documentation may be clearer about what happens to \n\"isdir\" when the file is a link? It may say that the behavior may change \nin the future?\n\nAbout homogeneity, I note that some people may be against adding \"isdir\" \nto other ls functions. I must say that I cannot see a good argument not to \ndo it, and that I hate dealing with systems which are not homogeneous \nbecause it creates surprises and some loss of time.\n\n\"make check\" ok.\n\nOK.\n\n# part 05\n\nAdd isdir to other ls functions. 
Doc is updated.\n\nSame as above, I'd prefer a char instead of a bool, as it is more extendable and\nfuture-proof.\n\nOK.\n\n# part 06\n\nThis part extends and adds a test for pg_ls_logdir.\n\"make check\" is ok.\n\nOK.\n\n# part 07\n\nThis part extends pg_stat_file with more date information.\n\n\"make check\" is ok.\n\nOK.\n\n# doc\n\nOverall doc generation is OK.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 12 Mar 2022 10:13:21 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Wed, Mar 09, 2022 at 10:50:45AM -0600, Justin Pryzby wrote:\n> Rebased over 9e9858389 (Michael may want to look at the tuplestore part?).\n\nAre you referring to the contents of 0003 here that changes the\nsemantics of pg_ls_dir_files() regarding its setup call?\n--\nMichael",
"msg_date": "Sun, 13 Mar 2022 09:45:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Sun, Mar 13, 2022 at 09:45:35AM +0900, Michael Paquier wrote:\n> On Wed, Mar 09, 2022 at 10:50:45AM -0600, Justin Pryzby wrote:\n> > Rebased over 9e9858389 (Michael may want to look at the tuplestore part?).\n> \n> Are you referring to the contents of 0003 here that changes the\n> semantics of pg_ls_dir_files() regarding its setup call?\n\nYes, as it has this:\n\n- SetSingleFuncCall(fcinfo, SRF_SINGLE_USE_EXPECTED); \n...\n- SetSingleFuncCall(fcinfo, 0); \n...\n+ if (flags & LS_DIR_METADATA) \n+ SetSingleFuncCall(fcinfo, 0); \n+ else \n+ SetSingleFuncCall(fcinfo, SRF_SINGLE_USE_EXPECTED); \n\n-- \nJustin\n\n\n",
"msg_date": "Sat, 12 Mar 2022 18:56:01 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Wed, Mar 09, 2022 at 10:50:45AM -0600, Justin Pryzby wrote:\n> I also changed pg_ls_dir_recurse() to handle concurrent removal of a dir, which\n> I noticed caused an infrequent failure on CI. However I'm not including that\n> here, since the 2nd half of the patch set seems isn't ready due to lstat() on\n> windows.\n\nlstat() has been a subject of many issues over the years with our\ninternal emulation and issues related to its concurrency, but we use\nit in various areas of the in-core code, so that does not sound like\nan issue to me. It depends on what you want to do with it in\ngenfile.c and which data you'd expect, in addition to the detection of\njunction points for WIN32, I guess. v34 has no references to\npg_ls_dir_recurse(), but that's a WITH RECURSIVE, so we would not\nreally need it, do we?\n\n@@ -27618,7 +27618,7 @@ SELECT convert_from(pg_read_binary_file('file_in_utf8.txt'), 'UTF8');\n Returns a record containing the file's size, last access time stamp,\n last modification time stamp, last file status change time stamp (Unix\n platforms only), file creation time stamp (Windows only), and a flag\n- indicating if it is a directory.\n+ indicating if it is a directory (or a symbolic link to a directory).\n </para>\n <para>\n This function is restricted to superusers by default, but other users\n\nThis is from 0001, and this addition in the documentation is not\ncompletely right. As pg_stat_file() uses stat() to get back the\ninformation of a file/directory, we'd just follow the link if\nspecifying one in the input argument. 
We could say instead, if we\nwere to improve the docs, that \"If filename is a link, this function\nreturns information about the file or directory the link refers to.\"\nI would put that as a different paragraph.\n\n+select * from pg_ls_archive_statusdir() limit 0;\n+ name | size | modification \n+------+------+--------------\n+(0 rows)\n\nFWIW, this one is fine as of ValidateXLOGDirectoryStructure() that\nwould make sure archive_status exists before any connection is\nattempted to the cluster.\n\n> +select * from pg_ls_logdir() limit 0;\n\nThis test on pg_ls_logdir() would fail if running installcheck on a\ncluster that has logging_collector disabled. So this cannot be\nincluded.\n\n+select * from pg_ls_logicalmapdir() limit 0;\n+select * from pg_ls_logicalsnapdir() limit 0;\n+select * from pg_ls_replslotdir('') limit 0;\n+select * from pg_ls_tmpdir() limit 0;\n+select * from pg_ls_waldir() limit 0;\n+select * from pg_stat_file('.') limit 0;\n\nThe rest of the patch set should be stable AFAIK, there are various\nsteps when running a checkpoint that makes sure that any of these\nexist, without caring about the value of wal_level.\n\n+ <para>\n+ For each file in the specified directory, list the file and its\n+ metadata.\n+ Restricted to superusers by default, but other users can be granted\n+ EXECUTE to run the function.\n+ </para></entry>\n\nWhat is metadata in this case? (I have read the code and know what\nyou mean, but folks only looking at the documentation may be puzzled\nby that). 
It could be cleaner to use the same tupledesc for any\ncallers of this function, and return NULL in cases these are not \nadapted.\n\n+ /* check the optional arguments */\n+ if (PG_NARGS() == 3)\n+ {\n+ if (!PG_ARGISNULL(1))\n+ {\n+ if (PG_GETARG_BOOL(1))\n+ flags |= LS_DIR_MISSING_OK;\n+ else\n+ flags &= ~LS_DIR_MISSING_OK;\n+ }\n+\n+ if (!PG_ARGISNULL(2))\n+ {\n+ if (PG_GETARG_BOOL(2))\n+ flags &= ~LS_DIR_SKIP_DOT_DIRS;\n+ else\n+ flags |= LS_DIR_SKIP_DOT_DIRS;\n+ }\n+ }\n\nThe subtle difference between the false and true code paths of those\narguments 1 and 2 had better be explained? The bit-wise operations\nare slightly different in both cases, so it is not clear which part\ndoes what, and what's the purpose of this logic.\n\n- SetSingleFuncCall(fcinfo, 0);\n+ /* isdir depends on metadata */\n+ Assert(!(flags&LS_DIR_ISDIR) || (flags&LS_DIR_METADATA));\n+ /* Unreasonable to show isdir and skip dirs */\n+ Assert(!(flags&LS_DIR_ISDIR) || !(flags&LS_DIR_SKIP_DIRS));\n\nIncorrect code format. Spaces required.\n\n+-- This tests the missing_ok parameter, which causes pg_ls_tmpdir to\nsucceed even if the tmpdir doesn't exist yet\n+-- The name='' condition is never true, so the function runs to\ncompletion but returns zero rows.\n+-- The query is written to ERROR if the tablespace doesn't exist,\nrather than silently failing to call pg_ls_tmpdir()\n+SELECT c.* FROM (SELECT oid FROM pg_tablespace b WHERE\nb.spcname='regress_tblspace' UNION SELECT 0 ORDER BY 1 DESC LIMIT 1)\nAS b , pg_ls_tmpdir(oid) AS c WHERE c.name='Does not exist';\n\nSo, here, we have a test that may not actually test what we want to\ntest.\n\nHmm. I am not convinced that we need a new set of SQL functions as\npresented in 0003 (addition of meta-data in pg_ls_dir()), and\nextensions of 0004 (do the same but for pg_ls_tmpdir) and 0005 (same\nfor the other pg_ls* functions). 
The changes of pg_ls_dir_files()\nmake in my opinion the code harder to follow as the resulting tuple \nsize depends on the type of flag used, and you can already retrieve\nthe rest of the information with a join, probably LATERAL, on\npg_stat_file(), no? The same can be said about 0007, actually.\n\nThe addition of isdir for any of the paths related to pg_logical/ and\nthe replication slots has also a limited interest, because we know\nalready those contents, and these are not directories as far as I\nrecall.\n\n0006 invokes a behavior change for pg_ls_logdir(), where it makes\nsense to me to fail if the directory does not exist, so I am not in\nfavor of that.\n\nIn the whole set, improving the docs as of 0001 makes sense, but the\nchange is incomplete. Most of 0002 also makes sense and should be\nstable enough. I am less enthusiastic about any of the other changes\nproposed and what we can gain from these parts.\n--\nMichael",
"msg_date": "Mon, 14 Mar 2022 13:53:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 01:53:54PM +0900, Michael Paquier wrote:\n> +select * from pg_ls_logicalmapdir() limit 0;\n> +select * from pg_ls_logicalsnapdir() limit 0;\n> +select * from pg_ls_replslotdir('') limit 0;\n> +select * from pg_ls_tmpdir() limit 0;\n> +select * from pg_ls_waldir() limit 0;\n> +select * from pg_stat_file('.') limit 0;\n> \n> The rest of the patch set should be stable AFAIK, there are various\n> steps when running a checkpoint that makes sure that any of these\n> exist, without caring about the value of wal_level.\n\nI was contemplating 0002 this morning, to see which parts would be\nindependently useful, and got reminded that we already check the\nexecution of all those functions in other regression tests, like\ntest_decoding for the replication slot ones and misc_functions.sql for\nthe more critical ones, so those extra queries would just be\ninteresting to check the shape of their SRFs, which is related to the\nother patches of the set and limited based on my arguments from\nyesterday.\n\nThere was one thing that stood out though: there was nothing for the\ntwo options of pg_ls_dir(), called missing_ok and include_dot_dirs.\nmissing_ok is embedded in one query of pg_rewind, but this is a\ncounter-measure against concurrent file removals so we cannot rely on\npg_rewind to check that. And the second option was not run at all.\n\nI have extracted both test cases after rewriting them a bit, and\napplied that separately.\n--\nMichael",
"msg_date": "Tue, 15 Mar 2022 10:59:17 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 01:53:54PM +0900, Michael Paquier wrote:\n> On Wed, Mar 09, 2022 at 10:50:45AM -0600, Justin Pryzby wrote:\n> > I also changed pg_ls_dir_recurse() to handle concurrent removal of a dir, which\n> > I noticed caused an infrequent failure on CI. However I'm not including that\n> > here, since the 2nd half of the patch set seems isn't ready due to lstat() on\n> > windows.\n> \n> lstat() has been a subject of many issues over the years with our\n> internal emulation and issues related to its concurrency, but we use\n> it in various areas of the in-core code, so that does not sound like\n> an issue to me. It depends on what you want to do with it in\n> genfile.c and which data you'd expect, in addition to the detection of\n> junction points for WIN32, I guess.\n\nTo avoid cycles, a recursion function would need to know whether to recurse\ninto a directory or to output that something is isdir=false or type=link, and\navoid recursing into it. \n\n> pg_ls_dir_recurse(), but that's a WITH RECURSIVE, so we would not\n> really need it, do we?\n\nTom disliked it when I had written it as a recursive CTE, so I rewrote it in C.\n129225.1606166058@sss.pgh.pa.us\n\n> Hmm. I am not convinced that we need a new set of SQL functions as\n> presented in 0003 (addition of meta-data in pg_ls_dir()), and\n> extensions of 0004 (do the same but for pg_ls_tmpdir) and 0005 (same\n> for the other pg_ls* functions). The changes of pg_ls_dir_files()\n> make in my opinion the code harder to follow as the resulting tuple \n> size depends on the type of flag used, and you can already retrieve\n> the rest of the information with a join, probably LATERAL, on\n> pg_stat_file(), no? The same can be said about 0007, actually.\n\nYes, one can get the same information with a lateral join (as I said 2 years\nago). 
But it's more helpful to provide a function, rather than leave that to\npeople to figure out - possibly incorrectly, or badly, like by parsing the\noutput of COPY FROM PROGRAM 'ls -l'. The query to handle tablespaces is\nparticularly obscure:\n20200310183037.GA29065@telsasoft.com\n20201223191710.GR30237@telsasoft.com\n\nOne could argue that most of the pg_ls_* functions aren't needed (including\n1922d7c6e), since the same things are possible with pg_ls_dir() and\npg_stat_file().\n|1922d7c6e Add SQL functions to monitor the directory contents of replication slots\n\nThe original, minimal goal of this patch was to show shared tempdirs in\npg_ls_tmpfile() - rather than hiding them misleadingly as currently happens.\n20200310183037.GA29065@telsasoft.com\n20200313131232.GO29065@telsasoft.com\n\nI added the metadata function 2 years ago since it's silly to show metadata for\ntmpdir but not other, arbitrary directories.\n20200310183037.GA29065@telsasoft.com\n20200313131232.GO29065@telsasoft.com\n20201223191710.GR30237@telsasoft.com\n\n> The addition of isdir for any of the paths related to pg_logical/ and\n> the replication slots has also a limited interest, because we know\n> already those contents, and these are not directories as far as I\n> recall.\n\nExcept when we don't, since extensions can do things that core doesn't, as\nFabien pointed out.\nalpine.DEB.2.21.2001160927390.30419@pseudo\n\n> In the whole set, improving the docs as of 0001 makes sense, but the\n> change is incomplete. Most of 0002 also makes sense and should be\n> stable enough. I am less enthusiastic about any of the other changes\n> proposed and what we can gain from these parts.\n\nIt is frustrating to hear this feedback now, after the patch has gone through\nmultiple rewrites over 2 years - based on other positive feedback and review.\nI went to the effort to ask, numerous times, whether to write the patch and how\nits interfaces should look. 
Now, I'm hearing that not only the implementation\nbut its goals are wrong. What should I have done to avoid that ?\n\n20200503024215.GJ28974@telsasoft.com\n20191227195918.GF12890@telsasoft.com\n20200116003924.GJ26045@telsasoft.com\n20200908195126.GB18552@telsasoft.com\n\n\n",
"msg_date": "Mon, 14 Mar 2022 21:37:25 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 09:37:25PM -0500, Justin Pryzby wrote:\n> One could argue that most of the pg_ls_* functions aren't needed (including\n> 1922d7c6e), since the same things are possible with pg_ls_dir() and\n> pg_stat_file().\n> |1922d7c6e Add SQL functions to monitor the directory contents of replication slots\n\nThe main argument behind this one is monitoring, as execution of\nthose functions can be granted at a granular level depending on the\nroles doing the disk space lookups.\n--\nMichael",
"msg_date": "Tue, 15 Mar 2022 11:44:34 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-09 10:50:45 -0600, Justin Pryzby wrote:\n> Rebased over 9e9858389 (Michael may want to look at the tuplestore part?).\n\nDoesn't apply cleanly anymore: http://cfbot.cputube.org/patch_37_2377.log\n\nMarked as waiting-on-author.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 21 Mar 2022 18:28:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 06:28:28PM -0700, Andres Freund wrote:\n> Doesn't apply cleanly anymore: http://cfbot.cputube.org/patch_37_2377.log\n> \n> Marked as waiting-on-author.\n\nFWIW, per my review the bit of the patch set that I found the most\nrelevant is the addition of a note in the docs of pg_stat_file() about\nthe case where \"filename\" is a link, because the code internally uses\nstat(). The function name makes that obvious, but that's not\ncommonly known, I guess. Please see the attached, that I would be\nfine to apply.\n--\nMichael",
"msg_date": "Wed, 23 Mar 2022 15:17:35 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Wed, Mar 23, 2022 at 03:17:35PM +0900, Michael Paquier wrote:\n> FWIW, per my review the bit of the patch set that I found the most\n> relevant is the addition of a note in the docs of pg_stat_file() about\n> the case where \"filename\" is a link, because the code internally uses\n> stat(). The function name makes that obvious, but that's not\n> commonly known, I guess. Please see the attached, that I would be\n> fine to apply.\n\nHmm. I am having second thoughts on this one, as on Windows we rely\non GetFileInformationByHandle() for the emulation of stat() in\nwin32stat.c, and it looks like this returns some information about the\njunction point and not the directory or file this is pointing to, it\nseems. At the end, it looks better to keep things simple here, so\nlet's drop it.\n--\nMichael",
"msg_date": "Sat, 26 Mar 2022 20:23:54 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Sat, Mar 26, 2022 at 08:23:54PM +0900, Michael Paquier wrote:\n> On Wed, Mar 23, 2022 at 03:17:35PM +0900, Michael Paquier wrote:\n> > FWIW, per my review the bit of the patch set that I found the most\n> > relevant is the addition of a note in the docs of pg_stat_file() about\n> > the case where \"filename\" is a link, because the code internally uses\n> > stat(). The function name makes that obvious, but that's not\n> > commonly known, I guess. Please see the attached, that I would be\n> > fine to apply.\n> \n> Hmm. I am having second thoughts on this one, as on Windows we rely\n> on GetFileInformationByHandle() for the emulation of stat() in\n> win32stat.c, and it looks like this returns some information about the\n> junction point and not the directory or file this is pointing to, it\n> seems.\n\nWhere did you find that ? What metadata does it return about the junction\npoint ? We only care about a handful of fields.\n\n\n",
"msg_date": "Mon, 28 Mar 2022 21:13:52 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 09:37:25PM -0500, Justin Pryzby wrote:\n> The original, minimal goal of this patch was to show shared tempdirs in\n> pg_ls_tmpfile() - rather than hiding them misleadingly as currently happens.\n> 20200310183037.GA29065@telsasoft.com\n> 20200313131232.GO29065@telsasoft.com\n> \n> I added the metadata function 2 years ago since it's silly to show metadata for\n> tmpdir but not other, arbitrary directories.\n> 20200310183037.GA29065@telsasoft.com\n> 20200313131232.GO29065@telsasoft.com\n> 20201223191710.GR30237@telsasoft.com\n\nI renamed the CF entry to make even more clear the original motive for the\npatches (I'm not maintaining the patch to add the metadata function just to\navoid writing a lateral join).\n\n> > In the whole set, improving the docs as of 0001 makes sense, but the\n> > change is incomplete. Most of 0002 also makes sense and should be\n> > stable enough. I am less enthusiastic about any of the other changes\n> > proposed and what we can gain from these parts.\n> \n> It is frustrating to hear this feedback now, after the patch has gone through\n> multiple rewrites over 2 years - based on other positive feedback and review.\n> I went to the effort to ask, numerous times, whether to write the patch and how\n> its interfaces should look. Now, I'm hearing that not only the implementation\n> but its goals are wrong. What should I have done to avoid that ?\n> \n> 20200503024215.GJ28974@telsasoft.com\n> 20191227195918.GF12890@telsasoft.com\n> 20200116003924.GJ26045@telsasoft.com\n> 20200908195126.GB18552@telsasoft.com\n\nMichael said he's not enthusiastic about the patch. 
But I haven't heard a\nsuggestion about how else to address the issue that pg_ls_tmpdir() hides shared\nfilesets.\n\nOn Mon, Mar 28, 2022 at 09:13:52PM -0500, Justin Pryzby wrote:\n> On Sat, Mar 26, 2022 at 08:23:54PM +0900, Michael Paquier wrote:\n> > On Wed, Mar 23, 2022 at 03:17:35PM +0900, Michael Paquier wrote:\n> > > FWIW, per my review the bit of the patch set that I found the most\n> > > relevant is the addition of a note in the docs of pg_stat_file() about\n> > > the case where \"filename\" is a link, because the code internally uses\n> > > stat(). The function name makes that obvious, but that's not\n> > > commonly known, I guess. Please see the attached, that I would be\n> > > fine to apply.\n> > \n> > Hmm. I am having second thoughts on this one, as on Windows we rely\n> > on GetFileInformationByHandle() for the emulation of stat() in\n> > win32stat.c, and it looks like this returns some information about the\n> > junction point and not the directory or file this is pointing to, it\n> > seems.\n> \n> Where did you find that ? What metadata does it return about the junction\n> point ? We only care about a handful of fields.\n\nPending your feedback, I didn't modify this beyond your original suggestion -\nwhich seemed like a good one.\n\nThis also adds some comments you requested and fixes your coding style\ncomplaints, and causes cfbot to test my proposed patch rather than your doc\npatch.\n\n-- \nJustin",
"msg_date": "Thu, 31 Mar 2022 18:42:00 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "The cfbot is failing testing this patch. It seems... unlikely given\nthe nature of the patch modifying an admin function that doesn't even\nmodify the database that it should be breaking a streaming test.\nPerhaps the streaming test is using this function in the testing\nscaffolding?\n\n[03:19:29.564] # Failed test 'regression tests pass'\n[03:19:29.564] # at t/027_stream_regress.pl line 76.\n[03:19:29.564] # got: '256'\n[03:19:29.564] # expected: '0'\n[03:19:29.564] # Looks like you failed 1 test of 5.\n[03:19:29.565] [03:19:27] t/027_stream_regress.pl ..............\n[03:19:29.565] Dubious, test returned 1 (wstat 256, 0x100)\n[03:19:29.565] Failed 1/5 subtests\n\n\n",
"msg_date": "Sun, 3 Apr 2022 09:39:35 -0400",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "This thread has been going for 2.5 years, so here's a(nother) recap.\n\nThis omits the patches for recursion, since they're optional and evidently a\ndistraction from the main patches.\n\nOn Fri, Dec 27, 2019 at 11:02:20AM -0600, Justin Pryzby wrote:\n> The goal is to somehow show tmpfiles (or at least dirs) used by parallel\n> workers.\n\nOn Thu, Jan 16, 2020 at 08:38:46AM -0600, Justin Pryzby wrote:\n> I think if someone wants the full generality, they can do this:\n> \n> postgres=# SELECT name, s.size, s.modification, s.isdir FROM (SELECT 'base/pgsql_tmp'p)p, pg_ls_dir(p)name, pg_stat_file(p||'/'||name)s;\n> name | size | modification | isdir \n> ------+------+------------------------+-------\n> .foo | 4096 | 2020-01-16 08:57:04-05 | t\n> \n> In my mind, pg_ls_tmpdir() is for showing tmpfiles, not just a shortcut to\n> SELECT pg_ls_dir((SELECT 'base/pgsql_tmp'p)); -- or, for all tablespaces:\n> WITH x AS (SELECT format('/PG_%s_%s', split_part(current_setting('server_version'), '.', 1), catalog_version_no) suffix FROM pg_control_system()), y AS (SELECT a, pg_ls_dir(a) AS d FROM (SELECT DISTINCT COALESCE(NULLIF(pg_tablespace_location(oid),'')||suffix, 'base') a FROM pg_tablespace,x)a) SELECT a, pg_ls_dir(a||'/pgsql_tmp') FROM y WHERE d='pgsql_tmp';\n\nOn Tue, Mar 10, 2020 at 01:30:37PM -0500, Justin Pryzby wrote:\n> I took a step back, and I wondered whether we should add a generic function for\n> listing a dir with metadata, possibly instead of changing the existing\n> functions. Then one could do pg_ls_dir_metadata('pg_wal',false,false);\n> \n> Since pg8.1, we have pg_ls_dir() to show a list of files. Since pg10, we've\n> had pg_ls_logdir and pg_ls_waldir, which show not only file names but also\n> (some) metadata (size, mtime). 
And since pg12, we've had pg_ls_tmpfile and\n> pg_ls_archive_statusdir, which also show metadata.\n> \n> ...but there's no a function which lists the metadata of an directory other\n> than tmp, wal, log.\n> \n> One can do this:\n> |SELECT b.*, c.* FROM (SELECT 'base' a)a, LATERAL (SELECT a||'/'||pg_ls_dir(a.a)b)b, pg_stat_file(b)c;\n> ..but that's not as helpful as allowing:\n> |SELECT * FROM pg_ls_dir_metadata('.',true,true);\n> \n> There's also no function which recurses into an arbitrary directory, so it\n> seems shortsighted to provide a function to recursively list a tmpdir.\n> \n> Also, since pg_ls_dir_metadata indicates whether the path is a dir, one can\n> write a SQL function to show the dir recursively. It'd be trivial to plug in\n> wal/log/tmp (it seems like tmpdirs of other tablespace's are not entirely\n> trivial).\n> |SELECT * FROM pg_ls_dir_recurse('base/pgsql_tmp');\n\n> It's pretty unfortunate if a function called\n> pg_ls_tmpdir hides shared filesets, so maybe it really is best to change that\n> (it's new in v12).\n\nOn Fri, Mar 13, 2020 at 08:12:32AM -0500, Justin Pryzby wrote:\n> The merge conflict presents another opportunity to solicit comments on the new\n> approach. 
Rather than making \"recurse into tmpdir\" the end goal:\n> \n> - add a function to show metadata of an arbitrary dir;\n> - add isdir arguments to pg_ls_* functions (including pg_ls_tmpdir but not\n> pg_ls_dir).\n> - maybe add pg_ls_dir_recurse, which satisfies the original need;\n> - retire pg_ls_dir (does this work with tuplestore?)\n> - profit\n> \n> The alternative seems to be to go back to Alvaro's earlier proposal:\n> - not only add \"isdir\", but also recurse;\n> \n> I think I would insist on adding a general function to recurse into any dir.\n> And *optionally* change ps_ls_* to recurse (either by accepting an argument, or\n> by making that a separate patch to debate).\n\nOn Tue, Mar 31, 2020 at 03:08:12PM -0500, Justin Pryzby wrote:\n> The patch intends to fix the issue of \"failing to show failed filesets\"\n> (because dirs are skipped) while also generalizing existing functions (to show\n> directories and \"isdir\" column) and providing some more flexible ones (to list\n> file and metadata of a dir, which is currently possible [only] for \"special\"\n> directories, or by recursively calling pg_stat_file).\n\nOn Wed, Dec 23, 2020 at 01:17:10PM -0600, Justin Pryzby wrote:\n> However, pg_ls_tmpdir is special since it handles tablespace tmpdirs, which it\n> seems is not trivial to get from sql:\n> \n> +SELECT * FROM (SELECT DISTINCT COALESCE(NULLIF(pg_tablespace_location(b.oid),'')||suffix, 'base/pgsql_tmp') AS dir\n> +FROM pg_tablespace b, pg_control_system() pcs,\n> +LATERAL format('/PG_%s_%s', left(current_setting('server_version_num'), 2), pcs.catalog_version_no) AS suffix) AS dir,\n> +LATERAL pg_ls_dir_recurse(dir) AS a;\n> \n> For context, the line of reasoning that led me to this patch series was\n> something like this:\n> \n> 0) Why can't I list shared tempfiles (dirs) using pg_ls_tmpdir() ?\n> 1) Implement recursion for pg_ls_tmpdir();\n> 2) Eventually realize that it's silly to implement a function to recurse into\n> one particular directory when 
no general feature exists;\n> 3) Implement generic facility;\n\nOn Tue, Apr 06, 2021 at 11:01:31AM -0500, Justin Pryzby wrote:\n> The first handful of patches address the original issue, and I think could be\n> \"ready\":\n> \n> $ git log --oneline origin..pg-ls-dir-new |tac\n> ... Document historic behavior of links to directories..\n> ... Add tests on pg_ls_dir before changing it\n> ... Add pg_ls_dir_metadata to list a dir with file metadata..\n> ... pg_ls_tmpdir to show directories and \"isdir\" argument..\n> ... pg_ls_*dir to show directories and \"isdir\" column..\n> \n> These others are optional:\n> ... pg_ls_logdir to ignore error if initial/top dir is missing..\n> ... pg_ls_*dir to return all the metadata from pg_stat_file..\n> \n> ..and these maybe requires more work for lstat on windows:\n> ... pg_stat_file and pg_ls_dir_* to use lstat()..\n> ... pg_ls_*/pg_stat_file to show file *type*..\n> ... Preserve pg_stat_file() isdir..\n> ... Add recursion option in pg_ls_dir_files..\n\nOn Tue, Jan 25, 2022 at 01:27:55PM -0600, Justin Pryzby wrote:\n> The original motive for the patch was that pg_ls_tmpdir doesn't show shared\n> filesets.",
"msg_date": "Thu, 23 Jun 2022 23:35:49 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "On Fri, Dec 13, 2019 at 03:03:47PM +1300, Thomas Munro wrote:\n> > Actually, I tried using pg_ls_tmpdir(), but it unconditionally masks\n> > non-regular files and thus shared filesets. Maybe that's worth\n> > discussion on a new thread ?\n> >\n> > src/backend/utils/adt/genfile.c\n> > /* Ignore anything but regular files */\n> > if (!S_ISREG(attrib.st_mode))\n> > continue;\n> \n> +1, that's worth fixing.\n\n@cfbot: rebased on eddc128be.",
"msg_date": "Thu, 27 Oct 2022 19:38:15 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
},
{
"msg_contents": "2024-01 Commitfest.\n\nHi, this patch was marked in CF as \"Needs Review\", but there has been\nno activity on this thread for 14+ months.\n\nSince there seems not much interest, I have changed the status to\n\"Returned with Feedback\" [1]. Feel free to propose a stronger use case\nfor the patch and add an entry for the same.\n\n======\n[1] https://commitfest.postgresql.org/46/2377/\n\nKind Regards,\nPeter Smith.\n\n\n",
"msg_date": "Mon, 22 Jan 2024 12:16:13 +1100",
"msg_from": "Peter Smith <smithpb2250@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_ls_tmpdir to show directories and shared filesets (and\n pg_ls_*)"
}
]
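The thread above proposes a generic `pg_ls_dir_recurse` that lists a directory tree with per-entry metadata, an `isdir` flag, and `lstat()` semantics. As a rough model of that behavior — a hypothetical Python sketch, not the C implementation from the patch set:

```python
import os
import stat
from datetime import datetime, timezone

def ls_dir_recurse(path):
    """Yield (relative name, size, mtime, isdir) for every entry under
    path, recursing into subdirectories instead of skipping non-regular
    files (the behavior pg_ls_tmpdir lacked for shared filesets)."""
    for root, dirs, files in os.walk(path):
        for name in dirs + files:
            full = os.path.join(root, name)
            st = os.lstat(full)  # lstat(): report symlinks themselves
            yield (
                os.path.relpath(full, path),
                st.st_size,
                datetime.fromtimestamp(st.st_mtime, tz=timezone.utc),
                stat.S_ISDIR(st.st_mode),
            )
```

Unlike the pre-patch `pg_ls_tmpdir`, directory entries (such as shared-fileset subdirectories under `pgsql_tmp`) are reported rather than silently filtered by an `S_ISREG` check.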
[
{
"msg_contents": "Hi!\n\nFound crash on production instance, assert-enabled build crashes in pfree() \ncall, with default config. v11, v12 and head are affected, but, seems, you need \nto be a bit lucky.\n\nThe bug is comparing old and new aggregate pass-by-ref values only by pointer \nvalue itself, despite on null flag. Any function which returns null doesn't \nworry about actual returned Datum value, so that comparison isn't enough. Test \ncase shows bug with ExecInterpExpr() but there several similar places (thanks \nNikita Glukhov for help).\nAttached patch adds check of null flag.\n\nHow to reproduce:\nhttp://sigaev.ru/misc/xdump.sql.bz2\nbzcat xdump.sql.bz2 | psql postgres && psql postgres < x.sql\n\n\nBacktrace from v12 (note, newValue and oldValue are differ on current call, but \noldValue points into pfreed memory) :\n#0 0x0000000000c8405a in GetMemoryChunkContext (pointer=0x80a808250) at \n../../../../src/include/utils/memutils.h:130\n130 AssertArg(MemoryContextIsValid(context));\n(gdb) bt\n#0 0x0000000000c8405a in GetMemoryChunkContext (pointer=0x80a808250) at \n../../../../src/include/utils/memutils.h:130\n#1 0x0000000000c85ae5 in pfree (pointer=0x80a808250) at mcxt.c:1058\n#2 0x000000000080475e in ExecAggTransReparent (aggstate=0x80a806370, \npertrans=0x80a87e830, newValue=34535940744, newValueIsNull=false, \noldValue=34535932496, oldValueIsNull=false)\n at execExprInterp.c:4209\n#3 0x00000000007ff51f in ExecInterpExpr (state=0x80a87f4d8, \necontext=0x80a8065a8, isnull=0x7fffffffd7b7) at execExprInterp.c:1747\n#4 0x000000000082c12b in ExecEvalExprSwitchContext (state=0x80a87f4d8, \necontext=0x80a8065a8, isNull=0x7fffffffd7b7) at \n../../../src/include/executor/executor.h:308\n#5 0x000000000082bc0f in advance_aggregates (aggstate=0x80a806370) at nodeAgg.c:679\n#6 0x000000000082b8a6 in agg_retrieve_direct (aggstate=0x80a806370) at \nnodeAgg.c:1847\n#7 0x0000000000828782 in ExecAgg (pstate=0x80a806370) at nodeAgg.c:1572\n#8 0x000000000080e712 in 
ExecProcNode (node=0x80a806370) at \n../../../src/include/executor/executor.h:240\n#9 0x000000000080a4a1 in ExecutePlan (estate=0x80a806120, \nplanstate=0x80a806370, use_parallel_mode=false, operation=CMD_SELECT, \nsendTuples=true, numberTuples=0,\n direction=ForwardScanDirection, dest=0x80a851cc0, execute_once=true) at \nexecMain.c:1646\n#10 0x000000000080a362 in standard_ExecutorRun (queryDesc=0x80a853120, \ndirection=ForwardScanDirection, count=0, execute_once=true) at execMain.c:364\n#11 0x000000000080a114 in ExecutorRun (queryDesc=0x80a853120, \ndirection=ForwardScanDirection, count=0, execute_once=true) at execMain.c:308\n#12 0x0000000000a79d6f in PortalRunSelect (portal=0x80a70d120, forward=true, \ncount=0, dest=0x80a851cc0) at pquery.c:929\n#13 0x0000000000a79807 in PortalRun (portal=0x80a70d120, \ncount=9223372036854775807, isTopLevel=true, run_once=true, dest=0x80a851cc0, \naltdest=0x80a851cc0, completionTag=0x7fffffffdc30 \"\")\n at pquery.c:770\n#14 0x0000000000a74e49 in exec_simple_query (\n query_string=0x800d02950 \n\"SELECT\\nT1._Q_001_F_000,\\nT1._Q_001_F_001,\\nT1._Q_001_F_002RRef,\\nT1._Q_001_F_003RRef,\\nT1._Q_001_F_004RRef,\\nT1._Q_001_F_005RRef,\\nMAX(CASE \nWHEN (T1._Q_001_F_010 > CAST(0 AS NUMERIC)) THEN T2._Q_001_F_009RR\"...) at \npostgres.c:1227\n#15 0x0000000000a74123 in PostgresMain (argc=1, argv=0x80a6ef8f0, \ndbname=0x80a6ef850 \"postgres\", username=0x80a6ef830 \"teodor\") at postgres.c:4291\n#16 0x00000000009a4c3b in BackendRun (port=0x80a6e6000) at postmaster.c:4498\n#17 0x00000000009a403a in BackendStartup (port=0x80a6e6000) at postmaster.c:4189\n#18 0x00000000009a2f63 in ServerLoop () at postmaster.c:1727\n#19 0x00000000009a0a0a in PostmasterMain (argc=3, argv=0x7fffffffe3c8) at \npostmaster.c:1400\n#20 0x000000000088deef in main (argc=3, argv=0x7fffffffe3c8) at main.c:210\n\n-- \nTeodor Sigaev E-mail: teodor@sigaev.ru\n WWW: http://www.sigaev.ru/",
"msg_date": "Fri, 27 Dec 2019 20:13:26 +0300",
"msg_from": "Teodor Sigaev <teodor@sigaev.ru>",
"msg_from_op": true,
"msg_subject": "aggregate crash"
},
{
"msg_contents": "Hi,\n\nOn 2019-12-27 20:13:26 +0300, Teodor Sigaev wrote:\n> Found crash on production instance, assert-enabled build crashes in pfree()\n> call, with default config. v11, v12 and head are affected, but, seems, you\n> need to be a bit lucky.\n> \n> The bug is comparing old and new aggregate pass-by-ref values only by\n> pointer value itself, despite on null flag. Any function which returns null\n> doesn't worry about actual returned Datum value, so that comparison isn't\n> enough. Test case shows bug with ExecInterpExpr() but there several similar\n> places (thanks Nikita Glukhov for help).\n> Attached patch adds check of null flag.\n\nHm. I don't understand the problem here. Why do we need to reparent in\nthat case? What freed the relevant value?\n\nNor do I really understand why v10 wouldn't be affected if this actually\nis a problem. The relevant code is also only guarded by\n\t\tDatumGetPointer(newVal) != DatumGetPointer(pergroupstate->transValue))\n\n\n> \n> Backtrace from v12 (note, newValue and oldValue are differ on current call,\n> but oldValue points into pfreed memory) :\n> #0 0x0000000000c8405a in GetMemoryChunkContext (pointer=0x80a808250) at\n> ../../../../src/include/utils/memutils.h:130\n> 130 AssertArg(MemoryContextIsValid(context));\n> (gdb) bt\n> #0 0x0000000000c8405a in GetMemoryChunkContext (pointer=0x80a808250) at\n> ../../../../src/include/utils/memutils.h:130\n> #1 0x0000000000c85ae5 in pfree (pointer=0x80a808250) at mcxt.c:1058\n> #2 0x000000000080475e in ExecAggTransReparent (aggstate=0x80a806370,\n> pertrans=0x80a87e830, newValue=34535940744, newValueIsNull=false,\n> oldValue=34535932496, oldValueIsNull=false)\n> at execExprInterp.c:4209\n> #3 0x00000000007ff51f in ExecInterpExpr (state=0x80a87f4d8,\n> econtext=0x80a8065a8, isnull=0x7fffffffd7b7) at execExprInterp.c:1747\n> #4 0x000000000082c12b in ExecEvalExprSwitchContext (state=0x80a87f4d8,\n> econtext=0x80a8065a8, isNull=0x7fffffffd7b7) at\n> 
../../../src/include/executor/executor.h:308\n> #5 0x000000000082bc0f in advance_aggregates (aggstate=0x80a806370) at nodeAgg.c:679\n> #6 0x000000000082b8a6 in agg_retrieve_direct (aggstate=0x80a806370) at\n> nodeAgg.c:1847\n> #7 0x0000000000828782 in ExecAgg (pstate=0x80a806370) at nodeAgg.c:1572\n> #8 0x000000000080e712 in ExecProcNode (node=0x80a806370) at\n> ../../../src/include/executor/executor.h:240\n\n\n\n> How to reproduce:\n> http://sigaev.ru/misc/xdump.sql.bz2\n> bzcat xdump.sql.bz2 | psql postgres && psql postgres < x.sql\n\nIt should be possible to create a smaller reproducer... It'd be good if\na bug fix for this were committed with a regression test.\n\n\n> diff --git a/src/backend/executor/execExprInterp.c b/src/backend/executor/execExprInterp.c\n> index 034970648f3..3b5333716d4 100644\n> --- a/src/backend/executor/execExprInterp.c\n> +++ b/src/backend/executor/execExprInterp.c\n> @@ -1743,7 +1743,8 @@ ExecInterpExpr(ExprState *state, ExprContext *econtext, bool *isnull)\n> \t\t\t * expanded object that is already a child of the aggcontext,\n> \t\t\t * assume we can adopt that value without copying it.\n> \t\t\t */\n> -\t\t\tif (DatumGetPointer(newVal) != DatumGetPointer(pergroup->transValue))\n> +\t\t\tif (DatumGetPointer(newVal) != DatumGetPointer(pergroup->transValue) ||\n> +\t\t\t\tfcinfo->isnull != pergroup->transValueIsNull)\n> \t\t\t\tnewVal = ExecAggTransReparent(aggstate, pertrans,\n> \t\t\t\t\t\t\t\t\t\t\t newVal, fcinfo->isnull,\n> \t\t\t\t\t\t\t\t\t\t\t pergroup->transValue,\n\nI'd really like to avoid adding additional branches to these paths.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Jan 2020 11:35:39 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "On 2019-Dec-27, Teodor Sigaev wrote:\n\n> Hi!\n> \n> Found crash on production instance, assert-enabled build crashes in pfree()\n> call, with default config. v11, v12 and head are affected, but, seems, you\n> need to be a bit lucky.\n\nIs this bug being considered for the next set of minors?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 14 Jan 2020 18:53:02 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2019-Dec-27, Teodor Sigaev wrote:\n>> Found crash on production instance, assert-enabled build crashes in pfree()\n>> call, with default config. v11, v12 and head are affected, but, seems, you\n>> need to be a bit lucky.\n\n> Is this bug being considered for the next set of minors?\n\nI think Andres last touched that code, so I was sort of expecting\nhim to have an opinion on this. But I agree that not checking null-ness\nexplicitly is kind of unsafe. We've never before had any expectation\nthat the Datum value of a null is anything in particular.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:01:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-14 17:01:01 -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2019-Dec-27, Teodor Sigaev wrote:\n> >> Found crash on production instance, assert-enabled build crashes in pfree()\n> >> call, with default config. v11, v12 and head are affected, but, seems, you\n> >> need to be a bit lucky.\n> \n> > Is this bug being considered for the next set of minors?\n> \n> I think Andres last touched that code, so I was sort of expecting\n> him to have an opinion on this.\n\nWell, I commented a few days ago, also asking for further input...\n\nTo me it looks like that code has effectively been the same for quite a\nwhile. While today the code is:\n\n\t\t\tnewVal = FunctionCallInvoke(fcinfo);\n\n\t\t\t/*\n\t\t\t * For pass-by-ref datatype, must copy the new value into\n\t\t\t * aggcontext and free the prior transValue. But if transfn\n\t\t\t * returned a pointer to its first input, we don't need to do\n\t\t\t * anything. Also, if transfn returned a pointer to a R/W\n\t\t\t * expanded object that is already a child of the aggcontext,\n\t\t\t * assume we can adopt that value without copying it.\n\t\t\t */\n\t\t\tif (DatumGetPointer(newVal) != DatumGetPointer(pergroup->transValue))\n\t\t\t\tnewVal = ExecAggTransReparent(aggstate, pertrans,\n\t\t\t\t\t\t\t\t\t\t\t newVal, fcinfo->isnull,\n\t\t\t\t\t\t\t\t\t\t\t pergroup->transValue,\n\t\t\t\t\t\t\t\t\t\t\t pergroup->transValueIsNull);\n...\nExecAggTransReparent(AggState *aggstate, AggStatePerTrans pertrans,\n\t\t\t\t\t Datum newValue, bool newValueIsNull,\n\t\t\t\t\t Datum oldValue, bool oldValueIsNull)\n...\n\tif (!newValueIsNull)\n\t{\n\t\tMemoryContextSwitchTo(aggstate->curaggcontext->ecxt_per_tuple_memory);\n\t\tif (DatumIsReadWriteExpandedObject(newValue,\n\t\t\t\t\t\t\t\t\t\t false,\n\t\t\t\t\t\t\t\t\t\t pertrans->transtypeLen) &&\n\t\t\tMemoryContextGetParent(DatumGetEOHP(newValue)->eoh_context) == CurrentMemoryContext)\n\t\t\t /* do nothing */ 
;\n\t\telse\n\t\t\tnewValue = datumCopy(newValue,\n\t\t\t\t\t\t\t\t pertrans->transtypeByVal,\n\t\t\t\t\t\t\t\t pertrans->transtypeLen);\n\t}\n\tif (!oldValueIsNull)\n\t{\n\t\tif (DatumIsReadWriteExpandedObject(oldValue,\n\t\t\t\t\t\t\t\t\t\t false,\n\t\t\t\t\t\t\t\t\t\t pertrans->transtypeLen))\n\t\t\tDeleteExpandedObject(oldValue);\n\t\telse\n\t\t\tpfree(DatumGetPointer(oldValue));\n\t}\n\nbefore it was (in v10):\n\n\tif (!pertrans->transtypeByVal &&\n\t\tDatumGetPointer(newVal) != DatumGetPointer(pergroupstate->transValue))\n\t{\n\t\tif (!fcinfo->isnull)\n\t\t{\n\t\t\tMemoryContextSwitchTo(aggstate->curaggcontext->ecxt_per_tuple_memory);\n\t\t\tif (DatumIsReadWriteExpandedObject(newVal,\n\t\t\t\t\t\t\t\t\t\t\t false,\n\t\t\t\t\t\t\t\t\t\t\t pertrans->transtypeLen) &&\n\t\t\t\tMemoryContextGetParent(DatumGetEOHP(newVal)->eoh_context) == CurrentMemoryContext)\n\t\t\t\t /* do nothing */ ;\n\t\t\telse\n\t\t\t\tnewVal = datumCopy(newVal,\n\t\t\t\t\t\t\t\t pertrans->transtypeByVal,\n\t\t\t\t\t\t\t\t pertrans->transtypeLen);\n\t\t}\n\t\tif (!pergroupstate->transValueIsNull)\n\t\t{\n\t\t\tif (DatumIsReadWriteExpandedObject(pergroupstate->transValue,\n\t\t\t\t\t\t\t\t\t\t\t false,\n\t\t\t\t\t\t\t\t\t\t\t pertrans->transtypeLen))\n\t\t\t\tDeleteExpandedObject(pergroupstate->transValue);\n\t\t\telse\n\t\t\t\tpfree(DatumGetPointer(pergroupstate->transValue));\n\t\t}\n\t}\n\nthere's no need in the current code to check !pertrans->transtypeByVal,\nas byval has a separate expression opcode. So I don't think things have\nchanged?\n\nAs far as I can tell, comparing the values by pointer goes back a *long*\nwhile. 
We didn't use to handle expanded objects, but otherwise it looked\npretty similar back to 7.4 (oldest version I've checked out):\n\n\tnewVal = FunctionCallInvoke(&fcinfo);\n\n\t/*\n\t * If pass-by-ref datatype, must copy the new value into aggcontext\n\t * and pfree the prior transValue.\tBut if transfn returned a pointer\n\t * to its first input, we don't need to do anything.\n\t */\n\tif (!peraggstate->transtypeByVal &&\n\tDatumGetPointer(newVal) != DatumGetPointer(pergroupstate->transValue))\n\t{\n\t\tif (!fcinfo.isnull)\n\t\t{\n\t\t\tMemoryContextSwitchTo(aggstate->aggcontext);\n\t\t\tnewVal = datumCopy(newVal,\n\t\t\t\t\t\t\t peraggstate->transtypeByVal,\n\t\t\t\t\t\t\t peraggstate->transtypeLen);\n\t\t}\n\t\tif (!pergroupstate->transValueIsNull)\n\t\t\tpfree(DatumGetPointer(pergroupstate->transValue));\n\t}\n\n\n> But I agree that not checking null-ness\n> explicitly is kind of unsafe. We've never before had any expectation\n> that the Datum value of a null is anything in particular.\n\nI'm still not sure I actually fully understand the bug. It's obvious how\nreturning the input value again could lead to memory not being freed (so\nthat leak seems to go all the way back). And similarly, since the\nintroduction of expanded objects, it can also lead to the expanded\nobject not being deleted.\n\nBut that's not the problem causing the crash here. What I think must\ninstead be the problem is that pergroupstate->transValueIsNull, but\npergroupstate->transValue is set to something looking like a\npointer. Which caused us not to datumCopy() a new transition value into\na long lived context. and then a later transition causes us to free the\nshort-lived value?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Jan 2020 14:40:59 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-01-14 17:01:01 -0500, Tom Lane wrote:\n>> But I agree that not checking null-ness\n>> explicitly is kind of unsafe. We've never before had any expectation\n>> that the Datum value of a null is anything in particular.\n\n> I'm still not sure I actually fully understand the bug. It's obvious how\n> returning the input value again could lead to memory not being freed (so\n> that leak seems to go all the way back). And similarly, since the\n> introduction of expanded objects, it can also lead to the expanded\n> object not being deleted.\n> But that's not the problem causing the crash here. What I think must\n> instead be the problem is that pergroupstate->transValueIsNull, but\n> pergroupstate->transValue is set to something looking like a\n> pointer. Which caused us not to datumCopy() a new transition value into\n> a long lived context. and then a later transition causes us to free the\n> short-lived value?\n\nYeah, I was kind of wondering that too. While formally the Datum value\nfor a null is undefined, I'm not aware offhand of any functions that\nwouldn't return zero --- and this would have to be an aggregate transition\nfunction doing so, which reduces the universe of candidates quite a lot.\nPlus there's the question of how often a transition function would return\nnull for non-null input at all.\n\nCould we see a test case that provokes this crash, even if it doesn't\ndo so reliably?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 14 Jan 2020 17:54:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-14 17:54:16 -0500, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2020-01-14 17:01:01 -0500, Tom Lane wrote:\n> >> But I agree that not checking null-ness\n> >> explicitly is kind of unsafe. We've never before had any expectation\n> >> that the Datum value of a null is anything in particular.\n> \n> > I'm still not sure I actually fully understand the bug. It's obvious how\n> > returning the input value again could lead to memory not being freed (so\n> > that leak seems to go all the way back). And similarly, since the\n> > introduction of expanded objects, it can also lead to the expanded\n> > object not being deleted.\n> > But that's not the problem causing the crash here. What I think must\n> > instead be the problem is that pergroupstate->transValueIsNull, but\n> > pergroupstate->transValue is set to something looking like a\n> > pointer. Which caused us not to datumCopy() a new transition value into\n> > a long lived context. and then a later transition causes us to free the\n> > short-lived value?\n> \n> Yeah, I was kind of wondering that too. While formally the Datum value\n> for a null is undefined, I'm not aware offhand of any functions that\n> wouldn't return zero --- and this would have to be an aggregate transition\n> function doing so, which reduces the universe of candidates quite a lot.\n> Plus there's the question of how often a transition function would return\n> null for non-null input at all.\n> \n> Could we see a test case that provokes this crash, even if it doesn't\n> do so reliably?\n\nThere's a larger reproducer referenced in the first message. I had hoped\nthat Teodor could narrow it down - I guess I'll try to do that tomorrow...\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 14 Jan 2020 23:27:02 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-14 23:27:02 -0800, Andres Freund wrote:\n> On 2020-01-14 17:54:16 -0500, Tom Lane wrote:\n> > Andres Freund <andres@anarazel.de> writes:\n> > > I'm still not sure I actually fully understand the bug. It's obvious how\n> > > returning the input value again could lead to memory not being freed (so\n> > > that leak seems to go all the way back). And similarly, since the\n> > > introduction of expanded objects, it can also lead to the expanded\n> > > object not being deleted.\n> > > But that's not the problem causing the crash here. What I think must\n> > > instead be the problem is that pergroupstate->transValueIsNull, but\n> > > pergroupstate->transValue is set to something looking like a\n> > > pointer. Which caused us not to datumCopy() a new transition value into\n> > > a long lived context. and then a later transition causes us to free the\n> > > short-lived value?\n> >\n> > Yeah, I was kind of wondering that too. While formally the Datum value\n> > for a null is undefined, I'm not aware offhand of any functions that\n> > wouldn't return zero --- and this would have to be an aggregate transition\n> > function doing so, which reduces the universe of candidates quite a lot.\n> > Plus there's the question of how often a transition function would return\n> > null for non-null input at all.\n> >\n> > Could we see a test case that provokes this crash, even if it doesn't\n> > do so reliably?\n>\n> There's a larger reproducer referenced in the first message. I had hoped\n> that Teodor could narrow it down - I guess I'll try to do that tomorrow...\n\nFWIW, I'm working on narrowing it down to something small. I can\nreliably trigger the bug, and I understand the mechanics, I\nthink. 
Interestingly enough the reproducer currently only triggers on\nv12, not on v11 and before.\n\nAs you say, this requires a transition function returning a NULL that\nhas the datum part set - the reproducer here defines a non-strict\naggregate transition function that can indirectly do so:\n\nCREATE FUNCTION public.state_max_bytea(st bytea, inp bytea) RETURNS bytea\n LANGUAGE plpgsql\n AS $$\n BEGIN\n if st is null\n then\n return inp;\n elseif st<inp then\n return inp;\n else\n return st;\n end if;\n END;$$;\n\nCREATE AGGREGATE public.max(bytea) (\n SFUNC = public.state_max_bytea,\n STYPE = bytea\n);\n\nI.e. when the current transition is null (e.g. for the first tuple), the\ntransition is always set to new input value. Even if that is null.\n\nThen the question in turn is, how the input datum is != 0, but has\nisnull set. And that's caused by:\n\n\n\t\tEEO_CASE(EEOP_FUNCEXPR_STRICT)\n\t\t{\n\t\t\tFunctionCallInfo fcinfo = op->d.func.fcinfo_data;\n\t\t\tNullableDatum *args = fcinfo->args;\n\t\t\tint\t\t\targno;\n\t\t\tDatum\t\td;\n\n\t\t\t/* strict function, so check for NULL args */\n\t\t\tfor (argno = 0; argno < op->d.func.nargs; argno++)\n\t\t\t{\n\t\t\t\tif (args[argno].isnull)\n\t\t\t\t{\n\t\t\t\t\t*op->resnull = true;\n\t\t\t\t\tgoto strictfail;\n\t\t\t\t}\n\t\t\t}\n\t\t\tfcinfo->isnull = false;\n\t\t\td = op->d.func.fn_addr(fcinfo);\n\t\t\t*op->resvalue = d;\n\t\t\t*op->resnull = fcinfo->isnull;\n\n\tstrictfail:\n\t\t\tEEO_NEXT();\n\t\t}\n\n\nI.e. if the transitions argument is a strict function, and that strict\nfunction is not evaluated because of a NULL input, we set op->resnull =\ntrue, but do *not* touch op->resvalue. If there was a previous row that\nactually set resvalue to something meaningful, we get an input to the\ntransition function consisting out of the old resvalue (!= 0), but the\nnew resnull = true. If the transition function returns that unchanged,\nExecAggTransReparent() doesn't do anything, because the new value is\nnull. 
Afterwards pergroup->transValue is set != 0, even though\ntransValueIsNull = true.\n\nThe somewhat tricky bit is arranging this to happen with pointers that\nare the same. I think I'm on the way to narrow that down, but it'll take\nme a bit longer.\n\nTo fix this I think we should set newVal = 0 in\nExecAggTransReparent()'s, as a new else to !newValueIsNull. That should\nnot add any additional branches, I think. I contrast to always doing so\nwhen checking whether ExecAggTransReparent() ought to be called.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 15 Jan 2020 12:47:47 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-15 12:47:47 -0800, Andres Freund wrote:\n> FWIW, I'm working on narrowing it down to something small. I can\n> reliably trigger the bug, and I understand the mechanics, I\n> think. Interestingly enough the reproducer currently only triggers on\n> v12, not on v11 and before.\n\nThat's just happenstance due to allocation changes in plpgsql,\nthough. The attached small reproducer, for me, reliably triggers crashes\non 10 - master.\n\nIt's hard to hit intentionally, because plpgsql does a datumCopy() to\nits non-null return value, which means that to hit the bug, one needs\ndifferent numbers of allocations between setting up the transition value\nwith transvalueisnull = true, transvalue = 0xsomepointer (because\nplpgsql doesn't copy NULLs), and the transition output with\ntransvalueisnull = false, transvalue = 0xsomepointer. Which is necessary\nto trigger the bug, as it's then not reparented into a long lived enough\ncontext. To be then freed/accessed for the next group input value.\n\nI think this is too finnicky to actually keep as a regression test.\n\nThe bug, in a way, exists all the way back, but it's a bit harder to\ncreate NULL values where the datum component isn't 0.\n\n\nTo fix I suggest we, in all branches, do the equivalent of adding\nsomething like:\ndiff --git i/src/backend/executor/execExprInterp.c w/src/backend/executor/execExprInterp.c\nindex 790380051be..3260a63ac6b 100644\n--- i/src/backend/executor/execExprInterp.c\n+++ w/src/backend/executor/execExprInterp.c\n@@ -4199,6 +4199,12 @@ ExecAggTransReparent(AggState *aggstate, AggStatePerTrans pertrans,\n pertrans->transtypeByVal,\n pertrans->transtypeLen);\n }\n+ else\n+ {\n+ /* ensure datum component is 0 for NULL transition values */\n+ newValue = (Datum) 0;\n+ }\n+\n if (!oldValueIsNull)\n {\n if (DatumIsReadWriteExpandedObject(oldValue,\n\nand a comment explaining why it's (now) safe to rely on datum\ncomparisons for\n if (DatumGetPointer(newVal) != 
DatumGetPointer(pergroup->transValue))\n\n\nI don't think it makes sense to add something like it to the byval case\n- there's plenty other ways a function returning != 0 with\nfcinfo->isnull == true can cause such values to exist. And that's\nlongstanding.\n\n\nA separate question is whether it's worth adding code to\ne.g. EEO_CASE(EEOP_FUNCEXPR_STRICT) also resetting *op->resvalue to\n(Datum) 0. I don't personally don't think ensuring the datum is always\n0 when isnull true is all that helpful, if we can't guarantee it\neverywhere. So I'm a bit loathe to add cycles to places that don't need\nit, and are hot.\n\nRegards,\n\nAndres",
"msg_date": "Wed, 15 Jan 2020 19:16:30 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-15 19:16:30 -0800, Andres Freund wrote:\n> The bug, in a way, exists all the way back, but it's a bit harder to\n> create NULL values where the datum component isn't 0.\n\n> To fix I suggest we, in all branches, do the equivalent of adding\n> something like:\n> diff --git i/src/backend/executor/execExprInterp.c w/src/backend/executor/execExprInterp.c\n> index 790380051be..3260a63ac6b 100644\n> --- i/src/backend/executor/execExprInterp.c\n> +++ w/src/backend/executor/execExprInterp.c\n> @@ -4199,6 +4199,12 @@ ExecAggTransReparent(AggState *aggstate, AggStatePerTrans pertrans,\n> pertrans->transtypeByVal,\n> pertrans->transtypeLen);\n> }\n> + else\n> + {\n> + /* ensure datum component is 0 for NULL transition values */\n> + newValue = (Datum) 0;\n> + }\n> +\n> if (!oldValueIsNull)\n> {\n> if (DatumIsReadWriteExpandedObject(oldValue,\n> \n> and a comment explaining why it's (now) safe to rely on datum\n> comparisons for\n> if (DatumGetPointer(newVal) != DatumGetPointer(pergroup->transValue))\n\nPushed something along those lines.\n\n\n> A separate question is whether it's worth adding code to\n> e.g. EEO_CASE(EEOP_FUNCEXPR_STRICT) also resetting *op->resvalue to\n> (Datum) 0. I don't personally don't think ensuring the datum is always\n> 0 when isnull true is all that helpful, if we can't guarantee it\n> everywhere. So I'm a bit loathe to add cycles to places that don't need\n> it, and are hot.\n\nI wonder if its worth adding a few valgrind annotations marking values\nas undefined when null? Would make it easier to catch such cases in the\nfuture.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Mon, 20 Jan 2020 23:35:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: aggregate crash"
}
]
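The fix committed at the end of the thread above zeroes the Datum component of a NULL transition value inside `ExecAggTransReparent`, so the caller's pointer-only comparison can no longer be fooled by a stale pointer left behind by a skipped strict function. A toy Python model of that bookkeeping (the names `Arena` and `transition` are invented for illustration; this is not PostgreSQL code):

```python
class Arena:
    """Stand-in for a long-lived memory context (the aggcontext)."""
    def __init__(self):
        self.live = set()
        self._next = 1

    def alloc(self):
        p = self._next
        self._next += 1
        self.live.add(p)
        return p

    def free(self, p):
        # Freeing a pointer this arena never allocated models the crash.
        assert p in self.live, f"stale pointer {p}"
        self.live.discard(p)

def transition(arena, state, result):
    """One advance of a pass-by-ref aggregate. state and result are
    (datum, isnull) pairs, with datum a fake pointer. Mirrors the fixed
    logic: copy a non-NULL result into the arena, canonicalize a NULL
    result's datum to 0, and free the prior value."""
    (old_d, old_null), (new_d, new_null) = state, result
    if new_d != old_d:                 # pointer-only test, as in nodeAgg
        if not new_null:
            new_d = arena.alloc()      # datumCopy into aggcontext
        else:
            new_d = 0                  # the fix: drop the stale pointer
        if not old_null:
            arena.free(old_d)          # pfree the prior transValue
    return (new_d, new_null)
```

With the pre-fix behavior (keeping `new_d` unchanged when `new_null` is true), the stale pointer would be stored alongside `isnull = true` and later handed to `arena.free` — the model's equivalent of the `pfree()` crash in the report.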
[
{
"msg_contents": "Folks,\n\nWhile noodling around with an upcoming patch to remove user-modifiable\nRULEs, I noticed that WHEN conditions were disallowed from INSTEAD OF\ntriggers for no discernible reason. This patch removes that\nrestriction.\n\nI noticed that columns were also disallowed in INSTEAD OF triggers,\nbut haven't dug further into those just yet.\n\nWhat say?\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 28 Dec 2019 04:01:14 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Allow WHEN in INSTEAD OF triggers"
},
{
"msg_contents": "On 2019-Dec-28, David Fetter wrote:\n\n> While noodling around with an upcoming patch to remove user-modifiable\n> RULEs, I noticed that WHEN conditions were disallowed from INSTEAD OF\n> triggers for no discernible reason. This patch removes that\n> restriction.\n\nIf you want to remove the restriction, your patch should add some test\ncases that show it working.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 28 Dec 2019 00:12:30 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow WHEN in INSTEAD OF triggers"
},
{
"msg_contents": "David Fetter <david@fetter.org> writes:\n> While noodling around with an upcoming patch to remove user-modifiable\n> RULEs, I noticed that WHEN conditions were disallowed from INSTEAD OF\n> triggers for no discernible reason. This patch removes that\n> restriction.\n\nThis seems like a remarkably bad idea. The point of an INSTEAD OF\ntrigger is that it is guaranteed to handle the operation. What's\nthe system supposed to do with rows the trigger doesn't handle?\n\nI notice that your patch doesn't even bother to test what happens,\nbut I'd argue that whatever it is, it's wrong. If you think that\n\"do nothing\" or \"throw an error\" is appropriate, you can code that\ninside the trigger. It's not PG's charter to make such a decision.\n\n\t\t\tregards, tom lane\n\nPS: I think your chances of removing rules are not good, either.\n\n\n",
"msg_date": "Fri, 27 Dec 2019 22:29:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Allow WHEN in INSTEAD OF triggers"
},
{
"msg_contents": "On Fri, Dec 27, 2019 at 10:29:15PM -0500, Tom Lane wrote:\n> David Fetter <david@fetter.org> writes:\n> > While noodling around with an upcoming patch to remove user-modifiable\n> > RULEs, I noticed that WHEN conditions were disallowed from INSTEAD OF\n> > triggers for no discernible reason. This patch removes that\n> > restriction.\n> \n> This seems like a remarkably bad idea. The point of an INSTEAD OF\n> trigger is that it is guaranteed to handle the operation. What's\n> the system supposed to do with rows the trigger doesn't handle?\n\nNothing. Why would it be different from the other forms of WHEN in\ntriggers?\n\n> I notice that your patch doesn't even bother to test what happens,\n> but I'd argue that whatever it is, it's wrong. If you think that\n> \"do nothing\" or \"throw an error\" is appropriate, you can code that\n> inside the trigger. It's not PG's charter to make such a decision.\n\nIf that's the case, why do we have WHEN for triggers at all? I'm just\nworking toward make them more consistent. From a UX perspective, it's\na lot simpler and clearer to do this in the trigger declaration than\nit is in the body.\n\n> PS: I think your chances of removing rules are not good, either.\n\nI suspect I have a lot of company in my view of user-modifiable\nrewrite rules as an experiment we can finally discontinue in view of\nits decisive results.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Sat, 28 Dec 2019 17:29:42 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Allow WHEN in INSTEAD OF triggers"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 12:12:30AM -0300, Alvaro Herrera wrote:\n> On 2019-Dec-28, David Fetter wrote:\n> \n> > While noodling around with an upcoming patch to remove user-modifiable\n> > RULEs, I noticed that WHEN conditions were disallowed from INSTEAD OF\n> > triggers for no discernible reason. This patch removes that\n> > restriction.\n> \n> If you want to remove the restriction, your patch should add some test\n> cases that show it working.\n\nTests added :)\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate",
"msg_date": "Sat, 28 Dec 2019 17:45:30 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": true,
"msg_subject": "Re: Allow WHEN in INSTEAD OF triggers"
},
{
"msg_contents": "On Sat, 28 Dec 2019 at 16:45, David Fetter <david@fetter.org> wrote:\n>\n> On Sat, Dec 28, 2019 at 12:12:30AM -0300, Alvaro Herrera wrote:\n> > On 2019-Dec-28, David Fetter wrote:\n> >\n> > > While noodling around with an upcoming patch to remove user-modifiable\n> > > RULEs, I noticed that WHEN conditions were disallowed from INSTEAD OF\n> > > triggers for no discernible reason. This patch removes that\n> > > restriction.\n> >\n> > If you want to remove the restriction, your patch should add some test\n> > cases that show it working.\n>\n> Tests added :)\n>\n\nI too think this is a bad idea.\n\nDoing nothing if the trigger's WHEN condition isn't satisfied is not\nconsistent with the way other types of trigger work -- with any other\ntype of trigger, if the WHEN condition doesn't evaluate to true, the\nquery goes ahead as if the trigger hadn't been there. So the most\nconsistent thing to do would be to attempt an auto-update if the\ntrigger isn't fired, and that leads to a whole other world of pain\n(e.g., you'd need 2 completely different query plans for the 2 cases,\nand more if you had views on top of other views).\n\nThe SQL spec explicitly states that INSTEAD OF triggers on views\nshould not have WHEN clauses, and for good reason. There are cases\nwhere it makes sense to deviate from the spec, but I don't think this\nis one of them.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 6 Jan 2020 16:15:38 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Allow WHEN in INSTEAD OF triggers"
}
] |
[
{
"msg_contents": "As I have published on\nhttps://abdulyadi.wordpress.com/2019/12/26/reinforce-data-validation-prevent-direct-table-modification/,\nthe patch adds a \"private_modify\" option in table creation. For example:\nCREATE TABLE mytable (id integer) WITH (private_modify=true);\n\nHaving the option set, even a superuser cannot insert/update/delete the\ntable outside an SQL or SPI-based function where complex data validation takes\nplace.\n\nThe patch has passed all regression tests provided in the PostgreSQL source\ncode (src/test/regress): make check, make installcheck, make\ninstallcheck-parallel, make check-world, make installcheck-world.\n\nRegards,\nAbdul Yadi",
"msg_date": "Sat, 28 Dec 2019 11:33:43 +0700",
"msg_from": "Abdul Yadi AH-2 <abdulyadi.datatrans@gmail.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 12.1 patch for \"private_modify\" table creation option for\n data validation reinforcement"
},
{
"msg_contents": "Abdul Yadi AH-2 <abdulyadi.datatrans@gmail.com> writes:\n> As I have published on\n> https://abdulyadi.wordpress.com/2019/12/26/reinforce-data-validation-prevent-direct-table-modification/,\n> the patch is to have \"private_modify\" option in table creation. For example:\n> CREATE TABLE mytable (id integer) WITH (private_modify=true);\n\n> Having the option set, even superuser can not insert/update/delete the\n> table outside SQL or SPI-based function where complex data validation takes\n> place.\n\nI do not actually see the point of this. It seems randomly inconsistent\nwith the normal SQL permissions mechanisms, and it's not very flexible,\nnor does it add any meaningful security AFAICS. Anybody who can\nexecute SQL can create a function. For that matter, you don't even\nneed to create a persistent function: you can just wrap the command\nin a DO block, and that'll bypass this restriction.\n\nIn what way is this better than the usual technique of putting the\ntable update logic into SECURITY DEFINER functions, and then not\ngranting update rights to anybody other than the owner of those\nfunctions? (Please don't say \"because it blocks superusers too\".\nThat's an anti-feature.)\n\nI'm also slightly astonished by your choice to tie the implementation\nto snapshots. If we do accept something like this, we most certainly\naren't going to do it like that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Dec 2019 12:56:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL 12.1 patch for \"private_modify\" table creation option\n for data validation reinforcement"
}
] |
[
{
"msg_contents": "I recently came across the need for a gcd function (actually I needed\nlcm) and was surprised that we didn't have one.\n\n\nSo here one is, using the basic Euclidean algorithm. I made one for\nsmallint, integer, and bigint.\n\n-- \n\nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support",
"msg_date": "Sat, 28 Dec 2019 17:58:52 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Greatest Common Divisor"
},
{
"msg_contents": "Bonsoir Vik,\n\n> I recently came across the need for a gcd function (actually I needed\n> lcm) and was surprised that we didn't have one.\n\nWhy not.\n\n> So here one is, using the basic Euclidean algorithm. I made one for\n> smallint, integer, and bigint.\n\nShould pg provide the LCM as well? Hmmm, probably not, too likely to \noverflow.\n\nShould there be a NUMERIC version as well? I'd say maybe yes.\n\nI'm wondering what it should do on N, 0 and 0, 0. Raise an error? Return \n0? Return 1? return N? There should be some logic and comments explaining \nit.\n\nI'd test with INT_MIN and INT_MAX.\n\nGiven that there are no overflows risk with the Euclidian descent, would\nit make sense that the int2 version call the int4 implementation?\n\nC modulo operator (%) is a pain because it is not positive remainder (2 % \n-3 == -1 vs 2 % 3 == 2, AFAICR). It does not seem that fixing the sign \nafterwards is the right thing to do. I'd rather turn both arguments \npositive before doing the descent.\n\nWhich raises an issue with INT_MIN by the way, which has no positive:-(\n\nAlso, the usual approach is to exchange args so that the largest is first, \nbecause there may be a software emulation if the hardware does not \nimplement modulo. At least it was the case with some sparc processors 25 \nyears ago:-)\n\nSee for instance (the int min value is probably not well handled):\n\n https://svn.cri.ensmp.fr/svn/linear/trunk/src/arithmetique/pgcd.c\n\nBasically, welcome to arithmetic:-)\n\n-- \nFabien.",
"msg_date": "Sat, 28 Dec 2019 19:15:03 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 28/12/2019 19:15, Fabien COELHO wrote:\n>\n>> So here one is, using the basic Euclidean algorithm. I made one for\n>> smallint, integer, and bigint.\n>\n> Should pg provide the LCM as well? Hmmm, probably not, too likely to\n> overflow.\n\n\nI decided against it for that reason.\n\n\n> Should there be a NUMERIC version as well? I'd say maybe yes.\n\n\nI thought about that, too, but also decided against it for this patch.\n\n\n> I'm wondering what it should do on N, 0 and 0, 0. Raise an error?\n> Return 0? Return 1? return N? There should be some logic and comments\n> explaining it.\n\n\nWell, gcd(N, 0) is N, and gcd(0, 0) is 0, so I don't see an issue here?\n\n\n> I'd test with INT_MIN and INT_MAX.\n\n\nOkay, I'll add tests for those, instead of the pretty much random values\nI have now.\n\n\n> Given that there are no overflows risk with the Euclidian descent, would\n> it make sense that the int2 version call the int4 implementation?\n\n\nMeh.\n\n\n>\n> C modulo operator (%) is a pain because it is not positive remainder\n> (2 % -3 == -1 vs 2 % 3 == 2, AFAICR). \n\n\nThis does not seem to be the case...\n\n\n> It does not seem that fixing the sign afterwards is the right thing to\n> do. I'd rather turn both arguments positive before doing the descent.\n\n\nWhy isn't it the right thing to do?\n\n\n> Which raises an issue with INT_MIN by the way, which has no positive:-(\n\n\nThat's an argument against abs-ing the input values. It's only an issue\nwith gcd(INT_MIN, INT_MIN) which currently returns INT_MIN. Any other\nvalue with INT_MIN will be 1 or something lower than INT_MAX.\n\n\n> Also, the usual approach is to exchange args so that the largest is\n> first, because there may be a software emulation if the hardware does\n> not implement modulo. At least it was the case with some sparc\n> processors 25 years ago:-)\n\n\nThe args will exchange themselves.\n\n\nThanks for the review!\n\nAttached is a new patch that changes the\nregression tests based on your comments (and another comment that I got\non irc to test gcd(b, a)).",
"msg_date": "Sat, 28 Dec 2019 23:03:59 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Bonjour Vik,\n\n>> Should there be a NUMERIC version as well? I'd say maybe yes.\n>\n> I thought about that, too, but also decided against it for this patch.\n\nHmmm. ISTM that int functions are available for numeric?\n\n>> I'm wondering what it should do on N, 0 and 0, 0. Raise an error?\n>> Return 0? Return 1? return N? There should be some logic and comments\n>> explaining it.\n>\n> Well, gcd(N, 0) is N, and gcd(0, 0) is 0, so I don't see an issue here?\n\nI think that there should be a comment.\n\n>> I'd test with INT_MIN and INT_MAX.\n>\n> Okay, I'll add tests for those, instead of the pretty much random values\n> I have now.\n>\n>> C modulo operator (%) is a pain because it is not positive remainder\n>> (2 % -3 == -1 vs 2 % 3 == 2, AFAICR).\n>\n> This does not seem to be the case...\n\nIndeed, I tested quickly with python, but it has yet another behavior as \nshown above, what a laugh!\n\nSo with C: 2 % -3 == 2, -2 % 3 == -2\n\nNote that AFAICS there is no integer i so that 3 * i - (-2) == -2.\n\n>> It does not seem that fixing the sign afterwards is the right thing to\n>> do. I'd rather turn both arguments positive before doing the descent.\n>\n> Why isn't it the right thing to do?\n\nBecause I do not trust C modulo as I had a lot of problems with it? :-)\n\nIf it works, but it should deserve a clear comment explaining why.\n\n>> Which raises an issue with INT_MIN by the way, which has no positive:-(\n>\n> That's an argument against abs-ing the input values. It's only an issue\n> with gcd(INT_MIN, INT_MIN) which currently returns INT_MIN.\n\nThat should be an error instead, because -INT_MIN cannot be represented?\n\n> Any other value with INT_MIN will be 1 or something lower than INT_MAX.\n\nLooks ok.\n\n>> Also, the usual approach is to exchange args so that the largest is\n>> first, because there may be a software emulation if the hardware does\n>> not implement modulo. At least it was the case with some sparc\n>> processors 25 years ago:-)\n>\n> The args will exchange themselves.\n\nYep, but after a possibly expensive software-emulated modulo operation?\n\n-- \nFabien.",
"msg_date": "Sun, 29 Dec 2019 08:30:26 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 29/12/2019 08:30, Fabien COELHO wrote:\n>\n>>> I'm wondering what it should do on N, 0 and 0, 0. Raise an error?\n>>> Return 0? Return 1? return N? There should be some logic and comments\n>>> explaining it.\n>>\n>> Well, gcd(N, 0) is N, and gcd(0, 0) is 0, so I don't see an issue here?\n>\n> I think that there should be a comment.\n\n\nDone.\n\n\n>>> It does not seem that fixing the sign afterwards is the right thing to\n>>> do. I'd rather turn both arguments positive before doing the descent.\n>>\n>> Why isn't it the right thing to do?\n>\n> Because I do not trust C modulo as I had a lot of problems with it? :-)\n>\n> If it works, but it should deserve a clear comment explaining why.\n\n\nSurely such a comment should be on the mod functions and not in this patch.\n\n\n>\n>>> Which raises an issue with INT_MIN by the way, which has no positive:-(\n>>\n>> That's an argument against abs-ing the input values. It's only an issue\n>> with gcd(INT_MIN, INT_MIN) which currently returns INT_MIN.\n>\n> That should be an error instead, because -INT_MIN cannot be represented?\n\n\nWhy should it error? Is INT_MIN not a valid divisor of INT_MIN? I\nadded a comment instead.\n\n\n>>> Also, the usual approach is to exchange args so that the largest is\n>>> first, because there may be a software emulation if the hardware does\n>>> not implement modulo. At least it was the case with some sparc\n>>> processors 25 years ago:-)\n>>\n>> The args will exchange themselves.\n>\n> Yep, but after a possibly expensive software-emulated modulo operation?\n>\n\nI'll just trust you on this. Swap added.\n\n\nThanks!\n\n-- \n\nVik",
"msg_date": "Sun, 29 Dec 2019 14:58:49 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 12/29/19 02:30, Fabien COELHO wrote:\n\n>>> C modulo operator (%) is a pain because it is not positive remainder\n>>> (2 % -3 == -1 vs 2 % 3 == 2, AFAICR).\n>>\n>> This does not seem to be the case...\n> ...\n> Because I do not trust C modulo as I had a lot of problems with it? :-)\n\nIf I recall correctly (and I'm traveling and away from those notes),\nthe exact semantics of C's % with negative operands was left\nimplementation-defined until, was it, C99 ?\n\nSo it might be ok to rely on the specified C99 behavior (whichever\nbehavior that is, he wrote, notelessly) for PG 12 and later, where\nC99 is expected.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Sun, 29 Dec 2019 11:50:15 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "\nHello,\n\n>> Because I do not trust C modulo as I had a lot of problems with it?:-)\n>\n> If I recall correctly (and I'm traveling and away from those notes),\n> the exact semantics of C's % with negative operands was left\n> implementation-defined until, was it, C99 ?\n\nIndeed, my woes with C % started before that date:-)\n\nBy Googling the C99 spec, I found: \"When integers are divided, the result \nof the / operator is the algebraic quotient with any fractional part \ndiscarded (aka truncation toward zero). If the quotient a/b is \nrepresentable, the expression (a/b)*b + a%b shall equal a.\"\n\nLet a = 2 and b = -3, then a/b == 0 (-0.666 truncated toward zero), then\n\n (a/b)*b + a%b == a\n\n=> 0 * -3 + (2 % -3) == 2\n\n=> 2 % -3 == 2\n\nThen with a = -2, b = 3, then a/b == 0 (same as above), and the same \nreasoning leads to\n\n -2 % 3 == -2\n\nWhich is indeed what was produced with C, but not with Python.\n\nThe good news is that the absolute value of the modulo is the modulo in \nthe usual sense, which is what is needed for the Euclidian descent and \nallows fixing the sign afterwards, as Vik was doing.\n\n> So it might be ok to rely on the specified C99 behavior (whichever\n> behavior that is, he wrote, notelessly) for PG 12 and later, where\n> C99 is expected.\n\nYep, probably with a comment.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 29 Dec 2019 18:13:32 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Out of curiosity, what was the original use-case for this?\n\nI'm not objecting to adding it, I'm just curious. In fact, I think\nthat if we do add this, then we should probably add lcm() at the same\ntime, since handling its overflow cases is sufficiently non-trivial to\njustify not requiring users to have to implement it themselves.\n\nI don't like the INT_MIN handling though:\n\nselect gcd(-2147483648,0);\n gcd\n-------------\n -2147483648\n(1 row)\n\nselect gcd(-2147483648,-2147483648);\n gcd\n-------------\n -2147483648\n(1 row)\n\nNormally gcd() returns a positive integer, and gcd(a,0) = gcd(a,a) =\nabs(a). But since abs(INT_MIN) cannot be represented as a 32-bit\ninteger, both those cases should throw an integer-out-of-range error.\n\nIn addition, the following case should produce 1, but for me it\nproduces an error. This is actually going to be platform-dependent as\nit is currently implemented (see the comments in int4div and int4mod):\n\nselect gcd(-2147483648,-1);\nERROR: floating-point exception\nDETAIL: An invalid floating-point operation was signaled. This\nprobably means an out-of-range result or an invalid operation, such as\ndivision by zero.\n\nso there needs to be some special-case INT_MIN handling at the start\nto deal with these cases.\n\nAFAIK, what gcd(0,0) should be is not well defined, but most common\nimplementations seem to return 0 (I checked Matlab, python and Java's\nstandard libraries). This seems like a reasonable extrapolation of the\nrule gcd(a,0) = gcd(0,a) = gcd(a,a) = abs(a), so I don't have a\nproblem with doing the same here, but I think that it should be\ndocumented (e.g., see [1]), if for no other reason than users might\nexpect it to be safe to divide by the result.\n\nRegards,\nDean\n\n[1] https://www.mathworks.com/help/matlab/ref/gcd.html\n\n\n",
"msg_date": "Thu, 2 Jan 2020 14:50:44 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Greetings,\n\n* Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n> I'm not objecting to adding it, I'm just curious. In fact, I think\n> that if we do add this, then we should probably add lcm() at the same\n> time, since handling its overflow cases is sufficiently non-trivial to\n> justify not requiring users to have to implement it themselves.\n\nI tend to agree with this.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 2 Jan 2020 09:57:12 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n>> I'm not objecting to adding it, I'm just curious. In fact, I think\n>> that if we do add this, then we should probably add lcm() at the same\n>> time, since handling its overflow cases is sufficiently non-trivial to\n>> justify not requiring users to have to implement it themselves.\n\n> I tend to agree with this.\n\nDoes this impact the decision about whether we need a variant for\nnumeric? I was leaning against that, primarily because (a)\nit'd introduce a set of questions about what to do with non-integral\ninputs, and (b) it'd make the patch quite a lot larger, I imagine.\nBut a variant of lcm() that returns numeric would have much more\nresistance to overflow.\n\nMaybe we could just define \"lcm(bigint, bigint) returns numeric\"\nand figure that that covers all cases, but it feels slightly\nweird. You couldn't do lcm(lcm(a,b),c) without casting.\nI guess that particular use-case could be addressed with\n\"lcm(variadic bigint[]) returns numeric\", but that's getting\nreally odd.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 10:12:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 12:15 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Bonsoir Vik,\n>\n> > I recently came across the need for a gcd function (actually I needed\n> > lcm) and was surprised that we didn't have one.\n>\n> Why not.\n\nProliferation of code in the public namespace; it can displace code\nthat is written by others during the upgrade.\n\nmerlin\n\n\n",
"msg_date": "Thu, 2 Jan 2020 10:59:35 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "\n> Normally gcd() returns a positive integer, and gcd(a,0) = gcd(a,a) =\n> abs(a). But since abs(INT_MIN) cannot be represented as a 32-bit\n> integer, both those cases should throw an integer-out-of-range error.\n\nI'm also in favor of that option, rather than returning a negative \nresult.\n\nAbout lcm(a, b): a / gcd(a, b) * b, at least if a & b are positive. If \nnot, some thoughts are needed:-)\n\nReturning a NUMERIC as suggested by Tom would solve the overflow problem \nby sending it back to the user who has to cast. This looks ok to me.\n\nMaybe we could provide \"int4 lcm(int2, int2)\", \"int8 lcm(int4, int4)\", as \nISTM that there cannot be overflows on those (eg for the latter: lcm <= \na*b, a & b are 31 non-signed bits, 62 bits are needed, 63 are available).\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 3 Jan 2020 09:43:40 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 2020-01-02 15:50, Dean Rasheed wrote:\n> Out of curiosity, what was the original use-case for this?\n\nYeah, I'm wondering, is this useful for any typical analytics or \nbusiness application? Otherwise, abstract algebra functionality seems a \nbit out of scope.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 10:00:14 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-01-02 15:50, Dean Rasheed wrote:\n>> Out of curiosity, what was the original use-case for this?\n\n> Yeah, I'm wondering, is this useful for any typical analytics or \n> business application? Otherwise, abstract algebra functionality seems a \n> bit out of scope.\n\nNobody complained when we added sinh, cosh, tanh, asinh, acosh, atanh\nlast year, so I'm feeling skeptical of claims that gcd should be out\nof scope.\n\nNow, those functions were just exposing libc functionality, so there\nwasn't a lot of code to write. There might be a good argument that\ngcd isn't useful enough to justify the amount of code we'd have to\nadd (especially if we allow it to scope-creep into needing to deal\nwith \"numeric\" calculations). But I'm not on board with just\ndismissing it as uninteresting.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 10:22:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Now, those functions were just exposing libc functionality, so there\n> wasn't a lot of code to write. There might be a good argument that\n> gcd isn't useful enough to justify the amount of code we'd have to\n> add (especially if we allow it to scope-creep into needing to deal\n> with \"numeric\" calculations). But I'm not on board with just\n> dismissing it as uninteresting.\n\nYeah. There's always the question with things like this as to whether\nwe ought to push certain things into contrib modules that are not\ninstalled by default to avoid bloating the set of things built into\nthe core server. But it's hard to know where to draw the line. There's\nno objective answer to the question of whether gcd() or sinh() is more\nuseful to have in core; each is more useful to people who need that\none but not the other, and trying to guess whether more or fewer\npeople need gcd() than sinh() seems like a fool's errand. Perhaps in\nretrospect we would be better off having a 'math' extension where a\nlot of this stuff lives, and people who want that extension can\ninstall it and others need not bother. But, to try to create that now\nand move things there would break upgrades for an exceedingly marginal\nbenefit. I don't really like the namespace pollution that comes with\naccepting feature requests like this, but it's hard to argue that it's\na serious show-stopper or that the cure is any less bad than the\ndisease. And I'm sure that I'd be much more likely to use gcd() or\nlcm() in a query than tanh()...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 11:24:32 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 2020-Jan-03, Robert Haas wrote:\n\n> On Fri, Jan 3, 2020 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Now, those functions were just exposing libc functionality, so there\n> > wasn't a lot of code to write. There might be a good argument that\n> > gcd isn't useful enough to justify the amount of code we'd have to\n> > add (especially if we allow it to scope-creep into needing to deal\n> > with \"numeric\" calculations). But I'm not on board with just\n> > dismissing it as uninteresting.\n> \n> Yeah. There's always the question with things like this as to whether\n> we ought to push certain things into contrib modules that are not\n> installed by default to avoid bloating the set of things built into\n> the core server. But it's hard to know where to draw the line. There's\n> no objective answer to the question of whether gcd() or sinh() is more\n> useful to have in core;\n\nThe SQL standard's feature T622 requires trigonometric functions, while\nit doesn't list gcd() or anything of the sort, so there's that.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 14:37:19 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 10:24 AM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jan 3, 2020 at 10:23 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > Now, those functions were just exposing libc functionality, so there\n> > wasn't a lot of code to write. There might be a good argument that\n> > gcd isn't useful enough to justify the amount of code we'd have to\n> > add (especially if we allow it to scope-creep into needing to deal\n> > with \"numeric\" calculations). But I'm not on board with just\n> > dismissing it as uninteresting.\n>\n> Yeah. There's always the question with things like this as to whether\n> we ought to push certain things into contrib modules that are not\n> installed by default to avoid bloating the set of things built into\n> the core server. But it's hard to know where to draw the line.\n\nJust stop doing it. It's very little extra work to package an item\ninto an extension and this protects your hapless users who might have\nimplemented a function called gcd() that does something different.\nIdeally, the public namespace should contain (by default) only sql\nstandard functions with all non-standard material in an appropriate\nextension. Already released material is obviously problematic and\nneeds more thought but we ought to at least stop making the problem\nworse if possible.\n\nmerlin\n\n\n",
"msg_date": "Fri, 3 Jan 2020 12:10:19 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 02/01/2020 16:12, Tom Lane wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n>> * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n>>> I'm not objecting to adding it, I'm just curious. In fact, I think\n>>> that if we do add this, then we should probably add lcm() at the same\n>>> time, since handling its overflow cases is sufficiently non-trivial to\n>>> justify not requiring users to have to implement it themselves.\n>> I tend to agree with this.\n> Does this impact the decision about whether we need a variant for\n> numeric? I was leaning against that, primarily because (a)\n> it'd introduce a set of questions about what to do with non-integral\n> inputs, and (b) it'd make the patch quite a lot larger, I imagine.\n> But a variant of lcm() that returns numeric would have much more\n> resistance to overflow.\n>\n> Maybe we could just define \"lcm(bigint, bigint) returns numeric\"\n> and figure that that covers all cases, but it feels slightly\n> weird. You couldn't do lcm(lcm(a,b),c) without casting.\n> I guess that particular use-case could be addressed with\n> \"lcm(variadic bigint[]) returns numeric\", but that's getting\n> really odd.\n\n\nOkay. Here is a version that should handle everyone's comments.\n\n\ngcd() is now strictly positive, so INT_MIN is no longer a valid result.\n\n\nI added an lcm() function. It returns the same type as its arguments so\noverflow is possible. I made this choice because int2mul returns int2\n(and same for its friends). One can just cast to a bigger integer if\nneeded.\n\n\nBecause of that, I added a version of gcd() and lcm() for numeric. This\nwas my first time working with numeric so reviewers should pay extra\nattention there, please.\n\n\nPatch attached.\n\n-- \n\nVik Fearing",
"msg_date": "Fri, 3 Jan 2020 19:34:55 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 1:10 PM Merlin Moncure <mmoncure@gmail.com> wrote:\n> Just stop doing it. It's very little extra work to package an item\n> into an extension and this protects your hapless users who might have\n> implemented a function called gcd() that does something different.\n> Ideally, the public namespace should contain (by default) only sql\n> standard functions with all non-standard material in an appropriate\n> extension. Already released material is obviously problematic and\n> needs more thought but we ought to at least stop making the problem\n> worse if possible.\n\nThere are counter-arguments to that, though. Maintaining a lot of\nextensions with only one or two functions in them is a nuisance.\nHaving things installed by default is convenient for wanting to use\nthem. Maintaining contrib code so that it works whether or not the SQL\ndefinitions have been updated via ALTER EXTENSION .. UPDATE takes some\nwork and thought, and sometimes we screw it up.\n\nI don't find any position on this topic to be without merit.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 13:46:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 1/3/20 1:46 PM, Robert Haas wrote:\n> On Fri, Jan 3, 2020 at 1:10 PM Merlin Moncure <mmoncure@gmail.com> wrote:\n>> Just stop doing it. It's very little extra work to package an item\n>> into an extension and this protects your hapless users who might have\n>> implemented a function called gcd() that does something different.\n>> ...\n> There are counter-arguments to that, though. Maintaining a lot of\n> extensions with only one or two functions in them is a nuisance.\n> Having things installed by default is convenient for wanting to use\n> them. Maintaining contrib code so that it works whether or not the SQL\n> definitions have been updated via ALTER EXTENSION .. UPDATE takes some\n> work and thought, and sometimes we screw it up.\n\nIs there a middle ground staring us in the face, where certain things\ncould be added in core, but in a new schema like pg_math (pg_ !), so\nif you want them you put them on your search path or qualify them\nexplicitly, and if you don't, you don't?\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 3 Jan 2020 13:57:42 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 1:57 PM Chapman Flack <chap@anastigmatix.net> wrote:\n> Is there a middle ground staring us in the face, where certain things\n> could be added in core, but in a new schema like pg_math (pg_ !), so\n> if you want them you put them on your search path or qualify them\n> explicitly, and if you don't, you don't?\n\nI guess, but it seems like a patch whose mandate is to add one or two\nfunctions should not be burdened with inventing an entirely new way to\ndo extensibility. Also, I'm not entirely sure that really addresses\nall the concerns. Part of my concern about continually adding new\nfunctions to core comes from the fact that it bloats the core code,\nand moving things to another schema does not help with that. It does\npotentially help with the namespace pollution issue, but how much of\nan issue is that anyway? Unless you've set up an unusual search_path\nconfiguration, your own schemas probably precede pg_catalog in your\nsearch path, besides which it seems unlikely that many people have a\ngcd() function that does anything other than take the greatest common\ndivisor.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 14:11:19 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "\nBonsoir Vik,\n\n +int4gcd_internal(int32 arg1, int32 arg2)\n +{\n + int32 swap;\n +\n + /*\n + * Put the greater value in arg1.\n + * This would happen automatically in the loop below, but avoids an\n + * expensive modulo simulation on some architectures.\n + */\n + if (arg1 < arg2)\n + {\n + swap = arg1;\n + arg1 = arg2;\n + arg2 = swap;\n + }\n\n\nThe point of swapping is to avoid a possibly expensive modulo, but this \nshould be done on absolute values, otherwise it may not achieve its \npurpose as stated by the comment?\n\n> gcd() is now strictly positive, so INT_MIN is no longer a valid result.\n\nOk.\n\nI'm unsure about gcd(INT_MIN, 0) should error. Possibly 0 would be \nnicer?\n\n-- \nFabien.\n\n\n",
"msg_date": "Fri, 3 Jan 2020 20:14:13 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 1/3/20 2:11 PM, Robert Haas wrote:\n> and moving things to another schema does not help with that. It does\n> potentially help with the namespace pollution issue, but how much of\n> an issue is that anyway? Unless you've set up an unusual search_path\n> configuration, your own schemas probably precede pg_catalog in your\n> search path, besides which it seems unlikely that many people have a\n> gcd() function that does anything other than take the greatest common\n> divisor.\n\nAs seen in this thread though, there can be edge cases of \"take the\ngreatest common divisor\" that might not be identically treated in a\nthoroughly-reviewed addition to core as in someone's hastily-rolled\nlocal version.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 3 Jan 2020 14:27:25 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 2:27 PM Chapman Flack <chap@anastigmatix.net> wrote:\n> On 1/3/20 2:11 PM, Robert Haas wrote:\n> > and moving things to another schema does not help with that. It does\n> > potentially help with the namespace pollution issue, but how much of\n> > an issue is that anyway? Unless you've set up an unusual search_path\n> > configuration, your own schemas probably precede pg_catalog in your\n> > search path, besides which it seems unlikely that many people have a\n> > gcd() function that does anything other than take the greatest common\n> > divisor.\n>\n> As seen in this thread though, there can be edge cases of \"take the\n> greatest common divisor\" that might not be identically treated in a\n> thoroughly-reviewed addition to core as in someone's hastily-rolled\n> local version.\n\nTrue, but because of the way search_path is typically set, they'd\nprobably continue to get their own version anyway, so I'm not sure\nwhat the problem is.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 14:32:01 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 2020-01-03 16:22, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> On 2020-01-02 15:50, Dean Rasheed wrote:\n>>> Out of curiosity, what was the original use-case for this?\n> \n>> Yeah, I'm wondering, is this useful for any typical analytics or\n>> business application? Otherwise, abstract algebra functionality seems a\n>> bit out of scope.\n> \n> Nobody complained when we added sinh, cosh, tanh, asinh, acosh, atanh\n> last year, so I'm feeling skeptical of claims that gcd should be out\n> of scope.\n\nGeometry is generally in scope, though, for Postgres specifically and \nfor databases in general.\n\nAbstract algebra is not in scope, so far, and we still haven't been told \nthe use case for this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 21:09:38 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 1:32 PM Robert Haas <robertmhaas@gmail.com> wrote:\n>\n> On Fri, Jan 3, 2020 at 2:27 PM Chapman Flack <chap@anastigmatix.net> wrote:\n> > On 1/3/20 2:11 PM, Robert Haas wrote:\n> > > and moving things to another schema does not help with that. It does\n> > > potentially help with the namespace pollution issue, but how much of\n> > > an issue is that anyway? Unless you've set up an unusual search_path\n> > > configuration, your own schemas probably precede pg_catalog in your\n> > > search path, besides which it seems unlikely that many people have a\n> > > gcd() function that does anything other than take the greatest common\n> > > divisor.\n> >\n> > As seen in this thread though, there can be edge cases of \"take the\n> > greatest common divisor\" that might not be identically treated in a\n> > thoroughly-reviewed addition to core as in someone's hastily-rolled\n> > local version.\n>\n> True, but because of the way search_path is typically set, they'd\n> probably continue to get their own version anyway, so I'm not sure\n> what the problem is.\n\nIs that right? Default search_path is for pg_catalog to resolve before\npublic. Lightly testing with a hand rolled pg_advisory_lock\nimplementation that raises a notice, my default database seemed to\nprefer the built-in function. Maybe I'm not following you.\n\n> There are counter-arguments to that, though. Maintaining a lot of\n> extensions with only one or two functions in them is a nuisance.\n> Having things installed by default is convenient for wanting to use\n> them. Maintaining contrib code so that it works whether or not the SQL\n> definitions have been updated via ALTER EXTENSION .. UPDATE takes some\n> work and thought, and sometimes we screw it up.\n\nIf the external contract changes (which seems likely for gcd) then I\nwould much rather have the core team worry about this than force your\nusers to worry about it, which is what putting the function in core\nwould require them to do (if version < x call it this way, > y then\nthat way etc). This is exactly why we shouldn't be putting non\nstandard items in core (maybe excepting some pg_ prefixed\nadministration functions).\n\nNow, it's quite unfair to $OP to saddle his proposal and patch with\nthe broader considerations of core/extension packaging, so if some\nkind of rational framework can be applied to the NEXT submission, or at\nleast a discussion about this can start, those are all good options.\nBut we need to start from somewhere, and moving forward with, \"If it's\nnot sql standard or prefixed with pg_, it ought not to be in\npg_catalog\" might be a good way to open the discussion.\n\nmerlin\n\n\n",
"msg_date": "Fri, 3 Jan 2020 14:51:37 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 1/3/20 3:09 PM, Peter Eisentraut wrote:\n> Geometry is generally in scope, though, for Postgres specifically and\n> for databases in general.\n> \n> Abstract algebra is not in scope, so far, and we still haven't been told\n> the use case for this.\n\nIt's funny, I think I've used gcd and lcm in real life way more often\nthan sinh and cosh, maybe even as often as sin and cos. For example,\nhow many times around will I have to go with this engine crankshaft\nto be able to confirm the painted links on the timing chain really\ndo line up with the sprocket marks? (Need to count the sprocket\nteeth and the chain links.)\n\nOr, if I'm cycling through two different-length tuple stores, how\nmany times before the same tuples coincide again? That isn't a question\nI've yet had an occasion to face, but I don't have to squint real hard\nto imagine it arising in a database in some situation or other. This\nis just me riffing, as of course I'm not the person who had such a\npressing use case as to justify sitting down to write the patch.\n\nAnother funny thing: this message sent me googling just to indulge\nmy own \"is gcd more abstract algebra or number theory?\" quibble*, and\nI ended up discovering there are more algorithms for it than the\nEuclidean one I remember.\n\nThere's a binary one using only ands, subtractions, and shifts,\nasymptotically the same as Euclid but perhaps somewhat faster:\nhttps://en.wikipedia.org/wiki/Binary_GCD_algorithm\n\nIt looks fairly simple to code up, if not quite as short as Euclid.\n\nThere's at least one specific to representations like numeric:\nhttps://en.wikipedia.org/wiki/Lehmer%27s_GCD_algorithm\n\n... considerably more effort to implement though.\n\nIt might be possible, if there are crypto libraries we're already\nlinking to for other reasons like SSL, there could be good\nbig-number gcd implementations already in there.\n\nRegards,\n-Chap\n\n\n* maybe I've decided to call it number theory if the things being gcd'd\nare integers, abstract algebra if they belong to other commutative rings.\n\n\n",
"msg_date": "Fri, 3 Jan 2020 16:06:31 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
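The binary GCD algorithm linked above can be sketched as follows. This is a minimal illustration only, assuming non-negative inputs; the function name `binary_gcd` and its shape are hypothetical, not anything from the patch under discussion:

```c
#include <stdint.h>

/*
 * Binary GCD (Stein's algorithm): replaces the modulo of Euclid's
 * method with shifts, subtractions, and comparisons only.
 * gcd(u, 0) = u and gcd(0, v) = v by convention.
 */
static uint64_t
binary_gcd(uint64_t u, uint64_t v)
{
	int			shift;

	if (u == 0)
		return v;
	if (v == 0)
		return u;

	/* Factor out the powers of two common to u and v. */
	for (shift = 0; ((u | v) & 1) == 0; shift++)
	{
		u >>= 1;
		v >>= 1;
	}

	/* Make u odd; its remaining factors of two cannot be common. */
	while ((u & 1) == 0)
		u >>= 1;

	do
	{
		/* v may still be even; its factors of two cannot be common. */
		while ((v & 1) == 0)
			v >>= 1;

		/* Both odd now: subtract the smaller from the larger. */
		if (u > v)
		{
			uint64_t	t = u;

			u = v;
			v = t;
		}
		v -= u;				/* result is even, halved next pass */
	} while (v != 0);

	/* Restore the common factors of two. */
	return u << shift;
}
```

As the message notes, this trades one division per Euclid step for a few cheap bit operations, which is attractive exactly on the architectures where idiv is slow.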
{
"msg_contents": "On 2020-Jan-03, Merlin Moncure wrote:\n\n> On Fri, Jan 3, 2020 at 1:32 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> > True, but because of the way search_path is typically set, they'd\n> > probably continue to get their own version anyway, so I'm not sure\n> > what the problem is.\n> \n> Is that right? Default search_path is for pg_catalog to resolve before\n> public. Lightly testing with a hand rolled pg_advisory_lock\n> implementation that raises a notice, my default database seemed to\n> prefer the built-in function. Maybe I'm not following you.\n\nMaybe a very simple solution is indeed to have a separate pg_math or\npg_extra or whatever, which by default is *last* in the search_path.\nThat would make a user's gcd() be chosen preferentially, if one exists.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 18:10:43 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 1/3/20 4:10 PM, Alvaro Herrera wrote:\n\n> Maybe a very simple solution is indeed to have a separate pg_math or\n> pg_extra or whatever, which by default is *last* in the search_path.\n> That would make a user's gcd() be chosen preferently, if one exists.\n\nI'm liking the direction this is going.\n\nRegards,\n-Chap\n\n\n",
"msg_date": "Fri, 3 Jan 2020 16:14:22 -0500",
"msg_from": "Chapman Flack <chap@anastigmatix.net>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 3:51 PM Merlin Moncure <mmoncure@gmail.com> wrote:\n> Is that right? Default search_path is for pg_catalog to resolve before\n> public. Lightly testing with a hand rolled pg_advisory_lock\n> implementation that raises a notice, my default database seemed to\n> prefer the built-in function. Maybe I'm not following you.\n\nNope, I'm just wrong. Sorry.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 17:09:37 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 4:11 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> Maybe a very simple solution is indeed to have a separate pg_math or\n> pg_extra or whatever, which by default is *last* in the search_path.\n> That would make a user's gcd() be chosen preferentially, if one exists.\n\nThen every time we add a function, or anything else, we can bikeshed\nabout whether it should go in pg_catalog or pg_extra!\n\nFWIW, EnterpriseDB has something like this for Advanced Server, and it\nactually adds a fair amount of complexity, much of it around\nOverrideSearchPath. It's not unmanageable, but it's not trivial,\neither.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 17:13:35 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 2020-Jan-03, Robert Haas wrote:\n\n> On Fri, Jan 3, 2020 at 4:11 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> > Maybe a very simple solution is indeed to have a separate pg_math or\n> > pg_extra or whatever, which by default is *last* in the search_path.\n> > That would make a user's gcd() be chosen preferentially, if one exists.\n> \n> Then every time we add a function, or anything else, we can bikeshed\n> about whether it should go in pg_catalog or pg_extra!\n\nYeah, I was just thinking about that :-) I was thinking that all\nstandard-mandated functions, as well as system functions, should be in\npg_catalog; and otherwise stuff should not get in the user's way.\n\n> FWIW, EnterpriseDB has something like this for Advanced Server, and it\n> actually adds a fair amount of complexity, much of it around\n> OverrideSearchPath. It's not unmanageable, but it's not trivial,\n> either.\n\nOh, hmm. okay.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 3 Jan 2020 19:31:31 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 03/01/2020 20:14, Fabien COELHO wrote:\n>\n> Bonsoir Vik,\n>\n> +int4gcd_internal(int32 arg1, int32 arg2)\n> +{\n> + int32 swap;\n> +\n> + /*\n> + * Put the greater value in arg1.\n> + * This would happen automatically in the loop below, but\n> avoids an\n> + * expensive modulo simulation on some architectures.\n> + */\n> + if (arg1 < arg2)\n> + {\n> + swap = arg1;\n> + arg1 = arg2;\n> + arg2 = swap;\n> + }\n>\n>\n> The point of swapping is to avoid a possibly expensive modulo, but this\n> should be done on absolute values, otherwise it may not achieve its\n> purpose as stated by the comment?\n\n\nAh, true. How widespread are these architectures that need this special\ntreatment? Is it really worth handling?\n\n\n> I'm unsure about gcd(INT_MIN, 0) should error. Possibly 0 would be nicer?\n\n\nWhat justification for that do you have?\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Fri, 3 Jan 2020 23:57:54 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> On 2020-Jan-03, Robert Haas wrote:\n>> Then every time we add a function, or anything else, we can bikeshed\n>> about whether it should go in pg_catalog or pg_extra!\n\n> Yeah, I was just thinking about that :-) I was thinking that all\n> standard-mandated functions, as well as system functions, should be in\n> pg_catalog; and otherwise stuff should not get in the user's way.\n\nI think that ship sailed a long time ago, frankly.\n\nWhy is it that this particular proposal is such a problem that we\nneed to redesign how we add features? There are currently 2977\nrows in a default installation's pg_proc, with 2447 unique values\nof proname. Certainly at least a couple of thousand of them are not\nstandard-mandated; despite which there are only 357 named 'pg_something'.\ngcd and/or lcm are not going to move the needle noticeably.\n\nI'd also submit that just pushing a bunch of built-in stuff into a\nschema that's behind the users' schema instead of in front doesn't\nmean that all is magically better. There are still going to be the\nsame issues that make CVE-2018-1058 such a problem, but now we get\nto have them in both directions not just one:\n\n* a system-supplied function in \"pg_extra\" could still capture a call\naway from a user-supplied one in an earlier schema, if it is a better\nmatch to the actual argument types;\n\n* malicious users now have a much better chance to capture other\npeople's calls to \"pg_extra\" functions, since they can just drop an\nexact match into public.\n\n(BTW, I'm pretty sure we've had this conversation before. I\ndefinitely recall a proposal to try to move functions not meant\nfor user consumption at all, such as index support functions,\ninto a whole other schema that wouldn't be in the path period.\nIt went nowhere, partly because those functions don't seem to\nbe big problems in practice.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 18:00:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 2020-01-03 18:00:01 -0500, Tom Lane wrote:\n> Alvaro Herrera <alvherre@2ndquadrant.com> writes:\n> > On 2020-Jan-03, Robert Haas wrote:\n> >> Then every time we add a function, or anything else, we can bikeshed\n> >> about whether it should go in pg_catalog or pg_extra!\n> \n> > Yeah, I was just thinking about that :-) I was thinking that all\n> > standard-mandated functions, as well as system functions, should be in\n> > pg_catalog; and otherwise stuff should not get in the user's way.\n> \n> I think that ship sailed a long time ago, frankly.\n> \n> Why is it that this particular proposal is such a problem that we\n> need to redesign how we add features? There are currently 2977\n> rows in a default installation's pg_proc, with 2447 unique values\n> of proname. Certainly at least a couple of thousand of them are not\n> standard-mandated; despite which there are only 357 named 'pg_something'.\n> gcd and/or lcm are not going to move the needle noticeably.\n> \n> I'd also submit that just pushing a bunch of built-in stuff into a\n> schema that's behind the users' schema instead of in front doesn't\n> mean that all is magically better. There are still going to be the\n> same issues that make CVE-2018-1058 such a problem, but now we get\n> to have them in both directions not just one:\n> \n> * a system-supplied function in \"pg_extra\" could still capture a call\n> away from a user-supplied one in an earlier schema, if it is a better\n> match to the actual argument types;\n> \n> * malicious users now have a much better chance to capture other\n> people's calls to \"pg_extra\" functions, since they can just drop an\n> exact match into public.\n> \n> (BTW, I'm pretty sure we've had this conversation before. I\n> definitely recall a proposal to try to move functions not meant\n> for user consumption at all, such as index support functions,\n> into a whole other schema that wouldn't be in the path period.\n> It went nowhere, partly because those functions don't seem to\n> be big problems in practice.)\n\n+1 to all of this.\n\n\n",
"msg_date": "Fri, 3 Jan 2020 15:30:22 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 03/01/2020 20:14, Fabien COELHO wrote:\n>> The point of swapping is to avoid a possibly expensive modulo, but this\n>> should be done on absolute values, otherwise it may not achieve its\n>> purpose as stated by the comment?\n\n> Ah, true. How widespread are these architectures that need this special\n> treatment? Is it really worth handling?\n\nOn some older RISC architectures, integer division is really slow, like\nslower than floating-point. I'm not sure if that's true on any platform\npeople still care about though. In recent years, CPU architects have been\nable to throw all the transistors they needed at such problems. On a\nmachine with single-cycle divide, it's likely that the extra\ncompare-and-branch is a net loss.\n\nMight be worth checking it on ARM in particular, as being a RISC\narchitecture that's still popular.\n\nAlso, if we end up having a \"numeric\" implementation, it absolutely is\nworth it for that, because there is nothing cheap about numeric_div.\nI'd be sort of inclined to have the swap in the other implementations\njust to keep the algorithms as much alike as possible.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 18:49:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
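The shape being discussed can be sketched like this (a hypothetical helper, not the patch's actual code): take absolute values first, then do the compare-and-swap, so the first `u % v` is never a wasted division. Negating through unsigned arithmetic keeps the absolute value of INT32_MIN well defined; mapping the out-of-range 2^31 result back to an error is left to the caller, as in the patch:

```c
#include <stdint.h>

/*
 * Euclid's algorithm on |arg1| and |arg2|.  Negating via uint32_t is
 * well defined even for INT32_MIN (whose absolute value, 2^31, does
 * not fit in int32_t); a real int4 gcd would raise an error when the
 * result is 2^31 rather than return it.
 */
static uint32_t
gcd_abs(int32_t arg1, int32_t arg2)
{
	uint32_t	u = (arg1 < 0) ? 0u - (uint32_t) arg1 : (uint32_t) arg1;
	uint32_t	v = (arg2 < 0) ? 0u - (uint32_t) arg2 : (uint32_t) arg2;

	/*
	 * Put the greater value in u.  Without this, a smaller u would cost
	 * one full division (u % v == u) merely to swap the operands.
	 */
	if (u < v)
	{
		uint32_t	swap = u;

		u = v;
		v = swap;
	}

	while (v != 0)
	{
		uint32_t	tmp = u % v;

		u = v;
		v = tmp;
	}

	return u;
}
```

Working on absolute values up front addresses Fabien's point: the swap comment only holds when the magnitudes, not the signed values, are compared.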
{
"msg_contents": "Hi,\n\nOn 2020-01-03 18:49:18 -0500, Tom Lane wrote:\n> On some older RISC architectures, integer division is really slow, like\n> slower than floating-point. I'm not sure if that's true on any platform\n> people still care about though. In recent years, CPU architects have been\n> able to throw all the transistors they needed at such problems. On a\n> machine with single-cycle divide, it's likely that the extra\n> compare-and-branch is a net loss.\n\nWhich architecture has single cycle division? I think it's way above\nthat, based on profiles I've seen. And Agner seems to back me up:\nhttps://www.agner.org/optimize/instruction_tables.pdf\n\nThat lists a 32/64 idiv with a latency of ~26/~42-95 cycles, even on a\nmodern uarch like skylake-x.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 3 Jan 2020 16:10:01 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2020-01-03 18:49:18 -0500, Tom Lane wrote:\n>> On a machine with single-cycle divide, it's likely that the extra\n>> compare-and-branch is a net loss.\n\n> Which architecture has single cycle division? I think it's way above\n> that, based on profiles I've seen. And Agner seems to back me up:\n> https://www.agner.org/optimize/instruction_tables.pdf\n> That lists a 32/64 idiv with a latency of ~26/~42-95 cycles, even on a\n> modern uarch like skylake-x.\n\nHuh. I figured Intel would have thrown sufficient transistors at that\nproblem by now. But per that result, it's worth having the swap step\neven on CISC, never mind RISC.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 19:17:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 03/01/2020 20:14, Fabien COELHO wrote:\n>> I'm unsure about gcd(INT_MIN, 0) should error. Possibly 0 would be nicer?\n\n> What justification for that do you have?\n\nZero is the \"correct\" answer for that, isn't it, independently of overflow\nconsiderations? We should strive to give the correct answer if it's known\nand representable, rather than have arbitrary failure conditions.\n\n(IOW, we should throw errors only when the *result* is out of range\nor undefined, not just because the input is an edge case.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 19:21:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 04/01/2020 00:49, Tom Lane wrote:\n> Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n>> On 03/01/2020 20:14, Fabien COELHO wrote:\n>>> The point of swapping is to avoid a possibly expensive modulo, but this\n>>> should be done on absolute values, otherwise it may not achieve its\n>>> purpose as stated by the comment?\n>> Ah, true. How widespread are these architectures that need this special\n>> treatment? Is it really worth handling?\n> On some older RISC architectures, integer division is really slow, like\n> slower than floating-point. I'm not sure if that's true on any platform\n> people still care about though. In recent years, CPU architects have been\n> able to throw all the transistors they needed at such problems. On a\n> machine with single-cycle divide, it's likely that the extra\n> compare-and-branch is a net loss.\n\n\nOK.\n\n\n> Might be worth checking it on ARM in particular, as being a RISC\n> architecture that's still popular.\n\n\nI don't know how I would check this.\n\n\n> Also, if we end up having a \"numeric\" implementation, it absolutely is\n> worth it for that, because there is nothing cheap about numeric_div.\n\n\nThe patch includes a numeric version, and I take care to short-circuit\neverything I can.\n\n\n> I'd be sort of inclined to have the swap in the other implementations\n> just to keep the algorithms as much alike as possible.\n\n\nThey can't quite be the same behavior because numeric doesn't have the\nunrepresentable -INT_MIN problem, and integers don't have NaN.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Sat, 4 Jan 2020 01:21:32 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 04/01/2020 01:21, Tom Lane wrote:\n> Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n>> On 03/01/2020 20:14, Fabien COELHO wrote:\n>>> I'm unsure about gcd(INT_MIN, 0) should error. Possibly 0 would be nicer?\n>> What justification for that do you have?\n> Zero is the \"correct\" answer for that, isn't it, independently of overflow\n> considerations? \n\n\nI would say not. The correct answer is INT_MIN but we've decided a\nnegative result is not desirable.\n\n\n> We should strive to give the correct answer if it's known\n> and representable, rather than have arbitrary failure conditions.\n\n\nOn that we fully agree.\n\n\n> (IOW, we should throw errors only when the *result* is out of range\n> or undefined, not just because the input is an edge case.)\n\n\nThat's what I do with the rest of it. INT_MIN is only an error if the\nresult of the calculation is also INT_MIN.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Sat, 4 Jan 2020 01:26:48 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 04/01/2020 01:21, Tom Lane wrote:\n>> Zero is the \"correct\" answer for that, isn't it, independently of overflow\n>> considerations? \n\n> I would say not.\n\nOh, right, I was misremembering the identity gcd(a,0) as being 0 not a.\nNever mind that then.\n\n> The correct answer is INT_MIN but we've decided a\n> negative result is not desirable.\n\nAgreed. On the other hand, we could stave off overflow the same\nway we discussed for lcm: make it return int8. We're still stuck\nwith the special case for INT64_MIN in gcd64 of course, so maybe\nthat's just inconsistent rather than being worthwhile.\n\n[ thinks for a bit... ] In practice, I imagine few people use gcd on\nnegative values, so doing weird things with the datatype choices is\nprobably not better than throwing an error for this case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 19:34:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
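To make the INT_MIN edge case concrete: the mathematical result of gcd(INT32_MIN, 0) is |INT32_MIN| = 2^31, which has no positive int32 representation, so a signed-returning gcd must fail there even though every other input is fine. A sketch of the check (hypothetical helpers, not the patch's code, signalling the condition through a boolean instead of ereport()):

```c
#include <stdbool.h>
#include <stdint.h>

/* Plain Euclid on unsigned values. */
static uint32_t
gcd_u32(uint32_t u, uint32_t v)
{
	while (v != 0)
	{
		uint32_t	tmp = u % v;

		u = v;
		v = tmp;
	}
	return u;
}

/*
 * Signed int4 gcd with overflow detection.  The only failing inputs
 * are those whose result is 2^31: gcd(INT32_MIN, 0),
 * gcd(0, INT32_MIN), and gcd(INT32_MIN, INT32_MIN).
 */
static bool
int4gcd_checked(int32_t a, int32_t b, int32_t *result)
{
	uint32_t	ua = (a < 0) ? 0u - (uint32_t) a : (uint32_t) a;
	uint32_t	ub = (b < 0) ? 0u - (uint32_t) b : (uint32_t) b;
	uint32_t	g = gcd_u32(ua, ub);

	if (g > (uint32_t) INT32_MAX)
		return false;			/* result would be |INT32_MIN|: out of range */

	*result = (int32_t) g;
	return true;
}
```

This matches the principle stated above: the error fires only when the result itself is unrepresentable, not merely because an input is an edge case, e.g. gcd(INT32_MIN, 2) still succeeds.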
{
"msg_contents": "On 04/01/2020 01:26, Vik Fearing wrote:\n> On 04/01/2020 01:21, Tom Lane wrote:\n>> Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n>>> On 03/01/2020 20:14, Fabien COELHO wrote:\n>>>> I'm unsure about gcd(INT_MIN, 0) should error. Possibly 0 would be nicer?\n>>> What justification for that do you have?\n>> Zero is the \"correct\" answer for that, isn't it, independently of overflow\n>> considerations? \n>\n> I would say not. The correct answer is INT_MIN but we've decided a\n> negative result is not desirable.\n\n\nWolfram Alpha agrees.\n\nhttps://www.wolframalpha.com/input/?i=gcd%28-9223372036854775808%2C0%29\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Sat, 4 Jan 2020 01:38:45 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 03/01/2020 20:14, Fabien COELHO wrote:\n>\n> Bonsoir Vik,\n>\n> +int4gcd_internal(int32 arg1, int32 arg2)\n> +{\n> + int32 swap;\n> +\n> + /*\n> + * Put the greater value in arg1.\n> + * This would happen automatically in the loop below, but\n> avoids an\n> + * expensive modulo simulation on some architectures.\n> + */\n> + if (arg1 < arg2)\n> + {\n> + swap = arg1;\n> + arg1 = arg2;\n> + arg2 = swap;\n> + }\n>\n>\n> The point of swapping is to avoid a possibly expensive modulo, but this\n> should be done on absolute values, otherwise it may not achieve its\n> purpose as stated by the comment?\n\n\nHere is an updated patch fixing that.\n\n-- \n\nVik Fearing",
"msg_date": "Sat, 4 Jan 2020 03:32:34 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Bonjour Vik,\n\n>> The point of swapping is to avoid a possibly expensive modulo, but this\n>> should be done on absolute values, otherwise it may not achieve its\n>> purpose as stated by the comment?\n>\n> Ah, true. How widespread are these architectures that need this special\n> treatment? Is it really worth handling?\n\nDunno. AFAICR it was with sparc architectures 25 years ago.\n\nAlso I do not like much relying on the subtleties of C99 % wrt negative \nnumbers to have the algorithm work, I'd be much more at ease dealing with sign \nand special values at the beginning of the function and proceeding with \npositive numbers afterwards.\n\n>> I'm unsure about gcd(INT_MIN, 0) should error. Possibly 0 would be nicer?\n>\n>\n> What justification for that do you have?\n\nISTM that the current implementation has:\n\n \\forall int4 n, n \\neq MIN_INT4, \\gcd(n, 0) = 0 ?\n\nIn which case applying the same rule for min int seems ok.\n\n-- \nFabien Coelho - CRI, MINES ParisTech",
"msg_date": "Sat, 4 Jan 2020 09:35:43 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "\nHello Tom,\n\n>> Which architecture has single cycle division? I think it's way above\n>> that, based on profiles I've seen. And Agner seems to back me up:\n>> https://www.agner.org/optimize/instruction_tables.pdf\n>> That lists a 32/64 idiv with a latency of ~26/~42-95 cycles, even on a\n>> modern uarch like skylake-x.\n>\n> Huh. I figured Intel would have thrown sufficient transistors at that\n> problem by now.\n\nIt is not just a problem of number of transistors, division is \nintrinsically iterative (with various kinds of iteration used in division \nalgorithms), involving some level of guessing and other arithmetic, so \nthe latency can only be bad, and the possibility of implementing that in 1 \ncycle at 3 GHz looks pretty remote.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 4 Jan 2020 09:59:46 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 04/01/2020 09:35, Fabien COELHO wrote:\n>>> I'm unsure about gcd(INT_MIN, 0) should error. Possibly 0 would be\n>>> nicer?\n>>\n>>\n>> What justification for that do you have?\n>\n> ISTM that the current implementation has:\n>\n> \\forall int4 n, n \\neq MIN_INT4, \\gcd(n, 0) = 0 ?\n>\n> In which case applying the same rule for min int seems ok. \n\n\nNo, gcd(n, 0) = n.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Sat, 4 Jan 2020 10:30:55 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "\nBonjour Vik,\n\n> Here is an updated patch fixing that.\n\nAs I said, welcome to arithmetic:-)\n\nPatch v5 applies cleanly.\n\nDoc: I'd consider using an example the result of which is 42 instead of \n21, for obvious geek motivations. Possibly gcd(2142, 462) should be ok.\n\nI got it wrong with my previous comment on gcd(int_min, 0). I'm okay with \nerroring on int_min.\n\nCode comments: gcd(n, 0) = abs(n), not n?\n\nAbout the code.\n\nAdd unlikely() where appropriate.\n\nI'd deal with int_min and 0 at the beginning and then proceed with \nabsoluting the values, rather than the dance around a1/arg1, a2/arg2, and \nother arg2 = -arg2, and arg1 = -arg1 anyway in various places, which does \nnot make the code that easy to understand.\n\nPseudo code could be:\n\n if ((a1 == min && (a2 == min || a2 == 0)) ||\n (a2 == min && a1 == 0))\n error;\n a1 = abs(a1), a2 = abs(a2);\n euclids;\n return;\n\nNote: in the numeric code you abs the value, ISTM consistent to do it as \nwell in the other implementations.\n\nWould it make sense that NAN is returned on NUMERIC when the computation \ncannot be performed, eg on non integer values?\n\nWhy the positive constraint on LCM(NUMERIC, NUMERIC)? Why not absoluting?\n\nTests: you can make LCM fail on much smaller values for int2/4/8, you do \nnot need to start around max_int.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 4 Jan 2020 10:34:10 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Thu, 2 Jan 2020 at 15:13, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Stephen Frost <sfrost@snowman.net> writes:\n> > * Dean Rasheed (dean.a.rasheed@gmail.com) wrote:\n> >> I'm not objecting to adding it, I'm just curious. In fact, I think\n> >> that if we do add this, then we should probably add lcm() at the same\n> >> time, since handling its overflow cases is sufficiently non-trivial to\n> >> justify not requiring users to have to implement it themselves.\n>\n> > I tend to agree with this.\n>\n> Does this impact the decision about whether we need a variant for\n> numeric? I was leaning against that, primarily because (a)\n> it'd introduce a set of questions about what to do with non-integral\n> inputs, and (b) it'd make the patch quite a lot larger, I imagine.\n> But a variant of lcm() that returns numeric would have much more\n> resistance to overflow.\n>\n\nWell Vik has now provided a numeric implementation and it doesn't\nappear to be too much code.\n\nBTW, there is actually no need to restrict the inputs to integral\nvalues because GCD is something that has a perfectly natural extension\nto floating point inputs (see for example [1]). Moreover, since we\nalready have a mod(numeric, numeric) that works for arbitrary inputs,\nEuclid's algorithm just works. For example:\n\nSELECT gcd(285, 7845);\n gcd\n-----\n 15\n\nSELECT gcd(28.5, 7.845);\n gcd\n-------\n 0.015\n\nEssentially, this is because gcd(a*10^n, b*10^n) = gcd(a, b) * 10^n,\nso you can think of it as pre-multiplying by a power of 10 large\nenough to make both inputs integers, and then dividing the result by\nthat power of 10.\n\nIf it were more work to support non-integer inputs, I'd say that it's\nnot worth the effort, but since it's actually less work to just allow\nit, then why not?\n\n\n> Maybe we could just define \"lcm(bigint, bigint) returns numeric\"\n> and figure that that covers all cases, but it feels slightly\n> weird. 
You couldn't do lcm(lcm(a,b),c) without casting.\n> I guess that particular use-case could be addressed with\n> \"lcm(variadic bigint[]) returns numeric\", but that's getting\n> really odd.\n>\n\nHaving thought about that, I don't like defining these functions to\nreturn a different type than their inputs. I think most people tend to\nbe working with a given type, and are used to having to move to a\nwider type if necessary. We don't, for example, define \"mul(bigint,\nbigint) returns numeric\".\n\nAlso I don't think it really buys you all that much -- the problem\nwith lcm(lcm(a,b),c) where bigint inputs produce a numeric output\nisn't just that you need to add casting; the lcm(a,b) result may not\nfit in a bigint, so the cast might fail. So really, this is just\npostponing the problem a bit, without really fixing it. As for\n\"lcm(variadic bigint[]) returns numeric\", to implement that you'd need\nto use numeric computations internally, so I suspect it's\nimplementation would be at least as complex as lcm(numeric, numeric).\n\nFWIW, looking for precedents elsewhere, I note that the C++ standard\nlibrary defines these functions to return the same type as the inputs.\nTo me, that seems more natural.\n\nRegards,\nDean\n\n[1] https://www.geeksforgeeks.org/program-find-gcd-floating-point-numbers/\n\n\n",
"msg_date": "Sat, 4 Jan 2020 09:37:09 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Sat, 4 Jan 2020 at 09:37, Dean Rasheed <dean.a.rasheed@gmail.com> wrote:\n>\n> Well Vik has now provided a numeric implementation and it doesn't\n> appear to be too much code.\n>\n\nBTW, I did a bit of research into the efficiency of Euclid's\nalgorithm. It's actually quite interesting:\n\nIt turns out that the worst case is when the inputs are successive\nvalues from the Fibonacci sequence. In that case, since\nFib(n)/Fib(n-1) = 1 remainder Fib(n-2), the algorithm will walk\nbackwards through the whole sequence before terminating, and the\nresult will always be 1.\n\nFor bigint inputs, the worst case is gcd(7540113804746346429,\n4660046610375530309) which requires something like 90 divisions.\nTesting that, it's still sub-millisecond though, so I don't think\nthere's any problem there.\n\nOTOH, for numeric inputs, this could easily end up needing many\nthousands of divisions and it's not hard to construct inputs that take\nminutes to compute, although this is admittedly with ridiculously\nlarge inputs (~10^130000), and AFAICS, the performance is OK with\n\"normal\" sized inputs. Should we put a limit on the size of the\ninputs? I'm not sure exactly how that would work, but I think it would\nhave to take into account the relative weights of the inputs rather\nthan just the maximum weight. At the very least, I think we need a\ncheck for interrupts here (c.f. the numeric factorial function).\nPerhaps such a check is sufficient. It's not like there aren't lots of\nother ways to tie up the server.\n\nThere are apparently more efficient algorithms, but I think that\nshould definitely be kept out of scope for this patch.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 4 Jan 2020 10:25:17 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> OTOH, for numeric inputs, this could easily end up needing many\n> thousands of divisions and it's not hard to construct inputs that take\n> minutes to compute, although this is admittedly with ridiculously\n> large inputs (~10^130000), and AFAICS, the performance is OK with\n> \"normal\" sized inputs. Should we put a limit on the size of the\n> inputs?\n\nNo, but a CHECK_FOR_INTERRUPTS in the loop would be well-advised,\nif there's not one already inside the called functions.\n\n> There are apparently more efficient algorithms, but I think that\n> should definitely be kept out of scope for this patch.\n\n+1\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 12:01:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 04/01/2020 10:34, Fabien COELHO wrote:\n> Code comments: gcd(n, 0) = abs(n), not n?\n\n\nOK, changed.\n\n\n> Add unlikely() where appropriate.\n\n\nAny particular place in mind where I didn't already put it?\n\n\n> I'd deal with int_min and 0 at the beginning and then proceed with\n> absoluting the values, rather than the dance around a1/arg1, a2/arg2,\n> and other arg2 = -arg2, and arg1 = -arg1 anyway in various places,\n> which does not make the code that easy to understand.\n>\n> Pseudo code could be:\n>\n> if ((a1 == min && (a2 == min || a2 == 0)) ||\n> (a2 == min && a1 == 0))\n> error;\n> a1 = abs(a1), a2 = abs(a2);\n> euclids;\n> return;\n\n\nThis would cause one of my tests to fail. Please stop suggesting it.\n\n\n> Note: in the numeric code you abs the value, ISTM consistent to do it\n> as well in the other implementations.\n\n\nAs noted in the comments, numeric does not have the INT_MIN problem.\n\n\n> Would it make sense that NAN is returned on NUMERIC when the\n> computation cannot be performed, eg on non integer values?\n\n\nI don't think so, no.\n\n\n> Why the positive constraint on LCM(NUMERIC, NUMERIC)? Why not absoluting? \n\n\nI didn't see a definition for negative inputs, but now I see one so I've\nlifted the restriction.\n\n\nOn 04/01/2020 10:37, Dean Rasheed wrote:\n>\n> BTW, there is actually no need to restrict the inputs to integral\n> values because GCD is something that has a perfectly natural extension\n> to floating point inputs (see for example [1]). Moreover, since we\n> already have a mod(numeric, numeric) that works for arbitrary inputs,\n> Euclid's algorithm just works.\n> [...]\n> If it were more work to support non-integer inputs, I'd say that it's\n> not worth the effort, but since it's actually less work to just allow\n> it, then why not?\n\n\nOkay, I allow that now, but I've still left it for lcm. 
I can't find\nanything anywhere that defines lcm for floating point (I do find it for\nfractions) and the result of abs(a*b)/gcd(a,b) certainly doesn't match\n\"lowest\" in the examples I tried.\n\n\nOn 04/01/2020 18:01, Tom Lane wrote:\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>> OTOH, for numeric inputs, this could easily end up needing many\n>> thousands of divisions and it's not hard to construct inputs that take\n>> minutes to compute, although this is admittedly with ridiculously\n>> large inputs (~10^130000), and AFAICS, the performance is OK with\n>> \"normal\" sized inputs. Should we put a limit on the size of the\n>> inputs?\n> No, but a CHECK_FOR_INTERRUPTS in the loop would be well-advised,\n> if there's not one already inside the called functions.\n\n\nGood idea. Added.\n\n-- \n\nVik Fearing",
"msg_date": "Sat, 4 Jan 2020 18:55:30 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Sat, 4 Jan 2020 at 17:55, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n> On 04/01/2020 10:37, Dean Rasheed wrote:\n> >\n> > BTW, there is actually no need to restrict the inputs to integral\n> > values because GCD is something that has a perfectly natural extension\n> > to floating point inputs (see for example [1]). Moreover, since we\n> > already have a mod(numeric, numeric) that works for arbitrary inputs,\n> > Euclid's algorithm just works.\n> > [...]\n> > If it were more work to support non-integer inputs, I'd say that it's\n> > not worth the effort, but since it's actually less work to just allow\n> > it, then why not?\n>\n>\n> Okay, I allow that now, but I've still left it for lcm. I can't find\n> anything anywhere that defines lcm for floating point (I do find it for\n> fractions) and the result of abs(a*b)/gcd(a,b) certainly doesn't match\n> \"lowest\" in the examples I tried.\n>\n\nHere's another article on the subject:\nhttps://www.math-only-math.com/hcf-and-lcm-of-decimals.html\n\nIt works because gcd(a*10^n, b*10^n) = gcd(a, b)*10^n, and therefore\nlcm(a*10^n, b*10^n) = lcm(a, b)*10^n, so the results will just have\ntheir decimal points shifted. For example:\n\ngcd(54, 24) = 6\nlcm(54, 24) = 216 = 4*54 = 9*24\n\ngcd(5.4, 2.4) = 0.6\nlcm(5.4, 2.4) = 21.6 = 4*5.4 = 9*2.4\n\nthat is the lowest common integer multiple of the two decimal inputs.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 4 Jan 2020 19:08:54 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Hello Vik,\n\n>> Add unlikely() where appropriate.\n>\n> Any particular place in mind where I didn't already put it?\n\nIn GCD implementations, for instance:\n\n if (arg1 == PG_INT32_MIN)\n if (arg2 == 0 || arg2 == PG_INT32_MIN)\n\nAnd possibly a \"likely\" on the while.\n\nIn LCM implementations, for instance:\n\n if (arg1 == 0 || arg2 == 0)\n if (arg1 == arg2)\n\nThe later is partially redundant with preceeding case BTW, which could be \nmanaged inside this one, reducing the number of tests? Something like:\n\n if (arg1 == arg2)\n if (arg1 == MIN_INT)\n error\n else\n return abs(arg1)\n\nI'm not sure why you want to deal with a1 == a2 case separately, could it \nnot just work with the main code?\n\nIf you want to deal with it separately, then why not doing arg1 == -arg2 \nas well?\n\n> Please stop suggesting it.\n\nFine, fine!\n\nTom also suggested to align implementations as much as possible, and I do \nagree with him.\n\nAlso, I'd suggest to add a comment to explain that the precise C99 modulo \nsemantics is required to make the algorithm work, and that it may not work \nwith C89 semantics for instance.\n\n>> Note: in the numeric code you abs the value, ISTM consistent to do it\n>> as well in the other implementations.\n>\n> As noted in the comments, numeric does not have the INT_MIN problem.\n\nSure, but there are special cases at the beginning all the same: NAN, \nINTEGRAL…\n\n>> Would it make sense that NAN is returned on NUMERIC when the \n>> computation cannot be performed, eg on non integer values?\n>\n> I don't think so, no.\n\nOk. Why? I do not have an opinion, but ISTM that there is a choice and it \nshould be explained. Could be consistency with other cases, whatever.\n\n>> Why the positive constraint on LCM(NUMERIC, NUMERIC)? 
Why not absoluting?\n>\n> I didn't see a definition for negative inputs, but now I see one so I've\n> lifted the restriction.\n\nGood.\n\nAbout tests: again, I'd check the LCM overflow on smaller values.\n\nI'm not convinced by the handling of fractional numerics in gcd, eg:\n\n +SELECT gcd(330.3::numeric, 462::numeric);\n + gcd\n +-----\n + 0.3\n +(1 row)\n\nISTM that the average user, including myself, would expect an integer \nresult from gcd.\n\nIf this is kept, the documentation should be clear about what it does and \nwhat it means, because the least to say is that it is surprising.\n\nSomehow I could have expected the arguments to be casted to int, so that \nit would lead to 66.\n\nPython does a type error, which I find even better. I'd vote for erroring.\n\nIf this fractional gcd makes some sense and is desirable, then ISTM that \nlcm(a,b) = a / gcd(a, b) * b should make as much sense and should be \nallowed as well, for consistency.\n\n-- \nFabien.",
"msg_date": "Sat, 4 Jan 2020 22:21:15 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 04/01/2020 20:08, Dean Rasheed wrote:\n> On Sat, 4 Jan 2020 at 17:55, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>> On 04/01/2020 10:37, Dean Rasheed wrote:\n>>> BTW, there is actually no need to restrict the inputs to integral\n>>> values because GCD is something that has a perfectly natural extension\n>>> to floating point inputs (see for example [1]). Moreover, since we\n>>> already have a mod(numeric, numeric) that works for arbitrary inputs,\n>>> Euclid's algorithm just works.\n>>> [...]\n>>> If it were more work to support non-integer inputs, I'd say that it's\n>>> not worth the effort, but since it's actually less work to just allow\n>>> it, then why not?\n>>\n>> Okay, I allow that now, but I've still left it for lcm. I can't find\n>> anything anywhere that defines lcm for floating point (I do find it for\n>> fractions) and the result of abs(a*b)/gcd(a,b) certainly doesn't match\n>> \"lowest\" in the examples I tried.\n>>\n> Here's another article on the subject:\n> https://www.math-only-math.com/hcf-and-lcm-of-decimals.html\n\n\nYeah, my eyes weren't aligning the decimal points properly.\n\n\nAttached version frees up lcm to work on non-integrals. Thanks for your\ninput!\n\n-- \n\nVik Fearing",
"msg_date": "Sat, 4 Jan 2020 22:31:43 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 4:21 PM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n> In GCD implementations, for instance:\n>\n> if (arg1 == PG_INT32_MIN)\n> if (arg2 == 0 || arg2 == PG_INT32_MIN)\n>\n> And possibly a \"likely\" on the while.\n\nI don't think decoration the code with likely() and unlikely() all\nover the place is a very good idea. Odds are good that we'll end up\nwith a bunch that are actually non-optimal, and nobody will ever\nfigure it out because it's hard to figure out. I have a hard time\nbelieving that we're going to be much worse off if we just write the\ncode normally.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Mon, 6 Jan 2020 07:11:42 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Hello Robert,\n\n>> if (arg1 == PG_INT32_MIN)\n>> if (arg2 == 0 || arg2 == PG_INT32_MIN)\n>>\n>> And possibly a \"likely\" on the while.\n>\n> I don't think decoration the code with likely() and unlikely() all\n> over the place is a very good idea.\n\n> Odds are good that we'll end up with a bunch that are actually \n> non-optimal, and nobody will ever figure it out because it's hard to \n> figure out.\n\nMy 0.02€: I'd tend to disagree.\n\nModern pipelined processors can take advantage of speculative execution on \nbranches, so if you know which branch is the more likely it can help.\n\nObviously if you get it wrong it does not, but for the above cases it \nseems to me that they are rather straightforward.\n\nIt also provides some \"this case is expected to be exceptional\" semantics \nto people reading the code.\n\n> I have a hard time believing that we're going to be much \n> worse off if we just write the code normally.\n\nI think that your point applies to more general programming in postgres, \nbut this is not the context here.\n\nFor low-level arithmetic code like this one, with tests and loops \ncontaining very few hardware instructions, I think that helping compiler \noptimizations is a good idea.\n\nMaybe in the \"while\" the compiler would assume that it is going to loop \nanyway by default, so it may be less useful there.\n\n-- \nFabien.",
"msg_date": "Mon, 6 Jan 2020 13:52:33 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 6:52 AM Fabien COELHO <coelho@cri.ensmp.fr> wrote:\n>\n>\n> Hello Robert,\n>\n> >> if (arg1 == PG_INT32_MIN)\n> >> if (arg2 == 0 || arg2 == PG_INT32_MIN)\n> >>\n> >> And possibly a \"likely\" on the while.\n> >\n> > I don't think decoration the code with likely() and unlikely() all\n> > over the place is a very good idea.\n>\n> > Odds are good that we'll end up with a bunch that are actually\n> > non-optimal, and nobody will ever figure it out because it's hard to\n> > figure out.\n>\n> My 0.02€: I'd tend to disagree.\n>\n> Modern pipelined processors can take advantage of speculative execution on\n> branches, so if you know which branch is the more likely it can help.\n>\n> Obviously if you get it wrong it does not, but for the above cases it\n> seems to me that they are rather straightforward.\n>\n> It also provides some \"this case is expected to be exceptional\" semantics\n> to people reading the code.\n>\n> > I have a hard time believing that we're going to be much\n> > worse off if we just write the code normally.\n>\n> I think that your point applies to more general programming in postgres,\n> but this is not the context here.\n>\n> For low-level arithmetic code like this one, with tests and loops\n> containing very few hardware instructions, I think that helping compiler\n> optimizations is a good idea.\n\nDo you have any performance data to back that up?\n\nmerlin\n\n\n",
"msg_date": "Mon, 6 Jan 2020 14:07:52 -0600",
"msg_from": "Merlin Moncure <mmoncure@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Hello Merlin,\n\n>> For low-level arithmetic code like this one, with tests and loops\n>> containing very few hardware instructions, I think that helping compiler\n>> optimizations is a good idea.\n>\n> Do you have any performance data to back that up?\n\nYep.\n\nA generic data is the woes about speculative execution related CVE (aka \nSpectre) fixes and their impact on performance, which is in percents, \npossibly tens of them, when the thing is more or less desactivated to \nmitigate the security issue:\n\n https://www.nextplatform.com/2018/03/16/how-spectre-and-meltdown-mitigation-hits-xeon-performance/\n\nSome data about the __builtin_expect compiler hints:\n\n http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2016/p0479r0.html\n\nBasically, they are talking about percents, up to tens in some cases, \nwhich is consistent with the previous example.\n\nAs I said, helping the compiler is a good idea, and pg has been doing that \nwith the likely/unlikely macros for some time, there are over an hundred \nof them, including in headers which get expanded (\"logging.h\", \"float.h\", \n\"simplehash.h\", …), which is a good thing.\n\nI do not see any good reason to stop doing that, especially in low-level \narithmetic functions.\n\nNow, I do not have specific data about the performance impact on a gcd \nimplementation. Mileage may vary depending on hardware, compiler, options \nand other overheads.\n\n-- \nFabien.",
"msg_date": "Tue, 7 Jan 2020 03:33:35 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Do we actually need the smallint versions of these functions?\n\nI would have thought that automatic casting would take care of any\ncases that need smallints, and I can't believe that there's any\nperformance benefit to be had that's worth maintaining the extra code.\n\nRegards,\nDean\n\n\n",
"msg_date": "Tue, 7 Jan 2020 11:21:30 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> Do we actually need the smallint versions of these functions?\n\nDoubt it. It'd be fairly hard even to call those, since e.g. \"42\"\nis an int not a smallint.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 07:30:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Tue, 7 Jan 2020 at 12:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n> > Do we actually need the smallint versions of these functions?\n>\n> Doubt it. It'd be fairly hard even to call those, since e.g. \"42\"\n> is an int not a smallint.\n>\n\nI see this has been marked RFC. I'll take it, and barring objections,\nI'll start by ripping out the smallint code.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 20 Jan 2020 07:44:52 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 20/01/2020 08:44, Dean Rasheed wrote:\n> On Tue, 7 Jan 2020 at 12:31, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Dean Rasheed <dean.a.rasheed@gmail.com> writes:\n>>> Do we actually need the smallint versions of these functions?\n>> Doubt it. It'd be fairly hard even to call those, since e.g. \"42\"\n>> is an int not a smallint.\n>>\n> I see this has been marked RFC. I'll take it, \n\n\nThanks!\n\n\n> and barring objections,\n> I'll start by ripping out the smallint code.\n\n\nNo strong objection.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Mon, 20 Jan 2020 09:03:50 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "Looking at the docs, I think it's worth going a little further than\njust saying what the acronyms stand for -- especially since the\nbehaviour for zero inputs is an implementation choice (albeit the most\ncommon one). I propose the following:\n\n+ <entry>\n+ greatest common divisor — the largest positive number that\n+ divides both inputs with no remainder; returns <literal>0</literal> if\n+ both inputs are zero\n+ </entry>\n\nand:\n\n+ <entry>\n+ least common multiple — the smallest strictly positive number\n+ that is an integer multiple of both inputs; returns\n<literal>0</literal>\n+ if either input is zero\n+ </entry>\n\n(I have tried to be precise in my use of terms like \"number\" and\n\"integer\", to cover the different cases)\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 20 Jan 2020 10:28:37 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 20/01/2020 11:28, Dean Rasheed wrote:\n> Looking at the docs, I think it's worth going a little further than\n> just saying what the acronyms stand for -- especially since the\n> behaviour for zero inputs is an implementation choice (albeit the most\n> common one). I propose the following:\n>\n> + <entry>\n> + greatest common divisor — the largest positive number that\n> + divides both inputs with no remainder; returns <literal>0</literal> if\n> + both inputs are zero\n> + </entry>\n>\n> and:\n>\n> + <entry>\n> + least common multiple — the smallest strictly positive number\n> + that is an integer multiple of both inputs; returns\n> <literal>0</literal>\n> + if either input is zero\n> + </entry>\n>\n> (I have tried to be precise in my use of terms like \"number\" and\n> \"integer\", to cover the different cases)\n\n\nIn that case should lcm be \"...that is an integral multiple...\" since\nthe numeric version will return numeric?\n\n\nOther than that, I'm happy with this change.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Mon, 20 Jan 2020 19:52:51 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 2020-Jan-20, Dean Rasheed wrote:\n\n> + <entry>\n> + greatest common divisor — the largest positive number that\n> + divides both inputs with no remainder; returns <literal>0</literal> if\n> + both inputs are zero\n> + </entry>\n\nWarning, severe TOC/bikeshedding ahead.\n\nI don't know why, but this dash-semicolon sequence reads strange to me\nand looks out of place. I would use parens for the first phrase and\nkeep the semicolon, that is \"greatest common divisor (the largest ...);\nreturns 0 if ...\"\n\nThat seems more natural to me, and we're already using parens in other\ndescription <entry>s.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 Jan 2020 16:04:14 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Mon, 20 Jan 2020 at 18:52, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>\n> On 20/01/2020 11:28, Dean Rasheed wrote:\n> >\n> > + <entry>\n> > + least common multiple — the smallest strictly positive number\n> > + that is an integer multiple of both inputs; returns\n> > <literal>0</literal>\n> > + if either input is zero\n> > + </entry>\n>\n> In that case should lcm be \"...that is an integral multiple...\" since\n> the numeric version will return numeric?\n>\n\nSo \"integral multiple\" instead of \"integer multiple\"? I think I'm more\nused to the latter, but I'm happy with either.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 20 Jan 2020 20:13:12 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Mon, 20 Jan 2020 at 19:04, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n>\n> On 2020-Jan-20, Dean Rasheed wrote:\n>\n> > + <entry>\n> > + greatest common divisor — the largest positive number that\n> > + divides both inputs with no remainder; returns <literal>0</literal> if\n> > + both inputs are zero\n> > + </entry>\n>\n> Warning, severe TOC/bikeshedding ahead.\n>\n> I don't know why, but this dash-semicolon sequence reads strange to me\n> and looks out of place. I would use parens for the first phrase and\n> keep the semicolon, that is \"greatest common divisor (the largest ...);\n> returns 0 if ...\"\n>\n> That seems more natural to me, and we're already using parens in other\n> description <entry>s.\n>\n\nHmm, OK. I suppose that's more logical because then the bit in parens\nis the standard definition of gcd/lcm, and the part after the\nsemicolon is the implementation choice for the special case not\ncovered by the standard definition.\n\nRegards,\nDean\n\n\n",
"msg_date": "Mon, 20 Jan 2020 20:18:48 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On Mon, 20 Jan 2020 at 08:04, Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>\n> On 20/01/2020 08:44, Dean Rasheed wrote:\n> >>\n> > I see this has been marked RFC. I'll take it,\n>\n\nCommitted with some adjustments, mostly cosmetic but a couple more substantive:\n\nThe code to guard against a floating point exception with inputs of\n(INT_MIN, -1) wasn't quite right because it actually just moved the\nproblem so that it would fall over with inputs of (INT_MIN, +1).\n\nThe convention in numeric.c is that the xxx_var() functions take\n*pointers* to their NumericVar arguments rather than copies, and they\ndo not modify their inputs, as indicated by the use of \"const\". You\nmight just have gotten away with what you were doing, but I think it\nwas bad style and potentially unsafe -- for example, someone calling\ngcd_var() with a NumericVar that came from some other computation and\nhaving a non-null buf would risk having the buf freed in the copy,\nleaving the original NumericVar with a buf pointing to freed memory.\n\nRegards,\nDean\n\n\n",
"msg_date": "Sat, 25 Jan 2020 14:18:33 +0000",
"msg_from": "Dean Rasheed <dean.a.rasheed@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Greatest Common Divisor"
},
{
"msg_contents": "On 25/01/2020 15:18, Dean Rasheed wrote:\n> \n> Committed with some adjustments, mostly cosmetic but a couple more substantive:\n\nThanks!\n\n> The code to guard against a floating point exception with inputs of\n> (INT_MIN, -1) wasn't quite right because it actually just moved the\n> problem so that it would fall over with inputs of (INT_MIN, +1).\n\nGood catch.\n\n> The convention in numeric.c is that the xxx_var() functions take\n> *pointers* to their NumericVar arguments rather than copies, and they\n> do not modify their inputs, as indicated by the use of \"const\". You\n> might just have gotten away with what you were doing, but I think it\n> was bad style and potentially unsafe -- for example, someone calling\n> gcd_var() with a NumericVar that came from some other computation and\n> having a non-null buf would risk having the buf freed in the copy,\n> leaving the original NumericVar with a buf pointing to freed memory.\n\nThank you for taking the time to look closely at this. This was my\nfirst time dealing with \"numeric\" so I was bound to make some mistakes.\n-- \nVik Fearing\n\n\n",
"msg_date": "Sun, 26 Jan 2020 06:52:05 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Greatest Common Divisor"
}
] |
[
{
"msg_contents": "It can sometimes be useful to match against a superuser in pg_hba.conf.\nFor example, one could imagine wanting to reject nonsuperuser from a\nparticular database.\n\n\nThis used to be possible by creating an empty role and matching against\nthat, but that functionality was removed (a long time ago) by commit\n94cd0f1ad8a.\n\n\nAdding another keyword can break backwards compatibility, of course. So\nthat is an issue that needs to be discussed, but I don't imagine too\nmany people are using role names \"superuser\" and \"nonsuperuser\". Those\nwho are will have to quote them.\n\n-- \n\nVik Fearing +33 6 46 75 15 36\nhttp://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support",
"msg_date": "Sat, 28 Dec 2019 18:19:58 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> It can sometimes be useful to match against a superuser in pg_hba.conf.\n\nSeems like a reasonable desire.\n\n> Adding another keyword can break backwards compatibility, of course. So\n> that is an issue that needs to be discussed, but I don't imagine too\n> many people are using role names \"superuser\" and \"nonsuperuser\". Those\n> who are will have to quote them.\n\nI'm not very happy about the continuing creep of pseudo-reserved database\nand user names in pg_hba.conf. I wish we'd adjust the notation so that\nthese keywords are syntactically distinct from ordinary names. Given\nthe precedent that \"+\" and \"@\" prefixes change what an identifier means,\nmaybe we could use \"*\" or some other punctuation character as a keyword\nprefix? We'd have to give grandfather exceptions to the existing\nkeywords, at least for a while, but we could say that new ones won't be\nrecognized without the prefix.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Dec 2019 13:07:31 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 28/12/2019 19:07, Tom Lane wrote:\n> Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n>> It can sometimes be useful to match against a superuser in pg_hba.conf.\n> Seems like a reasonable desire.\n>\n>> Adding another keyword can break backwards compatibility, of course. So\n>> that is an issue that needs to be discussed, but I don't imagine too\n>> many people are using role names \"superuser\" and \"nonsuperuser\". Those\n>> who are will have to quote them.\n> I'm not very happy about the continuing creep of pseudo-reserved database\n> and user names in pg_hba.conf. I wish we'd adjust the notation so that\n> these keywords are syntactically distinct from ordinary names. Given\n> the precedent that \"+\" and \"@\" prefixes change what an identifier means,\n> maybe we could use \"*\" or some other punctuation character as a keyword\n> prefix? We'd have to give grandfather exceptions to the existing\n> keywords, at least for a while, but we could say that new ones won't be\n> recognized without the prefix.\n\n\nI'm all for this (and even suggested it during the IRC conversation that\nprompted this patch). It's rife with bikeshedding, though. My original\nproposal was to use '&' and Andrew Gierth would have used ':'.\n\n\nI will submit two patches, one that recognizes the sigil for all the\nother keywords, and then an update of this patch.\n\n-- \n\nVik\n\n\n\n",
"msg_date": "Sat, 28 Dec 2019 20:02:38 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On Sat, Dec 28, 2019 at 2:02 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n> > these keywords are syntactically distinct from ordinary names. Given\n> > the precedent that \"+\" and \"@\" prefixes change what an identifier means,\n> > maybe we could use \"*\" or some other punctuation character as a keyword\n> > prefix? We'd have to give grandfather exceptions to the existing\n> > keywords, at least for a while, but we could say that new ones won't be\n> > recognized without the prefix.\n>\n> I'm all for this (and even suggested it during the IRC conversation that\n> prompted this patch). It's rife with bikeshedding, though. My original\n> proposal was to use '&' and Andrew Gierth would have used ':'.\n\nI think this is a good proposal regardless of which character we\ndecide to use. My order of preference from highest-to-lowest would\nprobably be :*&, but maybe that's just because I'm reading this on\nSunday rather than on Tuesday.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Dec 2019 11:16:47 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Sat, Dec 28, 2019 at 2:02 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>> I'm all for this (and even suggested it during the IRC conversation that\n>> prompted this patch). It's rife with bikeshedding, though. My original\n>> proposal was to use '&' and Andrew Gierth would have used ':'.\n\n> I think this is a good proposal regardless of which character we\n> decide to use. My order of preference from highest-to-lowest would\n> probably be :*&, but maybe that's just because I'm reading this on\n> Sunday rather than on Tuesday.\n\nI don't have any particular objection to '&' if people prefer that.\nBut ':' seems like it would introduce confusion with the\nvariable-substitution notation used in psql and some other places.\n\nIt's not that hard to imagine that somebody might want a\nvariable-substitution notation in pg_hba.conf someday, so we should\nleave syntax room for one, and ':' seems like a likely choice\nfor it (although I suppose a case could be made for '$' too).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Dec 2019 11:31:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On Sun, Dec 29, 2019 at 11:31 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I don't have any particular objection to '&' if people prefer that.\n> But ':' seems like it would introduce confusion with the\n> variable-substitution notation used in psql and some other places.\n>\n> It's not that hard to imagine that somebody might want a\n> variable-substitution notation in pg_hba.conf someday, so we should\n> leave syntax room for one, and ':' seems like a likely choice\n> for it (although I suppose a case could be made for '$' too).\n\nWell, as I say, I don't care very much... I hope we can agree on\nsomething and move forward.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 29 Dec 2019 11:48:19 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 29/12/2019 17:31, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n>> On Sat, Dec 28, 2019 at 2:02 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>>> I'm all for this (and even suggested it during the IRC conversation that\n>>> prompted this patch). It's rife with bikeshedding, though. My original\n>>> proposal was to use '&' and Andrew Gierth would have used ':'.\n>> I think this is a good proposal regardless of which character we\n>> decide to use. My order of preference from highest-to-lowest would\n>> probably be :*&, but maybe that's just because I'm reading this on\n>> Sunday rather than on Tuesday.\n> I don't have any particular objection to '&' if people prefer that.\n\n\nI wrote the patch so I got to decide. :-) I will also volunteer to do\nthe grunt work of changing the symbol if consensus wants that, though.\n\n\nIt turns out that my original patch didn't really change, all the meat\nis in the keywords patch. The superuser patch is to be applied on top\nof the keywords patch.\n\n-- \n\nVik Fearing",
"msg_date": "Sun, 29 Dec 2019 23:10:12 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 29/12/2019 23:10, Vik Fearing wrote:\n> On 29/12/2019 17:31, Tom Lane wrote:\n>> Robert Haas <robertmhaas@gmail.com> writes:\n>>> On Sat, Dec 28, 2019 at 2:02 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>>>> I'm all for this (and even suggested it during the IRC conversation that\n>>>> prompted this patch). It's rife with bikeshedding, though. My original\n>>>> proposal was to use '&' and Andrew Gierth would have used ':'.\n>>> I think this is a good proposal regardless of which character we\n>>> decide to use. My order of preference from highest-to-lowest would\n>>> probably be :*&, but maybe that's just because I'm reading this on\n>>> Sunday rather than on Tuesday.\n>> I don't have any particular objection to '&' if people prefer that.\n>\n> I wrote the patch so I got to decide. :-) I will also volunteer to do\n> the grunt work of changing the symbol if consensus wants that, though.\n>\n>\n> It turns out that my original patch didn't really change, all the meat\n> is in the keywords patch. The superuser patch is to be applied on top\n> of the keywords patch.\n>\n\nI missed a few places in the tap tests. New keywords patch attached,\nsuperuser patch unchanged.\n\n-- \n\nVik Fearing",
"msg_date": "Mon, 30 Dec 2019 11:56:17 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 11:56:17AM +0100, Vik Fearing wrote:\n> On 29/12/2019 23:10, Vik Fearing wrote:\n> > On 29/12/2019 17:31, Tom Lane wrote:\n> >> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> On Sat, Dec 28, 2019 at 2:02 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n> >>>> I'm all for this (and even suggested it during the IRC conversation that\n> >>>> prompted this patch). It's rife with bikeshedding, though. My original\n> >>>> proposal was to use '&' and Andrew Gierth would have used ':'.\n> >>> I think this is a good proposal regardless of which character we\n> >>> decide to use. My order of preference from highest-to-lowest would\n> >>> probably be :*&, but maybe that's just because I'm reading this on\n> >>> Sunday rather than on Tuesday.\n> >> I don't have any particular objection to '&' if people prefer that.\n> >\n> > I wrote the patch so I got to decide. :-) I will also volunteer to do\n> > the grunt work of changing the symbol if consensus wants that, though.\n> >\n> >\n> > It turns out that my original patch didn't really change, all the meat\n> > is in the keywords patch. The superuser patch is to be applied on top\n> > of the keywords patch.\n> >\n> \n> I missed a few places in the tap tests. New keywords patch attached,\n> superuser patch unchanged.\n> \n> -- \n> \n> Vik Fearing\n> \n\n\nPatches apply cleanly to 0ce38730ac72029f3f2c95ae80b44f5b9060cbcc, and\ninclude documentation. They could use an example of the new\ncapability, possibly included in the sample pg_hba.conf, e.g. \n\n host &all &superuser 0.0.0.0/0 reject\n\nor similar.\n\nThe feature works as described, and is useful. I have thus far been\nunable to make it crash.\n\nI haven't used intentionally hostile strings to test it, as I didn't\nsee those as an important attack surface. 
This is because by the time\nsomeone hostile can write to pg_hba.conf, they've got all the control\nthey need to manipulate the entire node, including root exploits.\n\nI've marked this as Ready for Committer.\n\nBest,\nDavid.\n-- \nDavid Fetter <david(at)fetter(dot)org> http://fetter.org/\nPhone: +1 415 235 3778\n\nRemember to vote!\nConsider donating to Postgres: http://www.postgresql.org/about/donate\n\n\n",
"msg_date": "Mon, 30 Dec 2019 20:27:12 +0100",
"msg_from": "David Fetter <david@fetter.org>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Greetings,\n\n* Vik Fearing (vik.fearing@2ndquadrant.com) wrote:\n> On 29/12/2019 23:10, Vik Fearing wrote:\n> > On 29/12/2019 17:31, Tom Lane wrote:\n> >> Robert Haas <robertmhaas@gmail.com> writes:\n> >>> On Sat, Dec 28, 2019 at 2:02 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n> >>>> I'm all for this (and even suggested it during the IRC conversation that\n> >>>> prompted this patch). It's rife with bikeshedding, though. My original\n> >>>> proposal was to use '&' and Andrew Gierth would have used ':'.\n> >>> I think this is a good proposal regardless of which character we\n> >>> decide to use. My order of preference from highest-to-lowest would\n> >>> probably be :*&, but maybe that's just because I'm reading this on\n> >>> Sunday rather than on Tuesday.\n> >> I don't have any particular objection to '&' if people prefer that.\n> >\n> > I wrote the patch so I got to decide. :-) I will also volunteer to do\n> > the grunt work of changing the symbol if consensus wants that, though.\n> >\n> > It turns out that my original patch didn't really change, all the meat\n> > is in the keywords patch. The superuser patch is to be applied on top\n> > of the keywords patch.\n> \n> I missed a few places in the tap tests. New keywords patch attached,\n> superuser patch unchanged.\n\nWe already have a reserved namespace when it comes to roles,\nspecifically \"pg_\".. why invent something new like this '&' prefix when\nwe could just declare that 'pg_superusers' is a role to which all\nsuperusers are members? Or something along those lines?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 2 Jan 2020 14:52:02 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 02/01/2020 20:52, Stephen Frost wrote:\n> Greetings,\n>\n> * Vik Fearing (vik.fearing@2ndquadrant.com) wrote:\n>> On 29/12/2019 23:10, Vik Fearing wrote:\n>>> On 29/12/2019 17:31, Tom Lane wrote:\n>>>> Robert Haas <robertmhaas@gmail.com> writes:\n>>>>> On Sat, Dec 28, 2019 at 2:02 PM Vik Fearing <vik.fearing@2ndquadrant.com> wrote:\n>>>>>> I'm all for this (and even suggested it during the IRC conversation that\n>>>>>> prompted this patch). It's rife with bikeshedding, though. My original\n>>>>>> proposal was to use '&' and Andrew Gierth would have used ':'.\n>>>>> I think this is a good proposal regardless of which character we\n>>>>> decide to use. My order of preference from highest-to-lowest would\n>>>>> probably be :*&, but maybe that's just because I'm reading this on\n>>>>> Sunday rather than on Tuesday.\n>>>> I don't have any particular objection to '&' if people prefer that.\n>>> I wrote the patch so I got to decide. :-) I will also volunteer to do\n>>> the grunt work of changing the symbol if consensus wants that, though.\n>>>\n>>> It turns out that my original patch didn't really change, all the meat\n>>> is in the keywords patch. The superuser patch is to be applied on top\n>>> of the keywords patch.\n>> I missed a few places in the tap tests. New keywords patch attached,\n>> superuser patch unchanged.\n> We already have a reserved namespace when it comes to roles,\n> specifically \"pg_\".. why invent something new like this '&' prefix when\n> we could just declare that 'pg_superusers' is a role to which all\n> superusers are members? Or something along those lines?\n\n\nThis is an argument against the superusers patch, but surely you are not\nsuggesting we add a pg_all role that contains all users? And what about\nthe keywords that aren't for users?\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Thu, 2 Jan 2020 21:04:27 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> We already have a reserved namespace when it comes to roles,\n> specifically \"pg_\".. why invent something new like this '&' prefix when\n> we could just declare that 'pg_superusers' is a role to which all\n> superusers are members? Or something along those lines?\n\nMeh. If the things aren't actually roles, I think this'd just\nadd confusion. Or were you proposing to implement them as roles?\nI'm not sure if that would be practical in every case.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 15:04:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n > Stephen Frost <sfrost@snowman.net> writes:\n >> We already have a reserved namespace when it comes to roles,\n >> specifically \"pg_\".. why invent something new like this '&' prefix\n >> when we could just declare that 'pg_superusers' is a role to which\n >> all superusers are members? Or something along those lines?\n\n Tom> Meh. If the things aren't actually roles, I think this'd just add\n Tom> confusion. Or were you proposing to implement them as roles? I'm\n Tom> not sure if that would be practical in every case.\n\nIn fact my original suggestion when this idea was discussed on IRC was\nto remove the current superuser flag and turn it into a role; but the\nissue then is that role membership is inherited and superuserness\ncurrently isn't, so that's a more intrusive change.\n\n-- \nAndrew (irc:RhodiumToad)\n\n\n",
"msg_date": "Thu, 02 Jan 2020 20:13:00 +0000",
"msg_from": "Andrew Gierth <andrew@tao11.riddles.org.uk>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Greetings,\n\nOn Thu, Jan 2, 2020 at 15:04 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Stephen Frost <sfrost@snowman.net> writes:\n> > We already have a reserved namespace when it comes to roles,\n> > specifically \"pg_\".. why invent something new like this '&' prefix when\n> > we could just declare that 'pg_superusers' is a role to which all\n> > superusers are members? Or something along those lines?\n>\n> Meh. If the things aren't actually roles, I think this'd just\n> add confusion. Or were you proposing to implement them as roles?\n> I'm not sure if that would be practical in every case.\n\n\nHaving them as roles might be interesting though I don’t think it would be\nrequired. As for your argument, surely we aren’t going to make\n“&superusers” an actual role with this, so you have to accept that’s what\nthere isn’t a real role either way. I don’t really care for this idea of\nmaking up new syntax that people have to learn, understand, train others\non, etc.\n\nThe pg_ prefix makes it clear that it’s a system role... literally by\ndefinition.\n\nAs for Vik’s thought about “pg_all”- I hadn’t been thinking we would do\nthat (“all” is already accepted there anyway and trying to deprecate that\nseems unlikely to result in ever actually removing it because that’s the\nkind of thing we will argue about and never do..), but it seems like an\ninteresting idea. Using “public” is maybe another interesting thought there\nsince that’s the same thing and also reserved...\n\nThanks,\n\nStephen\n\n>",
"msg_date": "Thu, 2 Jan 2020 15:17:26 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "## Stephen Frost (sfrost@snowman.net):\n\n> We already have a reserved namespace when it comes to roles,\n> specifically \"pg_\".. why invent something new like this '&' prefix when\n> we could just declare that 'pg_superusers' is a role to which all\n> superusers are members? Or something along those lines?\n\nTaking this idea one step further (back?): with any non-trivial\nnumber of (user-)roles in the database, DBAs would be well advised\nto use group(-role)s for privilege management anyways. It's not\nto unreasonable to grant SUPERUSER through a group, too. Although\nI'm not sure we'd need a new pg_superuser role here, we're not\ninventing a new set of object privileges as in e.g. pg_monitor;\nthe DBA can just create their own superuser group.\nIs there really a need to add more features, or would it be sufficient\nto make the applications of group roles more prominent in the docs?\n(I've seen way too many cases in which people where granting privileges\nto individual users when they should have used groups, so I might\nbe biased).\n\nRegards,\nChristoph\n\n-- \nSpare Space\n\n\n",
"msg_date": "Thu, 2 Jan 2020 21:19:32 +0100",
"msg_from": "Christoph Moench-Tegeder <cmt@burggraben.net>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Tom> Meh. If the things aren't actually roles, I think this'd just add\n> Tom> confusion. Or were you proposing to implement them as roles? I'm\n> Tom> not sure if that would be practical in every case.\n\n> In fact my original suggestion when this idea was discussed on IRC was\n> to remove the current superuser flag and turn it into a role; but the\n> issue then is that role membership is inherited and superuserness\n> currently isn't, so that's a more intrusive change.\n\nTo cover the proposed functionality, you'd still need some way to\nselect not-superuser. So I don't think this fully answers the need\neven if we wanted to do it.\n\nIt's possible that role-ifying everything and then allowing \"!role\"\nin the pg_hba.conf syntax would be enough. Not sure though.\n\nMore generally, allowing inheritance of superuser scares me a bit\nfrom a security standpoint. I wouldn't mind turning all the other\nlegacy role properties into grantable roles, but I *like* the fact\nthat that one is special.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 15:49:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Greetings,\n\nOn Thu, Jan 2, 2020 at 15:50 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Andrew Gierth <andrew@tao11.riddles.org.uk> writes:\n> > \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > Tom> Meh. If the things aren't actually roles, I think this'd just add\n> > Tom> confusion. Or were you proposing to implement them as roles? I'm\n> > Tom> not sure if that would be practical in every case.\n>\n> > In fact my original suggestion when this idea was discussed on IRC was\n> > to remove the current superuser flag and turn it into a role; but the\n> > issue then is that role membership is inherited and superuserness\n> > currently isn't, so that's a more intrusive change.\n>\n> To cover the proposed functionality, you'd still need some way to\n> select not-superuser. So I don't think this fully answers the need\n> even if we wanted to do it.\n\n\nSorry- why do we need that..? The first match for a pg_hba line wins, so\nyou can define all the access methods that superuser accounts are allowed\nto use first, then a “reject” line for superuser accounts, and then\nwhatever else you want after that.\n\nMore generally, allowing inheritance of superuser scares me a bit\n> from a security standpoint. I wouldn't mind turning all the other\n> legacy role properties into grantable roles, but I *like* the fact\n> that that one is special.\n\n\nRequiring an extra “set role whatever;” is good to make sure the user\nreally understands they’re running as superuser, but it doesn’t really\nimprove actual security at all since there’s no way to require a password\nor anything. That superuser-ness isn’t inherited but membership in the\n“postgres” or other role-that-owns-everything role is actually strikes me\nas less than ideal... the whole allow system table mods thing kinda helps\nwith that since you need that extra step to actually change most things but\nit’s still not great imv. 
I can’t get too excited about trying to improve\nthis though since I’d expect material changes to improve security to be\nbeat back with backwards incompatibility concerns.\n\nThanks,\n\nStephen\n\n>",
"msg_date": "Thu, 2 Jan 2020 16:01:43 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> On Thu, Jan 2, 2020 at 15:50 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> To cover the proposed functionality, you'd still need some way to\n>> select not-superuser. So I don't think this fully answers the need\n>> even if we wanted to do it.\n\n> Sorry- why do we need that..? The first match for a pg_hba line wins, so\n> you can define all the access methods that superuser accounts are allowed\n> to use first, then a “reject” line for superuser accounts, and then\n> whatever else you want after that.\n\nSeems kind of awkward. Or more to the point: you can already do whatever\nyou want in pg_hba.conf, as long as you're willing to be verbose enough\n(and, perhaps, willing to maintain group memberships to fit your needs).\nThe discussion here, IMO, is about offering useful shorthands.\nSo a facility like \"!role\" seems potentially useful. Maybe it's not\nreally, but I don't think we should reject it just because there's\na verbose and non-obvious way to get the same result.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 16:07:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> Stephen Frost <sfrost@snowman.net> writes:\n> > On Thu, Jan 2, 2020 at 15:50 Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> To cover the proposed functionality, you'd still need some way to\n> >> select not-superuser. So I don't think this fully answers the need\n> >> even if we wanted to do it.\n> \n> > Sorry- why do we need that..? The first match for a pg_hba line wins, so\n> > you can define all the access methods that superuser accounts are allowed\n> > to use first, then a “reject” line for superuser accounts, and then\n> > whatever else you want after that.\n> \n> Seems kind of awkward. Or more to the point: you can already do whatever\n> you want in pg_hba.conf, as long as you're willing to be verbose enough\n> (and, perhaps, willing to maintain group memberships to fit your needs).\n\nSure it's awkward, but it's how people actually deal with these things\ntoday. I'm not against improving on that situation but I also don't\nhear tons of complaints about it either, so I do think we should be\nthoughtful when it comes to making changes here.\n\n> The discussion here, IMO, is about offering useful shorthands.\n\nIn general, I'm alright with that idea, but I do want to make sure we're\nreally being thoughtful when it comes to inventing new syntax that will\nonly work in this one place and will have to be handled specially by any\ntools or anything that wants to generate or look at this.\n\nWhat are we going to have be displayed through pg_hba_file_rules() for\nthis '!role' or whatever else, in the 'user_name' column? 
(Also, ugh,\nI find calling that column 'user_name' mildly offensive considering that\nfunction was added well after roles, and it's not like it really meant\n'user name' even before then..).\n\nYes, I'm sure we could just have it be the text '!role' and make\neveryone who cares have to parse out that field, in SQL, to figure out\nwho it really applies to and basically just make everyone deal with it\nbut I remain skeptical about if it's really a particularly good\napproach.\n\n> So a facility like \"!role\" seems potentially useful. Maybe it's not\n> really, but I don't think we should reject it just because there's\n> a verbose and non-obvious way to get the same result.\n\nI don't agree that it's \"non-obvious\" that if you have a config file\nwhere \"first match wins\" that things which don't match the first line\nare, by definition, NOT whatever that first line was and then fall\nthrough to the next, where you could use 'reject' if you want. In fact,\nI've always kinda figured that's what 'reject' was for, though I'll\nadmit that it's been around for far longer than I've been involved in\nthe project (sadly, I hadn't discovered PG yet back in 1998).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 3 Jan 2020 10:04:03 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
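The "first match wins" ordering Stephen describes can be sketched as a pg_hba.conf fragment; the role name `postgres` and the addresses are illustrative assumptions, not taken from the thread:

```
# Hypothetical sketch: because the first matching line wins, the
# methods permitted to the superuser come first, then an explicit
# reject for that role, then the general rules for everyone else.
# TYPE  DATABASE  USER      ADDRESS        METHOD
local   all       postgres                 peer
host    all       postgres  0.0.0.0/0      reject
host    all       all       0.0.0.0/0      scram-sha-256
```

With this layout the superuser can only connect via local peer authentication; any other role falls through to the final line.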
{
"msg_contents": "So it's not clear to me whether we have any meeting of the minds\non wanting this patch. In the meantime, though, the cfbot\nreports that the patch breaks the ssl tests. Why is that?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 11:03:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 06/01/2020 17:03, Tom Lane wrote:\n> So it's not clear to me whether we have any meeting of the minds\n> on wanting this patch. In the meantime, though, the cfbot\n> reports that the patch breaks the ssl tests. Why is that?\n\n\nI have no idea. I cannot reproduce the failure locally.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Mon, 6 Jan 2020 18:49:45 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Vik Fearing <vik.fearing@2ndquadrant.com> writes:\n> On 06/01/2020 17:03, Tom Lane wrote:\n>> So it's not clear to me whether we have any meeting of the minds\n>> on wanting this patch. In the meantime, though, the cfbot\n>> reports that the patch breaks the ssl tests. Why is that?\n\n> I have no idea. I cannot reproduce the failure locally.\n\nHm, it blows up pretty thoroughly for me too, on a RHEL6 box.\nAre you sure you're running that test -- check-world doesn't do it?\n\nAt least in the 001_ssltests test, the failures seem to all look\nlike this in the TAP test's log file:\n\npsql: error: could not connect to server: could not initiate GSSAPI security context: Unspecified GSS failure. Minor code may provide more information\ncould not initiate GSSAPI security context: Credentials cache file '/tmp/krb5cc_502' not found\n\nThere are no matching entries in the postmaster log file, so this\nseems to be strictly a client-side failure.\n\n(Are we *really* putting security credentials in /tmp ???)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 21:36:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 2020-01-06 17:03, Tom Lane wrote:\n> So it's not clear to me whether we have any meeting of the minds\n> on wanting this patch.\n\nThis fairly far-ranging syntax reorganization of pg_hba.conf doesn't \nappeal to me. pg_hba.conf is complicated enough conceptually for users, \nbut AFAICT nobody ever complained about the syntax or the lexical \nstructure specifically. Assigning meaning to randomly chosen special \ncharacters, moreover in a security-relevant file, seems like the wrong \nway to go.\n\nMoreover, this thread has morphed from what it says in the subject line \nto changing the syntax of pg_hba.conf in a somewhat fundamental way. So \nat the very least someone should post a comprehensive summary of what is \nbeing proposed, instead of just attaching patches that implement \nwhatever was discussed across the thread.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jan 2020 23:13:49 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 08/01/2020 23:13, Peter Eisentraut wrote:\n> On 2020-01-06 17:03, Tom Lane wrote:\n>> So it's not clear to me whether we have any meeting of the minds\n>> on wanting this patch.\n>\n> This fairly far-ranging syntax reorganization of pg_hba.conf doesn't\n> appeal to me. pg_hba.conf is complicated enough conceptually for\n> users, but AFAICT nobody ever complained about the syntax or the\n> lexical structure specifically. Assigning meaning to randomly chosen\n> special characters, moreover in a security-relevant file, seems like\n> the wrong way to go.\n>\n> Moreover, this thread has morphed from what it says in the subject\n> line to changing the syntax of pg_hba.conf in a somewhat fundamental\n> way. So at the very least someone should post a comprehensive summary\n> of what is being proposed, instead of just attaching patches that\n> implement whatever was discussed across the thread.\n>\n\nWhat is being proposed is what is in the Subject and the original\npatch. The other patch is because Tom didn't like \"the continuing creep\nof pseudo-reserved database and user names\" so I wrote a patch to mark\nsuch reserved names and rebased my original patch on top of it. Only\nthe docs changed in the rebase. The original patch (or its rebase) is\nwhat I am interested in.\n\n-- \n\nVik Fearing\n\n\n\n",
"msg_date": "Thu, 9 Jan 2020 00:55:37 +0100",
"msg_from": "Vik Fearing <vik.fearing@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On Wed, 8 Jan 2020 at 23:55, Vik Fearing <vik.fearing@2ndquadrant.com>\nwrote:\n\n> On 08/01/2020 23:13, Peter Eisentraut wrote:\n> > On 2020-01-06 17:03, Tom Lane wrote:\n> >> So it's not clear to me whether we have any meeting of the minds\n> >> on wanting this patch.\n> >\n> > This fairly far-ranging syntax reorganization of pg_hba.conf doesn't\n> > appeal to me. pg_hba.conf is complicated enough conceptually for\n> > users, but AFAICT nobody ever complained about the syntax or the\n> > lexical structure specifically. Assigning meaning to randomly chosen\n> > special characters, moreover in a security-relevant file, seems like\n> > the wrong way to go.\n> >\n> > Moreover, this thread has morphed from what it says in the subject\n> > line to changing the syntax of pg_hba.conf in a somewhat fundamental\n> > way. So at the very least someone should post a comprehensive summary\n> > of what is being proposed, instead of just attaching patches that\n> > implement whatever was discussed across the thread.\n> >\n>\n> What is being proposed is what is in the Subject and the original\n> patch. The other patch is because Tom didn't like \"the continuing creep\n> of pseudo-reserved database and user names\" so I wrote a patch to mark\n> such reserved names and rebased my original patch on top of it. Only\n> the docs changed in the rebase. 
The original patch (or its rebase) is\n> what I am interested in.\n>\n\nHopefully there will be no danger of me gaining access if I have a crafted\nrolename?\n\npostgres=# create role \"&backdoor\";\nCREATE ROLE\n\n-- \nSimon Riggs http://www.2ndQuadrant.com/\n<http://www.2ndquadrant.com/>\nPostgreSQL Solutions for the Enterprise",
"msg_date": "Thu, 9 Jan 2020 09:07:01 +0000",
"msg_from": "Simon Riggs <simon@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Simon Riggs <simon@2ndquadrant.com> writes:\n> On Wed, 8 Jan 2020 at 23:55, Vik Fearing <vik.fearing@2ndquadrant.com>\n> wrote:\n>> What is being proposed is what is in the Subject and the original\n>> patch. The other patch is because Tom didn't like \"the continuing creep\n>> of pseudo-reserved database and user names\" so I wrote a patch to mark\n>> such reserved names and rebased my original patch on top of it. Only\n>> the docs changed in the rebase. The original patch (or its rebase) is\n>> what I am interested in.\n\n> Hopefully there will be no danger of me gaining access if I have a crafted\n> rolename?\n> postgres=# create role \"&backdoor\";\n> CREATE ROLE\n\nWell, the existence of such a role name wouldn't by itself cause any\nchange in the way that pg_hba.conf is parsed. If you could then\npersuade a superuser to insert a pg_hba.conf line that is trying\nto match your username, the line might do something else than what the\nsuperuser expected, which is bad. But the *exact* same hazard applies\nto proposals based on inventing pseudo-reserved keywords (by which\nI mean things that look like names, but aren't reserved words, so that\nsomebody could create a role name matching them). Either way, an\nuninformed superuser could be tricked.\n\nWhat I'm basically objecting to is the pseudo-reservedness. If we\ndon't want to dodge that with special syntax, we should dodge it\nby making sure the keywords are actually reserved names. 
In other\nwords, add a \"pg_\" prefix, as somebody else suggested upthread.\nI don't personally find that prettier than a punctuation prefix,\nbut I can live with it if other people do.\n\nBTW, although that solution works for the immediate need of\nkeywords that have to be distinguished from role names, it doesn't\ncurrently scale to keywords for the database column, because we\ndon't treat \"pg_\" as a reserved prefix for database names:\n\nregression=# create role pg_zit;\nERROR: role name \"pg_zit\" is reserved\nDETAIL: Role names starting with \"pg_\" are reserved.\nregression=# create database pg_zit;\nCREATE DATABASE\n\nShould we do so, or wait till there's an immediate need to?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 10:06:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Greetings,\n\n* Tom Lane (tgl@sss.pgh.pa.us) wrote:\n> What I'm basically objecting to is the pseudo-reservedness. If we\n> don't want to dodge that with special syntax, we should dodge it\n> by making sure the keywords are actually reserved names. In other\n> words, add a \"pg_\" prefix, as somebody else suggested upthread.\n\nYes, that was my suggestion, and it was also my change a few major\nversions ago that actually reserved the \"pg_\" prefix for roles.\n\n> BTW, although that solution works for the immediate need of\n> keywords that have to be distinguished from role names, it doesn't\n> currently scale to keywords for the database column, because we\n> don't treat \"pg_\" as a reserved prefix for database names:\n> \n> regression=# create role pg_zit;\n> ERROR: role name \"pg_zit\" is reserved\n> DETAIL: Role names starting with \"pg_\" are reserved.\n> regression=# create database pg_zit;\n> CREATE DATABASE\n> \n> Should we do so, or wait till there's an immediate need to?\n\nI seem to recall (but it was years ago, so I might be wrong) advocating\nthat we should reserve the 'pg_' prefix for *all* object types. All I\ncan recall is that there wasn't much backing for the idea (though I also\ndon't recall any specific objection, and it's also quite possible that\nthere was simply no response to the idea).\n\nFor my 2c, I'd *much* rather we reserve it across the board, and sooner\nthan later, since that would hopefully reduce the impact on people. The\nonly justification for *not* reserving it is if we KNOW that we'll never\nneed a special one of those, but, well, we're well past that for\ndatabase names already- look at the fact that we've got a \"replication\"\none, for example. Maybe we can't ever un-reserve that, but I like the\nidea of reserving \"pg_\" for database names and then having\n\"pg_replication\" be allowed to mean replication connections and then\nencouraging users to use that instead.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Jan 2020 10:17:18 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> What I'm basically objecting to is the pseudo-reservedness. If we\n> don't want to dodge that with special syntax, we should dodge it\n> by making sure the keywords are actually reserved names.\n\nYou know, as I was reading this email, I got to thinking: aren't we\nengineering a solution to a problem for which we already have a\nsolution?\n\nThe documentation says:\n\n\"Quoting one of the keywords in a database, user, or address field\n(e.g., all or replication) makes the word lose its special character,\nand just match a database, user, or host with that name.\"\n\nSo if you're writing a pg_hba.conf file that contains a specific role\nname, and you want to make sure it doesn't get confused with a current\nor future keyword, just quote it. If you don't quote it, make sure to\nRTFM at the time and when upgrading.\n\nIf you want to argue that this isn't the cleanest possible solution to\nthe problem, I think I would agree. If we were doing this over again,\nwe could probably design a better syntax for pg_hba.conf, perhaps one\nwhere all specific role names have to be quoted and anything that's\nnot quoted is expected to be a keyword. But, as it is, nothing's\nreally broken here, and practical confusion is unlikely. If someone\nhas a role named \"superuser\", then it's probably a superuser account,\nand the worst that will happen is that we'll match all superuser\naccounts rather than only that one. If someone has a non-superuser\naccount called \"superuser\", or if they have an account named\n\"nonsuperuser,\" then, uh, that's lame, and if this patch causes them\nto improve their choice of role names, that's good. 
If it causes them\nto use quotes, that's also good.\n\nBut I think I'm coming around to the view that we're making what ought\nto be a simple change complicated, and we should just not do that.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 10:24:04 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
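The quoting rule Robert cites from the documentation can be shown with a short pg_hba.conf sketch (`trust` is used only to keep the illustration compact):

```
# TYPE  DATABASE  USER   ADDRESS       METHOD
# Unquoted, "all" is a keyword matching every role:
host    all       all    127.0.0.1/32  trust
# Quoted, it matches only a role literally named "all":
host    all       "all"  127.0.0.1/32  trust
```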
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jan 9, 2020 at 10:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What I'm basically objecting to is the pseudo-reservedness. If we\n>> don't want to dodge that with special syntax, we should dodge it\n>> by making sure the keywords are actually reserved names.\n\n> ...\n> But I think I'm coming around to the view that we're making what ought\n> to be a simple change complicated, and we should just not do that.\n\nThe problem is that we keep deciding that okay, it probably won't hurt\nanybody if this particular thing-that-ought-to-be-a-reserved-word isn't\nreally reserved. Your exercise in justifying that for \"superuser\" is\nnot unlike every other previous argument about this. Sooner or later\nthat's going to fail, and somebody's going to have a security problem\nbecause they didn't know that a particular name has magic properties\nin a particular context. (Which, indeed, maybe it didn't have when\nthey chose it.) Claiming they should have known better isn't where\nI want to be when that happens.\n\nI don't want to keep going down that path. These things are effectively\nreserved words, and they need to act like reserved words, so that you\nget an error if you misuse them. Silently doing something else than\nwhat (one reasonable reading of) a pg_hba.conf entry seems to imply\nis *bad*.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 11:06:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 11:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> The problem is that we keep deciding that okay, it probably won't hurt\n> anybody if this particular thing-that-ought-to-be-a-reserved-word isn't\n> really reserved. Your exercise in justifying that for \"superuser\" is\n> not unlike every other previous argument about this. Sooner or later\n> that's going to fail, and somebody's going to have a security problem\n> because they didn't know that a particular name has magic properties\n> in a particular context. (Which, indeed, maybe it didn't have when\n> they chose it.) Claiming they should have known better isn't where\n> I want to be when that happens.\n\nBut, again, we already *have* a way of solving this problem: use\nquotes. As Simon pointed out, your proposed solution isn't really a\nsolution at all, because & can appear in role names. It probably\nwon't, but there probably also won't be a role name that matches\neither of these keywords, so it's just six of one, half a dozen of the\nother. The thing that really solves it is quoting.\n\nNow I admit that if we decide pg_hba.conf keywords have to start with\n\"pg_\" and prevent names beginning with \"pg_\" from being used as object\nnames, then we'd have TWO ways of distinguishing between a keyword and\nan object name. But I don't think TMTOWTDI is the right design\nprinciple here.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 11:21:56 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jan 9, 2020 at 11:06 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> The problem is that we keep deciding that okay, it probably won't hurt\n>> anybody if this particular thing-that-ought-to-be-a-reserved-word isn't\n>> really reserved.\n\n> But, again, we already *have* a way of solving this problem: use\n> quotes. As Simon pointed out, your proposed solution isn't really a\n> solution at all, because & can appear in role names.\n\nI'm not sure that the pg_hba.conf parser allows that without quotes,\nbut in any case it's irrelevant to the proposal to use a pg_ prefix.\nWe don't allow non-built-in role names to be spelled that way,\nquoted or not.\n\n> Now I admit that if we decide pg_hba.conf keywords have to start with\n> \"pg_\" and prevent names beginning with \"pg_\" from being used as object\n> names, then we'd have TWO ways of distinguishing between a keyword and\n> an object name. But I don't think TMTOWTDI is the right design\n> principle here.\n\nThe principle I'm concerned about is not letting a configuration file\nthat was perfectly fine in version N silently become a security hazard\nin version N+1. The only way I will accept your proposal is if we\nchange the pg_hba.conf parser to *require* quotes around every\nrole and DB name that's not meant to be a keyword, so that people get\nused to that requirement. But I doubt that idea will fly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 11:35:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> But, again, we already *have* a way of solving this problem: use\n> quotes. As Simon pointed out, your proposed solution isn't really a\n> solution at all, because & can appear in role names. It probably\n> won't, but there probably also won't be a role name that matches\n> either of these keywords, so it's just six of one, half a dozen of the\n> other. The thing that really solves it is quoting.\n\nI really just can't agree with the idea that:\n\n\"&superuser\"\n\nand\n\n&superuser\n\nin pg_hba.conf should mean materially different things and have far\nreaching security differences. Depending on quoting in pg_hba.conf for\nthis distinction is an altogether bad idea.\n\n> Now I admit that if we decide pg_hba.conf keywords have to start with\n> \"pg_\" and prevent names beginning with \"pg_\" from being used as object\n> names, then we'd have TWO ways of distinguishing between a keyword and\n> an object name. But I don't think TMTOWTDI is the right design\n> principle here.\n\nThere is a *really* big difference here though which makes this not \"two\nways to do the same thing\"- you *can't* create a user starting with\n\"pg_\". You *can* create a user with an '&' in it. If we prevented you\nfrom being able to create users with '&' in it then I'd be more open to\nthe idea of using '&' to mean something special in pg_hba, and then it\nreally would be two different ways to do the same thing, but that's not\nactually what's being proposed here.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 9 Jan 2020 11:36:38 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Hi,\n\nI see this patch is marked as RFC since 12/30, but there seems to be\nquite a lot of discussion about the syntax, keywords and how exactly to\nidentify the superuser. So I'll switch it back to needs review, which I\nthink is a better representation of the current state.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 16 Jan 2020 22:38:20 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> I see this patch is marked as RFC since 12/30, but there seems to be\n> quite a lot of discussion about the syntax, keywords and how exactly to\n> identify the superuser. So I'll switch it back to needs review, which I\n> think is a better representation of the current state.\n\nSomebody switched it to RFC again, despite the facts that\n\n(a) there is absolutely no consensus about what syntax to use\n(and some of the proposals imply very different patches),\n\n(b) there's been no discussion at all since the last CF, and\n\n(c) the patch is still failing in the cfbot (src/test/ssl fails).\n\nWhile resolving (c) would seem to be the author's problem, I don't\nthink it's worth putting effort into that detail until we have\nsome meeting of the minds about (a). So I'll put this back to\n\"needs review\".\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Mar 2020 14:28:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "> On 30 Mar 2020, at 20:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> I see this patch is marked as RFC since 12/30, but there seems to be\n>> quite a lot of discussion about the syntax, keywords and how exactly to\n>> identify the superuser. So I'll switch it back to needs review, which I\n>> think is a better representation of the current state.\n> \n> Somebody switched it to RFC again, despite the facts that\n> \n> (a) there is absolutely no consensus about what syntax to use\n> (and some of the proposals imply very different patches),\n> \n> (b) there's been no discussion at all since the last CF, and\n> \n> (c) the patch is still failing in the cfbot (src/test/ssl fails).\n> \n> While resolving (c) would seem to be the author's problem, I don't\n> think it's worth putting effort into that detail until we have\n> some meeting of the minds about (a). So I'll put this back to\n> \"needs review\".\n\nSince there hasn't been any more progress on this since the last CF, and the\nfact that the outcome may result in a completely different patch, I'm inclined\nto mark this returned with feedback rather than have it linger. The discussion\ncan continue and the entry be re-opened.\n\nThoughts?\n\ncheers ./daniel\n\n",
"msg_date": "Thu, 2 Jul 2020 15:14:23 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
},
{
"msg_contents": "On 7/2/20 3:14 PM, Daniel Gustafsson wrote:\n>> On 30 Mar 2020, at 20:28, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>>\n>> Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>>> I see this patch is marked as RFC since 12/30, but there seems to be\n>>> quite a lot of discussion about the syntax, keywords and how exactly to\n>>> identify the superuser. So I'll switch it back to needs review, which I\n>>> think is a better representation of the current state.\n>>\n>> Somebody switched it to RFC again, despite the facts that\n>>\n>> (a) there is absolutely no consensus about what syntax to use\n>> (and some of the proposals imply very different patches),\n>>\n>> (b) there's been no discussion at all since the last CF, and\n>>\n>> (c) the patch is still failing in the cfbot (src/test/ssl fails).\n>>\n>> While resolving (c) would seem to be the author's problem, I don't\n>> think it's worth putting effort into that detail until we have\n>> some meeting of the minds about (a). So I'll put this back to\n>> \"needs review\".\n> \n> Since there hasn't been any more progress on this since the last CF, and the\n> fact that the outcome may result in a completely different patch, I'm inclined\n> to mark this returned with feedback rather than have it linger. The discussion\n> can continue and the entry be re-opened.\n> \n> Thoughts?\n\n\nNo objection.\n-- \nVik Fearing\n\n\n",
"msg_date": "Thu, 2 Jul 2020 15:21:34 +0200",
"msg_from": "Vik Fearing <vik@postgresfriends.org>",
"msg_from_op": false,
"msg_subject": "Re: Recognizing superuser in pg_hba.conf"
}
] |
[
{
"msg_contents": "We've often talked about the problem that we have no regression test\ncoverage for psql's tab completion code. I got interested in this\nissue while messing with the filename completion logic therein [1],\nso here is a draft patch that adds some testing for that code.\n\nThis is just preliminary investigation, so I've made no attempt\nto test tab-complete.c comprehensively (and I'm not sure there\nwould be much point in covering every last code path in it anyway).\nStill, it does get us from zero to 43% coverage of that file\nalready, and it does good things for the coverage of input.c\nand prompt.c as well.\n\nWhat I think is actually interesting at this stage is portability\nof the tests. There are a number of issues:\n\n* The script requires Perl's IO::Pty module (indirectly, in that IPC::Run\nrequires that to make pty connections), which isn't installed everywhere.\nI just made the script skip if that's not available, so that we're not\nmoving the goalposts for what has to be installed to run the TAP tests.\nIs that the right answer?\n\n* It seems pretty likely that this won't work on Windows, given all the\ncaveats in the IPC::Run documentation about nonfunctionality of the pty\nsupport there. Maybe we don't care, seeing that we don't really support\nreadline on Windows anyway. For the moment I assumed that the skip\nconditions for --without-readline and/or missing-IO::Pty would cover\nthis, but we might need an explicit check for Windows too. Or maybe\nsomebody wants to try to make it work on Windows; but that won't be me.\n\n* What's even more likely to be problematic is small variations in the\noutput behavior of different readline and libedit versions. According\nto my tests so far, though, all modern versions of them do pass these\ntest cases. I noted failures on very old Apple versions of libedit:\n\n1. macOS 10.5's version of libedit seems not to honor\nrl_completion_append_character; it never emits the trailing space\nwe're expecting. 
This seems like a plain old libedit bug, especially\nsince 10.4's version works as expected.\n\n2. Both 10.4 and 10.5 emit the alternative table names in the wrong\norder, suggesting that they're not internally sorting the completion\nresults. If this proves to be more widespread, we could likely fix\nit by adding ORDER BY to the completion queries, but I'm not sure that\nit's worth doing if only these dead macOS versions have the issue.\n(On the other hand, it seems like bad practice to be issuing queries\nthat have LIMIT without ORDER BY, so maybe we should fix them anyway.)\n\n\nI'm strongly tempted to just push this and see what the buildfarm\nthinks of it. If it fails in lots of places, maybe the idea is a\ndead end. If it works, I'd look into extending the tests --- in\nparticular, I'd like to have some coverage for the filename\ncompletion logic.\n\nThoughts?\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/14585.1577486216%40sss.pgh.pa.us",
"msg_date": "Sat, 28 Dec 2019 14:52:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "TAP testing for psql's tab completion code"
},
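The pty-driving technique the TAP test relies on (here via IPC::Run's pty support) can be sketched with Python's standard library; this is an editor's illustration rather than anything from the patch, `cat` stands in for psql so the sketch stays self-contained, and the `drive` helper name is invented:

```python
import os
import pty
import select

def drive(argv, inputs, timeout=5.0):
    """Run argv on a pseudo-terminal, feed it bytes, return its output."""
    pid, fd = pty.fork()
    if pid == 0:
        # Child: exec the interactive program with the pty as its tty.
        os.execvp(argv[0], argv)
    for chunk in inputs:
        os.write(fd, chunk)
    out = b""
    while True:
        ready, _, _ = select.select([fd], [], [], timeout)
        if not ready:
            break  # child went quiet; a real harness would kill it here
        try:
            data = os.read(fd, 1024)
        except OSError:
            break  # Linux raises EIO once the child closes its side
        if not data:
            break
        out += data
    os.close(fd)
    os.waitpid(pid, 0)
    return out.decode(errors="replace")

if __name__ == "__main__":
    # Drive `cat`: the tty echoes what we "type" and cat repeats it;
    # \x04 (^D) at the start of a line ends the session.
    print(drive(["cat"], [b"hello\n", b"\x04"]))
```

Probing tab completion would mean sending a partial word followed by a tab byte (`b"\t"`) and matching what comes back, which is essentially what the Perl test does through IPC::Run's pty redirects.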
{
"msg_contents": "\nHello Tom,\n\n> We've often talked about the problem that we have no regression test\n> coverage for psql's tab completion code. I got interested in this\n> issue while messing with the filename completion logic therein [1],\n> so here is a draft patch that adds some testing for that code.\n>\n> This is just preliminary investigation, so I've made no attempt\n> to test tab-complete.c comprehensively (and I'm not sure there\n> would be much point in covering every last code path in it anyway).\n> Still, it does get us from zero to 43% coverage of that file\n> already, and it does good things for the coverage of input.c\n> and prompt.c as well.\n>\n> What I think is actually interesting at this stage is portability\n> of the tests. There are a number of issues:\n>\n> * The script requires Perl's IO::Pty module (indirectly, in that IPC::Run\n> requires that to make pty connections), which isn't installed everywhere.\n> I just made the script skip if that's not available, so that we're not\n> moving the goalposts for what has to be installed to run the TAP tests.\n> Is that the right answer?\n>\n> * It seems pretty likely that this won't work on Windows, given all the\n> caveats in the IPC::Run documentation about nonfunctionality of the pty\n> support there. Maybe we don't care, seeing that we don't really support\n> readline on Windows anyway. For the moment I assumed that the skip\n> conditions for --without-readline and/or missing-IO::Pty would cover\n> this, but we might need an explicit check for Windows too. Or maybe\n> somebody wants to try to make it work on Windows; but that won't be me.\n>\n> * What's even more likely to be problematic is small variations in the\n> output behavior of different readline and libedit versions. According\n> to my tests so far, though, all modern versions of them do pass these\n> test cases. I noted failures on very old Apple versions of libedit:\n>\n> 1. 
macOS 10.5's version of libedit seems not to honor\n> rl_completion_append_character; it never emits the trailing space\n> we're expecting. This seems like a plain old libedit bug, especially\n> since 10.4's version works as expected.\n>\n> 2. Both 10.4 and 10.5 emit the alternative table names in the wrong\n> order, suggesting that they're not internally sorting the completion\n> results. If this proves to be more widespread, we could likely fix\n> it by adding ORDER BY to the completion queries, but I'm not sure that\n> it's worth doing if only these dead macOS versions have the issue.\n> (On the other hand, it seems like bad practice to be issuing queries\n> that have LIMIT without ORDER BY, so maybe we should fix them anyway.)\n>\n>\n> I'm strongly tempted to just push this and see what the buildfarm\n> thinks of it. If it fails in lots of places, maybe the idea is a\n> dead end. If it works, I'd look into extending the tests --- in\n> particular, I'd like to have some coverage for the filename\n> completion logic.\n>\n> Thoughts?\n\nAfter you raised the issue, I submitted something last August, which did \nnot attract much attention.\n\n https://commitfest.postgresql.org/26/2262/\n\nIt covers some tab-completion stuff. It uses Expect for the interactive \nstuff (tab completion, \\h, ...).\n\n-- \nFabien.\n\n\n",
"msg_date": "Sat, 28 Dec 2019 22:16:52 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> We've often talked about the problem that we have no regression test\n>> coverage for psql's tab completion code. I got interested in this\n>> issue while messing with the filename completion logic therein [1],\n>> so here is a draft patch that adds some testing for that code.\n\n> After you raised the issue, I submitted something last August, which did \n> not attract much attention.\n> https://commitfest.postgresql.org/26/2262/\n> It covers some tab-completion stuff. It uses Expect for the interactive \n> stuff (tab completion, \\h, ...).\n\nNow that you mention it, I seem to recall looking at that and not being\nhappy with the additional dependency on Expect. Expect is *not* a\nstandard module; on the machines I have handy, the only one in which it\nappears in the default Perl installation is macOS. (Huh, what's Apple\ndoing out ahead of the pack?) I'm pretty sure that Expect also relies on\nIO::Pty, so it's a strictly worse dependency than what I've got here.\n\nCan we recast what you did into something like this patch's methods?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 28 Dec 2019 17:52:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "\nHello Tom,\n\n>>> We've often talked about the problem that we have no regression test\n>>> coverage for psql's tab completion code. I got interested in this\n>>> issue while messing with the filename completion logic therein [1],\n>>> so here is a draft patch that adds some testing for that code.\n>\n>> After you raised the issue, I submitted something last August, which did\n>> not attract much attention.\n>> https://commitfest.postgresql.org/26/2262/\n>> It covers some tab-completion stuff. It uses Expect for the interactive\n>> stuff (tab completion, \\h, ...).\n>\n> Now that you mention it, I seem to recall looking at that and not being\n> happy with the additional dependency on Expect.\n\nPossibly. You did not say so very loudly.\n\n> Expect is *not* a standard module;\n\nSomewhat. It is an old one, though.\n\n> on the machines I have handy, the only one in which it appears in the \n> default Perl installation is macOS. (Huh, what's Apple doing out ahead \n> of the pack?) I'm pretty sure that Expect also relies on IO::Pty,\n\nIndeed, it does.\n\n> so it's a strictly worse dependency than what I've got here.\n\nIf you have to install IO::Pty anyway, ISTM you can also install Expect.\n\nIO::Pty documentation says that it is \"mainly used by Expect\", which is a \nclue that IO::Pty is not much better than Expect as a dependency.\n\nFor installation, \"apt install libexpect-perl\" did the trick for me; \"cpan \ninstall Expect\" should work as well on most setups.\n\nI guess it is possible to check whether Expect is available and to skip \nthe corresponding tests if not.\n\n> Can we recast what you did into something like this patch's methods?\n\nBasically it means reimplementing some Expect functionality in the script, \nincluding new bugs. Modules were invented to avert that, so I cannot say \nI'm happy with the prospect of re-inventing the wheel. Note that Expect is \na pure-Perl 1600-LOC module.\n\nAnyway, I'll have a look.\nAt least I used a very limited subset of Expect \ncapabilities which should help matters along.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 29 Dec 2019 08:11:07 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> on the machines I have handy, the only one in which [Expect] appears in the \n>> default Perl installation is macOS. (Huh, what's Apple doing out ahead \n>> of the pack?) I'm pretty sure that Expect also relies on IO::Pty,\n\n> Indeed, it does.\n\n>> so it's a strictly worse dependency than what I've got here.\n\n> If you have to install IO::Pty anyway, ISTM you can also install Expect.\n\nMy point is precisely that buildfarm owners *won't* have to install\nIO::Pty; it comes in a default Perl install almost everywhere.\nI'm afraid that's not true of Expect.\n\nNow in both cases we could avoid raising the bar by allowing the\nscript to \"skip\" if the module isn't there. But I think we'd end up\nwith less coverage if we do that with Expect than with IO::Pty.\n\n> IO::Pty documentation says that it is \"mainly used by Expect\", which is a \n> clue that IO::Pty is not much better than Expect as a dependency.\n\nYou're just guessing, not looking at facts on the ground. I have looked\nat RHEL, Fedora, Debian, FreeBSD, NetBSD, and OpenBSD. The only one\nin which IO::Pty isn't in the standard Perl install is OpenBSD.\n\nWell, actually, it's possible that on some of these boxes it was pulled\nin by the IPC::Run package, as I have that installed on all of them.\nBut the point remains the same: almost nowhere will IO::Pty be a new\ndependency for a buildfarm owner, whereas Expect will be a new dependency\nalmost everywhere.\n\n(One reason I'm interested to push this sooner not later is to find\nout what fraction of the TAP-test-running buildfarm critters do have\nIO::Pty already. If it turns out not to be almost all of them, then\nmy assumptions are wrong and we could revisit this discussion.)\n\n> For installation, \"apt install libexpect-perl\" did the trick for me. 
\"cpan \n> install Expect\" should work as well on most setup.\n\nI'm well aware of the mechanisms for installing nonstandard Perl modules,\nthanks. It's a pain, as a general rule. The fact that the buildfarm\nrequires IPC::Run is a large barrier to entry, and I don't want to double\nthe pain by adding a second far-off-the-beaten-track dependency.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Dec 2019 12:13:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "\nHello Tom,\n\n>> If you have to install IO::Pty anyway, ISTM you can also install Expect.\n>\n> My point is precisely that buildfarm owners *won't* have to install\n> IO::Pty; it comes in a default Perl install almost everywhere.\n> I'm afraid that's not true of Expect.\n\nHmmm. That is a good argument.\n\n> Now in both cases we could avoid raising the bar by allowing the\n> script to \"skip\" if the module isn't there.\n\nYep.\n\n>> IO::Pty documentation says that it is \"mainly used by Expect\", which is a\n>> clue that IO::Pty is not much better than Expect as a dependency.\n>\n> You're just guessing, not looking at facts on the ground. [...]\n\nI'm not guessing about what the documentation says :-) But about the \nconsequences, indeed I was guessing.\n\n> Well, actually, it's possible that on some of these boxes it was pulled \n> in by the IPC::Run package,\n\nAh, you are guessing right: IPC::Run requires IO::Pty, so it should be \navailable everywhere the buildfarm scripts already run. Maybe.\n\nI've looked at your PoC implementation:\n\nI'm not a fan of relying on the configure stuff (\"with_readline\"); in my \nExpect version I tested whether history capabilities are available from \npsql itself.\n\nI did not pay attention to not overwriting the psql history file, though.\n\nFor the psql coverage patch, I was more ambitious and needed fewer \nassumptions about the configuration; I only forced -X.\n\n-- \nFabien.\n\n\n",
"msg_date": "Sun, 29 Dec 2019 18:42:36 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n> I've looked at your PoC implementation:\n\n> I'm not fan of relying on the configure stuff (\"with_readline\"), in my \n> Expect version I tested if history capabilities are available from psql \n> itself.\n\nNo, I disagree with that. If configure thinks it built with readline,\nand then the actual binary acts like it doesn't have readline, that's\na bug that we'd like the tests to detect. We don't want the test\nsilently deciding that things are OK if the first thing it tries\ndoesn't work. (For comparison, the SSL tests are also enabled by\nconfigure's opinion not some other way -- I was mostly copying how\nthat works.)\n\n> For the psql coverage patch, I was more ambitious and needed less \n> assumption about the configuration, I only forced -X.\n\nI mainly just duplicated the environment set up by PostgresNode::psql\nas much as it seemed reasonable to. The -At options are kind of\nirrelevant for what we're going to test here, probably, but why not\nkeep the default behavior the same? I did drop -q since that\nsuppresses prompting, and we probably want to test prompt.c using\nthis infrastructure.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Dec 2019 12:53:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": ">> I'm not fan of relying on the configure stuff (\"with_readline\"), in my\n>> Expect version I tested if history capabilities are available from psql\n>> itself.\n>\n> No, I disagree with that. If configure thinks it built with readline,\n> and then the actual binary acts like it doesn't have readline, that's\n> a bug that we'd like the tests to detect.\n\nHmmm. Sure, that's a point.\n\nWhat about running some tests on an installed version?\n\n>> For the psql coverage patch, I was more ambitious and needed less\n>> assumption about the configuration, I only forced -X.\n>\n> I mainly just duplicated the environment set up by PostgresNode::psql\n> as much as it seemed reasonable to. The -At options are kind of\n> irrelevant for what we're going to test here, probably, but why not\n> keep the default behavior the same? I did drop -q since that\n> suppresses prompting, and we probably want to test prompt.c using\n> this infrastructure.\n\nThat is what my patch does: it tests prompts, tab completion, help, \ncommand options… and I added tests until I covered most of the psql \nsource.\n\n-- \nFabien.",
"msg_date": "Sun, 29 Dec 2019 19:52:41 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> No, I disagree with that. If configure thinks it built with readline,\n>> and then the actual binary acts like it doesn't have readline, that's\n>> a bug that we'd like the tests to detect.\n\n> Hmmm. Sure, that's a point.\n\n> What about running some tests on an installed version?\n\nI think \"make installcheck\" has plenty of dependencies already on the\nbuild tree matching the installed version. For instance, src/pl\nwill/won't run regression tests on languages it thinks was/weren't built.\nIf you want to run such tests retroactively, you'd better make sure you\nconfigure your build tree to match the existing installation.\n\n>> I mainly just duplicated the environment set up by PostgresNode::psql\n>> as much as it seemed reasonable to. The -At options are kind of\n>> irrelevant for what we're going to test here, probably, but why not\n>> keep the default behavior the same? I did drop -q since that\n>> suppresses prompting, and we probably want to test prompt.c using\n>> this infrastructure.\n\n> That is what my patch does: it tests prompts, tab completion, help, \n> command options… and I added tests till I covered most psql source.\n\nWell, I think that where possible we ought to test using the existing test\ninfrastructure -- help, for example, seems like it could perfectly well be\ntested in src/test/regress/sql/psql.sql, or we could move stuff out to a\nnew set of SQL test scripts under src/bin/psql/sql/, if it seems like we\ndon't need it to be part of the core tests. But any tests using this new\ninfrastructure are going to be skipped by some percentage of test\nmachines, so we shouldn't skip what needn't be skipped.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Dec 2019 14:22:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": ">> That is what my patch does: it tests prompts, tab completion, help,\n>> command options… and I added tests till I covered most psql source.\n>\n> Well, I think that where possible we ought to test using the existing \n> test infrastructure -- help, for example, seems like it could perfectly \n> well be tested in src/test/regress/sql/psql.sql, or we could move stuff \n> out to a new set of SQL test scripts under src/bin/psql/sql/,\n\nI do not think that is a good idea: the help outputs are quite large, \nthere are many of them, and we certainly do not want them stored \nrepeatedly in output files for diffs. I would rather trigger the output \nand only check for some related keywords, so that it fits TAP tests \nreasonably well.\n\n-- \nFabien.",
"msg_date": "Mon, 30 Dec 2019 11:49:28 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "Fabien COELHO <coelho@cri.ensmp.fr> writes:\n>> Well, I think that where possible we ought to test using the existing \n>> test infrastructure -- help, for example, seems like it could perfectly \n>> well be tested in src/test/regress/sql/psql.sql, or we could move stuff \n>> out to a new set of SQL test scripts under src/bin/psql/sql/,\n\n> I do not think it is a good idea, because help output is quite large, \n> there are many of them, and we should certainly not want it stored \n> repeatedly in output files for diffs.\n\nHm, I don't follow --- we are most certainly not going to exercise\n\\help for every possible SQL keyword, that'd just be silly.\n\nHaving said that, the fact that \\help now includes a version-dependent\nURL in its output is probably enough to break the idea of testing it\nwith a conventional expected-output test, so maybe TAP is the only\nway for that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Dec 2019 09:10:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: TAP testing for psql's tab completion code"
},
{
"msg_contents": "\nHello Tom,\n\n>> I do not think it is a good idea, because help output is quite large,\n>> there are many of them, and we should certainly not want it stored\n>> repeatedly in output files for diffs.\n>\n> Hm, I don't follow --- we are most certainly not going to exercise\n> \\help for every possible SQL keyword, that'd just be silly.\n\nI am silly.\n\nThe price is pretty low: it helps with coverage in \"sql_help.c\", and it \nchecks that the help entries return adequate results, so that adding new \nhelp content does not hide existing stuff. I do not see why we should not \ndo it, in TAP tests.\n\nThe alternative is that the project tolerates substandard test coverage. \nThe \"psql\" command is currently around 40-44%.\n\n> Having said that, the fact that \\help now includes a version-dependent\n> URL in its output is probably enough to break the idea of testing it\n> with a conventional expected-output test, so maybe TAP is the only\n> way for that.\n\nThe URL is a good thing, though.\n\n-- \nFabien.\n\n\n",
"msg_date": "Mon, 30 Dec 2019 15:40:14 +0100 (CET)",
"msg_from": "Fabien COELHO <coelho@cri.ensmp.fr>",
"msg_from_op": false,
"msg_subject": "Re: TAP testing for psql's tab completion code"
}
] |
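The thread above drives psql interactively through a pseudo-terminal (IPC::Run on top of IO::Pty in the TAP test, or Expect in Fabien's patch). As a rough illustration of that technique outside Perl, here is a minimal Python sketch that runs a program under a pty and captures what it writes. This is not the actual TAP test code: the function name is invented, and `echo` stands in for psql so the sketch is self-contained.

```python
import os
import pty

# Run a command under a pseudo-terminal and collect everything it writes,
# the same basic mechanism a pty-based psql test uses to observe
# tab-completion output and prompts.
def run_under_pty(argv):
    pid, fd = pty.fork()
    if pid == 0:                      # child: exec the program on the pty
        os.execvp(argv[0], argv)
    chunks = []
    while True:
        try:
            data = os.read(fd, 1024)
        except OSError:               # Linux raises EIO once the child exits
            break
        if not data:
            break
        chunks.append(data)
    os.waitpid(pid, 0)
    return b"".join(chunks).decode(errors="replace")

out = run_under_pty(["echo", "tab completion under a pty"])
print(out.strip())
```

A real test would additionally write keystrokes (including a Tab character) to `fd` and match the completion echoed back, which is what the IPC::Run-based script does through its pty handle.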
[
{
"msg_contents": "Hello \nhere is an unexpected error found while testing IVM v11 patches\n\ncreate table b1 (id integer, x numeric(10,3));\ncreate incremental materialized view mv1 \nas select id, count(*),sum(x) from b1 group by id;\n\ndo $$ \ndeclare \n\ti integer;\nbegin \n\tfor i in 1..10000 \n\tloop \n\t\tinsert into b1 values (1,1); \n\tend loop; \nend;\n$$\n;\n\nERROR: out of shared memory\nHINT: You might need to increase max_locks_per_transaction.\nCONTEXT: SQL statement \"DROP TABLE pg_temp_3.pg_temp_66154\"\nSQL statement \"insert into b1 values (1,1)\"\nPL/pgSQL function inline_code_block line 1 at SQL statement\n\nRegards\nPAscal\n\n\n\n--\nSent from: https://www.postgresql-archive.org/PostgreSQL-hackers-f1928748.html\n\n\n",
"msg_date": "Sat, 28 Dec 2019 13:15:09 -0700 (MST)",
"msg_from": "legrand legrand <legrand_legrand@hotmail.com>",
"msg_from_op": true,
"msg_subject": "Incremental View Maintenance: ERROR: out of shared memory"
},
{
"msg_contents": "> Hello \n> here is an unexpected error found while testing IVM v11 patches\n> \n> create table b1 (id integer, x numeric(10,3));\n> create incremental materialized view mv1 \n> as select id, count(*),sum(x) from b1 group by id;\n> \n> do $$ \n> declare \n> \ti integer;\n> begin \n> \tfor i in 1..10000 \n> \tloop \n> \t\tinsert into b1 values (1,1); \n> \tend loop; \n> end;\n> $$\n> ;\n> \n> ERROR: out of shared memory\n> HINT: You might need to increase max_locks_per_transaction.\n> CONTEXT: SQL statement \"DROP TABLE pg_temp_3.pg_temp_66154\"\n> SQL statement \"insert into b1 values (1,1)\"\n> PL/pgSQL function inline_code_block line 1 at SQL statement\n\nYeah, the following code generates a similar error even without IVM.\n\ndo $$ \ndeclare \n\ti integer;\nbegin \n\tfor i in 1..10000\n\tloop \n\t\tcreate temp table mytemp(i int);\n\t\tdrop table mytemp;\n\tend loop; \nend;\n$$\n;\n\nERROR: out of shared memory\nHINT: You might need to increase max_locks_per_transaction.\nCONTEXT: SQL statement \"create temp table mytemp(i int)\"\nPL/pgSQL function inline_code_block line 7 at SQL statement\n\nI think we could avoid such an error in IVM by reusing a temp table in\na session or a transaction.\n\nBest regards,\n--\nTatsuo Ishii\nSRA OSS, Inc. Japan\nEnglish: http://www.sraoss.co.jp/index_en.php\nJapanese:http://www.sraoss.co.jp\n\n\n",
"msg_date": "Sun, 29 Dec 2019 20:24:04 +0900 (JST)",
"msg_from": "Tatsuo Ishii <ishii@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance: ERROR: out of shared memory"
},
{
"msg_contents": "Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n>> here is an unexpected error found while testing IVM v11 patches\n>> ...\n>> ERROR: out of shared memory\n\n> I think we could avoid such an error in IVM by reusing a temp table in\n> a session or a transaction.\n\nI'm more than a little bit astonished that this proposed patch is\ncreating temp tables at all. ISTM that that implies that it's\nbeing implemented at the wrong level of abstraction, and it will be\nfull of security problems, as well as performance problems above\nand beyond the one described here.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 29 Dec 2019 12:27:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance: ERROR: out of shared memory"
},
{
"msg_contents": "On Sun, 29 Dec 2019 12:27:13 -0500\nTom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> >> here is an unexpected error found while testing IVM v11 patches\n> >> ...\n> >> ERROR: out of shared memory\n> \n> > I think we could avoid such an error in IVM by reusing a temp table in\n> > a session or a transaction.\n> \n> I'm more than a little bit astonished that this proposed patch is\n> creating temp tables at all. ISTM that that implies that it's\n> being implemented at the wrong level of abstraction, and it will be\n> full of security problems, as well as performance problems above\n> and beyond the one described here.\n\nWe realized that there are also other problems with using temp tables,\nas pointed out in another thread. So, we are now working on rewriting\nour patch not to use temp tables.\n\nRegards,\nYugo Nagata\n\n\n-- \nYugo NAGATA <nagata@sraoss.co.jp>\n\n\n",
"msg_date": "Fri, 17 Jan 2020 17:33:48 +0900",
"msg_from": "Yugo NAGATA <nagata@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance: ERROR: out of shared memory"
},
{
"msg_contents": "Hi,\n\nOn Fri, 17 Jan 2020 17:33:48 +0900\nYugo NAGATA <nagata@sraoss.co.jp> wrote:\n\n> On Sun, 29 Dec 2019 12:27:13 -0500\n> Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> > Tatsuo Ishii <ishii@sraoss.co.jp> writes:\n> > >> here is an unexpected error found while testing IVM v11 patches\n> > >> ...\n> > >> ERROR: out of shared memory\n> > \n> > > I think we could avoid such an error in IVM by reusing a temp table in\n> > > a session or a transaction.\n> > \n> > I'm more than a little bit astonished that this proposed patch is\n> > creating temp tables at all. ISTM that that implies that it's\n> > being implemented at the wrong level of abstraction, and it will be\n> > full of security problems, as well as performance problems above\n> > and beyond the one described here.\n> \n> We realized that there is also other problems in using temp tables\n> as pointed out in another thread. So, we are now working on rewrite\n> our patch not to use temp tables.\n\nWe fixed this problem in latest patches (v14) in the following thread.\nhttps://www.postgresql.org/message-id/20200227150649.101ef342d0e7d7abee320159@sraoss.co.jp\n\nWe would appreciate it if you could review this.\n\n\nBest Regards,\n\nTakuma Hoshiai\n\n\n> Regards,\n> Yugo Nagata\n> \n> \n> -- \n> Yugo NAGATA <nagata@sraoss.co.jp>\n> \n> \n> \n\n\n-- \nTakuma Hoshiai <hoshiai@sraoss.co.jp>\n\n\n\n",
"msg_date": "Thu, 27 Feb 2020 15:35:00 +0900",
"msg_from": "Takuma Hoshiai <hoshiai@sraoss.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Incremental View Maintenance: ERROR: out of shared memory"
}
] |
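The out-of-shared-memory failure in this thread arises because each CREATE TEMP TABLE takes locks that are held until transaction end, so a 10,000-iteration create/drop loop inside one transaction accumulates far more lock entries than the shared lock table (sized from max_locks_per_transaction) can hold. The following toy Python model is an illustration only, not PostgreSQL code; MAX_LOCKS and the table-naming scheme are invented stand-ins for the real lock-table sizing.

```python
# Toy model: locks taken on each newly created table are held until
# commit, so creating a fresh temp table per iteration needs one lock
# slot per iteration, while reusing one table needs a constant number.
MAX_LOCKS = 64          # stand-in for the shared lock table capacity

def run_transaction(iterations, reuse_table):
    held_locks = set()  # locks are only released at transaction end
    for i in range(iterations):
        table = "scratch" if reuse_table else f"pg_temp_{i}"
        held_locks.add(table)
        if len(held_locks) > MAX_LOCKS:
            return "ERROR: out of shared memory"
    return f"ok ({len(held_locks)} lock(s) held at commit)"

print(run_transaction(10_000, reuse_table=False))  # exhausts the table
print(run_transaction(10_000, reuse_table=True))   # constant lock usage
```

This is why reusing a single temp table per session or transaction, as suggested above, avoids the error: the per-transaction lock count stops growing with the number of loop iterations.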
[
{
"msg_contents": "I have found the collection of STATUS_* defines in c.h a bit curious. \nThere used to be a lot more, even, which have been removed over time. \nCurrently, STATUS_FOUND and STATUS_WAITING are only used in one group of \nfunctions each, so perhaps it would make more sense to remove these from \nthe global namespace and make them a local concern.\n\nAttached are two patches to remove these two symbols. STATUS_FOUND can \nbe replaced by a simple bool. STATUS_WAITING is replaced by a separate \nenum.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sun, 29 Dec 2019 11:33:34 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove some STATUS_* symbols"
},
{
"msg_contents": "On Sun, Dec 29, 2019 at 11:33:34AM +0100, Peter Eisentraut wrote:\n> Attached are two patches to remove these two symbols. STATUS_FOUND can be\n> replaced by a simple bool. STATUS_WAITING is replaced by a separate enum.\n\nPatch 0001 looks good to me, but I got to wonder why the check after\nwaitMask in LockAcquireExtended() is not done directly in\nLockCheckConflicts().\n\nRegarding patch 0002, I am not sure that the addition of\nProcWaitStatus brings much in terms of code readability, though.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 15:31:59 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On 2020-01-06 07:31, Michael Paquier wrote:\n> On Sun, Dec 29, 2019 at 11:33:34AM +0100, Peter Eisentraut wrote:\n>> Attached are two patches to remove these two symbols. STATUS_FOUND can be\n>> replaced by a simple bool. STATUS_WAITING is replaced by a separate enum.\n> \n> Patch 0001 looks good to me, but I got to wonder why the check after\n> waitMask in LockAcquireExtended() is not done directly in\n> LockCheckConflicts().\n\nYou mean put the subsequent GrantLock() calls into LockCheckConflicts()? \nThat would technically save some duplicate code, but it seems weird, \nbecause LockCheckConflicts() is notionally a read-only function that \nshouldn't change state.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 11:15:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On Thu, Jan 09, 2020 at 11:15:08AM +0100, Peter Eisentraut wrote:\n> You mean put he subsequent GrantLock() calls into LockCheckConflicts()? That\n> would technically save some duplicate code, but it seems weird, because\n> LockCheckConflicts() is notionally a read-only function that shouldn't\n> change state.\n\nNah. I was thinking about the first part of this \"if\" clause\nLockCheckConflicts is part of here:\n if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\n status = STATUS_FOUND;\n else\n status = LockCheckConflicts(lockMethodTable, lockmode,\n lock, proclock);\n\nBut now that I look at it closely it messes up heavily with\nProcSleep() ;)\n--\nMichael",
"msg_date": "Fri, 10 Jan 2020 14:23:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On 2020-01-10 06:23, Michael Paquier wrote:\n> On Thu, Jan 09, 2020 at 11:15:08AM +0100, Peter Eisentraut wrote:\n>> You mean put he subsequent GrantLock() calls into LockCheckConflicts()? That\n>> would technically save some duplicate code, but it seems weird, because\n>> LockCheckConflicts() is notionally a read-only function that shouldn't\n>> change state.\n> \n> Nah. I was thinking about the first part of this \"if\" clause\n> LockCheckConflicts is part of here:\n> if (lockMethodTable->conflictTab[lockmode] & lock->waitMask)\n> status = STATUS_FOUND;\n> else\n> status = LockCheckConflicts(lockMethodTable, lockmode,\n> lock, proclock);\n> \n> But now that I look at it closely it messes up heavily with\n> ProcSleep() ;)\n\nOK, pushed as it was then.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sat, 11 Jan 2020 08:14:17 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On Sat, Jan 11, 2020 at 08:14:17AM +0100, Peter Eisentraut wrote:\n> OK, pushed as it was then.\n\nThanks, that looks fine. I am still not sure whether the second patch\nadding an enum via ProcWaitStatus improves the code readability\nthough, so my take would be to discard it for now. Perhaps others\nthink differently, I don't know.\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 14:50:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "At Thu, 16 Jan 2020 14:50:01 +0900, Michael Paquier <michael@paquier.xyz> wrote in \n> On Sat, Jan 11, 2020 at 08:14:17AM +0100, Peter Eisentraut wrote:\n> > OK, pushed as it was then.\n> \n> Thanks, that looks fine. I am still not sure whether the second patch\n> adding an enum via ProcWaitStatus improves the code readability\n> though, so my take would be to discard it for now. Perhaps others\n> think differently, I don't know.\n\nI feel the same about the second patch.\n\nActually, STATUS_WAITING is used only by ProcSleep and related\nfunctions; likewise, STATUS_EOF is seen only in auth.c/h. The other\nfiles (pqcomm.c, crypt.c, postmaster.c, hba.c, fe-auth.c, fe-connect.c,\nfe-gssapi-common.c) use only STATUS_OK and STATUS_ERROR. I haven't had a\nclose look, but all of those usages would be equivalent to bool.\n\nOn the other hand, many functions in fe-*.c and pqcomm.c return\nEOF(-1)/0 instead of STATUS_EOF(-2)/STATUS_OK(0).\n\nWe could reorganize the values and their usage, but it doesn't seem to\nbe a big win.\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Thu, 16 Jan 2020 19:35:48 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 12:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n> Thanks, that looks fine. I am still not sure whether the second patch\n> adding an enum via ProcWaitStatus improves the code readability\n> though, so my take would be to discard it for now. Perhaps others\n> think differently, I don't know.\n\nIMHO, custom enums for each particular case would be a big improvement\nover supposedly-generic STATUS codes. It makes it clearer which values\nare possible in each code path, and it comes out nicer in the\ndebugger, too.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 16 Jan 2020 07:56:38 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On 2020-01-16 13:56, Robert Haas wrote:\n> On Thu, Jan 16, 2020 at 12:50 AM Michael Paquier <michael@paquier.xyz> wrote:\n>> Thanks, that looks fine. I am still not sure whether the second patch\n>> adding an enum via ProcWaitStatus improves the code readability\n>> though, so my take would be to discard it for now. Perhaps others\n>> think differently, I don't know.\n> \n> IMHO, custom enums for each particular case would be a big improvement\n> over supposedly-generic STATUS codes. It makes it clearer which values\n> are possible in each code path, and it comes out nicer in the\n> debugger, too.\n\nGiven this feedback, I would like to re-propose the original patch, \nattached again here.\n\nAfter this, the use of the remaining STATUS_* symbols will be contained \nto the frontend and backend libpq code, so it'll be more coherent.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Thu, 11 Jun 2020 15:55:59 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On Thu, Jun 11, 2020 at 03:55:59PM +0200, Peter Eisentraut wrote:\n> On 2020-01-16 13:56, Robert Haas wrote:\n>> IMHO, custom enums for each particular case would be a big improvement\n>> over supposedly-generic STATUS codes. It makes it clearer which values\n>> are possible in each code path, and it comes out nicer in the\n>> debugger, too.\n> \n> Given this feedback, I would like to re-propose the original patch, attached\n> again here.\n> \n> After this, the use of the remaining STATUS_* symbols will be contained to\n> the frontend and backend libpq code, so it'll be more coherent.\n\nI am still in a so-so state regarding this patch, but I find the\ndebugger argument a good one. And please don't consider me as a\nblocker.\n\n> Add a separate enum for use in the locking APIs, which were the only\n> user.\n\n> +typedef enum\n> +{\n> +\tPROC_WAIT_STATUS_OK,\n> +\tPROC_WAIT_STATUS_WAITING,\n> +\tPROC_WAIT_STATUS_ERROR,\n> +} ProcWaitStatus;\n\nProcWaitStatus, and more particularly PROC_WAIT_STATUS_WAITING are\nstrange names (the latter refers to \"wait\" twice). What do you think\nabout renaming the enum to ProcStatus and the flags to PROC_STATUS_*?\n--\nMichael",
"msg_date": "Fri, 12 Jun 2020 16:30:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: remove some STATUS_* symbols"
},
{
"msg_contents": "On 2020-06-12 09:30, Michael Paquier wrote:\n> On Thu, Jun 11, 2020 at 03:55:59PM +0200, Peter Eisentraut wrote:\n>> On 2020-01-16 13:56, Robert Haas wrote:\n>>> IMHO, custom enums for each particular case would be a big improvement\n>>> over supposedly-generic STATUS codes. It makes it clearer which values\n>>> are possible in each code path, and it comes out nicer in the\n>>> debugger, too.\n>>\n>> Given this feedback, I would like to re-propose the original patch, attached\n>> again here.\n>>\n>> After this, the use of the remaining STATUS_* symbols will be contained to\n>> the frontend and backend libpq code, so it'll be more coherent.\n> \n> I am still in a so-so state regarding this patch, but I find the\n> debugger argument a good one. And please don't consider me as a\n> blocker.\n\nOkay, I have committed it.\n\n>> Add a separate enum for use in the locking APIs, which were the only\n>> user.\n> \n>> +typedef enum\n>> +{\n>> +\tPROC_WAIT_STATUS_OK,\n>> +\tPROC_WAIT_STATUS_WAITING,\n>> +\tPROC_WAIT_STATUS_ERROR,\n>> +} ProcWaitStatus;\n> \n> ProcWaitStatus, and more particularly PROC_WAIT_STATUS_WAITING are\n> strange names (the latter refers to \"wait\" twice). What do you think\n> about renaming the enum to ProcStatus and the flags to PROC_STATUS_*?\n\nI see your point, but I don't think that's better. That would just \ninvite someone else to use it for other process-related status things. \nWe typically name enum constants like the type followed by a suffix. \nThe fact that the suffix is similar to the prefix here is more of a \ncoincidence.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 17 Jun 2020 10:18:19 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove some STATUS_* symbols"
}
] |
[
{
"msg_contents": "selfuncs.c convert_to_scalar() says:\n\n|* The several datatypes representing absolute times are all converted\n|* to Timestamp, which is actually a double, and then we just use that\n|* double value. Note this will give correct results even for the \"special\"\n|* values of Timestamp, since those are chosen to compare correctly;\n|* see timestamp_cmp.\n\nBut:\nhttps://www.postgresql.org/docs/10/release-10.html\n|Remove support for floating-point timestamps and intervals (Tom Lane)\n|This removes configure's --disable-integer-datetimes option. Floating-point timestamps have few advantages and have not been the default since PostgreSQL 8.3.\n|b6aa17e De-support floating-point timestamps.\n|configure | 18 ++++++------------\n|configure.in | 12 ++++++------\n|doc/src/sgml/config.sgml | 8 +++-----\n|doc/src/sgml/datatype.sgml | 55 +++++++++++--------------------------------------------\n|doc/src/sgml/installation.sgml | 22 ----------------------\n|src/include/c.h | 7 ++++---\n|src/include/pg_config.h.in | 4 ----\n|src/include/pg_config.h.win32 | 4 ----\n|src/interfaces/ecpg/include/ecpg_config.h.in | 4 ----\n|src/interfaces/ecpg/include/pgtypes_interval.h | 2 --\n|src/interfaces/ecpg/test/expected/pgtypeslib-dt_test2.c | 6 ++----\n|src/interfaces/ecpg/test/expected/pgtypeslib-dt_test2.stdout | 2 ++\n|src/interfaces/ecpg/test/pgtypeslib/dt_test2.pgc | 6 ++----\n|src/tools/msvc/Solution.pm | 9 ---------\n|src/tools/msvc/config_default.pl | 1 -\n|15 files changed, 36 insertions(+), 124 deletions(-)\n\nIt's true that convert_to_scalar sees doubles:\n|static double\n|convert_timevalue_to_scalar(Datum value, Oid typid, bool *failure)\n|{\n| switch (typid)\n| {\n| case TIMESTAMPOID:\n| return DatumGetTimestamp(value);\n\nBut:\n$ git grep DatumGetTimestamp src/include/\nsrc/include/utils/timestamp.h:#define DatumGetTimestamp(X) ((Timestamp) DatumGetInt64(X))\n\nSo I propose it should say something like:\n\n|* The several datatypes representing absolute times 
are all converted\n|* to Timestamp, which is actually an int64, and then we just promote that\n|* to double. Note this will give correct results even for the \"special\"\n|* values of Timestamp, since those are chosen to compare correctly;\n|* see timestamp_cmp.\n\nThat seems to be only used for ineq_histogram_selectivity() interpolation of\nhistogram bins. It looks to me that at least isn't working for \"special\nvalues\", and needs to use something other than isnan(). I added debugging code\nand tested the attached like:\n\nDROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(now(), now()+'1 day', '5 minutes');\nINSERT INTO t VALUES('-infinity');\nALTER TABLE t ALTER t SET STATISTICS 1;\nANALYZE t;\nexplain SELECT * FROM t WHERE t>='2010-12-29';",
"msg_date": "Mon, 30 Dec 2019 01:47:21 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "comment regarding double timestamps; and, infinite timestamps and NaN"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> selfuncs.c convert_to_scalar() says:\n> |* The several datatypes representing absolute times are all converted\n> |* to Timestamp, which is actually a double, and then we just use that\n> |* double value.\n\n> So I propose it should say something like:\n\n> |* The several datatypes representing absolute times are all converted\n> |* to Timestamp, which is actually an int64, and then we just promote that\n> |* to double.\n\nCheck, obviously this comment never got updated.\n\n> That seems to be only used for ineq_histogram_selectivity() interpolation of\n> histogram bins. It looks to me that at least isn't working for \"special\n> values\", and needs to use something other than isnan().\n\nUh, what? This seems completely wrong to me. We could possibly\npromote DT_NOBEGIN and DT_NOEND to +/- infinity (not NaN), but\nI don't really see the point. They'll compare to other timestamp\nvalues correctly without that, cf timestamp_cmp_internal().\nThe example you give seems to me to be working sanely, or at least\nas sanely as it can given the number of histogram points available,\nwith the existing code. In any case, shoving NaNs into the\ncomputation is not going to make anything better.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Dec 2019 09:05:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: comment regarding double timestamps;\n and, infinite timestamps and NaN"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 09:05:24AM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > That seems to be only used for ineq_histogram_selectivity() interpolation of\n> > histogram bins. It looks to me that at least isn't working for \"special\n> > values\", and needs to use something other than isnan().\n> \n> Uh, what? This seems completely wrong to me. We could possibly\n> promote DT_NOBEGIN and DT_NOEND to +/- infinity (not NaN), but\n> I don't really see the point. They'll compare to other timestamp\n> values correctly without that, cf timestamp_cmp_internal().\n> The example you give seems to me to be working sanely, or at least\n> as sanely as it can given the number of histogram points available,\n> with the existing code. In any case, shoving NaNs into the\n> computation is not going to make anything better.\n\nAs I see it, the problem is that the existing code tests for isnan(), but\ninfinite timestamps are PG_INT64_MIN/MAX (here, stored in a double), so there's\nabsurdly large values being used as if they were isnormal().\n\nsrc/include/datatype/timestamp.h:#define DT_NOBEGIN PG_INT64_MIN\nsrc/include/datatype/timestamp.h-#define DT_NOEND PG_INT64_MAX\n\nOn v12, my test gives:\n|DROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(now(), now()+'1 day', '5 minutes');\n|INSERT INTO t VALUES('-infinity');\n|ALTER TABLE t ALTER t SET STATISTICS 1; ANALYZE t;\n|explain analyze SELECT * FROM t WHERE t>='2010-12-29';\n| Seq Scan on t (cost=0.00..5.62 rows=3 width=8) (actual time=0.012..0.042 rows=289 loops=1)\n\nvs patched master:\n|DROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(now(), now()+'1 day', '5 minutes');\n|INSERT INTO t VALUES('-infinity');\n|ALTER TABLE t ALTER t SET STATISTICS 1; ANALYZE t;\n|explain analyze SELECT * FROM t WHERE t>='2010-12-29';\n| Seq Scan on t (cost=0.00..5.62 rows=146 width=8) (actual time=0.048..0.444 rows=289 loops=1)\n\nIMO 146 rows is a reasonable estimate given a single 
histogram bucket of\ninfinite width, and 3 rows is a less reasonable result of returning INT64_MAX\nin one place and then handling it as a normal value. The comments in\nineq_histogram seem to indicate that this case is intended to get binfrac=0.5:\n\n| Watch out for the possibility that we got a NaN or Infinity from the\n| division. This can happen despite the previous checks, if for example \"low\" is\n| -Infinity.\n\nI changed to use INFINITY, -INFINITY and !isnormal() rather than nan() and\nisnan() (although binfrac is actually NAN at that point so the existing test is\nok).\n\nJustin\n\n\n",
"msg_date": "Mon, 30 Dec 2019 09:18:29 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: comment regarding double timestamps; and, infinite timestamps\n and NaN"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Dec 30, 2019 at 09:05:24AM -0500, Tom Lane wrote:\n>> Uh, what? This seems completely wrong to me. We could possibly\n>> promote DT_NOBEGIN and DT_NOEND to +/- infinity (not NaN), but\n>> I don't really see the point. They'll compare to other timestamp\n>> values correctly without that, cf timestamp_cmp_internal().\n>> The example you give seems to me to be working sanely, or at least\n>> as sanely as it can given the number of histogram points available,\n>> with the existing code. In any case, shoving NaNs into the\n>> computation is not going to make anything better.\n\n> As I see it, the problem is that the existing code tests for isnan(), but\n> infinite timestamps are PG_INT64_MIN/MAX (here, stored in a double), so there's\n> absurdly large values being used as if they were isnormal().\n\nI still say that (1) you're confusing NaN with Infinity, and (2)\nyou haven't actually shown that there's a problem to fix.\nThese endpoint values are *not* NaNs.\n\n> On v12, my test gives:\n> |DROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(now(), now()+'1 day', '5 minutes');\n> |INSERT INTO t VALUES('-infinity');\n> |ALTER TABLE t ALTER t SET STATISTICS 1; ANALYZE t;\n> |explain analyze SELECT * FROM t WHERE t>='2010-12-29';\n> | Seq Scan on t (cost=0.00..5.62 rows=3 width=8) (actual time=0.012..0.042 rows=289 loops=1)\n\nThis is what it should do. There's only one histogram bucket, and\nit extends down to -infinity, so the conclusion is going to be that\nthe WHERE clause excludes all but a small part of the bucket. 
This\nis the correct answer based on the available stats; the problem is\nnot with the calculation, but with the miserable granularity of the\navailable stats.\n\n> vs patched master:\n> |DROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(now(), now()+'1 day', '5 minutes');\n> |INSERT INTO t VALUES('-infinity');\n> |ALTER TABLE t ALTER t SET STATISTICS 1; ANALYZE t;\n> |explain analyze SELECT * FROM t WHERE t>='2010-12-29';\n> | Seq Scan on t (cost=0.00..5.62 rows=146 width=8) (actual time=0.048..0.444 rows=289 loops=1)\n\nThis answer is simply broken. You've caused it to estimate half\nof the bucket, which is an insane estimate for the given bucket\nboundaries and WHERE constraint.\n\n> IMO 146 rows is a reasonable estimate given a single histogram bucket of\n> infinite width,\n\nNo, it isn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 30 Dec 2019 14:18:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: comment regarding double timestamps;\n and, infinite timestamps and NaN"
},
{
"msg_contents": "On Mon, Dec 30, 2019 at 02:18:17PM -0500, Tom Lane wrote:\n> > On v12, my test gives:\n> > |DROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(now(), now()+'1 day', '5 minutes');\n> > |INSERT INTO t VALUES('-infinity');\n> > |ALTER TABLE t ALTER t SET STATISTICS 1; ANALYZE t;\n> > |explain analyze SELECT * FROM t WHERE t>='2010-12-29';\n> > | Seq Scan on t (cost=0.00..5.62 rows=3 width=8) (actual time=0.012..0.042 rows=289 loops=1)\n> \n> This is what it should do. There's only one histogram bucket, and\n> it extends down to -infinity, so the conclusion is going to be that\n> the WHERE clause excludes all but a small part of the bucket. This\n> is the correct answer based on the available stats; the problem is\n> not with the calculation, but with the miserable granularity of the\n> available stats.\n> \n> > vs patched master:\n> > |DROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(now(), now()+'1 day', '5 minutes');\n> > |INSERT INTO t VALUES('-infinity');\n> > |ALTER TABLE t ALTER t SET STATISTICS 1; ANALYZE t;\n> > |explain analyze SELECT * FROM t WHERE t>='2010-12-29';\n> > | Seq Scan on t (cost=0.00..5.62 rows=146 width=8) (actual time=0.048..0.444 rows=289 loops=1)\n> \n> This answer is simply broken. 
You've caused it to estimate half\n> of the bucket, which is an insane estimate for the given bucket\n> boundaries and WHERE constraint.\n> \n> > IMO 146 rows is a reasonable estimate given a single histogram bucket of\n> > infinite width,\n> \n> No, it isn't.\n\nWhen using floats, v12 also returns half the histogram:\n\n DROP TABLE t; CREATE TABLE t(t) AS SELECT generate_series(0, 99, 1)::float;\n INSERT INTO t VALUES('-Infinity');\n ALTER TABLE t ALTER t SET STATISTICS 1; ANALYZE t;\n explain analyze SELECT * FROM t WHERE t>='50';\n Seq Scan on t (cost=0.00..2.26 rows=51 width=8) (actual time=0.014..0.020 rows=50 loops=1)\n\nI'm fine if the isnan() logic changes, but the comment indicates it's intended\nto be hit for an infinite histogram bound, but that doesn't work for timestamps\n(convert_to_scalar() should return (double)INFINITY and not\n(double)INT64_MIN/MAX).\n\nOn Mon, Dec 30, 2019 at 02:18:17PM -0500, Tom Lane wrote:\n> Justin Pryzby <pryzby@telsasoft.com> writes:\n> > On Mon, Dec 30, 2019 at 09:05:24AM -0500, Tom Lane wrote:\n> >> Uh, what? This seems completely wrong to me. We could possibly\n> >> promote DT_NOBEGIN and DT_NOEND to +/- infinity (not NaN), but\n> >> I don't really see the point. They'll compare to other timestamp\n> >> values correctly without that, cf timestamp_cmp_internal().\n> >> The example you give seems to me to be working sanely, or at least\n> >> as sanely as it can given the number of histogram points available,\n> >> with the existing code. 
In any case, shoving NaNs into the\n> >> computation is not going to make anything better.\n> \n> > As I see it, the problem is that the existing code tests for isnan(), but\n> > infinite timestamps are PG_INT64_MIN/MAX (here, stored in a double), so there's\n> > absurdly large values being used as if they were isnormal().\n> \n> I still say that (1) you're confusing NaN with Infinity, and (2)\n> you haven't actually shown that there's a problem to fix.\n> These endpoint values are *not* NaNs.\n\nI probably did confuse it while trying to make the behavior match the comment\nfor timestamps.\nThe Subject says NAN since isnan(binfrac) is what's supposed to be hit for that\ncase.\n\nThe NAN is intended to come from:\n\n|binfrac = (val - low) / (high - low);\n\nwhich is some variation of -inf / inf.\n\nJustin\n\n\n",
"msg_date": "Thu, 2 Jan 2020 07:55:39 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "infinite histogram bounds and nan (Re: comment regarding double\n timestamps; and, infinite timestamps and NaN)"
},
{
"msg_contents": "Justin Pryzby <pryzby@telsasoft.com> writes:\n> On Mon, Dec 30, 2019 at 02:18:17PM -0500, Tom Lane wrote:\n>> This answer is simply broken. You've caused it to estimate half\n>> of the bucket, which is an insane estimate for the given bucket\n>> boundaries and WHERE constraint.\n\n> I'm fine if the isnan() logic changes, but the comment indicates it's intended\n> to be hit for an infinite histogram bound, but that doesn't work for timestamps\n> (convert_to_scalar() should return (double)INFINITY and not\n> (double)INT64_MIN/MAX).\n\nI suppose the code you're looking at is\n\n binfrac = (val - low) / (high - low);\n\n /*\n * Watch out for the possibility that we got a NaN or\n * Infinity from the division. This can happen\n * despite the previous checks, if for example \"low\"\n * is -Infinity.\n */\n if (isnan(binfrac) ||\n binfrac < 0.0 || binfrac > 1.0)\n binfrac = 0.5;\n\nThis doesn't really have any goals beyond \"make sure we get a result\nbetween 0.0 and 1.0, even if the calculation went pear-shaped for\nsome reason\". You could make an argument that it should be like\n\n if (isnan(binfrac))\n binfrac = 0.5; /* throw up our hands for NaN */\n else if (binfrac <= 0.0)\n binfrac = 0.0; /* clamp in case of -Inf or -0 */\n else if (binfrac > 1.0)\n binfrac = 1.0; /* clamp in case of +Inf */\n\nwhich would probably produce saner results in edge cases like these.\nI think it'd also obviate the need for fooling with the conversion in\nconvert_to_scalar: while DT_NOBEGIN/DT_NOEND wouldn't produce exactly\nthe same result (hard 0.0 or 1.0) as an infinity, they'd produce\nresults very close to that.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 09:11:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: infinite histogram bounds and nan (Re: comment regarding double\n timestamps; and, infinite timestamps and NaN)"
}
] |
[
{
"msg_contents": "I'm guessing the initial data for pg_(sh)description is output into\nseparate files because it was too difficult for the traditional shell\nscript to maintain enough state to do otherwise. With Perl, it's just\nas easy to assemble the data into the same format as the rest of the\ncatalogs and then let the generic code path output it into\npostgres.bki. The attached patch does that and simplifies the catalog\nmakefile and initdb.c. I'll add a commitfest entry for this.\n--\nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 30 Dec 2019 18:08:54 -0600",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "remove separate postgres.(sh)description files"
},
{
"msg_contents": "On 31/12/2019 02:08, John Naylor wrote:\n> I'm guessing the initial data for pg_(sh)description is output into\n> separate files because it was too difficult for the traditional shell\n> script to maintain enough state to do otherwise.\n\nYeah, I guess so. The roots of postgres.description goes all the way \nback to 1997, when not only genbki was a shell script, but also initdb.\n\n> With Perl, it's just as easy to assemble the data into the same\n> format as the rest of the catalogs and then let the generic code path\n> output it into postgres.bki. The attached patch does that and\n> simplifies the catalog makefile and initdb.c.\nNice cleanup! Looks like we didn't have any mention of the \npostgres.(sh)decription files in the docs, so no doc updates needed. \nGrepping around, there are a few stray references to \npostgres.description still:\n\n$ git grep -r -I postgres.shdescript .\nsrc/backend/catalog/.gitignore:/postgres.shdescription\nsrc/backend/catalog/Makefile:# postgres.bki, postgres.description, \npostgres.shdescription,\nsrc/tools/msvc/clean.bat:if %DIST%==1 if exist \nsrc\\backend\\catalog\\postgres.shdescription del /q \nsrc\\backend\\catalog\\postgres.shdescription\n\nBarring objections, I'll remove those too, and commit this.\n\n- Heikki\n\n\n",
"msg_date": "Wed, 8 Jan 2020 14:33:23 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: remove separate postgres.(sh)description files"
},
{
"msg_contents": "On Wed, Jan 08, 2020 at 02:33:23PM +0200, Heikki Linnakangas wrote:\n>On 31/12/2019 02:08, John Naylor wrote:\n>>I'm guessing the initial data for pg_(sh)description is output into\n>>separate files because it was too difficult for the traditional shell\n>>script to maintain enough state to do otherwise.\n>\n>Yeah, I guess so. The roots of postgres.description goes all the way \n>back to 1997, when not only genbki was a shell script, but also \n>initdb.\n>\n>>With Perl, it's just as easy to assemble the data into the same\n>>format as the rest of the catalogs and then let the generic code path\n>>output it into postgres.bki. The attached patch does that and\n>>simplifies the catalog makefile and initdb.c.\n>Nice cleanup! Looks like we didn't have any mention of the \n>postgres.(sh)decription files in the docs, so no doc updates needed. \n>Grepping around, there are a few stray references to \n>postgres.description still:\n>\n>$ git grep -r -I postgres.shdescript .\n>src/backend/catalog/.gitignore:/postgres.shdescription\n>src/backend/catalog/Makefile:# postgres.bki, postgres.description, \n>postgres.shdescription,\n>src/tools/msvc/clean.bat:if %DIST%==1 if exist \n>src\\backend\\catalog\\postgres.shdescription del /q \n>src\\backend\\catalog\\postgres.shdescription\n>\n>Barring objections, I'll remove those too, and commit this.\n>\n\n+1 from me. Let's remove these small RFC patches out of the way.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 16 Jan 2020 22:39:49 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: remove separate postgres.(sh)description files"
},
{
"msg_contents": "On 16/01/2020 23:39, Tomas Vondra wrote:\n> On Wed, Jan 08, 2020 at 02:33:23PM +0200, Heikki Linnakangas wrote:\n>> On 31/12/2019 02:08, John Naylor wrote:\n>>> I'm guessing the initial data for pg_(sh)description is output into\n>>> separate files because it was too difficult for the traditional shell\n>>> script to maintain enough state to do otherwise.\n>>\n>> Yeah, I guess so. The roots of postgres.description goes all the way\n>> back to 1997, when not only genbki was a shell script, but also\n>> initdb.\n>>\n>>> With Perl, it's just as easy to assemble the data into the same\n>>> format as the rest of the catalogs and then let the generic code path\n>>> output it into postgres.bki. The attached patch does that and\n>>> simplifies the catalog makefile and initdb.c.\n>> Nice cleanup! Looks like we didn't have any mention of the\n>> postgres.(sh)decription files in the docs, so no doc updates needed.\n>> Grepping around, there are a few stray references to\n>> postgres.description still:\n>>\n>> $ git grep -r -I postgres.shdescript .\n>> src/backend/catalog/.gitignore:/postgres.shdescription\n>> src/backend/catalog/Makefile:# postgres.bki, postgres.description,e0ed6817c0..7aaefadaac\n>> postgres.shdescription,\n>> src/tools/msvc/clean.bat:if %DIST%==1 if exist\n>> src\\backend\\catalog\\postgres.shdescription del /q\n>> src\\backend\\catalog\\postgres.shdescription\n>>\n>> Barring objections, I'll remove those too, and commit this.\n> \n> +1 from me. Let's remove these small RFC patches out of the way.\n\nPushed, thanks!\n\n- Heikki\n\n\n",
"msg_date": "Sun, 19 Jan 2020 13:58:23 +0200",
"msg_from": "Heikki Linnakangas <hlinnaka@iki.fi>",
"msg_from_op": false,
"msg_subject": "Re: remove separate postgres.(sh)description files"
},
{
"msg_contents": "On Sun, Jan 19, 2020 at 7:58 PM Heikki Linnakangas <hlinnaka@iki.fi> wrote:\n>\n> > On Wed, Jan 08, 2020 at 02:33:23PM +0200, Heikki Linnakangas wrote:\n> >> Grepping around, there are a few stray references to\n> >> postgres.description still:\n> >>\n> >> $ git grep -r -I postgres.shdescript .\n> >> src/backend/catalog/.gitignore:/postgres.shdescription\n> >> src/backend/catalog/Makefile:# postgres.bki, postgres.description,e0ed6817c0..7aaefadaac\n> >> postgres.shdescription,\n> >> src/tools/msvc/clean.bat:if %DIST%==1 if exist\n> >> src\\backend\\catalog\\postgres.shdescription del /q\n> >> src\\backend\\catalog\\postgres.shdescription\n>\n> Pushed, thanks!\n\nThanks for taking care of those loose ends -- that was a bit sloppy of me.\n\n-- \nJohn Naylor https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 20 Jan 2020 08:27:41 +0800",
"msg_from": "John Naylor <john.naylor@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: remove separate postgres.(sh)description files"
}
] |
[
{
"msg_contents": "Hi, hackers.\n\nAttached are 2 tiny doc typo fixes.\n\nJon",
"msg_date": "Mon, 30 Dec 2019 18:11:36 -0700 (MST)",
"msg_from": "Jon Jensen <jon@endpoint.com>",
"msg_from_op": true,
"msg_subject": "[PATCH] Fix parallel query doc typos"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 6:41 AM Jon Jensen <jon@endpoint.com> wrote:\n>\n> Hi, hackers.\n>\n> Attached are 2 tiny doc typo fixes.\n>\n\nLGTM. I will commit this tomorrow unless someone has any comments.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 2 Jan 2020 15:53:05 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix parallel query doc typos"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 5:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> LGTM. I will commit this tomorrow unless someone has any comments.\n\nLGTM, too.\n\nThanks, Jon, for the patch.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Jan 2020 10:26:14 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix parallel query doc typos"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:\n\n> On Thu, Jan 2, 2020 at 5:23 AM Amit Kapila <amit.kapila16@gmail.com>\n> wrote:\n> > LGTM. I will commit this tomorrow unless someone has any comments.\n>\n> LGTM, too.\n>\n>\nPushed.\n\n\n> Thanks, Jon, for the patch.\n>\n\n+1.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\nOn Thu, Jan 2, 2020 at 8:56 PM Robert Haas <robertmhaas@gmail.com> wrote:On Thu, Jan 2, 2020 at 5:23 AM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> LGTM. I will commit this tomorrow unless someone has any comments.\n\nLGTM, too.\nPushed. \nThanks, Jon, for the patch.+1.-- With Regards,Amit Kapila.EnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Fri, 3 Jan 2020 12:54:56 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] Fix parallel query doc typos"
}
] |
[
{
"msg_contents": "Hi!\n\nHere is the case.\n\nAssume we have a master to slave replication with shared_buffers set up \nto 2 GB at the master and 4 GB at the slave. All of the data is written \nto the master, while reading occurs from slave.\n\nNow we decided to drop many tables, let's say 1000 or 10000 not in a \nsingle transaction, but each table in a separate one. So, due to \"plain\" \nshared_buffers memory we have to do for loop for every relation which \nleads to lag between master and slave.\n\nIn real case scenario such issue lead to not a minutes lag, but hours \nlag. At the same time PostgreSQL have a great routine to delete many \nrelations in a single transaction.\n\nSo, to get rid of this kind of issue here came up an idea: what if not \nto delete everyone of relations right away and just store them in an \narray, prevent shared buffers (correspond to a deleted relations) from \nbeen flushed. And then array reaches it max size we need to walk all \nbuffers only once to \"free\" shared buffers correspond to a deleted \nrelations.\n\nHere some values from the test which I am made.\n\nWithout patch:\n\n1.\n\n(master 2 GB) - drop 1000 tables took 6 sec\n\n(slave 4 GB) - drop 1000 tables took 8 sec\n\n2.\n\n(master 4 GB) - drop 1000 tables took 10 sec\n\n(slave 8 GB) - drop 1000 tables took 16 sec\n\n3.\n\n(master 10 GB) - drop 1000 tables took 22 sec\n\n(slave 20 GB) - drop 1000 tables took 38 sec\n\n\nWith patch:\n\n1.\n\n(master 2 GB) - drop 1000 tables took 2 sec\n\n(slave 4 GB) - drop 1000 tables took 2 sec\n\n2.\n\n(master 4 GB) - drop 1000 tables took 3 sec\n\n(slave 8 GB) - drop 1000 tables took 3 sec\n\n3.\n\n(master 10 GB) - drop 1000 tables took 4 sec\n\n(slave 20 GB) - drop 1000 tables took 4 sec\n\n-- \nMax Orlov\nE-mail: m.orlov@postgrespro.ru",
"msg_date": "Tue, 31 Dec 2019 13:16:49 +0300",
"msg_from": "Maxim Orlov <m.orlov@postgrespro.ru>",
"msg_from_op": true,
"msg_subject": "[PATCH] lazy relations delete"
},
{
"msg_contents": "Hello.\n\nAt Tue, 31 Dec 2019 13:16:49 +0300, Maxim Orlov <m.orlov@postgrespro.ru> wrote in \n> Now we decided to drop many tables, let's say 1000 or 10000 not in a\n> single transaction, but each table in a separate one. So, due to\n> \"plain\" shared_buffers memory we have to do for loop for every\n> relation which leads to lag between master and slave.\n> \n> In real case scenario such issue lead to not a minutes lag, but hours\n> lag. At the same time PostgreSQL have a great routine to delete many\n> relations in a single transaction.\n> \n> So, to get rid of this kind of issue here came up an idea: what if not\n> to delete everyone of relations right away and just store them in an\n> array, prevent shared buffers (correspond to a deleted relations) from\n> been flushed. And then array reaches it max size we need to walk all\n> buffers only once to \"free\" shared buffers correspond to a deleted\n> relations.\n\nThat is a greate performane gain, but the proposal seems to lead to\ndatabase corruption. We must avoid such cases.\n\nRelfilenode can be reused right after commit. There can be a case\nwhere readers of the resued relfilenode see the pages from already\nremoved files left on shared buffers. On the other hand newly\nallocated buffers for the reused relfilenode are not flushed out until\nthe lazy invalidate machinery actually frees the \"garbage\" buffers and\nit leads to a broken database after a crash. But finally the\nmachinery trashes away the buffers involving the correct ones at\nexecution time.\n\nAs for performance, hash reference for every BufferFlush call could be\na cost for unrelated transactions. And it leaves garbage buffers as\ndead until more than LAZY_DELETE_ARRAY_SIZE relfilenodes are\nremoved.\n\n\nregares.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Wed, 08 Jan 2020 13:18:33 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] lazy relations delete"
},
{
"msg_contents": "On Wed, Jan 8, 2020 at 5:20 PM Kyotaro Horiguchi\n<horikyota.ntt@gmail.com> wrote:\n> Relfilenode can be reused right after commit. There can be a case\n> where readers of the resued relfilenode see the pages from already\n> removed files left on shared buffers. On the other hand newly\n> allocated buffers for the reused relfilenode are not flushed out until\n> the lazy invalidate machinery actually frees the \"garbage\" buffers and\n> it leads to a broken database after a crash. But finally the\n> machinery trashes away the buffers involving the correct ones at\n> execution time.\n\nThe relfilenode can't be reused until the next checkpoint, can it?\nThe truncated file remains in the file system, specifically to prevent\nanyone from reusing the relfilenode. See the comment for mdunlink().\nThere may be other problems with the idea, but wouldn't the zombie\nbuffers be harmless, if they are invalidated before\nSyncPostCheckpoint() unlinks the underlying files (and you never try\nto flush them)?\n\n\n",
"msg_date": "Wed, 8 Jan 2020 17:56:52 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCH] lazy relations delete"
}
] |
[
{
"msg_contents": "With the attached patch, I propose to enable the colored output by \ndefault in PG13.\n\nFor those who don't like color output, I also add support for the \nenvironment variable NO_COLOR, which is an emerging standard for turning \noff color across different software packages (https://no-color.org/). \nOf course, you can also continue to use the PG_COLOR variable.\n\nI have looked around how other packages do the automatic color \ndetection. It's usually a combination of mysterious termcap stuff and \nslightly less mysterious matching of the TERM variable against a list of \nknown terminal types. I figured we can skip the termcap stuff and still \nget really good coverage in practice, so that's what I did.\n\nI have also added a documentation appendix to explain all of this. \n(Perhaps we should now remove the repetitive mention of the PG_COLOR \nvariable in each man page, but I haven't done that in this patch.)\n\nI'm aware of the pending patch to improve color support on Windows. \nI'll check that one out as well, but it appears to be orthogonal to this \none.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 31 Dec 2019 11:40:28 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "color by default"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 11:40 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n>\n> I'm aware of the pending patch to improve color support on Windows.\n> I'll check that one out as well, but it appears to be orthogonal to this\n> one.\n>\n>\nActually I think it would be better to rebase that patch on top of this, as\nthe Windows function enable_vt_mode() incorporates the logic of both\nisatty() and terminal_supports_color() by enabling CMDs support of VT100\nescape codes.\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Tue, Dec 31, 2019 at 11:40 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\nI'm aware of the pending patch to improve color support on Windows. \nI'll check that one out as well, but it appears to be orthogonal to this \none.Actually I think it would be better to rebase that patch on top of this, as the Windows function enable_vt_mode() incorporates the logic of both isatty() and terminal_supports_color() by enabling CMDs support of VT100 escape codes.Regards,Juan José Santamaría Flecha",
"msg_date": "Tue, 31 Dec 2019 13:13:32 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> With the attached patch, I propose to enable the colored output by \n> default in PG13.\n\nFWIW, I shall be setting NO_COLOR permanently if this gets committed.\nI wonder how many people there are who actually *like* colored output?\nI find it to be invariably less readable than plain B&W text.\n\nI may well be in the minority, but I think some kind of straw poll\nmight be advisable, rather than doing this just because.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Dec 2019 08:35:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 7:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > With the attached patch, I propose to enable the colored output by\n> > default in PG13.\n>\n> FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n> I wonder how many people there are who actually *like* colored output?\n> I find it to be invariably less readable than plain B&W text.\n>\n> I may well be in the minority, but I think some kind of straw poll\n> might be advisable, rather than doing this just because.\n>\n\n+1\n\n\n> regards, tom lane\n>\n>\n>\n\nOn Tue, Dec 31, 2019 at 7:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> With the attached patch, I propose to enable the colored output by \n> default in PG13.\n\nFWIW, I shall be setting NO_COLOR permanently if this gets committed.\nI wonder how many people there are who actually *like* colored output?\nI find it to be invariably less readable than plain B&W text.\n\nI may well be in the minority, but I think some kind of straw poll\nmight be advisable, rather than doing this just because.+1 \n\n regards, tom lane",
"msg_date": "Tue, 31 Dec 2019 07:52:33 -0600",
"msg_from": "Abel Abraham Camarillo Ojeda <acamari@verlet.org>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "> On 31 Dec 2019, at 14:35, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> \n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> With the attached patch, I propose to enable the colored output by \n>> default in PG13.\n> \n> FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n\nMe too.\n\n> I may well be in the minority, but I think some kind of straw poll\n> might be advisable, rather than doing this just because.\n\n+1\n\ncheers ./daniel\n\n\n",
"msg_date": "Tue, 31 Dec 2019 14:59:04 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "På tirsdag 31. desember 2019 kl. 14:35:39, skrev Tom Lane <tgl@sss.pgh.pa.us \n<mailto:tgl@sss.pgh.pa.us>>: \nPeter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n > With the attached patch, I propose to enable the colored output by\n > default in PG13.\n\n FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n I wonder how many people there are who actually *like* colored output?\n I find it to be invariably less readable than plain B&W text.\n\n I may well be in the minority, but I think some kind of straw poll\n might be advisable, rather than doing this just because. \n\n\nIt's easier to spot errors/warnings when they are colored/emphasized imo. Much \nlike colored output from grep/diff; We humans have colored vision for a reason. \n\n\n\n--\n Andreas Joseph Krogh",
"msg_date": "Tue, 31 Dec 2019 15:12:34 +0100 (CET)",
"msg_from": "Andreas Joseph Krogh <andreas@visena.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 2019-Dec-31, Andreas Joseph Krogh wrote:\n\n> It's easier to spot errors/warnings when they are colored/emphasized imo. Much \n> like colored output from grep/diff; We humans have colored vision for a reason. \n\nI do use color output (and find it useful), for that reason.\n\nI'm not sure that the documentation addition properly describes the\nlogic to be used; if it does, I'm not sure that the logic is really what\nwe want. Is the logic in the docs supposed to be \"last rule that\nmatches wins\" or \"first rule that matches wins\"? I think that should be\nexplicit. Do we want to have NO_COLORS override the TERM heuristics?\n(I'm pretty sure we do.) OTOH we also want PG_COLORS to override\nNO_COLORS.\n\nPer https://no-colors.org (thanks for the link) it seems pretty clear\nthat people who don't want colors should be already setting NO_COLORS,\nand everyone would be happy. It's not just PG programs that are\ncolorizing stuff.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 31 Dec 2019 12:18:00 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Tue, 31 Dec 2019 at 10:18, Alvaro Herrera <alvherre@2ndquadrant.com>\nwrote:\n\nPer https://no-colors.org (thanks for the link) it seems pretty clear\n>\n\nhttps://no-color.org\n\nOn Tue, 31 Dec 2019 at 10:18, Alvaro Herrera <alvherre@2ndquadrant.com> wrote:Per https://no-colors.org (thanks for the link) it seems pretty clear\nhttps://no-color.org",
"msg_date": "Tue, 31 Dec 2019 10:46:43 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 31/12/19 14:35, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> With the attached patch, I propose to enable the colored output by\n>> default in PG13.\n> FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n> I wonder how many people there are who actually *like* colored output?\n> I find it to be invariably less readable than plain B&W text.\n>\n> I may well be in the minority, but I think some kind of straw poll\n> might be advisable, rather than doing this just because.\n\n+1\n\n...and Happy New Year!\n\n\n / J.L.\n\n\n\n\n",
"msg_date": "Tue, 31 Dec 2019 18:32:25 +0100",
"msg_from": "Jose Luis Tallon <jltallon@adv-solutions.net>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 01/01/2020 02:35, Tom Lane wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> With the attached patch, I propose to enable the colored output by\n>> default in PG13.\n> FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n> I wonder how many people there are who actually *like* colored output?\n> I find it to be invariably less readable than plain B&W text.\n>\n> I may well be in the minority, but I think some kind of straw poll\n> might be advisable, rather than doing this just because.\n>\n> \t\t\tregards, tom lane\n>\n>\nI find coloured output very difficult to read, as the colours seem to be \nchosen on the basis everyone uses white as the background colour for \nterminals -- whereas I use black, as do a lot of other people.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Fri, 3 Jan 2020 12:38:09 +1300",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 6:38 PM Gavin Flower\n<GavinFlower@archidevsys.co.nz> wrote:\n> I find coloured output very difficult to read, as the colours seem to be\n> chosen on the basis everyone uses white as the background colour for\n> terminals -- whereas I use black, as do a lot of other people.\n\nI don't like colored output either.\n\n(It is, however, probably not a surprise to anyone that I am\nold-school in many regards, so how much my opinion ought to count is\ndebatable. I still use \\pset linestyle old-ascii when I remember to\nset it, use vi to edit, with hjkl rather than arrow keys, and almost\nalways prefer a CLI to a GUI when I have the option. I have conceded\nthe utility of indoor heat and plumbing, though, so maybe there's hope\nfor me yet.)\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 13:10:30 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 8:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > With the attached patch, I propose to enable the colored output by\n> > default in PG13.\n>\n> FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n> I wonder how many people there are who actually *like* colored output?\n> I find it to be invariably less readable than plain B&W text.\n>\n>\nI find color massively useful for grep and its variants, where the hit can\nshow up anywhere on the line. It was also kind of useful for git,\nespecially \"git grep\", but on my current system git's colorizing seems\nhopelessly borked, so I had to turn it off.\n\nBut I turned PG_COLOR on and played with many commands, and must say I\ndon't really see much of a point. When most of these command fail, they\nonly generate a few lines of output, and it isn't hard to spot the error\nmessage. When pg_restore goes wrong, you get a lot of messages but\ncolorizing them isn't really helpful. I don't need 'error' to show up in\nred in order to know that I have a lot of errors, especially since the\nlines which do report errors always have 'error' as the 2nd word on the\nline, where it isn't hard to spot. If it could distinguish the important\nerrors from unimportant errors, that would be more helpful. But if it\ncould reliably do that, why print the unimportant ones at all?\n\nIt doesn't seem like this is useful enough to have it on by default, and\nwithout it being on by default there is no point in having NO_COLOR to turn\nif off. 
There is something to be said for going with the flow, but the\n\"emerging standard\" seems like it has quite a bit further to emerge before\nI think that would be an important reason.\n\nCheers,\n\nJeff\n\nOn Tue, Dec 31, 2019 at 8:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> With the attached patch, I propose to enable the colored output by \n> default in PG13.\n\nFWIW, I shall be setting NO_COLOR permanently if this gets committed.\nI wonder how many people there are who actually *like* colored output?\nI find it to be invariably less readable than plain B&W text.\nI find color massively useful for grep and its variants, where the hit can show up anywhere on the line. It was also kind of useful for git, especially \"git grep\", but on my current system git's colorizing seems hopelessly borked, so I had to turn it off.But I turned PG_COLOR on and played with many commands, and must say I don't really see much of a point. When most of these command fail, they only generate a few lines of output, and it isn't hard to spot the error message. When pg_restore goes wrong, you get a lot of messages but colorizing them isn't really helpful. I don't need 'error' to show up in red in order to know that I have a lot of errors, especially since the lines which do report errors always have 'error' as the 2nd word on the line, where it isn't hard to spot. If it could distinguish the important errors from unimportant errors, that would be more helpful. But if it could reliably do that, why print the unimportant ones at all?It doesn't seem like this is useful enough to have it on by default, and without it being on by default there is no point in having NO_COLOR to turn if off. There is something to be said for going with the flow, but the \"emerging standard\" seems like it has quite a bit further to emerge before I think that would be an important reason.Cheers,Jeff",
"msg_date": "Fri, 3 Jan 2020 15:25:47 -0500",
"msg_from": "Jeff Janes <jeff.janes@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 01:10:30PM -0500, Robert Haas wrote:\n> On Thu, Jan 2, 2020 at 6:38 PM Gavin Flower\n> <GavinFlower@archidevsys.co.nz> wrote:\n>> I find coloured output very difficult to read, as the colours seem to be\n>> chosen on the basis everyone uses white as the background colour for\n>> terminals -- whereas I use black, as do a lot of other people.\n> \n> I don't like colored output either.\n\nI don't like colored output either. However there is an easy way to\ndisable that so applying this patch does not change things IMO as\nanybody unhappy with colors can just disable it with a one-liner in\na bashrc or such.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 14:38:24 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 06/01/2020 18:38, Michael Paquier wrote:\n> On Fri, Jan 03, 2020 at 01:10:30PM -0500, Robert Haas wrote:\n>> On Thu, Jan 2, 2020 at 6:38 PM Gavin Flower\n>> <GavinFlower@archidevsys.co.nz> wrote:\n>>> I find coloured output very difficult to read, as the colours seem to be\n>>> chosen on the basis everyone uses white as the background colour for\n>>> terminals -- whereas I use black, as do a lot of other people.\n>> I don't like colored output either.\n> I don't like colored output either. However there is an easy way to\n> disable that so applying this patch does not change things IMO as\n> anybody unhappy with colors can just disable it with a one-liner in\n> a bashrc or such.\n> --\n> Michael\n\nThat's kind of like using a sledgehammer to crack a nut.\n\nThe colour in grep output is often useful.\n\nI'd like to control it per application.\n\n\nCheers,\nGavin\n\n\n\n",
"msg_date": "Mon, 6 Jan 2020 20:26:39 +1300",
"msg_from": "Gavin Flower <GavinFlower@archidevsys.co.nz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "The patch to improve color support on Windows has been commited [1], and I\nwould like to share some of the discussion there that might affect this\npatch.\n\n- The documentation/comments could make a better job of explaining the case\nof PG_COLOR equals 'always', explicitly saying that no checks are done\nabout the output channel.\n\nAside from the decision about what the default coloring behaviour should\nbe, there are parts of this patch that could be applied independently, as\nan improvement on the current state.\n\n- The new function terminal_supports_color() should also apply when\nPG_COLOR is 'auto', to minimize the chances of seeing escape characters in\nthe user terminal.\n\n- The new entry in the documentation, specially as the PG_COLORS parameter\nseems to be currently undocumented. The programs that can use PG_COLOR\nwould benefit from getting a link to it.\n\n[1]\nhttps://www.postgresql.org/message-id/20200302064842.GE32059%40paquier.xyz\n\nRegards,\n\nJuan José Santamaría Flecha\n\nThe patch to improve color support on Windows has been commited [1], and I would like to share some of the discussion there that might affect this patch.- The documentation/comments could make a better job of explaining the case of PG_COLOR equals 'always', explicitly saying that no checks are done about the output channel. Aside from the decision about what the default coloring behaviour should be, there are parts of this patch that could be applied independently, as an improvement on the current state.- The new function terminal_supports_color() should also apply when PG_COLOR is 'auto', to minimize the chances of seeing escape characters in the user terminal. - The new entry in the documentation, specially as the PG_COLORS parameter seems to be currently undocumented. The programs that can use PG_COLOR would benefit from getting a link to it.[1] https://www.postgresql.org/message-id/20200302064842.GE32059%40paquier.xyzRegards,Juan José Santamaría Flecha",
"msg_date": "Mon, 2 Mar 2020 13:00:44 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Mon, Mar 02, 2020 at 01:00:44PM +0100, Juan José Santamaría Flecha wrote:\n> - The new entry in the documentation, specially as the PG_COLORS parameter\n> seems to be currently undocumented. The programs that can use PG_COLOR\n> would benefit from getting a link to it.\n\nThe actual problem here is that we don't have an actual centralized\nplace where we could put that stuff. And anything able to use this\noption is basically anything using src/common/logging.c.\n\nRegarding PG_COLORS, the commit message of cc8d415 mentions it, but we\nhave no actual example of how to use it, and the original thread has\nzero reference to it:\nhttps://www.postgresql.org/message-id/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com\n\nAnd in fact, it took me a while to figure out that using it is a mix\nof three keywords (\"error\", \"warning\" or \"locus\") separated by colons\nwhich need to have an equal sign to the color defined. Here is for\nexample how to make the locus show up in yellow with errors in blue:\nexport PG_COLORS='error=01;34:locus=01;33'\n\nHaving to dig into the code to find out that stuff is not a good user\nexperience. And I found out about that only because I worked on a\npatch touching this area yesterday.\n--\nMichael",
"msg_date": "Tue, 3 Mar 2020 14:31:01 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Tue, Mar 3, 2020 at 02:31:01PM +0900, Michael Paquier wrote:\n> On Mon, Mar 02, 2020 at 01:00:44PM +0100, Juan Jos� Santamar�a Flecha wrote:\n> > - The new entry in the documentation, specially as the PG_COLORS parameter\n> > seems to be currently undocumented. The programs that can use PG_COLOR\n> > would benefit from getting a link to it.\n> \n> The actual problem here is that we don't have an actual centralized\n> place where we could put that stuff. And anything able to use this\n> option is basically anything using src/common/logging.c.\n> \n> Regarding PG_COLORS, the commit message of cc8d415 mentions it, but we\n> have no actual example of how to use it, and the original thread has\n> zero reference to it:\n> https://www.postgresql.org/message-id/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com\n> \n> And in fact, it took me a while to figure out that using it is a mix\n> of three keywords (\"error\", \"warning\" or \"locus\") separated by colons\n> which need to have an equal sign to the color defined. Here is for\n> example how to make the locus show up in yellow with errors in blue:\n> export PG_COLORS='error=01;34:locus=01;33'\n> \n> Having to dig into the code to find out that stuff is not a good user\n> experience. And I found out about that only because I worked on a\n> patch touching this area yesterday.\n\nI can confirm there is still no mention of PG_COLORS in our\ndocumentation.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Thu, 19 Mar 2020 22:15:57 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Thu, Mar 19, 2020 at 10:15:57PM -0400, Bruce Momjian wrote:\n> On Tue, Mar 3, 2020 at 02:31:01PM +0900, Michael Paquier wrote:\n> > On Mon, Mar 02, 2020 at 01:00:44PM +0100, Juan Jos� Santamar�a Flecha wrote:\n> > > - The new entry in the documentation, specially as the PG_COLORS parameter\n> > > seems to be currently undocumented. The programs that can use PG_COLOR\n> > > would benefit from getting a link to it.\n> > \n> > The actual problem here is that we don't have an actual centralized\n> > place where we could put that stuff. And anything able to use this\n> > option is basically anything using src/common/logging.c.\n> > \n> > Regarding PG_COLORS, the commit message of cc8d415 mentions it, but we\n> > have no actual example of how to use it, and the original thread has\n> > zero reference to it:\n> > https://www.postgresql.org/message-id/6a609b43-4f57-7348-6480-bd022f924310@2ndquadrant.com\n> > \n> > And in fact, it took me a while to figure out that using it is a mix\n> > of three keywords (\"error\", \"warning\" or \"locus\") separated by colons\n> > which need to have an equal sign to the color defined. Here is for\n> > example how to make the locus show up in yellow with errors in blue:\n> > export PG_COLORS='error=01;34:locus=01;33'\n> > \n> > Having to dig into the code to find out that stuff is not a good user\n> > experience. And I found out about that only because I worked on a\n> > patch touching this area yesterday.\n> \n> I can confirm there is still no mention of PG_COLORS in our\n> documentation.\n\nMy mistake, PG_COLOR (not PG_COLORS) is documented properly.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 20 Mar 2020 22:55:22 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n>> I can confirm there is still no mention of PG_COLORS in our\n>> documentation.\n\n> My mistake, PG_COLOR (not PG_COLORS) is documented properly.\n\nYeah, but the point is precisely that pg_logging_init()\nalso responds to PG_COLORS, which is not documented anywhere.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 20 Mar 2020 23:15:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 11:15:07PM -0400, Tom Lane wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> >> I can confirm there is still no mention of PG_COLORS in our\n> >> documentation.\n> \n> > My mistake, PG_COLOR (not PG_COLORS) is documented properly.\n> \n> Yeah, but the point is precisely that pg_logging_init()\n> also responds to PG_COLORS, which is not documented anywhere.\n\nOh, I thought it was a typo. OK, so it still an open item.\n\n-- \n Bruce Momjian <bruce@momjian.us> https://momjian.us\n EnterpriseDB https://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 20 Mar 2020 23:22:07 -0400",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Tue, Dec 31, 2019 at 8:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> > With the attached patch, I propose to enable the colored output by\n> > default in PG13.\n>\n> FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n> I wonder how many people there are who actually *like* colored output?\n> I find it to be invariably less readable than plain B&W text.\n>\n\nSame.\n\n\n> I may well be in the minority, but I think some kind of straw poll\n> might be advisable, rather than doing this just because.\n>\n\n+1 on no color by default.\n\n-- \nJonah H. Harris\n\nOn Tue, Dec 31, 2019 at 8:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> With the attached patch, I propose to enable the colored output by \n> default in PG13.\n\nFWIW, I shall be setting NO_COLOR permanently if this gets committed.\nI wonder how many people there are who actually *like* colored output?\nI find it to be invariably less readable than plain B&W text.Same. I may well be in the minority, but I think some kind of straw poll\nmight be advisable, rather than doing this just because.+1 on no color by default.-- Jonah H. Harris",
"msg_date": "Sat, 21 Mar 2020 00:25:28 -0400",
"msg_from": "\"Jonah H. Harris\" <jonah.harris@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Sat, 21 Mar 2020 at 00:25, Jonah H. Harris <jonah.harris@gmail.com>\nwrote:\n\n> On Tue, Dec 31, 2019 at 8:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n>> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> > With the attached patch, I propose to enable the colored output by\n>> > default in PG13.\n>>\n>> FWIW, I shall be setting NO_COLOR permanently if this gets committed.\n>> I wonder how many people there are who actually *like* colored output?\n>> I find it to be invariably less readable than plain B&W text.\n>>\n>\n> Same.\n>\n\nFor me it depends on what the colour is doing. I was very pleased when I\nfirst saw coloured output from ls, which if I remember correctly repeats\nthe information provided by -F but more prominently. Similarly, I\nappreciate diff output that highlights the differences. At the same time I\ncan appreciate a preference for \"just plain text please\".\n\n\n> I may well be in the minority, but I think some kind of straw poll\n>> might be advisable, rather than doing this just because.\n>>\n>\n> +1 on no color by default.\n>\n\nGiven that there is apparently a standard NO_COLOR environment variable\nwhich can be set, I think it's reasonable to default to gentle use of\ncolour, but turn it off if the standard variable is set. Somebody who wants\nno color will probably already have the variable set, so in effect for the\npeople who want it that way no colour would already be the default (not the\ndefault default, but the de facto default).\n\nOn Sat, 21 Mar 2020 at 00:25, Jonah H. 
Harris <jonah.harris@gmail.com> wrote:On Tue, Dec 31, 2019 at 8:35 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> With the attached patch, I propose to enable the colored output by \n> default in PG13.\n\nFWIW, I shall be setting NO_COLOR permanently if this gets committed.\nI wonder how many people there are who actually *like* colored output?\nI find it to be invariably less readable than plain B&W text.Same. For me it depends on what the colour is doing. I was very pleased when I first saw coloured output from ls, which if I remember correctly repeats the information provided by -F but more prominently. Similarly, I appreciate diff output that highlights the differences. At the same time I can appreciate a preference for \"just plain text please\". I may well be in the minority, but I think some kind of straw poll\nmight be advisable, rather than doing this just because.+1 on no color by default. Given that there is apparently a standard NO_COLOR environment variable which can be set, I think it's reasonable to default to gentle use of colour, but turn it off if the standard variable is set. Somebody who wants no color will probably already have the variable set, so in effect for the people who want it that way no colour would already be the default (not the default default, but the de facto default).",
"msg_date": "Sat, 21 Mar 2020 09:30:19 -0400",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Fri, Mar 20, 2020 at 11:22:07PM -0400, Bruce Momjian wrote:\n> On Fri, Mar 20, 2020 at 11:15:07PM -0400, Tom Lane wrote:\n>> Yeah, but the point is precisely that pg_logging_init()\n>> also responds to PG_COLORS, which is not documented anywhere.\n> \n> Oh, I thought it was a typo. OK, so it still an open item.\n\nYes, I really think that we should have a new section in the docs for\nthat with more meaningful examples rather than copy-paste that stuff\nacross more pages of the docs. Note that 5aaa584 has plugged all the\nholes related to PG_COLOR I could find, and that pg_ctl and pg_upgrade\ninitialize logging with pg_logging_init() but these two cannot use\ncoloring because they have their own idea of what logging should be.\n--\nMichael",
"msg_date": "Mon, 23 Mar 2020 14:04:10 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 2020-03-23 06:04, Michael Paquier wrote:\n> On Fri, Mar 20, 2020 at 11:22:07PM -0400, Bruce Momjian wrote:\n>> On Fri, Mar 20, 2020 at 11:15:07PM -0400, Tom Lane wrote:\n>>> Yeah, but the point is precisely that pg_logging_init()\n>>> also responds to PG_COLORS, which is not documented anywhere.\n>>\n>> Oh, I thought it was a typo. OK, so it still an open item.\n> \n> Yes, I really think that we should have a new section in the docs for\n> that with more meaningful examples rather than copy-paste that stuff\n> across more pages of the docs. Note that 5aaa584 has plugged all the\n> holes related to PG_COLOR I could find, and that pg_ctl and pg_upgrade\n> initialize logging with pg_logging_init() but these two cannot use\n> coloring because they have their own idea of what logging should be.\n\nI'm giving up on making color the default, since there is clearly no \nconsensus.\n\nAttached is the documentation patch reworked.\n\nShould we delete all the repetitive mentions on the man pages?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Mon, 23 Mar 2020 09:32:08 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 9:32 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n>\n> I'm giving up on making color the default, since there is clearly no\n> consensus.\n>\n> Attached is the documentation patch reworked.\n>\n\nI think there is also some value in adding the functionality proposed in\nterminal_supports_color().\n\nRegards,\n\nJuan José Santamaría Flecha\n\nOn Mon, Mar 23, 2020 at 9:32 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:\nI'm giving up on making color the default, since there is clearly no \nconsensus.\n\nAttached is the documentation patch reworked.I think there is also some value in adding the functionality proposed in terminal_supports_color().Regards,Juan José Santamaría Flecha",
"msg_date": "Tue, 24 Mar 2020 15:34:34 +0100",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Mon, Mar 23, 2020 at 09:32:08AM +0100, Peter Eisentraut wrote:\n> Attached is the documentation patch reworked.\n\nThanks!\n\n> Should we delete all the repetitive mentions on the man pages?\n\nI am not sure that deleting all the mentions would be a good idea, as\nwe'd lose track of which tool supports coloring or not, and that could\nconfuse users. What about switching the existing paragraph to a\nsimple sentence with a link to the new appendix you are adding? Say:\n\"pg_foo supports <place_your_link_here>colorized output</>\".\n\n> + <para>\n> + The actual colors to be used are configured using the environment variable\n> + <envar>PG_COLORS</envar><indexterm><primary>PG_COLORS</primary></indexterm>\n> + (note plural). The value is a colon-separated list of\n> + <literal><replaceable>key</replaceable>=<replaceable>value</replaceable></literal>\n> + pairs. The keys specify what the color is to be used for. The values are\n> + SGR (Select Graphic Rendition) specifications, which are interpreted by the\n> + terminal.\n> + </para>\n\nA reference to SGR to understand better what's the list of values\nsupported would be nice?\n\n> + <para>\n> + The default value is <literal>error=01;31:warning=01;35:locus=01</literal>.\n> + </para>\n\nCould it be possible to have more details about what those three\nfields map to?\n--\nMichael",
"msg_date": "Thu, 26 Mar 2020 15:36:25 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 2020-03-26 07:36, Michael Paquier wrote:\n> I am not sure that deleting all the mentions would be a good idea, as\n> we'd lose track of which tool supports coloring or not, and that could\n> confuse users. What about switching the existing paragraph to a\n> simple sentence with a link to the new appendix you are adding? Say:\n> \"pg_foo supports <place_your_link_here>colorized output</>\".\n\nI didn't do this because it would create additional complications in the \nman pages. But there is now an index entry, so it's possible to find \nmore information.\n\n>> + <para>\n>> + The actual colors to be used are configured using the environment variable\n>> + <envar>PG_COLORS</envar><indexterm><primary>PG_COLORS</primary></indexterm>\n>> + (note plural). The value is a colon-separated list of\n>> + <literal><replaceable>key</replaceable>=<replaceable>value</replaceable></literal>\n>> + pairs. The keys specify what the color is to be used for. The values are\n>> + SGR (Select Graphic Rendition) specifications, which are interpreted by the\n>> + terminal.\n>> + </para>\n> \n> A reference to SGR to understand better what's the list of values\n> supported would be nice?\n\nI'm not sure how to do that. The full list of possible values is huge, \nand exactly what is supported depends on the terminal.\n\n>> + <para>\n>> + The default value is <literal>error=01;31:warning=01;35:locus=01</literal>.\n>> + </para>\n> \n> Could it be possible to have more details about what those three\n> fields map to?\n\nI have added information about that and explained the example values. I \nthink if you search for \"Select Graphic Rendition\" and look for the \nexample values, you can make sense of this.\n\nCommitted with those changes. This closes the commit fest item.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Mar 2020 11:56:15 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 2020-03-24 15:34, Juan José Santamaría Flecha wrote:\n> I think there is also some value in adding the functionality proposed in \n> terminal_supports_color().\n\nWhat do you want to do with that functionality?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 29 Mar 2020 11:56:51 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 11:56 AM Peter Eisentraut <\npeter.eisentraut@2ndquadrant.com> wrote:\n\n> On 2020-03-24 15:34, Juan José Santamaría Flecha wrote:\n> > I think there is also some value in adding the functionality proposed in\n> > terminal_supports_color().\n>\n> What do you want to do with that functionality?\n\n\nAdd it to the tests done when PG_COLOR is \"auto\".\n\nRegards\n\nOn Sun, Mar 29, 2020 at 11:56 AM Peter Eisentraut <peter.eisentraut@2ndquadrant.com> wrote:On 2020-03-24 15:34, Juan José Santamaría Flecha wrote:\n> I think there is also some value in adding the functionality proposed in \n> terminal_supports_color().\n\nWhat do you want to do with that functionality?Add it to the tests done when PG_COLOR is \"auto\".Regards",
"msg_date": "Sun, 29 Mar 2020 14:55:37 +0200",
"msg_from": "=?UTF-8?Q?Juan_Jos=C3=A9_Santamar=C3=ADa_Flecha?=\n <juanjo.santamaria@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 02:55:37PM +0200, Juan José Santamaría Flecha wrote:\n> Add it to the tests done when PG_COLOR is \"auto\".\n\nFWIW, I am not sure that it is a good idea to stick into the code\nknowledge inherent to TERM. That would likely rot depending on how\nterminals evolve in the future, and it is easy to test if a terminal\nsupports color or not but just switching PG_COLOR in a given\nenvironment and look at the error message produced by anything\nable to support coloring.\n--\nMichael",
"msg_date": "Mon, 30 Mar 2020 17:03:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Sun, Mar 29, 2020 at 11:56:15AM +0200, Peter Eisentraut wrote:\n> I didn't do this because it would create additional complications in the man\n> pages. But there is now an index entry, so it's possible to find more\n> information.\n\nCannot you add a link to the page for color support in each one of\nthem? That seems more user-friendly to me.\n\n> I'm not sure how to do that. The full list of possible values is huge, and\n> exactly what is supported depends on the terminal.\n\nAn idea is to add a reference to SGR parameters directly from\nwikipedia:\nhttps://en.wikipedia.org/wiki/ANSI_escape_code\nHowever I recall that you don't like adding references to\nWiki-sensei. Please feel free to discard this idea if you don't like\nit.\n\n> Committed with those changes. This closes the commit fest item.\n\nThanks for the addition.\n--\nMichael",
"msg_date": "Mon, 30 Mar 2020 17:08:16 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 2020-03-30 10:03, Michael Paquier wrote:\n> On Sun, Mar 29, 2020 at 02:55:37PM +0200, Juan Jos� Santamar�a Flecha wrote:\n>> Add it to the tests done when PG_COLOR is \"auto\".\n> \n> FWIW, I am not sure that it is a good idea to stick into the code\n> knowledge inherent to TERM. That would likely rot depending on how\n> terminals evolve in the future, and it is easy to test if a terminal\n> supports color or not but just switching PG_COLOR in a given\n> environment and look at the error message produced by anything\n> able to support coloring.\n\nThere could be some value in this, I think. Other systems also do this \nin some variant. However, it's unclear to me to what extent this is \nlegacy behavior or driven by current needs. I'd be willing to refine \nthis, but it should be based on some actual needs. What terminals (or \nterminal-like things) don't support color, and how do we detect them?\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 Apr 2020 15:50:42 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On 2020-03-30 10:08, Michael Paquier wrote:\n> On Sun, Mar 29, 2020 at 11:56:15AM +0200, Peter Eisentraut wrote:\n>> I didn't do this because it would create additional complications in the man\n>> pages. But there is now an index entry, so it's possible to find more\n>> information.\n> \n> Cannot you add a link to the page for color support in each one of\n> them? That seems more user-friendly to me.\n\nDo you have a specific phrasing or look in mind?\n\n>> I'm not sure how to do that. The full list of possible values is huge, and\n>> exactly what is supported depends on the terminal.\n> \n> An idea is to add a reference to SGR parameters directly from\n> wikipedia:\n> https://en.wikipedia.org/wiki/ANSI_escape_code\n> However I recall that you don't like adding references to\n> Wiki-sensei. Please feel free to discard this idea if you don't like\n> it.\n\nYeah, we could perhaps do this.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 Apr 2020 15:52:17 +0200",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": true,
"msg_subject": "Re: color by default"
},
{
"msg_contents": "On Wed, Apr 01, 2020 at 03:52:17PM +0200, Peter Eisentraut wrote:\n> On 2020-03-30 10:08, Michael Paquier wrote:\n>> Cannot you add a link to the page for color support in each one of\n>> them? That seems more user-friendly to me.\n> \n> Do you have a specific phrasing or look in mind?\n\nI actually do. Please see the attached, which seems to bring more\nconsistency across all the docs for all the tools.\n\n>> An idea is to add a reference to SGR parameters directly from\n>> wikipedia:\n>> https://en.wikipedia.org/wiki/ANSI_escape_code\n>> However I recall that you don't like adding references to\n>> Wiki-sensei. Please feel free to discard this idea if you don't like\n>> it.\n>\n> Yeah, we could perhaps do this.\n\nActually, the standard ECMA-48 could just be directly used for that:\nhttps://www.ecma-international.org/publications/standards/Ecma-048.htm\n\nSo, what do you think?\n--\nMichael",
"msg_date": "Thu, 2 Apr 2020 16:22:55 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: color by default"
}
] |
[
{
"msg_contents": "Does the next decade start on 2020-01-01 or 2021-01-01? Postgres says\nit start on the former date:\n\n\tSELECT EXTRACT(DECADE FROM '2019-01-01'::date);\n\t date_part\n\t-----------\n\t 201\n\t\n\tSELECT EXTRACT(DECADE FROM '2020-01-01'::date);\n\t date_part\n\t-----------\n\t 202\n\nbut the _century_ starts on 2001-01-01, not 2000-01-01:\n\n\tSELECT EXTRACT(CENTURY FROM '2000-01-01'::date);\n\t date_part\n\t-----------\n\t 20\n\t\n\tSELECT EXTRACT(CENTURY FROM '2001-01-01'::date);\n\t date_part\n\t-----------\n\t 21\n\nThat seems inconsistent to me. /pgtop/src/backend/utils/adt/timestamp.c\nhas this C comment:\n\n\t * what is a decade wrt dates? let us assume that decade 199\n\t * is 1990 thru 1999... decade 0 starts on year 1 BC, and -1\n\t * is 11 BC thru 2 BC...\n\nFYI, these two URLs suggest the inconsistency is OK:\n\n\thttps://www.timeanddate.com/calendar/decade.html\n\thttps://en.wikipedia.org/wiki/Decade\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Tue, 31 Dec 2019 11:35:46 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Decade indication"
},
{
"msg_contents": "Funnily enough I was having a conversation with my wife on exactly this as I opened your email.\n\nIf the Wikipedia article is to be trusted, the following seems fitting:\n\n SELECT EXTRACT(ORDINAL DECADE FROM '2020-01-01'::date);\n date_part\n -----------\n 201\n\nAnd the default:\n\nSELECT EXTRACT(CARDINAL DECADE FROM '2020-01-01'::date);\n date_part\n -----------\n 202\n\n On Tuesday, 31 December 2019, 16:36:02 GMT, Bruce Momjian <bruce@momjian.us> wrote: \n \n Does the next decade start on 2020-01-01 or 2021-01-01? Postgres says\nit start on the former date:\n\n SELECT EXTRACT(DECADE FROM '2019-01-01'::date);\n date_part\n -----------\n 201\n \n SELECT EXTRACT(DECADE FROM '2020-01-01'::date);\n date_part\n -----------\n 202\n\nbut the _century_ starts on 2001-01-01, not 2000-01-01:\n\n SELECT EXTRACT(CENTURY FROM '2000-01-01'::date);\n date_part\n -----------\n 20\n \n SELECT EXTRACT(CENTURY FROM '2001-01-01'::date);\n date_part\n -----------\n 21\n\nThat seems inconsistent to me. /pgtop/src/backend/utils/adt/timestamp.c\nhas this C comment:\n\n * what is a decade wrt dates? let us assume that decade 199\n * is 1990 thru 1999... decade 0 starts on year 1 BC, and -1\n * is 11 BC thru 2 BC...\n\nFYI, these two URLs suggest the inconsistency is OK:\n\n https://www.timeanddate.com/calendar/decade.html\n https://en.wikipedia.org/wiki/Decade\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. 
+\n+ Ancient Roman grave inscription +\n\n\n \n Funnily enough I was having a conversation with my wife on exactly this as I opened your email.If the Wikipedia article is to be trusted, the following seems fitting: SELECT EXTRACT(ORDINAL DECADE FROM '2020-01-01'::date); date_part ----------- 201And the default:SELECT EXTRACT(CARDINAL DECADE FROM '2020-01-01'::date); date_part ----------- 202 On Tuesday, 31 December 2019, 16:36:02 GMT, Bruce Momjian <bruce@momjian.us> wrote: Does the next decade start on 2020-01-01 or 2021-01-01? Postgres saysit start on the former date: SELECT EXTRACT(DECADE FROM '2019-01-01'::date); date_part ----------- 201 SELECT EXTRACT(DECADE FROM '2020-01-01'::date); date_part ----------- 202but the _century_ starts on 2001-01-01, not 2000-01-01: SELECT EXTRACT(CENTURY FROM '2000-01-01'::date); date_part ----------- 20 SELECT EXTRACT(CENTURY FROM '2001-01-01'::date); date_part ----------- 21That seems inconsistent to me. /pgtop/src/backend/utils/adt/timestamp.chas this C comment: * what is a decade wrt dates? let us assume that decade 199 * is 1990 thru 1999... decade 0 starts on year 1 BC, and -1 * is 11 BC thru 2 BC...FYI, these two URLs suggest the inconsistency is OK: https://www.timeanddate.com/calendar/decade.html https://en.wikipedia.org/wiki/Decade-- Bruce Momjian <bruce@momjian.us> http://momjian.us EnterpriseDB http://enterprisedb.com+ As you are, so once was I. As I am, so you will be. ++ Ancient Roman grave inscription +",
"msg_date": "Tue, 31 Dec 2019 21:26:56 +0000 (UTC)",
"msg_from": "Glyn Astill <glynastill@yahoo.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: Decade indication"
},
{
"msg_contents": "On Wed, Jan 1, 2020 at 3:05 AM Bruce Momjian <bruce@momjian.us> wrote:\n>\n> Does the next decade start on 2020-01-01 or 2021-01-01? Postgres says\n> it start on the former date:\n>\n> SELECT EXTRACT(DECADE FROM '2019-01-01'::date);\n> date_part\n> -----------\n> 201\n>\n> SELECT EXTRACT(DECADE FROM '2020-01-01'::date);\n> date_part\n> -----------\n> 202\n>\n> but the _century_ starts on 2001-01-01, not 2000-01-01:\n>\n> SELECT EXTRACT(CENTURY FROM '2000-01-01'::date);\n> date_part\n> -----------\n> 20\n>\n> SELECT EXTRACT(CENTURY FROM '2001-01-01'::date);\n> date_part\n> -----------\n> 21\n>\n> That seems inconsistent to me. /pgtop/src/backend/utils/adt/timestamp.c\n> has this C comment:\n>\n> * what is a decade wrt dates? let us assume that decade 199\n> * is 1990 thru 1999... decade 0 starts on year 1 BC, and -1\n> * is 11 BC thru 2 BC...\n>\n> FYI, these two URLs suggest the inconsistency is OK:\n>\n> https://www.timeanddate.com/calendar/decade.html\n> https://en.wikipedia.org/wiki/Decade\n>\n\n\nhttps://en.wikipedia.org/wiki/Century says:\n\n\"Although a century can mean any arbitrary period of 100 years, there\nare two viewpoints on the nature of standard centuries. One is based\non strict construction, while the other is based on popular\nperspective (general usage).\n\nAccording to the strict construction of the Gregorian calendar, the\n1st century AD began with 1 AD and ended with 100 AD, with the same\npattern continuing onward. In this model, the n-th century\nstarted/will start on the year (100 × n) − 99 and ends in 100 × n.\nBecause of this, a century will only include one year, the centennial\nyear, that starts with the century's number (e.g. 1900 was the last\nyear of the 19th century).[2]\n\nIn general usage, centuries are aligned with decades by grouping years\nbased on their shared digits. In this model, the 'n' -th century\nstarted/will start on the year (100 x n) - 100 and ends in (100 x n) -\n1. 
For example, the 20th century is generally regarded as from 1900 to\n1999, inclusive. This is sometimes known as the odometer effect. The\nastronomical year numbering and ISO 8601 systems both contain a year\nzero, so the first century begins with the year zero, rather than the\nyear one.\"\n\n\nIf I had to choose I'd go with the \"general usage\" rule above, but I\ndon't think we should change behaviour now.\n\n\ncheers\n\nandrew\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 1 Jan 2020 08:04:59 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Decade indication"
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On Wed, Jan 1, 2020 at 3:05 AM Bruce Momjian <bruce@momjian.us> wrote:\n>> Does the next decade start on 2020-01-01 or 2021-01-01? Postgres says\n>> it start on the former date:\n>> ...\n>> That seems inconsistent to me. /pgtop/src/backend/utils/adt/timestamp.c\n>> has this C comment:\n>> \n>> * what is a decade wrt dates? let us assume that decade 199\n>> * is 1990 thru 1999... decade 0 starts on year 1 BC, and -1\n>> * is 11 BC thru 2 BC...\n\n> If I had to choose I'd go with the \"general usage\" rule above, but I\n> don't think we should change behaviour now.\n\nWell, yeah, that. The quoted comment dates to commit 46be0c18f of\n2004-08-20, and a bit of excavation shows that it was just explaining\nbehavior that existed before, clear back to when Lockhart installed\nall this functionality in 2001.\n\nIt's pretty darn difficult to justify changing behavior that's stood\nfor 18+ years, especially when the argument that it's wrong is subject\nto debate. Either users think it's correct, or nobody uses this\nfunction. In either case, nobody will thank us for changing it.\n\nIt's possible that we could add an alternate keyword for a different\ndecade (and/or century) definition, but I'd want to see some actual\nfield demand for that first.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 31 Dec 2019 16:53:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Decade indication"
},
{
"msg_contents": "Bruce Momjian <bruce@momjian.us> writes:\n> Does the next decade start on 2020-01-01 or 2021-01-01?\n\nI see Randall Munroe has weighed in on this topic:\n\nhttps://xkcd.com/2249/\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 01 Jan 2020 23:01:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Decade indication"
},
{
"msg_contents": "On Wed, Jan 1, 2020 at 11:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Bruce Momjian <bruce@momjian.us> writes:\n> > Does the next decade start on 2020-01-01 or 2021-01-01?\n>\n> I see Randall Munroe has weighed in on this topic:\n>\n> https://xkcd.com/2249/\n\nAnd the conclusion is ... the whole discussion is stupid?\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Jan 2020 07:57:54 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decade indication"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jan 1, 2020 at 11:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> I see Randall Munroe has weighed in on this topic:\n>> https://xkcd.com/2249/\n\n> And the conclusion is ... the whole discussion is stupid?\n\nWell, it's not terribly useful anyway. Arguments founded on an\nassumption that there's anything rational or consistent about\nhuman calendars tend to run into problems with that assumption.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 08:52:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Decade indication"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 08:52:17AM -0500, Tom Lane wrote:\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Jan 1, 2020 at 11:01 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >> I see Randall Munroe has weighed in on this topic:\n> >> https://xkcd.com/2249/\n> \n> > And the conclusion is ... the whole discussion is stupid?\n> \n> Well, it's not terribly useful anyway. Arguments founded on an\n> assumption that there's anything rational or consistent about\n> human calendars tend to run into problems with that assumption.\n\nI assume there is enough agreement that decades start on 20X0 that we\ndon't need to document that Postgres does that.\n\n-- \n Bruce Momjian <bruce@momjian.us> http://momjian.us\n EnterpriseDB http://enterprisedb.com\n\n+ As you are, so once was I. As I am, so you will be. +\n+ Ancient Roman grave inscription +\n\n\n",
"msg_date": "Fri, 17 Jan 2020 17:52:01 -0500",
"msg_from": "Bruce Momjian <bruce@momjian.us>",
"msg_from_op": true,
"msg_subject": "Re: Decade indication"
},
{
"msg_contents": "On Fri, 17 Jan 2020 at 17:52, Bruce Momjian <bruce@momjian.us> wrote:\n\n\n> I assume there is enough agreement that decades start on 20X0 that we\n> don't need to document that Postgres does that.\n>\n\nI think the inconsistency between years, decades, centuries, and millenia\nis worthy of documentation. In fact, it already is for EXTRACT:\n\nhttps://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACT\n\nIt describes decade as \"The year field divided by 10\", whereas for century\nand millennium it refers to centuries and millennia beginning in '01 years.\nI think if I were designing EXTRACT I would probably have decades follow\nthe pattern of century and millennium, mostly because if somebody wants\nyear / 10 they can just write that. But I am, to say the least, not\nproposing any modifications to this particular API, for multiple reasons\nwhich I'm sure almost any reader of this list will agree with.\n\nOn Fri, 17 Jan 2020 at 17:52, Bruce Momjian <bruce@momjian.us> wrote: \nI assume there is enough agreement that decades start on 20X0 that we\ndon't need to document that Postgres does that.I think the inconsistency between years, decades, centuries, and millenia is worthy of documentation. In fact, it already is for EXTRACT:https://www.postgresql.org/docs/current/functions-datetime.html#FUNCTIONS-DATETIME-EXTRACTIt describes decade as \"The year field divided by 10\", whereas for century and millennium it refers to centuries and millennia beginning in '01 years. I think if I were designing EXTRACT I would probably have decades follow the pattern of century and millennium, mostly because if somebody wants year / 10 they can just write that. But I am, to say the least, not proposing any modifications to this particular API, for multiple reasons which I'm sure almost any reader of this list will agree with.",
"msg_date": "Mon, 20 Jan 2020 18:11:18 -0500",
"msg_from": "Isaac Morland <isaac.morland@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Decade indication"
}
] |
[
{
"msg_contents": "Hello,\n\nWe right now don't support TRUNCATE on foreign tables.\nIt may be a strange missing piece and restriction of operations.\nFor example, if a partitioned table contains some foreign tables in its leaf,\nuser cannot use TRUNCATE command to clean up the partitioned table.\n\nProbably, API design is not complicated. We add a new callback for truncate\non the FdwRoutine, and ExecuteTruncateGuts() calls it if relation is foreign-\ntable. In case of postgres_fdw, it also issues \"TRUNCATE\" command on the\nremote side in the transaction block [*1].\n\n[*1] But I hope oracle_fdw does not follow this implementation as is. :-)\n\nHow about your thought?\n\nI noticed this restriction when I'm working on Arrow_Fdw enhancement for\n\"writable\" capability. Because Apache Arrow [*2] is a columnar file format,\nit is not designed for UPDATE/DELETE, but capable to bulk-INSERT.\nIt is straightforward idea to support only INSERT, and clear data by TRUNCATE.\n\n[*2] Apache Arrow - https://arrow.apache.org/docs/format/Columnar.html\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 1 Jan 2020 11:46:11 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "TRUNCATE on foreign tables"
},
{
"msg_contents": "Hello,\n\nThe attached patch adds TRUNCATE support on foreign table.\n\nThis patch adds an optional callback ExecForeignTruncate(Relation rel)\nto FdwRoutine.\nIt is invoked during ExecuteTruncateGuts, then FDW driver hands over\nthe jobs related\nto complete \"truncate on the foreign table\".\nOf course, it is not clear to define the concept of \"truncate\" on some\nFDW drivers.\nIn this case, TRUNCATE command prohibits to apply these foreign tables.\n\n2019 is not finished at everywhere on the earth yet, so I believe it\nis Ok to add this patch\nto CF-2020:Jan.\n\nBest regards,\n\n2020年1月1日(水) 11:46 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> Hello,\n>\n> We right now don't support TRUNCATE on foreign tables.\n> It may be a strange missing piece and restriction of operations.\n> For example, if a partitioned table contains some foreign tables in its leaf,\n> user cannot use TRUNCATE command to clean up the partitioned table.\n>\n> Probably, API design is not complicated. We add a new callback for truncate\n> on the FdwRoutine, and ExecuteTruncateGuts() calls it if relation is foreign-\n> table. In case of postgres_fdw, it also issues \"TRUNCATE\" command on the\n> remote side in the transaction block [*1].\n>\n> [*1] But I hope oracle_fdw does not follow this implementation as is. :-)\n>\n> How about your thought?\n>\n> I noticed this restriction when I'm working on Arrow_Fdw enhancement for\n> \"writable\" capability. Because Apache Arrow [*2] is a columnar file format,\n> it is not designed for UPDATE/DELETE, but capable to bulk-INSERT.\n> It is straightforward idea to support only INSERT, and clear data by TRUNCATE.\n>\n> [*2] Apache Arrow - https://arrow.apache.org/docs/format/Columnar.html\n>\n> Best regards,\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Wed, 1 Jan 2020 15:07:57 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On 2020-Jan-01, Kohei KaiGai wrote:\n\n> Hello,\n> \n> The attached patch adds TRUNCATE support on foreign table.\n> \n> This patch adds an optional callback ExecForeignTruncate(Relation rel)\n> to FdwRoutine.\n> It is invoked during ExecuteTruncateGuts, then FDW driver hands over\n> the jobs related to complete \"truncate on the foreign table\".\n\nI think this would need to preserve the notion of multi-table truncates.\nOtherwise it won't be possible to truncate tables linked by FKs. I\nthink this means the new entrypoint needs to receive a list of rels to\ntruncate, not just one. (Maybe an alternative is to make it \"please\ntruncate rel X, and be aware that relations Y,Z are also being\ntruncated at the same time\".)\n\nLooking at apache arrow documentation, it doesn't appear that it has\nanything like FK constraints.\n\n-- \n�lvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Jan 2020 00:16:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "2020年1月2日(木) 12:16 Alvaro Herrera <alvherre@2ndquadrant.com>:\n>\n> On 2020-Jan-01, Kohei KaiGai wrote:\n>\n> > Hello,\n> >\n> > The attached patch adds TRUNCATE support on foreign table.\n> >\n> > This patch adds an optional callback ExecForeignTruncate(Relation rel)\n> > to FdwRoutine.\n> > It is invoked during ExecuteTruncateGuts, then FDW driver hands over\n> > the jobs related to complete \"truncate on the foreign table\".\n>\n> I think this would need to preserve the notion of multi-table truncates.\n> Otherwise it won't be possible to truncate tables linked by FKs. I\n> think this means the new entrypoint needs to receive a list of rels to\n> truncate, not just one. (Maybe an alternative is to make it \"please\n> truncate rel X, and be aware that relations Y,Z are also being\n> truncated at the same time\".)\n>\nPlease check at ExecuteTruncateGuts(). It makes a list of relations to be\ntruncated, including the relations that references the specified table by FK,\nprior to invocation of the new FDW callback.\nSo, if multiple foreign tables are involved in a single TRUNCATE command,\nthis callback can be invoked multiple times.\n\n> Looking at apache arrow documentation, it doesn't appear that it has\n> anything like FK constraints.\n>\nYes. It is just a bunch of columnar data.\nIn Apache Arrow, no constraint are defined except for \"NOT NULL\".\n(In case when Field::nullable == false, all the values are considered\nvalid date.)\n\nThanks,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Thu, 2 Jan 2020 15:39:51 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On 2020-Jan-02, Kohei KaiGai wrote:\n\n> 2020年1月2日(木) 12:16 Alvaro Herrera <alvherre@2ndquadrant.com>:\n> >\n> > I think this would need to preserve the notion of multi-table truncates.\n> > Otherwise it won't be possible to truncate tables linked by FKs. I\n> > think this means the new entrypoint needs to receive a list of rels to\n> > truncate, not just one. (Maybe an alternative is to make it \"please\n> > truncate rel X, and be aware that relations Y,Z are also being\n> > truncated at the same time\".)\n>\n> Please check at ExecuteTruncateGuts(). It makes a list of relations to be\n> truncated, including the relations that references the specified table by FK,\n> prior to invocation of the new FDW callback.\n> So, if multiple foreign tables are involved in a single TRUNCATE command,\n> this callback can be invoked multiple times.\n\nYeah, that's my concern: if you have postgres_fdw tables linked by FKs\nin the remote server, the truncate will fail because it'll try to\ntruncate them in separate commands instead of using a multi-table\ntruncate.\n\n> > Looking at apache arrow documentation, it doesn't appear that it has\n> > anything like FK constraints.\n> >\n> Yes. It is just a bunch of columnar data.\n> In Apache Arrow, no constraint are defined except for \"NOT NULL\".\n> (In case when Field::nullable == false, all the values are considered\n> valid date.)\n\nOK, I suppose that means there are no concerns such as what I mention\nabove.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Jan 2020 08:56:40 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "2020年1月2日(木) 20:56 Alvaro Herrera <alvherre@2ndquadrant.com>:\n>\n> On 2020-Jan-02, Kohei KaiGai wrote:\n>\n> > 2020年1月2日(木) 12:16 Alvaro Herrera <alvherre@2ndquadrant.com>:\n> > >\n> > > I think this would need to preserve the notion of multi-table truncates.\n> > > Otherwise it won't be possible to truncate tables linked by FKs. I\n> > > think this means the new entrypoint needs to receive a list of rels to\n> > > truncate, not just one. (Maybe an alternative is to make it \"please\n> > > truncate rel X, and be aware that relations Y,Z are also being\n> > > truncated at the same time\".)\n> >\n> > Please check at ExecuteTruncateGuts(). It makes a list of relations to be\n> > truncated, including the relations that references the specified table by FK,\n> > prior to invocation of the new FDW callback.\n> > So, if multiple foreign tables are involved in a single TRUNCATE command,\n> > this callback can be invoked multiple times.\n>\n> Yeah, that's my concern: if you have postgres_fdw tables linked by FKs\n> in the remote server, the truncate will fail because it'll try to\n> truncate them in separate commands instead of using a multi-table\n> truncate.\n>\nAh, it makes sense.\nProbably, backend can make sub-list of the foreign tables to be\ntruncated for each\npair of FDW and Server, then invoke the FDW callback only once with this list.\nFDW driver can issue multi-tables truncate on all the foreign tables\nsupplied, with\nnothing difficult to do.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Thu, 2 Jan 2020 22:05:48 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "Greetings,\n\n* Kohei KaiGai (kaigai@heterodb.com) wrote:\n> 2020年1月2日(木) 20:56 Alvaro Herrera <alvherre@2ndquadrant.com>:\n> > On 2020-Jan-02, Kohei KaiGai wrote:\n> > > 2020年1月2日(木) 12:16 Alvaro Herrera <alvherre@2ndquadrant.com>:\n> > > > I think this would need to preserve the notion of multi-table truncates.\n> > > > Otherwise it won't be possible to truncate tables linked by FKs. I\n> > > > think this means the new entrypoint needs to receive a list of rels to\n> > > > truncate, not just one. (Maybe an alternative is to make it \"please\n> > > > truncate rel X, and be aware that relations Y,Z are also being\n> > > > truncated at the same time\".)\n> > >\n> > > Please check at ExecuteTruncateGuts(). It makes a list of relations to be\n> > > truncated, including the relations that references the specified table by FK,\n> > > prior to invocation of the new FDW callback.\n\nErm, sure it does, but we don't support having FKs on foreign tables\ntoday, so that doesn't really help with this issue, does it?\n\n> > > So, if multiple foreign tables are involved in a single TRUNCATE command,\n> > > this callback can be invoked multiple times.\n> >\n> > Yeah, that's my concern: if you have postgres_fdw tables linked by FKs\n> > in the remote server, the truncate will fail because it'll try to\n> > truncate them in separate commands instead of using a multi-table\n> > truncate.\n\nI agree that the FDW callback should support multiple tables in the\nTRUNCATE, but I think it also should include CASCADE as an option and\nhave that be passed on to the FDW to handle.\n\n> Ah, it makes sense.\n> Probably, backend can make sub-list of the foreign tables to be\n> truncated for each\n> pair of FDW and Server, then invoke the FDW callback only once with this list.\n> FDW driver can issue multi-tables truncate on all the foreign tables\n> supplied, with\n> nothing difficult to do.\n\nThis doesn't really make sense as we don't track FK relationships in the\nlocal 
server for foreign tables today- now, perhaps we should (and things\nlike primary keys too..), but I don't think that needs to be the job of\nthis particular patch. Instead, I'd suggest we have the core code build\nup a list of tables to truncate, for each server, based just on the list\npassed in by the user, and then also pass in if CASCADE was included or\nnot, and then let the FDW handle that in whatever way makes sense for\nthe foreign server (which, for a PG system, would probably be just\nbuilding up the TRUNCATE command and running it with or without the\nCASCADE option, but it might be different on other systems).\n\nJust to be clear- I don't mean to suggest that we should explicitly\navoid the logic in TruncateGuts that builds up the list when CASCADE is\nused, just saying that it's not going to actually do anything when we're\ntalking about foreign tables- and that's *fine*. I don't think we need\nto do more here until we're actually tracking remote FKs locally.\n\nSo, I think the patch just needs a bit of minor adjustment for that to\nmake it work for the case that Alvaro is concerned about. One thing\nthat isn't really clear to me is if we should also support the 'ONLY'\noption to TRUNCATE when it comes to FDWs; a table can't be both foreign\nand partitioned, so it's not an issue there, but a foreign table CAN\nbe a child table of another foreign table.\n\nOf course, if that's the case, things get pretty odd looking pretty\nquickly if both sides see the table as a child table because we\nactually end up scanning the foreign parent (which will include rows\nfrom the child on the remote side) and then scanning the foreign child\n*again*, resulting in duplicate rows coming back, so I'm not really sure\nhow much effort we should be thinking about putting into dealing with\nchild foreign tables..\n\nThanks,\n\nStephen",
"msg_date": "Thu, 2 Jan 2020 09:46:44 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 09:46:44AM -0500, Stephen Frost wrote:\n> I agree that the FDW callback should support multiple tables in the\n> TRUNCATE, but I think it also should include CASCADE as an option and\n> have that be passed on to the FDW to handle.\n\nAs much as RESTRICT, ONLY and the IDENTITY clauses, no? Just think\nabout postgres_fdw.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 16:47:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "Greetings,\n\n* Michael Paquier (michael@paquier.xyz) wrote:\n> On Thu, Jan 02, 2020 at 09:46:44AM -0500, Stephen Frost wrote:\n> > I agree that the FDW callback should support multiple tables in the\n> > TRUNCATE, but I think it also should include CASCADE as an option and\n> > have that be passed on to the FDW to handle.\n> \n> As much as RESTRICT, ONLY and the IDENTITY clauses, no? Just think\n> about postgres_fdw.\n\nRESTRICT, yes. I don't know about ONLY being sensible as we don't\nreally deal with inheritance and foreign tables very cleanly today, as I\nsaid up-thread, so I'm not sure if we really want to put in the effort\nthat it'd require to figure out how to make ONLY make sense. The\nquestion about how to handle IDENTITY is a good one. I suppose we could\njust pass that down and let the FDW sort it out..?\n\nThanks,\n\nStephen",
"msg_date": "Mon, 6 Jan 2020 16:32:39 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On Mon, Jan 06, 2020 at 04:32:39PM -0500, Stephen Frost wrote:\n> RESTRICT, yes. I don't know about ONLY being sensible as we don't\n> really deal with inheritance and foreign tables very cleanly today, as I\n> said up-thread, so I'm not sure if we really want to put in the effort\n> that it'd require to figure out how to make ONLY make sense.\n\nTrue enough.\n\n> The question about how to handle IDENTITY is a good one. I suppose\n> we could just pass that down and let the FDW sort it out..?\n\nLooking at the code, ExecuteTruncateGuts() passes down restart_seqs,\nso it sounds sensible to me to just pass down that to the FDW\ncallback and the callback decide what to do.\n--\nMichael",
"msg_date": "Tue, 7 Jan 2020 16:02:57 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "2020年1月7日(火) 16:03 Michael Paquier <michael@paquier.xyz>:\n>\n> On Mon, Jan 06, 2020 at 04:32:39PM -0500, Stephen Frost wrote:\n> > RESTRICT, yes. I don't know about ONLY being sensible as we don't\n> > really deal with inheritance and foreign tables very cleanly today, as I\n> > said up-thread, so I'm not sure if we really want to put in the effort\n> > that it'd require to figure out how to make ONLY make sense.\n>\n> True enough.\n>\n> > The question about how to handle IDENTITY is a good one. I suppose\n> > we could just pass that down and let the FDW sort it out..?\n>\n> Looking at the code, ExecuteTruncateGuts() passes down restart_seqs,\n> so it sounds sensible to me to just pass down that to the FDW\n> callback and the callback decide what to do.\n>\nIt looks to me the local sequences owned by a foreign table shall be restarted\nby the core, regardless of relkind of the owner relation. So, even if FDW driver\nis buggy, consistency of the local database is kept, right?\nIndeed, \"restart_seqs\" flag is needed to propagate the behavior, however,\nit shall be processed on the remote side via the secondary \"TRUNCATE\" command.\nIs it so sensitive?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 8 Jan 2020 01:08:02 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "Hello,\n\nThe attached patch is the revised version of TRUNCATE on foreign tables.\n\nDefinition of the callback is revised as follows:\n\ntypedef void (*ExecForeignTruncate_function) (List *frels_list,\n bool is_cascade,\n bool restart_seqs);\n\nThe \"frels_list\" is a list of foreign tables that are connected to a particular\nforeign server, thus, the server-id pulled out by foreign tables id should be\nidentical for all the relations in the list.\nDue to the API design, this callback shall be invoked for each foreign server\ninvolved in the TRUNCATE command, not per table basis.\n\nThe 2nd and 3rd arguments also informs FDW driver other options of the\ncommand. If FDW has a concept of \"cascaded truncate\" or \"restart sequence\",\nit can adjust its remote query. In postgres_fdw, it follows the manner of\nusual TRUNCATE command.\n\nBest regards,\n\n2020年1月8日(水) 1:08 Kohei KaiGai <kaigai@heterodb.com>:\n>\n> 2020年1月7日(火) 16:03 Michael Paquier <michael@paquier.xyz>:\n> >\n> > On Mon, Jan 06, 2020 at 04:32:39PM -0500, Stephen Frost wrote:\n> > > RESTRICT, yes. I don't know about ONLY being sensible as we don't\n> > > really deal with inheritance and foreign tables very cleanly today, as I\n> > > said up-thread, so I'm not sure if we really want to put in the effort\n> > > that it'd require to figure out how to make ONLY make sense.\n> >\n> > True enough.\n> >\n> > > The question about how to handle IDENTITY is a good one. I suppose\n> > > we could just pass that down and let the FDW sort it out..?\n> >\n> > Looking at the code, ExecuteTruncateGuts() passes down restart_seqs,\n> > so it sounds sensible to me to just pass down that to the FDW\n> > callback and the callback decide what to do.\n> >\n> It looks to me the local sequences owned by a foreign table shall be restarted\n> by the core, regardless of relkind of the owner relation. 
So, even if FDW driver\n> is buggy, consistency of the local database is kept, right?\n> Indeed, \"restart_seqs\" flag is needed to propagate the behavior, however,\n> it shall be processed on the remote side via the secondary \"TRUNCATE\" command.\n> Is it so sensitive?\n>\n> Best regards,\n> --\n> HeteroDB, Inc / The PG-Strom Project\n> KaiGai Kohei <kaigai@heterodb.com>\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Tue, 14 Jan 2020 18:16:17 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On Tue, Jan 14, 2020 at 06:16:17PM +0900, Kohei KaiGai wrote:\n> The \"frels_list\" is a list of foreign tables that are connected to a particular\n> foreign server, thus, the server-id pulled out by foreign tables id should be\n> identical for all the relations in the list.\n> Due to the API design, this callback shall be invoked for each foreign server\n> involved in the TRUNCATE command, not per table basis.\n> \n> The 2nd and 3rd arguments also informs FDW driver other options of the\n> command. If FDW has a concept of \"cascaded truncate\" or \"restart sequence\",\n> it can adjust its remote query. In postgres_fdw, it follows the manner of\n> usual TRUNCATE command.\n\nI have done a quick read through the patch. You have modified the\npatch to pass down to the callback a list of relation OIDs to execute\none command for all, and there are tests for FKs so that coverage\nlooks fine.\n\nRegression tests are failing with this patch:\n -- TRUNCATE doesn't work on foreign tables, either directly or\n recursively\n TRUNCATE ft2; -- ERROR\n-ERROR: \"ft2\" is not a table\n+ERROR: foreign-data wrapper \"dummy\" has no handler\nYou visibly just need to update the output because no handlers are\navailable for truncate in this case. \n\n+void\n+deparseTruncateSql(StringInfo buf, Relation rel)\n+{\n+ deparseRelation(buf, rel);\n+}\nDon't see much point in having this routine.\n\n+ If FDW does not provide this callback, PostgreSQL considers\n+ <command>TRUNCATE</command> is not supported on the foreign table.\n+ </para>\nThis sentence is weird. 
Perhaps you meant \"as not supported\"?\n\n+ <literal>frels_list</literal> is a list of foreign tables that are\n+ connected to a particular foreign server; thus, these foreign tables\n+ should have identical foreign server ID\nThe list is built by the backend code, so that has to be true.\n\n+ foreach (lc, frels_list)\n+ {\n+ Relation frel = lfirst(lc);\n+ Oid frel_oid = RelationGetRelid(frel);\n+\n+ if (server_id == GetForeignServerIdByRelId(frel_oid))\n+ {\n+ frels_list = foreach_delete_current(frels_list, lc);\n+ curr_frels = lappend(curr_frels, frel);\n+ }\n+ }\nWouldn't it be better to fill in a hash table for each server with a\nlist of relations?\n\n+typedef void (*ExecForeignTruncate_function) (List *frels_list,\n+ bool is_cascade,\n+ bool restart_seqs);\nI would recommend to pass down directly DropBehavior instead of a\nboolean to the callback. That's more extensible.\n--\nMichael",
"msg_date": "Wed, 15 Jan 2020 17:11:26 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "2020年1月15日(水) 17:11 Michael Paquier <michael@paquier.xyz>:\n>\n> On Tue, Jan 14, 2020 at 06:16:17PM +0900, Kohei KaiGai wrote:\n> > The \"frels_list\" is a list of foreign tables that are connected to a particular\n> > foreign server, thus, the server-id pulled out by foreign tables id should be\n> > identical for all the relations in the list.\n> > Due to the API design, this callback shall be invoked for each foreign server\n> > involved in the TRUNCATE command, not per table basis.\n> >\n> > The 2nd and 3rd arguments also informs FDW driver other options of the\n> > command. If FDW has a concept of \"cascaded truncate\" or \"restart sequence\",\n> > it can adjust its remote query. In postgres_fdw, it follows the manner of\n> > usual TRUNCATE command.\n>\n> I have done a quick read through the patch. You have modified the\n> patch to pass down to the callback a list of relation OIDs to execute\n> one command for all, and there are tests for FKs so that coverage\n> looks fine.\n>\n> Regression tests are failing with this patch:\n> -- TRUNCATE doesn't work on foreign tables, either directly or\n> recursively\n> TRUNCATE ft2; -- ERROR\n> -ERROR: \"ft2\" is not a table\n> +ERROR: foreign-data wrapper \"dummy\" has no handler\n> You visibly just need to update the output because no handlers are\n> available for truncate in this case.\n>\nWhat error message is better in this case? 
It does not print \"ft2\" anywhere,\nso user may not notice that \"ft2\" is the source of the error.\nHow about 'foreign table \"ft2\" does not support truncate' ?\n\n> +void\n> +deparseTruncateSql(StringInfo buf, Relation rel)\n> +{\n> + deparseRelation(buf, rel);\n> +}\n> Don't see much point in having this routine.\n>\ndeparseRelation() is a static function in postgres_fdw/deparse.c\nOn the other hand, it may be better to move entire logic to construct\nremote TRUNCATE command in the deparse.c side like other commands.\n\n> + If FDW does not provide this callback, PostgreSQL considers\n> + <command>TRUNCATE</command> is not supported on the foreign table.\n> + </para>\n> This sentence is weird. Perhaps you meant \"as not supported\"?\n>\nYes.\nIf FDW does not provide this callback, PostgreSQL performs as if TRUNCATE\nis not supported on the foreign table.\n\n> + <literal>frels_list</literal> is a list of foreign tables that are\n> + connected to a particular foreign server; thus, these foreign tables\n> + should have identical foreign server ID\n> The list is built by the backend code, so that has to be true.\n>\n> + foreach (lc, frels_list)\n> + {\n> + Relation frel = lfirst(lc);\n> + Oid frel_oid = RelationGetRelid(frel);\n> +\n> + if (server_id == GetForeignServerIdByRelId(frel_oid))\n> + {\n> + frels_list = foreach_delete_current(frels_list, lc);\n> + curr_frels = lappend(curr_frels, frel);\n> + }\n> + }\n> Wouldn't it be better to fill in a hash table for each server with a\n> list of relations?\n>\nIt's just a matter of preference. A temporary hash-table with server-id\nand list of foreign-tables is an idea. Let me try to revise.\n\n> +typedef void (*ExecForeignTruncate_function) (List *frels_list,\n> + bool is_cascade,\n> + bool restart_seqs);\n> I would recommend to pass down directly DropBehavior instead of a\n> boolean to the callback. 
That's more extensible.\n>\nOk,\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Wed, 15 Jan 2020 23:33:07 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On Wed, Jan 15, 2020 at 11:33:07PM +0900, Kohei KaiGai wrote:\n> 2020年1月15日(水) 17:11 Michael Paquier <michael@paquier.xyz>:\n>> I have done a quick read through the patch. You have modified the\n>> patch to pass down to the callback a list of relation OIDs to execute\n>> one command for all, and there are tests for FKs so that coverage\n>> looks fine.\n>>\n>> Regression tests are failing with this patch:\n>> -- TRUNCATE doesn't work on foreign tables, either directly or\n>> recursively\n>> TRUNCATE ft2; -- ERROR\n>> -ERROR: \"ft2\" is not a table\n>> +ERROR: foreign-data wrapper \"dummy\" has no handler\n>> You visibly just need to update the output because no handlers are\n>> available for truncate in this case.\n>>\n> What error message is better in this case? It does not print \"ft2\" anywhere,\n> so user may not notice that \"ft2\" is the source of the error.\n> How about 'foreign table \"ft2\" does not support truncate' ?\n\nIt sounds to me that this message is kind of right. This FDW \"dummy\"\ndoes not have any FDW handler at all, so it complains about it.\nHaving no support for TRUNCATE is something that may happen after\nthat. 
Actually, this error message from your patch used for a FDW\nwhich has a handler but no TRUNCATE support could be reworked:\n+ if (!fdwroutine->ExecForeignTruncate)\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"\\\"%s\\\" is not a supported foreign table\",\n+ relname)));\nSomething like \"Foreign-data wrapper \\\"%s\\\" does not support\nTRUNCATE\"?\n\n>> + <literal>frels_list</literal> is a list of foreign tables that are\n>> + connected to a particular foreign server; thus, these foreign tables\n>> + should have identical foreign server ID\n>> The list is built by the backend code, so that has to be true.\n>>\n>> + foreach (lc, frels_list)\n>> + {\n>> + Relation frel = lfirst(lc);\n>> + Oid frel_oid = RelationGetRelid(frel);\n>> +\n>> + if (server_id == GetForeignServerIdByRelId(frel_oid))\n>> + {\n>> + frels_list = foreach_delete_current(frels_list, lc);\n>> + curr_frels = lappend(curr_frels, frel);\n>> + }\n>> + }\n>> Wouldn't it be better to fill in a hash table for each server with a\n>> list of relations?\n>\n> It's just a matter of preference. A temporary hash-table with server-id\n> and list of foreign-tables is an idea. Let me try to revise.\n\nThanks. It would not matter much for relations without inheritance\nchildren, but if truncating a partition tree with many foreign tables\nusing various FDWs that could matter performance-wise.\n--\nMichael",
"msg_date": "Thu, 16 Jan 2020 14:40:31 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "Hi,\n\nThe v3 patch updated the points below:\n- 2nd arg of ExecForeignTruncate was changed to DropBehavior, not bool\n- ExecuteTruncateGuts() uses a local hash table to track a pair of server-id\n and list of the foreign tables managed by the server.\n- Error message on truncate_check_rel() was revised as follows:\n \"foreign data wrapper \\\"%s\\\" on behalf of the foreign table \\\"%s\\\"\ndoes not support TRUNCATE\"\n- deparseTruncateSql() of postgres_fdw generates entire remote SQL as\nlike other commands.\n- Document SGML was updated.\n\nBest regards,\n\n2020年1月16日(木) 14:40 Michael Paquier <michael@paquier.xyz>:\n>\n> On Wed, Jan 15, 2020 at 11:33:07PM +0900, Kohei KaiGai wrote:\n> > 2020年1月15日(水) 17:11 Michael Paquier <michael@paquier.xyz>:\n> >> I have done a quick read through the patch. You have modified the\n> >> patch to pass down to the callback a list of relation OIDs to execute\n> >> one command for all, and there are tests for FKs so that coverage\n> >> looks fine.\n> >>\n> >> Regression tests are failing with this patch:\n> >> -- TRUNCATE doesn't work on foreign tables, either directly or\n> >> recursively\n> >> TRUNCATE ft2; -- ERROR\n> >> -ERROR: \"ft2\" is not a table\n> >> +ERROR: foreign-data wrapper \"dummy\" has no handler\n> >> You visibly just need to update the output because no handlers are\n> >> available for truncate in this case.\n> >>\n> > What error message is better in this case? It does not print \"ft2\" anywhere,\n> > so user may not notice that \"ft2\" is the source of the error.\n> > How about 'foreign table \"ft2\" does not support truncate' ?\n>\n> It sounds to me that this message is kind of right. This FDW \"dummy\"\n> does not have any FDW handler at all, so it complains about it.\n> Having no support for TRUNCATE is something that may happen after\n> that. 
Actually, this error message from your patch used for a FDW\n> which has a handler but no TRUNCATE support could be reworked:\n> + if (!fdwroutine->ExecForeignTruncate)\n> + ereport(ERROR,\n> + (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n> + errmsg(\"\\\"%s\\\" is not a supported foreign table\",\n> + relname)));\n> Something like \"Foreign-data wrapper \\\"%s\\\" does not support\n> TRUNCATE\"?\n>\n> >> + <literal>frels_list</literal> is a list of foreign tables that are\n> >> + connected to a particular foreign server; thus, these foreign tables\n> >> + should have identical foreign server ID\n> >> The list is built by the backend code, so that has to be true.\n> >>\n> >> + foreach (lc, frels_list)\n> >> + {\n> >> + Relation frel = lfirst(lc);\n> >> + Oid frel_oid = RelationGetRelid(frel);\n> >> +\n> >> + if (server_id == GetForeignServerIdByRelId(frel_oid))\n> >> + {\n> >> + frels_list = foreach_delete_current(frels_list, lc);\n> >> + curr_frels = lappend(curr_frels, frel);\n> >> + }\n> >> + }\n> >> Wouldn't it be better to fill in a hash table for each server with a\n> >> list of relations?\n> >\n> > It's just a matter of preference. A temporary hash-table with server-id\n> > and list of foreign-tables is an idea. Let me try to revise.\n>\n> Thanks. It would not matter much for relations without inheritance\n> children, but if truncating a partition tree with many foreign tables\n> using various FDWs that could matter performance-wise.\n> --\n> Michael\n\n\n\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Fri, 17 Jan 2020 22:49:41 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "Hello.\n\nAt Fri, 17 Jan 2020 22:49:41 +0900, Kohei KaiGai <kaigai@heterodb.com> wrote in \n> The v3 patch updated the points below:\n\nDid you attach it?\n\nregards.\n\n-- \nKyotaro Horiguchi\nNTT Open Source Software Center\n\n\n",
"msg_date": "Mon, 20 Jan 2020 11:07:54 +0900 (JST)",
"msg_from": "Kyotaro Horiguchi <horikyota.ntt@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "2020年1月20日(月) 11:09 Kyotaro Horiguchi <horikyota.ntt@gmail.com>:\n>\n> Hello.\n>\n> At Fri, 17 Jan 2020 22:49:41 +0900, Kohei KaiGai <kaigai@heterodb.com> wrote in\n> > The v3 patch updated the points below:\n>\n> Did you attach it?\n>\nSorry, it was a midnight job on Friday.\nPlease check the attached patch.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Mon, 20 Jan 2020 11:30:34 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 11:30:34AM +0900, Kohei KaiGai wrote:\n> Sorry, it was a midnight job on Friday.\n\nShould I be, err, worried about that? ;)\n\n> Please check the attached patch.\n\n+ switch (behavior)\n+ {\n+ case DROP_RESTRICT:\n+ appendStringInfoString(buf, \" RESTRICT\");\n+ break;\n+ case DROP_CASCADE:\n+ appendStringInfoString(buf, \" CASCADE\");\n+ break;\n+ default:\n+ elog(ERROR, \"Bug? unexpected DropBehavior (%d)\",\n(int)behavior);\n+ break;\n+ }\nHere, you can actually remove the default clause. By doing so,\ncompilation would generate a warning if a new value is added to\nDropBehavior if it is not listed. So anybody adding a new value to\nthe enum will need to think about this code path.\n\n+ ereport(ERROR,\n+ (errcode(ERRCODE_WRONG_OBJECT_TYPE),\n+ errmsg(\"foreign data wrapper \\\"%s\\\" on behalf of the\nforeign table \\\"%s\\\" does not support TRUNCATE\",\n+ fdw->fdwname, relname)));\nI see two problems here:\n- The error message is too complicated. I would just use \"cannot\ntruncate foreign table \\\"%s\\\"\".\n- The error code should be ERRCODE_FEATURE_NOT_SUPPORTED.\n\nThe docs for the FDW description can be improved. I found that a\nlarge portion of it used rather unclear English, and that things were\nnot clear enough regarding the use of a list of relations, when an\nerror is raised because ExecForeignTruncate is NULL, etc. I have also\ncut the last paragraph which is actually implementation-specific\n(think for example about callbacks at xact commit/abort time).\n\nDocumentation needs to be added to postgres_fdw about the truncation\nsupport. 
Particularly, providing details about the possibility to do\ntruncates in one shot for a set of relations so that dependencies are\nautomatically handled is an advantage to mention.\n\nThere is no need to include the truncate routine in\nForeignTruncateInfo, as the server OID can be used to find it.\n\nAnother thing is that I would prefer splitting the patch into two\nseparate commits, so attached are two patches:\n- 0001 for the addition of the in-core API\n- 0002 for the addition of support in postgres_fdw.\n\nI have spent a good amount of time polishing 0001, tweaking the docs,\ncomments, error messages and a bit its logic. I am getting\ncomfortable with it, but it still needs an extra lookup, an indent run\nwhich has some noise and I lacked of time today. 0002 has some of its\nissues fixed and I have not reviewed it fully yet. There are still\nsome places not adapted in it (why do you use \"Bug?\" in all your\nelog() messages by the way?), so the postgres_fdw part needs more\nattention. Could you think about some docs for it by the way?\n--\nMichael",
"msg_date": "Mon, 20 Jan 2020 22:50:21 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On Mon, Jan 20, 2020 at 10:50:21PM +0900, Michael Paquier wrote:\n> I have spent a good amount of time polishing 0001, tweaking the docs,\n> comments, error messages and a bit its logic. I am getting\n> comfortable with it, but it still needs an extra lookup, an indent run\n> which has some noise and I lacked of time today. 0002 has some of its\n> issues fixed and I have not reviewed it fully yet. There are still\n> some places not adapted in it (why do you use \"Bug?\" in all your\n> elog() messages by the way?), so the postgres_fdw part needs more\n> attention. Could you think about some docs for it by the way?\n\nI have more comments about the postgres_fdw that need to be\naddressed.\n\n+ if (!OidIsValid(server_id))\n+ {\n+ server_id = GetForeignServerIdByRelId(frel_oid);\n+ user = GetUserMapping(GetUserId(), server_id);\n+ conn = GetConnection(user, false);\n+ }\n+ else if (server_id != GetForeignServerIdByRelId(frel_oid))\n+ elog(ERROR, \"Bug? inconsistent Server-IDs were supplied\");\nI agree here that an elog() looks more adapted than an assert.\nHowever I would change the error message to be more like \"incorrect\nserver OID supplied by the TRUNCATE callback\" or something similar.\nThe server OID has to be valid anyway, so don't you just bypass any\nerrors if it is not set?\n\n+-- truncate two tables at a command\nWhat does this sentence mean? Isn't that \"truncate two tables in one\nsingle command\"?\n\nThe table names in the tests are rather hard to parse. 
I think that\nit would be better to avoid underscores at the beginning of the\nrelation names.\n\nIt would be nice to have some coverage with inheritance, and also\ntrack down in the tests what we expect when ONLY is specified in that\ncase (with and without ONLY, both parent and child relations).\n\nAnyway, attached is the polished version for 0001 that I would be fine\nto commit, except for one point: are there objections if we do not\nhave extra handling for ONLY when it comes to foreign tables with\ninheritance? As the patch stands, the list of relations is first\nbuilt, with an inheritance recursive lookup done depending on if ONLY\nis used or not. Hence, if using \"TRUNCATE ONLY foreign_tab, ONLY\nforeign_tab2\", then only those two tables would be passed down to the\nFDW. If ONLY is removed, both tables as well as their children are\nadded to the lists of relations split by server OID. One problem is\nthat this could be confusing for some users I guess? For example,\nwith a 1:1 mapping in the schema of the local and remote servers, a\nuser asking for TRUNCATE ONLY foreign_tab would pass down to the\nremote just the equivalent of \"TRUNCATE foreign_tab\" using\npostgres_fdw, causing the full inheritance tree to be truncated on the\nremote side. The concept of ONLY mixed with inherited foreign tables\nis rather blurry (point raised by Stephen upthread). \n--\nMichael",
"msg_date": "Tue, 21 Jan 2020 15:38:03 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "2020年1月21日(火) 15:38 Michael Paquier <michael@paquier.xyz>:\n>\n> On Mon, Jan 20, 2020 at 10:50:21PM +0900, Michael Paquier wrote:\n> > I have spent a good amount of time polishing 0001, tweaking the docs,\n> > comments, error messages and a bit its logic. I am getting\n> > comfortable with it, but it still needs an extra lookup, an indent run\n> > which has some noise and I lacked of time today. 0002 has some of its\n> > issues fixed and I have not reviewed it fully yet. There are still\n> > some places not adapted in it (why do you use \"Bug?\" in all your\n> > elog() messages by the way?), so the postgres_fdw part needs more\n> > attention. Could you think about some docs for it by the way?\n>\n> I have more comments about the postgres_fdw that need to be\n> addressed.\n>\n> + if (!OidIsValid(server_id))\n> + {\n> + server_id = GetForeignServerIdByRelId(frel_oid);\n> + user = GetUserMapping(GetUserId(), server_id);\n> + conn = GetConnection(user, false);\n> + }\n> + else if (server_id != GetForeignServerIdByRelId(frel_oid))\n> + elog(ERROR, \"Bug? inconsistent Server-IDs were supplied\");\n> I agree here that an elog() looks more adapted than an assert.\n> However I would change the error message to be more like \"incorrect\n> server OID supplied by the TRUNCATE callback\" or something similar.\n> The server OID has to be valid anyway, so don't you just bypass any\n> errors if it is not set?\n>\n> +-- truncate two tables at a command\n> What does this sentence mean? Isn't that \"truncate two tables in one\n> single command\"?\n>\n> The table names in the tests are rather hard to parse. 
I think that\n> it would be better to avoid underscores at the beginning of the\n> relation names.\n>\n> It would be nice to have some coverage with inheritance, and also\n> track down in the tests what we expect when ONLY is specified in that\n> case (with and without ONLY, both parent and child relations).\n>\n> Anyway, attached is the polished version for 0001 that I would be fine\n> to commit, except for one point: are there objections if we do not\n> have extra handling for ONLY when it comes to foreign tables with\n> inheritance? As the patch stands, the list of relations is first\n> built, with an inheritance recursive lookup done depending on if ONLY\n> is used or not. Hence, if using \"TRUNCATE ONLY foreign_tab, ONLY\n> foreign_tab2\", then only those two tables would be passed down to the\n> FDW. If ONLY is removed, both tables as well as their children are\n> added to the lists of relations split by server OID. One problem is\n> that this could be confusing for some users I guess? For example,\n> with a 1:1 mapping in the schema of the local and remote servers, a\n> user asking for TRUNCATE ONLY foreign_tab would pass down to the\n> remote just the equivalent of \"TRUNCATE foreign_tab\" using\n> postgres_fdw, causing the full inheritance tree to be truncated on the\n> remote side. The concept of ONLY mixed with inherited foreign tables\n> is rather blurry (point raised by Stephen upthread).\n>\nHmm. Do we need to deliver another list to inform why these relations are\ntruncated?\nIf callback is invoked with a foreign-relation that is specified by TRUNCATE\ncommand with ONLY, it seems to me reasonable that remote TRUNCATE\ncommand specifies the relation on behalf of the foreign table with ONLY.\n\nForeign-tables can be truncated because ...\n1. it is specified by user with ONLY-clause.\n2. it is specified by user without ONLY-clause.\n3. it is inherited child of the relations specified at 2.\n4. 
it depends on the relations picked up at 1-3.\n\nSo, if ExecForeignTruncate() has another list to inform the context for each\nrelation, postgres_fdw can build a proper remote query that may specify the\nremote tables with ONLY-clause.\n\nRegarding the other comments, it's all OK for me. I'll update the patch.\nAlso, I forgot the \"updatable\" option at postgres_fdw. It should be checked on\ntruncate as well, right?\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>\n\n\n",
"msg_date": "Mon, 27 Jan 2020 23:08:36 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "On Mon, Jan 27, 2020 at 11:08:36PM +0900, Kohei KaiGai wrote:\n> Hmm. Do we need to deliver another list to inform why these relations are\n> truncated?\n\nI got to think more about this one, and being able to control ONLY on\na per-relation basis would be the least surprising approach for the\ncommands generated. But at least this avoids truncating a full\ninheritance tree on a remote cluster even if ONLY is specified\nlocally. Note that I'd like to assume that most applications have a\n1:1 mapping in their schemas between a local and remote cluster, but\nthat's most likely not always the case ;)\n\n> If callback is invoked with a foreign-relation that is specified by TRUNCATE\n> command with ONLY, it seems to me reasonable that remote TRUNCATE\n> command specifies the relation on behalf of the foreign table with ONLY.\n>\n> So, if ExecForeignTruncate() has another list to inform the context for each\n> relation, postgres_fdw can build a proper remote query that may specify the\n> remote tables with ONLY-clause.\n\nYeah, TRUNCATE can specify ONLY on a per-table basis, so having a\nsecond list makes sense. Then in the FDW, just make sure to\nelog(ERROR) if the lengths do not match, and then use forboth() to loop\nover them. One thing that you need to be careful about is that tables\nwhich are added to the list because of inheritance should not be\nmarked with ONLY when generating the command to the remote.\n\n> Regarding the other comments, it's all OK for me. I'll update the patch.\n> Also, I forgot the \"updatable\" option at postgres_fdw. It should be checked on\n> truncate as well, right?\n\nHmm. Good point. Being able to filter that silently through a\nconfiguration parameter is kind of interesting. Now I think that this\nshould be a separate option because updatable applies to DMLs. Like,\ntruncatable?\n\nFor now, as the patch needs more work for its implementation, docs and\ntests, I am marking it as returned with feedback. \n--\nMichael",
"msg_date": "Tue, 28 Jan 2020 13:03:46 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "Hello,\n\nThe attached is the revised version.\n\n> > If callback is invoked with a foreign-relation that is specified by TRUNCATE\n> > command with ONLY, it seems to me reasonable that remote TRUNCATE\n> > command specifies the relation on behalf of the foreign table with ONLY.\n> >\n> > So, if ExecForeignTruncate() has another list to inform the context for each\n> > relation, postgres_fdw can build a proper remote query that may specify the\n> > remote tables with ONLY-clause.\n>\n> Yeah, TRUNCATE can specify ONLY on a per-table basis, so having a\n> second list makes sense. Then in the FDW, just make sure to\n> elog(ERROR) if the lengths do not match, and then use forboth() to loop\n> over them. One thing that you need to be careful about is that tables\n> which are added to the list because of inheritance should not be\n> marked with ONLY when generating the command to the remote.\n>\nThe v5 patch adds a separate list for the FDW callback, to inform the context\nin which relations are specified by the TRUNCATE command. The frels_extra\nargument is a list of integers: 0 means the relevant foreign table is specified\nwithout the \"ONLY\" clause, and a positive value means it is specified with the\n\"ONLY\" clause. A negative value means the foreign tables are not specified in\nthe TRUNCATE command, but truncated due to a dependency (like a partition's\nchild leaf).\n\nThe remote SQL generates the TRUNCATE command according to the above\n\"extra\" information. 
So, \"TRUNCATE ONLY ftable\" generates a remote query\nwith \"TRUNCATE ONLY mapped_remote_table\".\nOn the other hand, it can produce strange results, although it is a corner case.\nThe example below shows the result of TRUNCATE ONLY on a foreign table\nthat maps a remote table with inherited children.\nThe rows with id < 10 belong to the parent table, thus TRUNCATE ONLY tru_ftable\neliminated the remote parent; however, it looks like tru_ftable still\ncontains rows after the TRUNCATE command.\n\nI wonder whether this behavior is acceptable for users. Of course, the \"ONLY\"\nclause controls the local hierarchy of partitioned / inherited tables; however,\nI'm not certain whether the concept should be extended to the structure of\nremote tables.\n\n+SELECT * FROM tru_ftable;\n+ id | x\n+----+----------------------------------\n+ 5 | e4da3b7fbbce2345d7772b0674a318d5\n+ 6 | 1679091c5a880faf6fb5e6087eb1b2dc\n+ 7 | 8f14e45fceea167a5a36dedd4bea2543\n+ 8 | c9f0f895fb98ab9159f51fd0297e236d\n+ 9 | 45c48cce2e2d7fbdea1afc51c7c6ad26\n+ 10 | d3d9446802a44259755d38e6d163e820\n+ 11 | 6512bd43d9caa6e02c990b0a82652dca\n+ 12 | c20ad4d76fe97759aa27a0c99bff6710\n+ 13 | c51ce410c124a10e0db5e4b97fc2af39\n+ 14 | aab3238922bcc25a6f606eb525ffdc56\n+(10 rows)\n+\n+TRUNCATE ONLY tru_ftable; -- truncate only parent portion\n+SELECT * FROM tru_ftable;\n+ id | x\n+----+----------------------------------\n+ 10 | d3d9446802a44259755d38e6d163e820\n+ 11 | 6512bd43d9caa6e02c990b0a82652dca\n+ 12 | c20ad4d76fe97759aa27a0c99bff6710\n+ 13 | c51ce410c124a10e0db5e4b97fc2af39\n+ 14 | aab3238922bcc25a6f606eb525ffdc56\n+(5 rows)\n\n> > Regarding the other comments, it's all OK for me. I'll update the patch.\n> > Also, I forgot the \"updatable\" option at postgres_fdw. It should be checked on\n> > truncate as well, right?\n>\n> Hmm. Good point. Being able to filter that silently through a\n> configuration parameter is kind of interesting. Now I think that this\n> should be a separate option because updatable applies to DMLs. 
Like,\n> truncatable?\n>\nOK, the \"truncatable\" option was added.\nPlease check the regression test and documentation updates.\n\nBest regards,\n-- \nHeteroDB, Inc / The PG-Strom Project\nKaiGai Kohei <kaigai@heterodb.com>",
"msg_date": "Sun, 1 Mar 2020 11:24:22 +0900",
"msg_from": "Kohei KaiGai <kaigai@heterodb.com>",
"msg_from_op": true,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "> On 1 Mar 2020, at 03:24, Kohei KaiGai <kaigai@heterodb.com> wrote:\n\n> The attached is revised version.\n\nThis version fails to apply to HEAD due to conflicts in postgres_fdw expected\ntest output. Can you please submit a rebased version. Marking the entry\nWaiting on Author in the meantime.\n\ncheers ./daniel\n\n\n",
"msg_date": "Thu, 2 Jul 2020 16:40:49 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
},
{
"msg_contents": "> On 2 Jul 2020, at 16:40, Daniel Gustafsson <daniel@yesql.se> wrote:\n> \n>> On 1 Mar 2020, at 03:24, Kohei KaiGai <kaigai@heterodb.com> wrote:\n> \n>> The attached is revised version.\n> \n> This version fails to apply to HEAD due to conflicts in postgres_fdw expected\n> test output. Can you please submit a rebased version. Marking the entry\n> Waiting on Author in the meantime.\n\nAs this has stalled, I've marked this patch Returned with Feedback. Feel free\nto open a new entry if you return to this patch.\n\ncheers ./daniel\n\n",
"msg_date": "Mon, 3 Aug 2020 00:19:06 +0200",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: TRUNCATE on foreign tables"
}
] |
[
{
"msg_contents": "Hi,\n\nI found a crash in pg_restore; it occurs when there is a failure before\nall the child workers are created. A backtrace is given below:\n#0 0x00007f9c6d31e337 in raise () from /lib64/libc.so.6\n#1 0x00007f9c6d31fa28 in abort () from /lib64/libc.so.6\n#2 0x00007f9c6d317156 in __assert_fail_base () from /lib64/libc.so.6\n#3 0x00007f9c6d317202 in __assert_fail () from /lib64/libc.so.6\n#4 0x0000000000407c9e in WaitForTerminatingWorkers (pstate=0x14af7f0) at\nparallel.c:515\n#5 0x0000000000407bf9 in ShutdownWorkersHard (pstate=0x14af7f0) at\nparallel.c:451\n#6 0x0000000000407ae9 in archive_close_connection (code=1, arg=0x6315a0\n<shutdown_info>) at parallel.c:368\n#7 0x000000000041a7c7 in exit_nicely (code=1) at pg_backup_utils.c:99\n#8 0x0000000000408180 in ParallelBackupStart (AH=0x14972e0) at\nparallel.c:967\n#9 0x000000000040a3dd in RestoreArchive (AHX=0x14972e0) at\npg_backup_archiver.c:661\n#10 0x0000000000404125 in main (argc=6, argv=0x7ffd5146f308) at\npg_restore.c:443\n\nThe problem is as follows:\n\n - The variable pstate->numWorkers is initially set to the requested number\n of workers in ParallelBackupStart.\n - Then the workers are created one by one.\n - Before all the processes are created, there is a failure.\n - The parent then terminates the child processes and waits for all of\n them to terminate.\n - WaitForTerminatingWorkers checks whether all processes have\n terminated by calling HasEveryWorkerTerminated.\n - HasEveryWorkerTerminated will always return false because it checks\n against numWorkers rather than the actual number of forked processes, and\n then hits the assert \"Assert(j < pstate->numWorkers);\".\n\n\nThe attached patch fixes this by setting\npstate->numWorkers to the actual worker count as each child process is\ncreated.\n\nThoughts?\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Wed, 1 Jan 2020 09:20:39 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "pg_restore crash when there is a failure before all child process is\n created"
},
{
"msg_contents": "On Wed, Jan 29, 2020 at 6:54 PM Ahsan Hadi <ahsan.hadi@gmail.com> wrote:\n>\n> Hi Vignesh,\n>\n> Can you share a test case or steps that you are using to reproduce this issue? Are you reproducing this using a debugger?\n>\n\nI could reproduce with the following steps:\nMake cluster setup.\nCreate few tables.\nTake a dump in directory format using pg_dump.\nRestore the dump generated above using pg_restore with very high\nnumber for --jobs options around 600.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 30 Jan 2020 09:54:17 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_restore crash when there is a failure before all child process\n is created"
},
{
"msg_contents": "vignesh C <vignesh21@gmail.com> writes:\n> On Wed, Jan 29, 2020 at 6:54 PM Ahsan Hadi <ahsan.hadi@gmail.com> wrote:\n>> Can you share a test case or steps that you are using to reproduce this issue? Are you reproducing this using a debugger?\n\n> I could reproduce with the following steps:\n> Make cluster setup.\n> Create few tables.\n> Take a dump in directory format using pg_dump.\n> Restore the dump generated above using pg_restore with very high\n> number for --jobs options around 600.\n\nI agree this is quite broken. Another way to observe the crash is\nto make the fork() call randomly fail, as per booby-trap-fork.patch\nbelow (not intended for commit, obviously).\n\nI don't especially like the proposed patch, though, as it introduces\na great deal of confusion into what ParallelState.numWorkers means.\nI think it's better to leave that as being the allocated array size,\nand instead clean up all the fuzzy thinking about whether workers\nare actually running or not. Like 0001-fix-worker-status.patch below.\n\n\t\t\tregards, tom lane",
"msg_date": "Thu, 30 Jan 2020 14:39:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore crash when there is a failure before all child process\n is created"
},
{
"msg_contents": "I have applied and tested both patches separately and ran regression with both.\nNo new test cases are failing with either patch.\n\nThe issue is fixed by both patches; however, the fix from Tom looks more\nelegant. I haven't done a detailed code review.\n\nOn Fri, Jan 31, 2020 at 12:39 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n> vignesh C <vignesh21@gmail.com> writes:\n> > On Wed, Jan 29, 2020 at 6:54 PM Ahsan Hadi <ahsan.hadi@gmail.com> wrote:\n> >> Can you share a test case or steps that you are using to reproduce this\n> issue? Are you reproducing this using a debugger?\n>\n> > I could reproduce with the following steps:\n> > Make cluster setup.\n> > Create few tables.\n> > Take a dump in directory format using pg_dump.\n> > Restore the dump generated above using pg_restore with very high\n> > number for --jobs options around 600.\n>\n> I agree this is quite broken. Another way to observe the crash is\n> to make the fork() call randomly fail, as per booby-trap-fork.patch\n> below (not intended for commit, obviously).\n>\n> I don't especially like the proposed patch, though, as it introduces\n> a great deal of confusion into what ParallelState.numWorkers means.\n> I think it's better to leave that as being the allocated array size,\n> and instead clean up all the fuzzy thinking about whether workers\n> are actually running or not. Like 0001-fix-worker-status.patch below.\n>\n> regards, tom lane\n>\n>\n\n-- \nHighgo Software (Canada/China/Pakistan)\nURL : http://www.highgo.ca\nADDR: 10318 WHALLEY BLVD, Surrey, BC\nEMAIL: mailto: ahsan.hadi@highgo.ca",
"msg_date": "Fri, 31 Jan 2020 15:15:37 +0500",
"msg_from": "Ahsan Hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore crash when there is a failure before all child process\n is created"
},
{
"msg_contents": "On Fri, Jan 31, 2020 at 1:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> vignesh C <vignesh21@gmail.com> writes:\n> > On Wed, Jan 29, 2020 at 6:54 PM Ahsan Hadi <ahsan.hadi@gmail.com> wrote:\n> >> Can you share a test case or steps that you are using to reproduce this issue? Are you reproducing this using a debugger?\n>\n> > I could reproduce with the following steps:\n> > Make cluster setup.\n> > Create few tables.\n> > Take a dump in directory format using pg_dump.\n> > Restore the dump generated above using pg_restore with very high\n> > number for --jobs options around 600.\n>\n> I agree this is quite broken. Another way to observe the crash is\n> to make the fork() call randomly fail, as per booby-trap-fork.patch\n> below (not intended for commit, obviously).\n>\n> I don't especially like the proposed patch, though, as it introduces\n> a great deal of confusion into what ParallelState.numWorkers means.\n> I think it's better to leave that as being the allocated array size,\n> and instead clean up all the fuzzy thinking about whether workers\n> are actually running or not. Like 0001-fix-worker-status.patch below.\n>\n\nThe patch looks fine to me. The test is also getting fixed by the patch.\n\nRegards,\nVignesh\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 31 Jan 2020 16:43:07 +0530",
"msg_from": "vignesh C <vignesh21@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_restore crash when there is a failure before all child process\n is created"
},
{
"msg_contents": "The following review has been posted through the commitfest application:\nmake installcheck-world: tested, passed\nImplements feature: not tested\nSpec compliant: tested, passed\nDocumentation: not tested\n\nI have applied and tested both patches separately and ran regression with both. No new test cases are failing with either patch.\r\n\r\nThe issue is fixed by both patches; however, the fix from Tom (0001-fix-worker-status.patch) looks more elegant. I haven't done a detailed code review.",
"msg_date": "Fri, 31 Jan 2020 12:54:33 +0000",
"msg_from": "ahsan hadi <ahsan.hadi@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore crash when there is a failure before all child process\n is created"
},
{
"msg_contents": "ahsan hadi <ahsan.hadi@gmail.com> writes:\n> I have applied and tested both patches separately and ran regression with both. No new test cases are failing with either patch.\n> The issue is fixed by both patches; however, the fix from Tom (0001-fix-worker-status.patch) looks more elegant. I haven't done a detailed code review.\n\nPushed, thanks for looking!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 31 Jan 2020 14:45:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_restore crash when there is a failure before all child process\n is created"
}
] |
[
{
"msg_contents": "Hello hackers,\n\nBased on ideas from earlier discussions[1][2], here is an experimental\nWIP patch to improve recovery speed by prefetching blocks. If you set\nwal_prefetch_distance to a positive distance, measured in bytes, then\nthe recovery loop will look ahead in the WAL and call PrefetchBuffer()\nfor referenced blocks. This can speed things up with cold caches\n(example: after a server reboot) and working sets that don't fit in\nmemory (example: large scale pgbench).\n\nResults vary, but in contrived larger-than-memory pgbench crash\nrecovery experiments on a Linux development system, I've seen recovery\nrunning as much as 20x faster with full_page_writes=off and\nwal_prefetch_distance=8kB. FPWs reduce the potential speed-up as\ndiscussed in the other thread.\n\nSome notes:\n\n* PrefetchBuffer() is only beneficial if your kernel and filesystem\nhave a working POSIX_FADV_WILLNEED implementation. That includes\nLinux ext4 and xfs, but excludes macOS and Windows. In future we\nmight use asynchronous I/O to bring data all the way into our own\nbuffer pool; hopefully the PrefetchBuffer() interface wouldn't change\nmuch and this code would automatically benefit.\n\n* For now, for proof-of-concept purposes, the patch uses a second\nXLogReader to read ahead in the WAL. I am thinking about how to write\na two-cursor XLogReader that reads and decodes each record just once.\n\n* It can handle simple crash recovery and streaming replication\nscenarios, but doesn't yet deal with complications like timeline\nchanges (the way to do that might depend on how the previous point\nworks out). 
The integration with WAL receiver probably needs some\nwork, I've been testing pretty narrow cases so far, and the way I\nhijacked read_local_xlog_page() probably isn't right.\n\n* On filesystems with block size <= BLCKSZ, it's a waste of a syscall\nto try to prefetch a block that we have a FPW for, but otherwise it\ncan avoid a later stall due to a read-before-write at pwrite() time,\nso I added a second GUC wal_prefetch_fpw to make that optional.\n\nEarlier work, and how this patch compares:\n\n* Sean Chittenden wrote pg_prefaulter[1], an external process that\nuses worker threads to pread() referenced pages some time before\nrecovery does, and demonstrated very good speed-up, triggering a lot\nof discussion of this topic. My WIP patch differs mainly in that it's\nintegrated with PostgreSQL, and it uses POSIX_FADV_WILLNEED rather\nthan synchronous I/O from worker threads/processes. Sean wouldn't\nhave liked my patch much because he was working on ZFS and that\ndoesn't support POSIX_FADV_WILLNEED, but with a small patch to ZFS it\nworks pretty well, and I'll try to get that upstreamed.\n\n* Konstantin Knizhnik proposed a dedicated PostgreSQL process that\nwould do approximately the same thing[2]. My WIP patch differs mainly\nin that it does the prefetching work in the recovery loop itself, and\nuses PrefetchBuffer() rather than FilePrefetch() directly. This\navoids a bunch of communication and complications, but admittedly does\nintroduce new system calls into a hot loop (for now); perhaps I could\npay for that by removing more lseek(SEEK_END) noise. It also deals\nwith various edge cases relating to created, dropped and truncated\nrelations a bit differently. 
It also tries to avoid generating\nsequential WILLNEED advice, based on experimental evidence[3] that\nthat affects Linux's readahead heuristics negatively, though I don't\nunderstand the exact mechanism there.\n\nHere are some cases where I expect this patch to perform badly:\n\n* Your WAL has multiple intermixed sequential access streams (ie\nsequential access to N different relations), so that sequential access\nis not detected, and then all the WILLNEED advice prevents Linux's\nautomagic readahead from working well. Perhaps that could be\nmitigated by having a system that can detect up to N concurrent\nstreams, where N is more than the current 1, or by flagging buffers in\nthe WAL as part of a sequential stream. I haven't looked into this.\n\n* The data is always found in our buffer pool, so PrefetchBuffer() is\ndoing nothing useful and you might as well not be calling it or doing\nthe extra work that leads up to that. Perhaps that could be mitigated\nwith an adaptive approach: too many PrefetchBuffer() hits and we stop\ntrying to prefetch, too many XLogReadBufferForRedo() misses and we\nstart trying to prefetch. That might work nicely for systems that\nstart out with cold caches but eventually warm up. I haven't looked\ninto this.\n\n* The data is actually always in the kernel's cache, so the advice is\na waste of a syscall. That might imply that you should probably be\nrunning with a larger shared_buffers (?). 
It's technically possible\nto ask the operating system if a region is cached on many systems,\nwhich could in theory be used for some kind of adaptive heuristic that\nwould disable pointless prefetching, but I'm not proposing that.\nUltimately this problem would be avoided by moving to true async I/O,\nwhere we'd be initiating the read all the way into our buffers (ie it\nreplaces the later pread() so it's a wash, at worst).\n\n* The prefetch distance is set too low so that pread() waits are not\navoided, or your storage subsystem can't actually perform enough\nconcurrent I/O to get ahead of the random access pattern you're\ngenerating, so no distance would be far enough ahead. To help with\nthe former case, perhaps we could invent something smarter than a\nuser-supplied distance (something like \"N cold block references\nahead\", possibly using effective_io_concurrency, rather than \"N bytes\nahead\").\n\n[1] https://www.pgcon.org/2018/schedule/track/Case%20Studies/1204.en.html\n[2] https://www.postgresql.org/message-id/flat/49df9cd2-7086-02d0-3f8d-535a32d44c82%40postgrespro.ru\n[3] https://github.com/macdice/some-io-tests",
"msg_date": "Thu, 2 Jan 2020 02:39:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 02:39:04AM +1300, Thomas Munro wrote:\n>Hello hackers,\n>\n>Based on ideas from earlier discussions[1][2], here is an experimental\n>WIP patch to improve recovery speed by prefetching blocks. If you set\n>wal_prefetch_distance to a positive distance, measured in bytes, then\n>the recovery loop will look ahead in the WAL and call PrefetchBuffer()\n>for referenced blocks. This can speed things up with cold caches\n>(example: after a server reboot) and working sets that don't fit in\n>memory (example: large scale pgbench).\n>\n\nThanks, I only did a very quick review so far, but the patch looks fine.\n\n>Results vary, but in contrived larger-than-memory pgbench crash\n>recovery experiments on a Linux development system, I've seen recovery\n>running as much as 20x faster with full_page_writes=off and\n>wal_prefetch_distance=8kB. FPWs reduce the potential speed-up as\n>discussed in the other thread.\n>\n\nOK, so how did you test that? 
I'll do some tests with a traditional\nstreaming replication setup, multiple sessions on the primary (and maybe\na weaker storage system on the replica). I suppose that's another setup\nthat should benefit from this.\n\n> ...\n>\n>Earlier work, and how this patch compares:\n>\n>* Sean Chittenden wrote pg_prefaulter[1], an external process that\n>uses worker threads to pread() referenced pages some time before\n>recovery does, and demonstrated very good speed-up, triggering a lot\n>of discussion of this topic. My WIP patch differs mainly in that it's\n>integrated with PostgreSQL, and it uses POSIX_FADV_WILLNEED rather\n>than synchronous I/O from worker threads/processes. Sean wouldn't\n>have liked my patch much because he was working on ZFS and that\n>doesn't support POSIX_FADV_WILLNEED, but with a small patch to ZFS it\n>works pretty well, and I'll try to get that upstreamed.\n>\n\nHow long would it take to get the POSIX_FADV_WILLNEED to ZFS systems, if\neverything goes fine? I'm not sure what's the usual life-cycle, but I\nassume it may take a couple years to get it on most production systems.\n\nWhat other common filesystems are missing support for this?\n\nPresumably we could do what Sean's extension does, i.e. use a couple of\nbgworkers, each doing simple pread() calls. Of course, that's\nunnecessarily complicated on systems that have FADV_WILLNEED.\n\n> ...\n>\n>Here are some cases where I expect this patch to perform badly:\n>\n>* Your WAL has multiple intermixed sequential access streams (ie\n>sequential access to N different relations), so that sequential access\n>is not detected, and then all the WILLNEED advice prevents Linux's\n>automagic readahead from working well. Perhaps that could be\n>mitigated by having a system that can detect up to N concurrent\n>streams, where N is more than the current 1, or by flagging buffers in\n>the WAL as part of a sequential stream. 
I haven't looked into this.\n>\n\nHmmm, wouldn't it be enough to prefetch blocks in larger batches (not\none by one), and doing some sort of sorting? That should allow readahead\nto kick in.\n\n>* The data is always found in our buffer pool, so PrefetchBuffer() is\n>doing nothing useful and you might as well not be calling it or doing\n>the extra work that leads up to that. Perhaps that could be mitigated\n>with an adaptive approach: too many PrefetchBuffer() hits and we stop\n>trying to prefetch, too many XLogReadBufferForRedo() misses and we\n>start trying to prefetch. That might work nicely for systems that\n>start out with cold caches but eventually warm up. I haven't looked\n>into this.\n>\n\nI think the question is what's the cost of doing such unnecessary\nprefetch. Presumably it's fairly cheap, especially compared to the\nopposite case (not prefetching a block not in shared buffers). I wonder\nhow expensive would the adaptive logic be on cases that never need a\nprefetch (i.e. datasets smaller than shared_buffers).\n\n>* The data is actually always in the kernel's cache, so the advice is\n>a waste of a syscall. That might imply that you should probably be\n>running with a larger shared_buffers (?). It's technically possible\n>to ask the operating system if a region is cached on many systems,\n>which could in theory be used for some kind of adaptive heuristic that\n>would disable pointless prefetching, but I'm not proposing that.\n>Ultimately this problem would be avoided by moving to true async I/O,\n>where we'd be initiating the read all the way into our buffers (ie it\n>replaces the later pread() so it's a wash, at worst).\n>\n\nMakes sense.\n\n>* The prefetch distance is set too low so that pread() waits are not\n>avoided, or your storage subsystem can't actually perform enough\n>concurrent I/O to get ahead of the random access pattern you're\n>generating, so no distance would be far enough ahead. 
To help with\n>the former case, perhaps we could invent something smarter than a\n>user-supplied distance (something like \"N cold block references\n>ahead\", possibly using effective_io_concurrency, rather than \"N bytes\n>ahead\").\n>\n\nIn general, I find it quite non-intuitive to configure prefetching by\nspecifying WAL distance. I mean, how would you know what's a good value?\nIf you know the storage hardware, you probably know the optimal queue\ndepth, i.e. you know the number of requests to get best throughput.\n\nBut how do you deduce the WAL distance from that? I don't know. Plus\nright after the checkpoint the WAL contains FPWs, reducing the number of\nblocks in a given amount of WAL (compared to right before a checkpoint).\nSo I expect users might pick an unnecessarily high WAL distance. OTOH with\nFPWs we don't quite need aggressive prefetching, right?\n\nCould we instead specify the number of blocks to prefetch? We'd probably\nneed to track additional details needed to determine the number of blocks to\nprefetch (essentially the LSN for all prefetch requests).\n\nAnother thing to consider might be skipping recently prefetched blocks.\nConsider you have a loop that does DML, where each statement creates a\nseparate WAL record, but it can easily touch the same block over and\nover (say inserting to the same page). That means the prefetches are\nnot really needed, but I'm not sure how expensive it really is.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 2 Jan 2020 19:10:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 7:10 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Thu, Jan 02, 2020 at 02:39:04AM +1300, Thomas Munro wrote:\n> >Based on ideas from earlier discussions[1][2], here is an experimental\n> >WIP patch to improve recovery speed by prefetching blocks. If you set\n> >wal_prefetch_distance to a positive distance, measured in bytes, then\n> >the recovery loop will look ahead in the WAL and call PrefetchBuffer()\n> >for referenced blocks. This can speed things up with cold caches\n> >(example: after a server reboot) and working sets that don't fit in\n> >memory (example: large scale pgbench).\n> >\n>\n> Thanks, I only did a very quick review so far, but the patch looks fine.\n\nThanks for looking!\n\n> >Results vary, but in contrived larger-than-memory pgbench crash\n> >recovery experiments on a Linux development system, I've seen recovery\n> >running as much as 20x faster with full_page_writes=off and\n> >wal_prefetch_distance=8kB. FPWs reduce the potential speed-up as\n> >discussed in the other thread.\n>\n> OK, so how did you test that? I'll do some tests with a traditional\n> streaming replication setup, multiple sessions on the primary (and maybe\n> a weaker storage system on the replica). 
I suppose that's another setup\n> that should benefit from this.\n\nUsing a 4GB RAM 16 thread virtual machine running Linux debian10\n4.19.0-6-amd64 with an ext4 filesystem on NVMe storage:\n\npostgres -D pgdata \\\n -c full_page_writes=off \\\n -c checkpoint_timeout=60min \\\n -c max_wal_size=10GB \\\n -c synchronous_commit=off\n\n# in another shell\npgbench -i -s300 postgres\npsql postgres -c checkpoint\npgbench -T60 -Mprepared -c4 -j4 postgres\nkillall -9 postgres\n\n# save the crashed pgdata dir for repeated experiments\nmv pgdata pgdata-save\n\n# repeat this with values like wal_prefetch_distance=-1, 1kB, 8kB, 64kB, ...\nrm -fr pgdata\ncp -r pgdata-save pgdata\npostgres -D pgdata -c wal_prefetch_distance=-1\n\nWhat I see on my desktop machine is around 10x speed-up:\n\nwal_prefetch_distance=-1 -> 62s (same number for unpatched)\nwal_prefetch_distance=8kb -> 6s\nwal_prefetch_distance=64kB -> 5s\n\nOn another dev machine I managed to get a 20x speedup, using a much\nlonger test. It's probably more interesting to try out some more\nrealistic workloads rather than this cache-destroying uniform random\nstuff, though. It might be interesting to test on systems with high\nrandom read latency, but high concurrency; I can think of a bunch of\nnetwork storage environments where that's the case, but I haven't\nlooked into them, beyond some toy testing with (non-Linux) NFS over a\nslow network (results were promising).\n\n> >Earlier work, and how this patch compares:\n> >\n> >* Sean Chittenden wrote pg_prefaulter[1], an external process that\n> >uses worker threads to pread() referenced pages some time before\n> >recovery does, and demonstrated very good speed-up, triggering a lot\n> >of discussion of this topic. My WIP patch differs mainly in that it's\n> >integrated with PostgreSQL, and it uses POSIX_FADV_WILLNEED rather\n> >than synchronous I/O from worker threads/processes. 
Sean wouldn't\n> >have liked my patch much because he was working on ZFS and that\n> >doesn't support POSIX_FADV_WILLNEED, but with a small patch to ZFS it\n> >works pretty well, and I'll try to get that upstreamed.\n> >\n>\n> How long would it take to get the POSIX_FADV_WILLNEED to ZFS systems, if\n> everything goes fine? I'm not sure what's the usual life-cycle, but I\n> assume it may take a couple years to get it on most production systems.\n\nAssuming they like it enough to commit it (and initial informal\nfeedback on the general concept has been positive -- it's not messing\nwith their code at all, it's just boilerplate code to connect the\nrelevant Linux and FreeBSD VFS callbacks), it could indeed be quite a\nwhile before it appear in conservative package repos, but I don't\nknow, it depends where you get your OpenZFS/ZoL module from.\n\n> What other common filesystems are missing support for this?\n\nUsing our build farm as a way to know which operating systems we care\nabout as a community, in no particular order:\n\n* I don't know for exotic or network filesystems on Linux\n* AIX 7.2's manual says \"Valid option, but this value does not perform\nany action\" for every kind of advice except POSIX_FADV_NOWRITEBEHIND\n(huh, nonstandard advice).\n* Solaris's posix_fadvise() was a dummy libc function, as of 10 years\nago when they closed the source; who knows after that.\n* FreeBSD's UFS and NFS support other advice through a default handler\nbut unfortunately ignore WILLNEED (I have patches for those too, not\ngood enough to send anywhere yet).\n* OpenBSD has no such syscall\n* NetBSD has the syscall, and I can see that it's hooked up to\nreadahead code, so that's probably the only unqualified yes in this\nlist\n* Windows has no equivalent syscall; the closest thing might be to use\nReadFileEx() to initiate an async read into a dummy buffer; maybe you\ncan use a zero event so it doesn't even try to tell you when the I/O\ncompletes, if you don't care?\n* macOS 
has no such syscall, but you could in theory do an aio_read()\ninto a dummy buffer. On the other hand I don't think that interface\nis a general solution for POSIX systems, because on at least Linux and\nSolaris, aio_read() is emulated by libc with a whole bunch of threads\nand we are allergic to those things (and even if we weren't, we\nwouldn't want a whole threadpool in every PostgreSQL process, so you'd\nneed to hand off to a worker process, and then why bother?).\n* HPUX, I don't know\n\nWe could test any of those with a simple test I wrote[1], but I'm not\nlikely to test any non-open-source OS myself due to lack of access.\nAmazingly, HPUX's posix_fadvise() doesn't appear to conform to POSIX:\nit sets errno and returns -1, while POSIX says that it should return\nan error number. Checking our source tree, I see that in\npg_flush_data(), we also screwed that up and expect errno to be set,\nthough we got it right in FilePrefetch().\n\nIn any case, Linux must be at the very least 90% of PostgreSQL\ninstallations. Incidentally, sync_file_range() without wait is a sort\nof opposite of WILLNEED (it means something like\n\"POSIX_FADV_WILLSYNC\"), and no one seem terribly upset that we really\nonly have that on Linux (the emulations are pretty poor AFAICS).\n\n> Presumably we could do what Sean's extension does, i.e. use a couple of\n> bgworkers, each doing simple pread() calls. Of course, that's\n> unnecessarily complicated on systems that have FADV_WILLNEED.\n\nThat is a good idea, and I agree. I have a patch set that does\nexactly that. It's nearly independent of the WAL prefetch work; it\njust changes how PrefetchBuffer() is implemented, affecting bitmap\nindex scans, vacuum and any future user of PrefetchBuffer. 
If you\napply these patches too then WAL prefetch will use it (just set\nmax_background_readers = 4 or whatever):\n\nhttps://github.com/postgres/postgres/compare/master...macdice:bgreader\n\nThat's simplified from an abandoned patch I had lying around because I\nwas experimenting with prefetching all the way into shared buffers\nthis way. The simplified version just does pread() into a dummy\nbuffer, for the side effect of warming the kernel's cache, pretty much\nlike pg_prefaulter. There are some tricky questions around whether\nit's better to wait or not when the request queue is full; the way I\nhave that is far too naive, and that question is probably related to\nyour point about being cleverer about how many prefetch blocks you\nshould try to have in flight. A future version of PrefetchBuffer()\nmight lock the buffer then tell the worker (or some kernel async I/O\nfacility) to write the data into the buffer. If I understand\ncorrectly, to make that work we need Robert's IO lock/condition\nvariable transplant[2], and Andres's scheme for a suitable\ninterlocking protocol, and no doubt some bulletproof cleanup\nmachinery. I'm not working on any of that myself right now because I\ndon't want to step on Andres's toes.\n\n> >Here are some cases where I expect this patch to perform badly:\n> >\n> >* Your WAL has multiple intermixed sequential access streams (ie\n> >sequential access to N different relations), so that sequential access\n> >is not detected, and then all the WILLNEED advice prevents Linux's\n> >automagic readahead from working well. Perhaps that could be\n> >mitigated by having a system that can detect up to N concurrent\n> >streams, where N is more than the current 1, or by flagging buffers in\n> >the WAL as part of a sequential stream. I haven't looked into this.\n> >\n>\n> Hmmm, wouldn't it be enough to prefetch blocks in larger batches (not\n> one by one), and doing some sort of sorting? 
That should allow readahead\n> to kick in.\n\nYeah, but I don't want to do too much work in the startup process, or\nget too opinionated about how the underlying I/O stack works. I think\nwe'd need to do things like that in a direct I/O future, but we'd\nprobably offload it (?). I figured the best approach for early work\nin this space would be to just get out of the way if we detect\nsequential access.\n\n> >* The data is always found in our buffer pool, so PrefetchBuffer() is\n> >doing nothing useful and you might as well not be calling it or doing\n> >the extra work that leads up to that. Perhaps that could be mitigated\n> >with an adaptive approach: too many PrefetchBuffer() hits and we stop\n> >trying to prefetch, too many XLogReadBufferForRedo() misses and we\n> >start trying to prefetch. That might work nicely for systems that\n> >start out with cold caches but eventually warm up. I haven't looked\n> >into this.\n> >\n>\n> I think the question is what's the cost of doing such unnecessary\n> prefetch. Presumably it's fairly cheap, especially compared to the\n> opposite case (not prefetching a block not in shared buffers). I wonder\n> how expensive would the adaptive logic be on cases that never need a\n> prefetch (i.e. datasets smaller than shared_buffers).\n\nHmm. It's basically a buffer map probe. I think the adaptive logic\nwould probably be some kind of periodically resetting counter scheme,\nbut you're probably right to suspect that it might not even be worth\nbothering with, especially if a single XLogReader can be made to do\nthe readahead with no real extra cost. 
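Just to sketch the sort of periodically resetting counter scheme I have in mind (hypothetical names and thresholds; I haven't implemented any of this):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical adaptive throttle: count how many recent prefetch
 * attempts found the block already cached, and temporarily stop
 * prefetching when nearly all of them did.  The counters reset at the
 * end of each window, so the decision can flip back once the cache
 * goes cold again (for example, the working set stops fitting in memory).
 */
#define ADAPT_WINDOW	1024	/* attempts per decision window */
#define ADAPT_THRESHOLD	1000	/* hits at/above this => stop prefetching */

typedef struct PrefetchAdaptState
{
	int		attempts;		/* attempts seen in the current window */
	int		hits;			/* attempts that found the buffer cached */
	bool	enabled;		/* current decision */
} PrefetchAdaptState;

/* Record one attempt; returns whether prefetching should stay enabled. */
static bool
prefetch_adapt(PrefetchAdaptState *state, bool buffer_was_cached)
{
	state->attempts++;
	if (buffer_was_cached)
		state->hits++;

	if (state->attempts >= ADAPT_WINDOW)
	{
		/* End of window: decide for the next window, then reset. */
		state->enabled = (state->hits < ADAPT_THRESHOLD);
		state->attempts = 0;
		state->hits = 0;
	}
	return state->enabled;
}
```

Whether that bookkeeping is cheaper than just probing the buffer table every time is exactly the question, of course.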
Perhaps we should work on\nmaking the cost of all prefetching overheads as low as possible first,\nbefore trying to figure out whether it's worth building a system for\navoiding it.\n\n> >* The prefetch distance is set too low so that pread() waits are not\n> >avoided, or your storage subsystem can't actually perform enough\n> >concurrent I/O to get ahead of the random access pattern you're\n> >generating, so no distance would be far enough ahead. To help with\n> >the former case, perhaps we could invent something smarter than a\n> >user-supplied distance (something like \"N cold block references\n> >ahead\", possibly using effective_io_concurrency, rather than \"N bytes\n> >ahead\").\n> >\n>\n> In general, I find it quite non-intuitive to configure prefetching by\n> specifying WAL distance. I mean, how would you know what's a good value?\n> If you know the storage hardware, you probably know the optimal queue\n> depth i.e. you know you the number of requests to get best throughput.\n\nFWIW, on pgbench tests on flash storage I've found that 1KB only helps\na bit, 8KB is great, and more than that doesn't get any better. Of\ncourse, this is meaningless in general; a zipfian workload might need\nto look a lot further head than a uniform one to find anything worth\nprefetching, and that's exactly what you're complaining about, and I\nagree.\n\n> But how do you deduce the WAL distance from that? I don't know. Plus\n> right after the checkpoint the WAL contains FPW, reducing the number of\n> blocks in a given amount of WAL (compared to right before a checkpoint).\n> So I expect users might pick unnecessarily high WAL distance. OTOH with\n> FPW we don't quite need agressive prefetching, right?\n\nYeah, so you need to be touching blocks more than once between\ncheckpoints, if you want to see speed-up on a system with blocks <=\nBLCKSZ and FPW on. If checkpoints are far enough apart you'll\neventually run out of FPWs and start replaying non-FPW stuff. 
Or you\ncould be on a filesystem with larger blocks than PostgreSQL.\n\n> Could we instead specify the number of blocks to prefetch? We'd probably\n> need to track additional details needed to determine number of blocks to\n> prefetch (essentially LSN for all prefetch requests).\n\nYeah, I think you're right, we should probably try to make a little\nqueue to track LSNs and count prefetch requests in and out. I think\nyou'd also want PrefetchBuffer() to tell you if the block was already\nin the buffer pool, so that you don't count blocks that it decided not\nto prefetch. I guess PrefetchBuffer() needs to return an enum (I\nalready had it returning a bool for another purpose relating to an\nedge case in crash recovery, when relations have been dropped by a\nlater WAL record). I will think about that.\n\n> Another thing to consider might be skipping recently prefetched blocks.\n> Consider you have a loop that does DML, where each statement creates a\n> separate WAL record, but it can easily touch the same block over and\n> over (say inserting to the same page). That means the prefetches are\n> not really needed, but I'm not sure how expensive it really is.\n\nThere are two levels of defence against repeatedly prefetching the\nsame block: PrefetchBuffer() checks for blocks that are already in our\ncache, and before that, PrefetchState remembers the last block so that\nwe can avoid fetching that block (or the following block).\n\n[1] https://github.com/macdice/some-io-tests\n[2] https://www.postgresql.org/message-id/CA%2BTgmoaj2aPti0yho7FeEf2qt-JgQPRWb0gci_o1Hfr%3DC56Xng%40mail.gmail.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 17:57:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 5:57 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Fri, Jan 3, 2020 at 7:10 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> > Could we instead specify the number of blocks to prefetch? We'd probably\n> > need to track additional details needed to determine number of blocks to\n> > prefetch (essentially LSN for all prefetch requests).\n\nHere is a new WIP version of the patch set that does that. Changes:\n\n1. It now uses effective_io_concurrency to control how many\nconcurrent prefetches to allow. It's possible that we should have a\ndifferent GUC to control \"maintenance\" users of concurrency I/O as\ndiscussed elsewhere[1], but I'm staying out of that for now; if we\nagree to do that for VACUUM etc, we can change it easily here. Note\nthat the value is percolated through the ComputeIoConcurrency()\nfunction which I think we should discuss, but again that's off topic,\nI just want to use the standard infrastructure here.\n\n2. You can now change the relevant GUCs (wal_prefetch_distance,\nwal_prefetch_fpw, effective_io_concurrency) at runtime and reload for\nthem to take immediate effect. For example, you can enable the\nfeature on a running replica by setting wal_prefetch_distance=8kB\n(from the default of -1, which means off), and something like\neffective_io_concurrency=10, and telling the postmaster to reload.\n\n3. The new code is moved out to a new file\nsrc/backend/access/transam/xlogprefetcher.c, to minimise new bloat in\nthe mighty xlog.c file. Functions were renamed to make their purpose\nclearer, and a lot of comments were added.\n\n4. The WAL receiver now exposes the current 'write' position via an\natomic value in shared memory, so we don't need to hammer the WAL\nreceiver's spinlock.\n\n5. There is some rudimentary user documentation of the GUCs.\n\n[1] https://www.postgresql.org/message-id/13619.1557935593%40sss.pgh.pa.us",
"msg_date": "Wed, 12 Feb 2020 19:52:42 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Feb 12, 2020 at 7:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> 1. It now uses effective_io_concurrency to control how many\n> concurrent prefetches to allow. It's possible that we should have a\n> different GUC to control \"maintenance\" users of concurrency I/O as\n> discussed elsewhere[1], but I'm staying out of that for now; if we\n> agree to do that for VACUUM etc, we can change it easily here. Note\n> that the value is percolated through the ComputeIoConcurrency()\n> function which I think we should discuss, but again that's off topic,\n> I just want to use the standard infrastructure here.\n\nI started a separate thread[1] to discuss that GUC, because it's\nbasically an independent question. Meanwhile, here's a new version of\nthe WAL prefetch patch, with the following changes:\n\n1. A monitoring view:\n\n postgres=# select * from pg_stat_wal_prefetcher ;\n prefetch | skip_hit | skip_new | skip_fpw | skip_seq | distance | queue_depth\n ----------+----------+----------+----------+----------+----------+-------------\n 95854 | 291458 | 435 | 0 | 26245 | 261800 | 10\n (1 row)\n\nThat shows a bunch of counters for blocks prefetched and skipped for\nvarious reasons. It also shows the current read-ahead distance (in\nbytes of WAL) and queue depth (an approximation of how many I/Os might\nbe in flight, used for rate limiting; I'm struggling to come up with a\nbetter short name for this). This can be used to see the effects of\nexperiments with different settings, eg:\n\n alter system set effective_io_concurrency = 20;\n alter system set wal_prefetch_distance = '256kB';\n select pg_reload_conf();\n\n2. A log message when WAL prefetching begins and ends, so you can see\nwhat it did during crash recovery:\n\n LOG: WAL prefetch finished at 0/C5E98758; prefetch = 1112628,\nskip_hit = 3607540,\n skip_new = 45592, skip_fpw = 0, skip_seq = 177049, avg_distance =\n247907.942532,\n avg_queue_depth = 22.261352\n\n3. 
A bit of general user documentation.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGJUw08dPs_3EUcdO6M90GnjofPYrWp4YSLaBkgYwS-AqA%40mail.gmail.com",
"msg_date": "Mon, 2 Mar 2020 18:43:23 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "I tried my luck at a quick read of this patchset.\nI didn't manage to go over 0005 though, but I agree with Tomas that\nhaving this be configurable in terms of bytes of WAL is not very\nuser-friendly.\n\nFirst of all, let me join the crowd chanting that this is badly needed;\nI don't need to repeat what Chittenden's talk showed. \"WAL recovery is\nnow 10x-20x times faster\" would be a good item for pg13 press release, \nI think.\n\n> From a61b4e00c42ace5db1608e02165f89094bf86391 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Tue, 3 Dec 2019 17:13:40 +1300\n> Subject: [PATCH 1/5] Allow PrefetchBuffer() to be called with a SMgrRelation.\n> \n> Previously a Relation was required, but it's annoying to have\n> to create a \"fake\" one in recovery.\n\nLGTM.\n\nIt's a pity to have to include smgr.h in bufmgr.h. Maybe it'd be sane\nto use a forward struct declaration and \"struct SMgrRelation *\" instead.\n\n\n> From acbff1444d0acce71b0218ce083df03992af1581 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <tmunro@postgresql.org>\n> Date: Mon, 9 Dec 2019 17:10:17 +1300\n> Subject: [PATCH 2/5] Rename GetWalRcvWriteRecPtr() to GetWalRcvFlushRecPtr().\n> \n> The new name better reflects the fact that the value it returns\n> is updated only when received data has been flushed to disk.\n> \n> An upcoming patch will make use of the latest data that was\n> written without waiting for it to be flushed, so use more\n> precise function names.\n\nUgh. 
(Not for your patch -- I mean for the existing naming convention).\nIt would make sense to rename WalRcvData->receivedUpto in this commit,\nmaybe to flushedUpto.\n\n\n> From d7fa7d82c5f68d0cccf441ce9e8dfa40f64d3e0d Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <tmunro@postgresql.org>\n> Date: Mon, 9 Dec 2019 17:22:07 +1300\n> Subject: [PATCH 3/5] Add WalRcvGetWriteRecPtr() (new definition).\n> \n> A later patch will read received WAL to prefetch referenced blocks,\n> without waiting for the data to be flushed to disk. To do that,\n> it needs to be able to see the write pointer advancing in shared\n> memory.\n> \n> The function formerly bearing name was recently renamed to\n> WalRcvGetFlushRecPtr(), which better described what it does.\n\n> +\tpg_atomic_init_u64(&WalRcv->writtenUpto, 0);\n\nUmm, how come you're using WalRcv here instead of walrcv? I would flag\nthis patch for sneaky nastiness if this weren't mostly harmless. (I\nthink we should do away with local walrcv pointers altogether. But that\nshould be a separate patch, I think.)\n\n> +\tpg_atomic_uint64 writtenUpto;\n\nAre we already using uint64s for XLogRecPtrs anywhere? This seems\nnovel. Given this, I wonder if the comment near \"mutex\" needs an\nupdate (\"except where atomics are used\"), or perhaps just move the\nmember to after the line with mutex.\n\n\nI didn't understand the purpose of inc_counter() as written. Why not\njust pg_atomic_fetch_add_u64(..., 1)?\n\n> /*\n> *\tsmgrprefetch() -- Initiate asynchronous read of the specified block of a relation.\n> + *\n> + *\t\tIn recovery only, this can return false to indicate that a file\n> + *\t\tdoesn't\texist (presumably it has been dropped by a later WAL\n> + *\t\trecord).\n> */\n> -void\n> +bool\n> smgrprefetch(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum)\n\nI think this API, where the behavior of a low-level module changes\ndepending on InRecovery, is confusingly crazy. 
I'd rather have the\ncallers specifying whether they're OK with a file that doesn't exist.\n\n> +extern PrefetchBufferResult SharedPrefetchBuffer(SMgrRelation smgr_reln,\n> +\t\t\t\t\t\t\t\t\t\t\t\t ForkNumber forkNum,\n> +\t\t\t\t\t\t\t\t\t\t\t\t BlockNumber blockNum);\n> extern void PrefetchBuffer(Relation reln, ForkNumber forkNum,\n> \t\t\t\t\t\t BlockNumber blockNum);\n\nUmm, I would keep the return values of both these functions in sync.\nIt's really strange that PrefetchBuffer does not return\nPrefetchBufferResult, don't you think?\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Fri, 13 Mar 2020 18:15:09 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi Alvaro,\n\nOn Sat, Mar 14, 2020 at 10:15 AM Alvaro Herrera\n<alvherre@2ndquadrant.com> wrote:\n> I tried my luck at a quick read of this patchset.\n\nThanks! Here's a new patch set, and some inline responses to your feedback:\n\n> I didn't manage to go over 0005 though, but I agree with Tomas that\n> having this be configurable in terms of bytes of WAL is not very\n> user-friendly.\n\nThe primary control is now maintenance_io_concurrency, which is\nbasically what Tomas suggested.\n\nThe byte-based control is just a cap to prevent it reading a crazy\ndistance ahead, that also functions as the on/off switch for the\nfeature. In this version I've added \"max\" to the name, to make that\nclearer.\n\n> First of all, let me join the crowd chanting that this is badly needed;\n> I don't need to repeat what Chittenden's talk showed. \"WAL recovery is\n> now 10x-20x times faster\" would be a good item for pg13 press release,\n> I think.\n\nWe should be careful about over-promising here: Sean basically had a\nbest case scenario for this type of technology, partly due to his 16kB\nfilesystem blocks. Common results may be a lot more pedestrian,\nthough it could get more interesting if we figure out how to get rid\nof FPWs...\n\n> > From a61b4e00c42ace5db1608e02165f89094bf86391 Mon Sep 17 00:00:00 2001\n> > From: Thomas Munro <thomas.munro@gmail.com>\n> > Date: Tue, 3 Dec 2019 17:13:40 +1300\n> > Subject: [PATCH 1/5] Allow PrefetchBuffer() to be called with a SMgrRelation.\n> >\n> > Previously a Relation was required, but it's annoying to have\n> > to create a \"fake\" one in recovery.\n>\n> LGTM.\n>\n> It's a pity to have to include smgr.h in bufmgr.h. Maybe it'd be sane\n> to use a forward struct declaration and \"struct SMgrRelation *\" instead.\n\nOK, done.\n\nWhile staring at this, I decided that SharedPrefetchBuffer() was a\nweird word order, so I changed it to PrefetchSharedBuffer(). 
Then, by\nanalogy, I figured I should also change the pre-existing function\nLocalPrefetchBuffer() to PrefetchLocalBuffer(). Do you think this is\nan improvement?\n\n> > From acbff1444d0acce71b0218ce083df03992af1581 Mon Sep 17 00:00:00 2001\n> > From: Thomas Munro <tmunro@postgresql.org>\n> > Date: Mon, 9 Dec 2019 17:10:17 +1300\n> > Subject: [PATCH 2/5] Rename GetWalRcvWriteRecPtr() to GetWalRcvFlushRecPtr().\n> >\n> > The new name better reflects the fact that the value it returns\n> > is updated only when received data has been flushed to disk.\n> >\n> > An upcoming patch will make use of the latest data that was\n> > written without waiting for it to be flushed, so use more\n> > precise function names.\n>\n> Ugh. (Not for your patch -- I mean for the existing naming convention).\n> It would make sense to rename WalRcvData->receivedUpto in this commit,\n> maybe to flushedUpto.\n\nOk, I renamed that variable and a related one. There are more things\nyou could rename if you pull on that thread some more, including\npg_stat_wal_receiver's received_lsn column, but I didn't do that in\nthis patch.\n\n> > From d7fa7d82c5f68d0cccf441ce9e8dfa40f64d3e0d Mon Sep 17 00:00:00 2001\n> > From: Thomas Munro <tmunro@postgresql.org>\n> > Date: Mon, 9 Dec 2019 17:22:07 +1300\n> > Subject: [PATCH 3/5] Add WalRcvGetWriteRecPtr() (new definition).\n> >\n> > A later patch will read received WAL to prefetch referenced blocks,\n> > without waiting for the data to be flushed to disk. To do that,\n> > it needs to be able to see the write pointer advancing in shared\n> > memory.\n> >\n> > The function formerly bearing name was recently renamed to\n> > WalRcvGetFlushRecPtr(), which better described what it does.\n>\n> > + pg_atomic_init_u64(&WalRcv->writtenUpto, 0);\n>\n> Umm, how come you're using WalRcv here instead of walrcv? I would flag\n> this patch for sneaky nastiness if this weren't mostly harmless. (I\n> think we should do away with local walrcv pointers altogether. 
But that\n> should be a separate patch, I think.)\n\nOK, done.\n\n> > + pg_atomic_uint64 writtenUpto;\n>\n> Are we already using uint64s for XLogRecPtrs anywhere? This seems\n> novel. Given this, I wonder if the comment near \"mutex\" needs an\n> update (\"except where atomics are used\"), or perhaps just move the\n> member to after the line with mutex.\n\nMoved.\n\nWe use [u]int64 in various places in the replication code. Ideally\nI'd have a magic way to say atomic<XLogRecPtr> so I didn't have to\nassume that pg_atomic_uint64 is the right atomic integer width and\nsignedness, but here we are. In dsa.h I made a special typedef for\nthe atomic version of something else, but that's because the size of\nthat thing varied depending on the build, whereas our LSNs are of a\nfixed width that ought to be en... <trails off>.\n\n> I didn't understand the purpose of inc_counter() as written. Why not\n> just pg_atomic_fetch_add_u64(..., 1)?\n\nI didn't want counters that wrap at ~4 billion, but I did want to be\nable to read and write concurrently without tearing. Instructions\nlike \"lock xadd\" would provide more guarantees that I don't need,\nsince only one thread is doing all the writing and there's no ordering\nrequirement. It's basically just counter++, but some platforms need a\nspinlock to perform atomic read and write of 64 bit wide numbers, so\nmore hoop jumping is required.\n\n> > /*\n> > * smgrprefetch() -- Initiate asynchronous read of the specified block of a relation.\n> > + *\n> > + * In recovery only, this can return false to indicate that a file\n> > + * doesn't exist (presumably it has been dropped by a later WAL\n> > + * record).\n> > */\n> > -void\n> > +bool\n> > smgrprefetch(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum)\n>\n> I think this API, where the behavior of a low-level module changes\n> depending on InRecovery, is confusingly crazy. I'd rather have the\n> callers specifying whether they're OK with a file that doesn't exist.\n\nHmm. 
But... md.c has other code like that. It's true that I'm adding\nInRecovery awareness to a function that didn't previously have it, but\nthat's just because we previously had no reason to prefetch stuff in\nrecovery.\n\n> > +extern PrefetchBufferResult SharedPrefetchBuffer(SMgrRelation smgr_reln,\n> > + ForkNumber forkNum,\n> > + BlockNumber blockNum);\n> > extern void PrefetchBuffer(Relation reln, ForkNumber forkNum,\n> > BlockNumber blockNum);\n>\n> Umm, I would keep the return values of both these functions in sync.\n> It's really strange that PrefetchBuffer does not return\n> PrefetchBufferResult, don't you think?\n\nAgreed, and changed. I suspect that other users of the main\nPrefetchBuffer() call will eventually want that, to do a better job of\nkeeping the request queue full, for example bitmap heap scan and\n(hypothetical) btree scan with prefetch.",
"msg_date": "Tue, 17 Mar 2020 19:32:55 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 2020-Mar-17, Thomas Munro wrote:\n\nHi Thomas\n\n> On Sat, Mar 14, 2020 at 10:15 AM Alvaro Herrera\n> <alvherre@2ndquadrant.com> wrote:\n\n> > I didn't manage to go over 0005 though, but I agree with Tomas that\n> > having this be configurable in terms of bytes of WAL is not very\n> > user-friendly.\n> \n> The primary control is now maintenance_io_concurrency, which is\n> basically what Tomas suggested.\n\n> The byte-based control is just a cap to prevent it reading a crazy\n> distance ahead, that also functions as the on/off switch for the\n> feature. In this version I've added \"max\" to the name, to make that\n> clearer.\n\nMumble. I guess I should wait to comment on this after reading 0005\nmore in depth.\n\n> > First of all, let me join the crowd chanting that this is badly needed;\n> > I don't need to repeat what Chittenden's talk showed. \"WAL recovery is\n> > now 10x-20x times faster\" would be a good item for pg13 press release,\n> > I think.\n> \n> We should be careful about over-promising here: Sean basically had a\n> best case scenario for this type of techology, partly due to his 16kB\n> filesystem blocks. Common results may be a lot more pedestrian,\n> though it could get more interesting if we figure out how to get rid\n> of FPWs...\n\nWell, in my mind it's an established fact that our WAL replay uses far\ntoo little of the available I/O speed. 
I guess if the system is\ngenerating little WAL, then this change will show no benefit, but that's\nnot the kind of system that cares about this anyway -- for the others,\nthe parallelisation gains will be substantial, I'm sure.\n\n> > > From a61b4e00c42ace5db1608e02165f89094bf86391 Mon Sep 17 00:00:00 2001\n> > > From: Thomas Munro <thomas.munro@gmail.com>\n> > > Date: Tue, 3 Dec 2019 17:13:40 +1300\n> > > Subject: [PATCH 1/5] Allow PrefetchBuffer() to be called with a SMgrRelation.\n> > >\n> > > Previously a Relation was required, but it's annoying to have\n> > > to create a \"fake\" one in recovery.\n\n> While staring at this, I decided that SharedPrefetchBuffer() was a\n> weird word order, so I changed it to PrefetchSharedBuffer(). Then, by\n> analogy, I figured I should also change the pre-existing function\n> LocalPrefetchBuffer() to PrefetchLocalBuffer(). Do you think this is\n> an improvement?\n\nLooks good. I doubt you'll break anything by renaming that routine.\n\n> > > From acbff1444d0acce71b0218ce083df03992af1581 Mon Sep 17 00:00:00 2001\n> > > From: Thomas Munro <tmunro@postgresql.org>\n> > > Date: Mon, 9 Dec 2019 17:10:17 +1300\n> > > Subject: [PATCH 2/5] Rename GetWalRcvWriteRecPtr() to GetWalRcvFlushRecPtr().\n> > >\n> > > The new name better reflects the fact that the value it returns\n> > > is updated only when received data has been flushed to disk.\n> > >\n> > > An upcoming patch will make use of the latest data that was\n> > > written without waiting for it to be flushed, so use more\n> > > precise function names.\n> >\n> > Ugh. (Not for your patch -- I mean for the existing naming convention).\n> > It would make sense to rename WalRcvData->receivedUpto in this commit,\n> > maybe to flushedUpto.\n> \n> Ok, I renamed that variable and a related one. 
There are more things\n> you could rename if you pull on that thread some more, including\n> pg_stat_wal_receiver's received_lsn column, but I didn't do that in\n> this patch.\n\n+1 for that approach. Maybe we'll want to rename the SQL-visible name,\nbut I wouldn't burden this patch with that, lest we lose the entire\nseries to that :-)\n\n> > > + pg_atomic_uint64 writtenUpto;\n> >\n> > Are we already using uint64s for XLogRecPtrs anywhere? This seems\n> > novel. Given this, I wonder if the comment near \"mutex\" needs an\n> > update (\"except where atomics are used\"), or perhaps just move the\n> > member to after the line with mutex.\n> \n> Moved.\n\nLGTM.\n\n> We use [u]int64 in various places in the replication code. Ideally\n> I'd have a magic way to say atomic<XLogRecPtr> so I didn't have to\n> assume that pg_atomic_uint64 is the right atomic integer width and\n> signedness, but here we are. In dsa.h I made a special typedef for\n> the atomic version of something else, but that's because the size of\n> that thing varied depending on the build, whereas our LSNs are of a\n> fixed width that ought to be en... <trails off>.\n\nLet's rewrite Postgres in Rust ...\n\n> > I didn't understand the purpose of inc_counter() as written. Why not\n> > just pg_atomic_fetch_add_u64(..., 1)?\n> \n> I didn't want counters that wrap at ~4 billion, but I did want to be\n> able to read and write concurrently without tearing. Instructions\n> like \"lock xadd\" would provide more guarantees that I don't need,\n> since only one thread is doing all the writing and there's no ordering\n> requirement. It's basically just counter++, but some platforms need a\n> spinlock to perform atomic read and write of 64 bit wide numbers, so\n> more hoop jumping is required.\n\nAh, I see, you don't want lock xadd ... That's non-obvious. 
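For the archives, here is how I now understand the intent, translated to C11 atomics (a sketch of the idea, not the patch's actual code): a plain relaxed load and store, with no atomic read-modify-write.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/*
 * Sketch of the single-writer counter bump: an ordinary (relaxed) load
 * and store rather than a locked fetch-add.  Safe only because exactly
 * one process ever writes the counter; concurrent readers still see
 * untorn 64-bit values.
 */
static inline void
sketch_inc_counter(_Atomic uint64_t *counter)
{
	uint64_t	v = atomic_load_explicit(counter, memory_order_relaxed);

	atomic_store_explicit(counter, v + 1, memory_order_relaxed);
}
```

That matches \"it's basically just counter++\", with the atomics only there to keep 64-bit reads and writes from tearing.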
I suppose\nthe function could use more commentary on *why* you're doing it that way\nthen.\n\n> > > /*\n> > > * smgrprefetch() -- Initiate asynchronous read of the specified block of a relation.\n> > > + *\n> > > + * In recovery only, this can return false to indicate that a file\n> > > + * doesn't exist (presumably it has been dropped by a later WAL\n> > > + * record).\n> > > */\n> > > -void\n> > > +bool\n> > > smgrprefetch(SMgrRelation reln, ForkNumber forknum, BlockNumber blocknum)\n> >\n> > I think this API, where the behavior of a low-level module changes\n> > depending on InRecovery, is confusingly crazy. I'd rather have the\n> > callers specifying whether they're OK with a file that doesn't exist.\n> \n> Hmm. But... md.c has other code like that. It's true that I'm adding\n> InRecovery awareness to a function that didn't previously have it, but\n> that's just because we previously had no reason to prefetch stuff in\n> recovery.\n\nTrue. I'm uncomfortable about it anyway. I also noticed that\n_mdfd_getseg() already has InRecovery-specific behavior flags.\nClearly that ship has sailed. Consider my objection^W comment withdrawn.\n\n> > Umm, I would keep the return values of both these functions in sync.\n> > It's really strange that PrefetchBuffer does not return\n> > PrefetchBufferResult, don't you think?\n> \n> Agreed, and changed. I suspect that other users of the main\n> PrefetchBuffer() call will eventually want that, to do a better job of\n> keeping the request queue full, for example bitmap heap scan and\n> (hypothetical) btree scan with prefetch.\n\nLGTM.\n\nAs before, I didn't get to reading 0005 in depth.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 17 Mar 2020 22:47:54 -0300",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Mar 18, 2020 at 2:47 PM Alvaro Herrera <alvherre@2ndquadrant.com> wrote:\n> On 2020-Mar-17, Thomas Munro wrote:\n> > I didn't want counters that wrap at ~4 billion, but I did want to be\n> > able to read and write concurrently without tearing. Instructions\n> > like \"lock xadd\" would provide more guarantees that I don't need,\n> > since only one thread is doing all the writing and there's no ordering\n> > requirement. It's basically just counter++, but some platforms need a\n> > spinlock to perform atomic read and write of 64 bit wide numbers, so\n> > more hoop jumping is required.\n>\n> Ah, I see, you don't want lock xadd ... That's non-obvious. I suppose\n> the function could use more commentary on *why* you're doing it that way\n> then.\n\nI updated the comment:\n\n+/*\n+ * On modern systems this is really just *counter++. On some older systems\n+ * there might be more to it, due to inability to read and write 64 bit values\n+ * atomically. The counters will only be written to by one process, and there\n+ * is no ordering requirement, so there's no point in using higher overhead\n+ * pg_atomic_fetch_add_u64().\n+ */\n+static inline void inc_counter(pg_atomic_uint64 *counter)\n\n> > > Umm, I would keep the return values of both these functions in sync.\n> > > It's really strange that PrefetchBuffer does not return\n> > > PrefetchBufferResult, don't you think?\n> >\n> > Agreed, and changed. I suspect that other users of the main\n> > PrefetchBuffer() call will eventually want that, to do a better job of\n> > keeping the request queue full, for example bitmap heap scan and\n> > (hypothetical) btree scan with prefetch.\n>\n> LGTM.\n\nHere's a new version that changes that part just a bit more, after a\nbrief chat with Andres about his async I/O plans. It seems clear that\nreturning an enum isn't very extensible, so I decided to try making\nPrefetchBufferResult a struct whose contents can be extended in the\nfuture. 
In this patch set it's still just used to distinguish 3 cases\n(hit, miss, no file), but it's now expressed as a buffer and a flag to\nindicate whether I/O was initiated. You could imagine that the second\nthing might be replaced by a pointer to an async I/O handle you can\nwait on or some other magical thing from the future.\n\nThe concept here is that eventually we'll have just one XLogReader for\nboth read ahead and recovery, and we could attach the prefetch results\nto the decoded records, and then recovery would try to use already\nlooked up buffers to avoid a bit of work (and then recheck). In other\nwords, the WAL would be decoded only once, and the buffers would\nhopefully be looked up only once, so you'd claw back all of the\noverheads of this patch. For now that's not done, and the buffer in\nthe result is only compared with InvalidBuffer to check if there was a\nhit or not.\n\nSimilar things could be done for bitmap heap scan and btree prefetch\nwith this interface: their prefetch machinery could hold onto these\nresults in their block arrays and try to avoid a more expensive\nReadBuffer() call if they already have a buffer (though as before,\nthere's a small chance it turns out to be the wrong one and they need\nto fall back to ReadBuffer()).\n\n> As before, I didn't get to reading 0005 in depth.\n\nUpdated to account for the above-mentioned change, and with a couple\nof elog() calls changed to ereport().",
"msg_date": "Wed, 18 Mar 2020 18:18:44 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2020-03-18 18:18:44 +1300, Thomas Munro wrote:\n> From 1b03eb5ada24c3b23ab8ca6db50e0c5d90d38259 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <tmunro@postgresql.org>\n> Date: Mon, 9 Dec 2019 17:22:07 +1300\n> Subject: [PATCH 3/5] Add WalRcvGetWriteRecPtr() (new definition).\n> \n> A later patch will read received WAL to prefetch referenced blocks,\n> without waiting for the data to be flushed to disk. To do that, it\n> needs to be able to see the write pointer advancing in shared memory.\n> \n> The function formerly bearing name was recently renamed to\n> WalRcvGetFlushRecPtr(), which better described what it does.\n\nHm. I'm a bit wary of reusing the name with a different meaning. If\nthere's any external references, this'll hide that they need to\nadapt. Perhaps, even if it's a bit clunky, name it GetUnflushedRecPtr?\n\n\n> From c62fde23f70ff06833d743a1c85716e15f3c813c Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Tue, 17 Mar 2020 17:26:41 +1300\n> Subject: [PATCH 4/5] Allow PrefetchBuffer() to report what happened.\n> \n> Report whether a prefetch was actually initiated due to a cache miss, so\n> that callers can limit the number of concurrent I/Os they try to issue,\n> without counting the prefetch calls that did nothing because the page\n> was already in our buffers.\n> \n> If the requested block was already cached, return a valid buffer. This\n> might enable future code to avoid a buffer mapping lookup, though it\n> will need to recheck the buffer before using it because it's not pinned\n> so could be reclaimed at any time.\n> \n> Report neither hit nor miss when a relation's backing file is missing,\n> to prepare for use during recovery. 
This will be used to handle cases\n> of relations that are referenced in the WAL but have been unlinked\n> already due to actions covered by WAL records that haven't been replayed\n> yet, after a crash.\n\nWe probably should take this into account in nodeBitmapHeapscan.c\n\n\n> diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> index d30aed6fd9..4ceb40a856 100644\n> --- a/src/backend/storage/buffer/bufmgr.c\n> +++ b/src/backend/storage/buffer/bufmgr.c\n> @@ -469,11 +469,13 @@ static int\tts_ckpt_progress_comparator(Datum a, Datum b, void *arg);\n> /*\n> * Implementation of PrefetchBuffer() for shared buffers.\n> */\n> -void\n> +PrefetchBufferResult\n> PrefetchSharedBuffer(struct SMgrRelationData *smgr_reln,\n> \t\t\t\t\t ForkNumber forkNum,\n> \t\t\t\t\t BlockNumber blockNum)\n> {\n> +\tPrefetchBufferResult result = { InvalidBuffer, false };\n> +\n> #ifdef USE_PREFETCH\n> \tBufferTag\tnewTag;\t\t/* identity of requested block */\n> \tuint32\t\tnewHash;\t/* hash value for newTag */\n> @@ -497,7 +499,23 @@ PrefetchSharedBuffer(struct SMgrRelationData *smgr_reln,\n> \n> \t/* If not in buffers, initiate prefetch */\n> \tif (buf_id < 0)\n> -\t\tsmgrprefetch(smgr_reln, forkNum, blockNum);\n> +\t{\n> +\t\t/*\n> +\t\t * Try to initiate an asynchronous read. This returns false in\n> +\t\t * recovery if the relation file doesn't exist.\n> +\t\t */\n> +\t\tif (smgrprefetch(smgr_reln, forkNum, blockNum))\n> +\t\t\tresult.initiated_io = true;\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/*\n> +\t\t * Report the buffer it was in at that time. 
The caller may be able\n> +\t\t * to avoid a buffer table lookup, but it's not pinned and it must be\n> +\t\t * rechecked!\n> +\t\t */\n> +\t\tresult.buffer = buf_id + 1;\n\nPerhaps it'd be better to name this \"last_buffer\" or such, to make it\nclearer that it may be outdated?\n\n\n> -void\n> +PrefetchBufferResult\n> PrefetchBuffer(Relation reln, ForkNumber forkNum, BlockNumber blockNum)\n> {\n> #ifdef USE_PREFETCH\n> @@ -540,13 +564,17 @@ PrefetchBuffer(Relation reln, ForkNumber forkNum, BlockNumber blockNum)\n> \t\t\t\t\t errmsg(\"cannot access temporary tables of other sessions\")));\n> \n> \t\t/* pass it off to localbuf.c */\n> -\t\tPrefetchLocalBuffer(reln->rd_smgr, forkNum, blockNum);\n> +\t\treturn PrefetchLocalBuffer(reln->rd_smgr, forkNum, blockNum);\n> \t}\n> \telse\n> \t{\n> \t\t/* pass it to the shared buffer version */\n> -\t\tPrefetchSharedBuffer(reln->rd_smgr, forkNum, blockNum);\n> +\t\treturn PrefetchSharedBuffer(reln->rd_smgr, forkNum, blockNum);\n> \t}\n> +#else\n> +\tPrefetchBuffer result = { InvalidBuffer, false };\n> +\n> +\treturn result;\n> #endif\t\t\t\t\t\t\t/* USE_PREFETCH */\n> }\n\nHm. Now that results are returned indicating whether the buffer is in\ns_b - shouldn't the return value be accurate regardless of USE_PREFETCH?\n\n\n\n> +/*\n> + * Type returned by PrefetchBuffer().\n> + */\n> +typedef struct PrefetchBufferResult\n> +{\n> +\tBuffer\t\tbuffer;\t\t\t/* If valid, a hit (recheck needed!) */\n\nI assume there's no user of this yet? Even if there's not, I wonder if\nit still is worth adding and referencing a helper to do so correctly?\n\n\n> From 42ba0a89260d46230ac0df791fae18bfdca0092f Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Wed, 18 Mar 2020 16:35:27 +1300\n> Subject: [PATCH 5/5] Prefetch referenced blocks during recovery.\n> \n> Introduce a new GUC max_wal_prefetch_distance. 
If it is set to a\n> positive number of bytes, then read ahead in the WAL at most that\n> distance, and initiate asynchronous reading of referenced blocks. The\n> goal is to avoid I/O stalls and benefit from concurrent I/O. The number\n> of concurrency asynchronous reads is capped by the existing\n> maintenance_io_concurrency GUC. The feature is disabled by default.\n> \n> Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>\n> Discussion:\n> https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com\n\nWhy is it disabled by default? Just for \"risk management\"?\n\n\n> + <varlistentry id=\"guc-max-wal-prefetch-distance\" xreflabel=\"max_wal_prefetch_distance\">\n> + <term><varname>max_wal_prefetch_distance</varname> (<type>integer</type>)\n> + <indexterm>\n> + <primary><varname>max_wal_prefetch_distance</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + The maximum distance to look ahead in the WAL during recovery, to find\n> + blocks to prefetch. Prefetching blocks that will soon be needed can\n> + reduce I/O wait times. 
The number of concurrent prefetches is limited\n> + by this setting as well as <xref linkend=\"guc-maintenance-io-concurrency\"/>.\n> + If this value is specified without units, it is taken as bytes.\n> + The default is -1, meaning that WAL prefetching is disabled.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nIs it worth noting that a too large distance could hurt, because the\nbuffers might get evicted again?\n\n\n> + <varlistentry id=\"guc-wal-prefetch-fpw\" xreflabel=\"wal_prefetch_fpw\">\n> + <term><varname>wal_prefetch_fpw</varname> (<type>boolean</type>)\n> + <indexterm>\n> + <primary><varname>wal_prefetch_fpw</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Whether to prefetch blocks with full page images during recovery.\n> + Usually this doesn't help, since such blocks will not be read. However,\n> + on file systems with a block size larger than\n> + <productname>PostgreSQL</productname>'s, prefetching can avoid a costly\n> + read-before-write when a blocks are later written.\n> + This setting has no effect unless\n> + <xref linkend=\"guc-max-wal-prefetch-distance\"/> is set to a positive number.\n> + The default is off.\n> + </para>\n> + </listitem>\n> + </varlistentry>\n\nHm. I think this needs more details - it's not clear enough what this\nactually controls. I assume it's about prefetching for WAL records that\ncontain the FPW, but it also could be read to be about not prefetching\nany pages that had FPWs before, or such?\n\n\n> </variablelist>\n> </sect2>\n> <sect2 id=\"runtime-config-wal-archiving\">\n> diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> index 987580d6df..df4291092b 100644\n> --- a/doc/src/sgml/monitoring.sgml\n> +++ b/doc/src/sgml/monitoring.sgml\n> @@ -320,6 +320,13 @@ postgres 27093 0.0 0.0 30096 2752 ? 
Ss 11:34 0:00 postgres: ser\n> </entry>\n> </row>\n> \n> + <row>\n> + <entry><structname>pg_stat_wal_prefetcher</structname><indexterm><primary>pg_stat_wal_prefetcher</primary></indexterm></entry>\n> + <entry>Only one row, showing statistics about blocks prefetched during recovery.\n> + See <xref linkend=\"pg-stat-wal-prefetcher-view\"/> for details.\n> + </entry>\n> + </row>\n> +\n\n'prefetcher' somehow sounds odd to me. I also suspect that we'll want to\nhave additional prefetching stat tables going forward. Perhaps\n'pg_stat_prefetch_wal'?\n\n\n> + <row>\n> + <entry><structfield>distance</structfield></entry>\n> + <entry><type>integer</type></entry>\n> + <entry>How far ahead of recovery the WAL prefetcher is currently reading, in bytes</entry>\n> + </row>\n> + <row>\n> + <entry><structfield>queue_depth</structfield></entry>\n> + <entry><type>integer</type></entry>\n> + <entry>How many prefetches have been initiated but are not yet known to have completed</entry>\n> + </row>\n> + </tbody>\n> + </tgroup>\n> + </table>\n\nIs there a way we could have a \"historical\" version of at least some of\nthese? An average queue depth, or such?\n\nIt'd be useful to somewhere track the time spent initiating prefetch\nrequests. Otherwise it's quite hard to evaluate whether the queue is too\ndeep (and just blocks in the OS).\n\nI think it'd be good to have a 'reset time' column.\n\n\n> + <para>\n> + The <structname>pg_stat_wal_prefetcher</structname> view will contain only\n> + one row. It is filled with nulls if recovery is not running or WAL\n> + prefetching is not enabled. See <xref linkend=\"guc-max-wal-prefetch-distance\"/>\n> + for more information. 
The counters in this view are reset whenever the\n> + <xref linkend=\"guc-max-wal-prefetch-distance\"/>,\n> + <xref linkend=\"guc-wal-prefetch-fpw\"/> or\n> + <xref linkend=\"guc-maintenance-io-concurrency\"/> setting is changed and\n> + the server configuration is reloaded.\n> + </para>\n> +\n\nSo pg_stat_reset_shared() cannot be used? If so, why?\n\nIt sounds like the counters aren't persisted via the stats system - if\nso, why?\n\n\n\n> @@ -7105,6 +7114,31 @@ StartupXLOG(void)\n> \t\t\t\t/* Handle interrupt signals of startup process */\n> \t\t\t\tHandleStartupProcInterrupts();\n> \n> +\t\t\t\t/*\n> +\t\t\t\t * The first time through, or if any relevant settings or the\n> +\t\t\t\t * WAL source changes, we'll restart the prefetching machinery\n> +\t\t\t\t * as appropriate. This is simpler than trying to handle\n> +\t\t\t\t * various complicated state changes.\n> +\t\t\t\t */\n> +\t\t\t\tif (unlikely(reset_wal_prefetcher))\n> +\t\t\t\t{\n> +\t\t\t\t\t/* If we had one already, destroy it. */\n> +\t\t\t\t\tif (prefetcher)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tXLogPrefetcherFree(prefetcher);\n> +\t\t\t\t\t\tprefetcher = NULL;\n> +\t\t\t\t\t}\n> +\t\t\t\t\t/* If we want one, create it. */\n> +\t\t\t\t\tif (max_wal_prefetch_distance > 0)\n> +\t\t\t\t\t\t\tprefetcher = XLogPrefetcherAllocate(xlogreader->ReadRecPtr,\n> +\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\tcurrentSource == XLOG_FROM_STREAM);\n> +\t\t\t\t\treset_wal_prefetcher = false;\n> +\t\t\t\t}\n\nDo we really need all of this code in StartupXLOG() itself? Could it be\nin HandleStartupProcInterrupts() or at least a helper routine called\nhere?\n\n\n> +\t\t\t\t/* Peform WAL prefetching, if enabled. */\n> +\t\t\t\tif (prefetcher)\n> +\t\t\t\t\tXLogPrefetcherReadAhead(prefetcher, xlogreader->ReadRecPtr);\n> +\n> \t\t\t\t/*\n> \t\t\t\t * Pause WAL replay, if requested by a hot-standby session via\n> \t\t\t\t * SetRecoveryPause().\n\nPersonally, I'd rather have the if () be in\nXLogPrefetcherReadAhead(). 
With an inline wrapper doing the check, if\nthe call bothers you (but I don't think it needs to).\n\n\n> +/*-------------------------------------------------------------------------\n> + *\n> + * xlogprefetcher.c\n> + *\t\tPrefetching support for PostgreSQL write-ahead log manager\n> + *\n\nAn architectural overview here would be good.\n\n\n> +struct XLogPrefetcher\n> +{\n> +\t/* Reader and current reading state. */\n> +\tXLogReaderState *reader;\n> +\tXLogReadLocalOptions options;\n> +\tbool\t\t\thave_record;\n> +\tbool\t\t\tshutdown;\n> +\tint\t\t\t\tnext_block_id;\n> +\n> +\t/* Book-keeping required to avoid accessing non-existing blocks. */\n> +\tHTAB\t\t *filter_table;\n> +\tdlist_head\t\tfilter_queue;\n> +\n> +\t/* Book-keeping required to limit concurrent prefetches. */\n> +\tXLogRecPtr\t *prefetch_queue;\n> +\tint\t\t\t\tprefetch_queue_size;\n> +\tint\t\t\t\tprefetch_head;\n> +\tint\t\t\t\tprefetch_tail;\n> +\n> +\t/* Details of last prefetch to skip repeats and seq scans. */\n> +\tSMgrRelation\tlast_reln;\n> +\tRelFileNode\t\tlast_rnode;\n> +\tBlockNumber\t\tlast_blkno;\n\nDo you have a comment somewhere explaining why you want to avoid\nseqscans (I assume it's about avoiding regressions in linux, but only\nbecause I recall chatting with you about it).\n\n\n> +/*\n> + * On modern systems this is really just *counter++. On some older systems\n> + * there might be more to it, due to inability to read and write 64 bit values\n> + * atomically. 
The counters will only be written to by one process, and there\n> + * is no ordering requirement, so there's no point in using higher overhead\n> + * pg_atomic_fetch_add_u64().\n> + */\n> +static inline void inc_counter(pg_atomic_uint64 *counter)\n> +{\n> +\tpg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n> +}\n\nCould be worthwhile to add to the atomics infrastructure itself - on the\nplatforms where this needs spinlocks this will lead to two acquisitions,\nrather than one.\n\n\n> +/*\n> + * Create a prefetcher that is ready to begin prefetching blocks referenced by\n> + * WAL that is ahead of the given lsn.\n> + */\n> +XLogPrefetcher *\n> +XLogPrefetcherAllocate(XLogRecPtr lsn, bool streaming)\n> +{\n> +\tstatic HASHCTL hash_table_ctl = {\n> +\t\t.keysize = sizeof(RelFileNode),\n> +\t\t.entrysize = sizeof(XLogPrefetcherFilter)\n> +\t};\n> +\tXLogPrefetcher *prefetcher = palloc0(sizeof(*prefetcher));\n> +\n> +\tprefetcher->options.nowait = true;\n> +\tif (streaming)\n> +\t{\n> +\t\t/*\n> +\t\t * We're only allowed to read as far as the WAL receiver has written.\n> +\t\t * We don't have to wait for it to be flushed, though, as recovery\n> +\t\t * does, so that gives us a chance to get a bit further ahead.\n> +\t\t */\n> +\t\tprefetcher->options.read_upto_policy = XLRO_WALRCV_WRITTEN;\n> +\t}\n> +\telse\n> +\t{\n> +\t\t/* We're allowed to read as far as we can. 
*/\n> +\t\tprefetcher->options.read_upto_policy = XLRO_LSN;\n> +\t\tprefetcher->options.lsn = (XLogRecPtr) -1;\n> +\t}\n> +\tprefetcher->reader = XLogReaderAllocate(wal_segment_size,\n> +\t\t\t\t\t\t\t\t\t\t\tNULL,\n> +\t\t\t\t\t\t\t\t\t\t\tread_local_xlog_page,\n> +\t\t\t\t\t\t\t\t\t\t\t&prefetcher->options);\n> +\tprefetcher->filter_table = hash_create(\"PrefetchFilterTable\", 1024,\n> +\t\t\t\t\t\t\t\t\t\t &hash_table_ctl,\n> +\t\t\t\t\t\t\t\t\t\t HASH_ELEM | HASH_BLOBS);\n> +\tdlist_init(&prefetcher->filter_queue);\n> +\n> +\t/*\n> +\t * The size of the queue is based on the maintenance_io_concurrency\n> +\t * setting. In theory we might have a separate queue for each tablespace,\n> +\t * but it's not clear how that should work, so for now we'll just use the\n> +\t * general GUC to rate-limit all prefetching.\n> +\t */\n> +\tprefetcher->prefetch_queue_size = maintenance_io_concurrency;\n> +\tprefetcher->prefetch_queue = palloc0(sizeof(XLogRecPtr) * prefetcher->prefetch_queue_size);\n> +\tprefetcher->prefetch_head = prefetcher->prefetch_tail = 0;\n> +\n> +\t/* Prepare to read at the given LSN. */\n> +\tereport(LOG,\n> +\t\t\t(errmsg(\"WAL prefetch started at %X/%X\",\n> +\t\t\t\t\t(uint32) (lsn << 32), (uint32) lsn)));\n> +\tXLogBeginRead(prefetcher->reader, lsn);\n> +\n> +\tXLogPrefetcherResetMonitoringStats();\n> +\n> +\treturn prefetcher;\n> +}\n> +\n> +/*\n> + * Destroy a prefetcher and release all resources.\n> + */\n> +void\n> +XLogPrefetcherFree(XLogPrefetcher *prefetcher)\n> +{\n> +\tdouble\t\tavg_distance = 0;\n> +\tdouble\t\tavg_queue_depth = 0;\n> +\n> +\t/* Log final statistics. 
*/\n> +\tif (prefetcher->samples > 0)\n> +\t{\n> +\t\tavg_distance = prefetcher->distance_sum / prefetcher->samples;\n> +\t\tavg_queue_depth = prefetcher->queue_depth_sum / prefetcher->samples;\n> +\t}\n> +\tereport(LOG,\n> +\t\t\t(errmsg(\"WAL prefetch finished at %X/%X; \"\n> +\t\t\t\t\t\"prefetch = \" UINT64_FORMAT \", \"\n> +\t\t\t\t\t\"skip_hit = \" UINT64_FORMAT \", \"\n> +\t\t\t\t\t\"skip_new = \" UINT64_FORMAT \", \"\n> +\t\t\t\t\t\"skip_fpw = \" UINT64_FORMAT \", \"\n> +\t\t\t\t\t\"skip_seq = \" UINT64_FORMAT \", \"\n> +\t\t\t\t\t\"avg_distance = %f, \"\n> +\t\t\t\t\t\"avg_queue_depth = %f\",\n> +\t\t\t (uint32) (prefetcher->reader->EndRecPtr << 32),\n> +\t\t\t (uint32) (prefetcher->reader->EndRecPtr),\n> +\t\t\t pg_atomic_read_u64(&MonitoringStats->prefetch),\n> +\t\t\t pg_atomic_read_u64(&MonitoringStats->skip_hit),\n> +\t\t\t pg_atomic_read_u64(&MonitoringStats->skip_new),\n> +\t\t\t pg_atomic_read_u64(&MonitoringStats->skip_fpw),\n> +\t\t\t pg_atomic_read_u64(&MonitoringStats->skip_seq),\n> +\t\t\t avg_distance,\n> +\t\t\t avg_queue_depth)));\n> +\tXLogReaderFree(prefetcher->reader);\n> +\thash_destroy(prefetcher->filter_table);\n> +\tpfree(prefetcher->prefetch_queue);\n> +\tpfree(prefetcher);\n> +\n> +\tXLogPrefetcherResetMonitoringStats();\n> +}\n\nIt's possibly overkill, but I think it'd be a good idea to do all the\nallocations within a prefetch specific memory context. That makes\ndetecting potential leaks or such easier.\n\n\n\n> +\t/* Can we drop any filters yet, due to problem records begin replayed? */\n\nOdd grammar.\n\n\n> +\tXLogPrefetcherCompleteFilters(prefetcher, replaying_lsn);\n\nHm, why isn't this part of the loop below?\n\n\n> +\t/* Main prefetch loop. */\n> +\tfor (;;)\n> +\t{\n\nThis kind of looks like a separate process' main loop. The name\nindicates similar. And there's no architecture documentation\ndisinclining one from that view...\n\n\nThe loop body is quite long. 
I think it should be split into a number of\nhelper functions. Perhaps one to ensure a block is read, one to maintain\nstats, and then one to process block references?\n\n\n> +\t\t/*\n> +\t\t * Scan the record for block references. We might already have been\n> +\t\t * partway through processing this record when we hit maximum I/O\n> +\t\t * concurrency, so start where we left off.\n> +\t\t */\n> +\t\tfor (int i = prefetcher->next_block_id; i <= reader->max_block_id; ++i)\n> +\t\t{\n\nSuper pointless nitpickery: For a loop-body this big I'd rather name 'i'\n'blockid' or such.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 24 Mar 2020 15:31:52 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nThanks for all that feedback. It's been a strange couple of weeks,\nbut I finally have a new version that addresses most of that feedback\n(but punts on a couple of suggestions for later development, due to\nlack of time).\n\nIt also fixes a couple of other problems I found with the previous version:\n\n1. While streaming, whenever it hit the end of available data (ie LSN\nwritten by WAL receiver), it would close and then reopen the WAL\nsegment. Fixed by the machinery in 0007 which allows for \"would\nblock\" as distinct from other errors.\n\n2. During crash recovery, there were some edge cases where it would\ntry to read the next WAL segment when there isn't one. Also fixed by\n0007.\n\n3. It was maxing out at maintenance_io_concurrency - 1 due to a silly\ncircular buffer fence post bug.\n\nNote that 0006 is just for illustration, it's not proposed for commit.\n\nOn Wed, Mar 25, 2020 at 11:31 AM Andres Freund <andres@anarazel.de> wrote:\n> On 2020-03-18 18:18:44 +1300, Thomas Munro wrote:\n> > From 1b03eb5ada24c3b23ab8ca6db50e0c5d90d38259 Mon Sep 17 00:00:00 2001\n> > From: Thomas Munro <tmunro@postgresql.org>\n> > Date: Mon, 9 Dec 2019 17:22:07 +1300\n> > Subject: [PATCH 3/5] Add WalRcvGetWriteRecPtr() (new definition).\n> >\n> > A later patch will read received WAL to prefetch referenced blocks,\n> > without waiting for the data to be flushed to disk. To do that, it\n> > needs to be able to see the write pointer advancing in shared memory.\n> >\n> > The function formerly bearing name was recently renamed to\n> > WalRcvGetFlushRecPtr(), which better described what it does.\n>\n> Hm. I'm a bit weary of reusing the name with a different meaning. If\n> there's any external references, this'll hide that they need to\n> adapt. 
Perhaps, even if it's a bit clunky, name it GetUnflushedRecPtr?\n\nWell, at least external code won't compile due to the change in arguments:\n\nextern XLogRecPtr GetWalRcvWriteRecPtr(XLogRecPtr *latestChunkStart,\nTimeLineID *receiveTLI);\nextern XLogRecPtr GetWalRcvWriteRecPtr(void);\n\nAnyone who is using that for some kind of data integrity purposes\nshould hopefully be triggered to investigate, no? I tried to think of\na better naming scheme but...\n\n> > From c62fde23f70ff06833d743a1c85716e15f3c813c Mon Sep 17 00:00:00 2001\n> > From: Thomas Munro <thomas.munro@gmail.com>\n> > Date: Tue, 17 Mar 2020 17:26:41 +1300\n> > Subject: [PATCH 4/5] Allow PrefetchBuffer() to report what happened.\n> >\n> > Report whether a prefetch was actually initiated due to a cache miss, so\n> > that callers can limit the number of concurrent I/Os they try to issue,\n> > without counting the prefetch calls that did nothing because the page\n> > was already in our buffers.\n> >\n> > If the requested block was already cached, return a valid buffer. This\n> > might enable future code to avoid a buffer mapping lookup, though it\n> > will need to recheck the buffer before using it because it's not pinned\n> > so could be reclaimed at any time.\n> >\n> > Report neither hit nor miss when a relation's backing file is missing,\n> > to prepare for use during recovery. This will be used to handle cases\n> > of relations that are referenced in the WAL but have been unlinked\n> > already due to actions covered by WAL records that haven't been replayed\n> > yet, after a crash.\n>\n> We probably should take this into account in nodeBitmapHeapscan.c\n\nIndeed. 
The naive version would be something like:\n\ndiff --git a/src/backend/executor/nodeBitmapHeapscan.c\nb/src/backend/executor/nodeBitmapHeapscan.c\nindex 726d3a2d9a..3cd644d0ac 100644\n--- a/src/backend/executor/nodeBitmapHeapscan.c\n+++ b/src/backend/executor/nodeBitmapHeapscan.c\n@@ -484,13 +484,11 @@ BitmapPrefetch(BitmapHeapScanState *node,\nTableScanDesc scan)\n node->prefetch_iterator = NULL;\n break;\n }\n- node->prefetch_pages++;\n\n /*\n * If we expect not to have to\nactually read this heap page,\n * skip this prefetch call, but\ncontinue to run the prefetch\n- * logic normally. (Would it be\nbetter not to increment\n- * prefetch_pages?)\n+ * logic normally.\n *\n * This depends on the assumption that\nthe index AM will\n * report the same recheck flag for\nthis future heap page as\n@@ -504,7 +502,13 @@ BitmapPrefetch(BitmapHeapScanState *node,\nTableScanDesc scan)\n\n &node->pvmbuffer));\n\n if (!skip_fetch)\n- PrefetchBuffer(scan->rs_rd,\nMAIN_FORKNUM, tbmpre->blockno);\n+ {\n+ PrefetchBufferResult prefetch;\n+\n+ prefetch =\nPrefetchBuffer(scan->rs_rd, MAIN_FORKNUM, tbmpre->blockno);\n+ if (prefetch.initiated_io)\n+ node->prefetch_pages++;\n+ }\n }\n }\n\n... 
but that might get arbitrarily far ahead, so it probably needs\nsome kind of cap, and the parallel version is a bit more complicated.\nSomething for later, along with more prefetching opportunities.\n\n> > diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c\n> > index d30aed6fd9..4ceb40a856 100644\n> > --- a/src/backend/storage/buffer/bufmgr.c\n> > +++ b/src/backend/storage/buffer/bufmgr.c\n> > @@ -469,11 +469,13 @@ static int ts_ckpt_progress_comparator(Datum a, Datum b, void *arg);\n> > /*\n> > * Implementation of PrefetchBuffer() for shared buffers.\n> > */\n> > -void\n> > +PrefetchBufferResult\n> > PrefetchSharedBuffer(struct SMgrRelationData *smgr_reln,\n> > ForkNumber forkNum,\n> > BlockNumber blockNum)\n> > {\n> > + PrefetchBufferResult result = { InvalidBuffer, false };\n> > +\n> > #ifdef USE_PREFETCH\n> > BufferTag newTag; /* identity of requested block */\n> > uint32 newHash; /* hash value for newTag */\n> > @@ -497,7 +499,23 @@ PrefetchSharedBuffer(struct SMgrRelationData *smgr_reln,\n> >\n> > /* If not in buffers, initiate prefetch */\n> > if (buf_id < 0)\n> > - smgrprefetch(smgr_reln, forkNum, blockNum);\n> > + {\n> > + /*\n> > + * Try to initiate an asynchronous read. This returns false in\n> > + * recovery if the relation file doesn't exist.\n> > + */\n> > + if (smgrprefetch(smgr_reln, forkNum, blockNum))\n> > + result.initiated_io = true;\n> > + }\n> > + else\n> > + {\n> > + /*\n> > + * Report the buffer it was in at that time. The caller may be able\n> > + * to avoid a buffer table lookup, but it's not pinned and it must be\n> > + * rechecked!\n> > + */\n> > + result.buffer = buf_id + 1;\n>\n> Perhaps it'd be better to name this \"last_buffer\" or such, to make it\n> clearer that it may be outdated?\n\nOK. 
Renamed to \"recent_buffer\".\n\n> > -void\n> > +PrefetchBufferResult\n> > PrefetchBuffer(Relation reln, ForkNumber forkNum, BlockNumber blockNum)\n> > {\n> > #ifdef USE_PREFETCH\n> > @@ -540,13 +564,17 @@ PrefetchBuffer(Relation reln, ForkNumber forkNum, BlockNumber blockNum)\n> > errmsg(\"cannot access temporary tables of other sessions\")));\n> >\n> > /* pass it off to localbuf.c */\n> > - PrefetchLocalBuffer(reln->rd_smgr, forkNum, blockNum);\n> > + return PrefetchLocalBuffer(reln->rd_smgr, forkNum, blockNum);\n> > }\n> > else\n> > {\n> > /* pass it to the shared buffer version */\n> > - PrefetchSharedBuffer(reln->rd_smgr, forkNum, blockNum);\n> > + return PrefetchSharedBuffer(reln->rd_smgr, forkNum, blockNum);\n> > }\n> > +#else\n> > + PrefetchBuffer result = { InvalidBuffer, false };\n> > +\n> > + return result;\n> > #endif /* USE_PREFETCH */\n> > }\n>\n> Hm. Now that results are returned indicating whether the buffer is in\n> s_b - shouldn't the return value be accurate regardless of USE_PREFETCH?\n\nYeah. Done.\n\n> > +/*\n> > + * Type returned by PrefetchBuffer().\n> > + */\n> > +typedef struct PrefetchBufferResult\n> > +{\n> > + Buffer buffer; /* If valid, a hit (recheck needed!) */\n>\n> I assume there's no user of this yet? Even if there's not, I wonder if\n> it still is worth adding and referencing a helper to do so correctly?\n\nIt *is* used, but only to see if it's valid. 0006 is a not-for-commit\npatch to show how you might use it later to read a buffer. 
To\nactually use this for something like bitmap heap scan, you'd first\nneed to fix the modularity violations in that code (I mean we have\nPrefetchBuffer() in nodeBitmapHeapscan.c, but the corresponding\n[ReleaseAnd]ReadBuffer() in heapam.c, and you'd need to get these into\nthe same module and/or to communicate in some graceful way).\n\n> > From 42ba0a89260d46230ac0df791fae18bfdca0092f Mon Sep 17 00:00:00 2001\n> > From: Thomas Munro <thomas.munro@gmail.com>\n> > Date: Wed, 18 Mar 2020 16:35:27 +1300\n> > Subject: [PATCH 5/5] Prefetch referenced blocks during recovery.\n> >\n> > Introduce a new GUC max_wal_prefetch_distance. If it is set to a\n> > positive number of bytes, then read ahead in the WAL at most that\n> > distance, and initiate asynchronous reading of referenced blocks. The\n> > goal is to avoid I/O stalls and benefit from concurrent I/O. The number\n> > of concurrency asynchronous reads is capped by the existing\n> > maintenance_io_concurrency GUC. The feature is disabled by default.\n> >\n> > Reviewed-by: Tomas Vondra <tomas.vondra@2ndquadrant.com>\n> > Reviewed-by: Alvaro Herrera <alvherre@2ndquadrant.com>\n> > Discussion:\n> > https://postgr.es/m/CA%2BhUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq%3DAovOddfHpA%40mail.gmail.com\n>\n> Why is it disabled by default? Just for \"risk management\"?\n\nWell, it's not free, and might not help you, so not everyone would\nwant it on. I think the overheads can be mostly removed with more\nwork in a later release. Perhaps we could commit it enabled by\ndefault, and then discuss it before release after looking at some more\ndata? 
On that basis I have now made it default to on, with\nmax_wal_prefetch_distance = 256kB, if your build has USE_PREFETCH.\nObviously this number can be discussed.\n\n> > + <varlistentry id=\"guc-max-wal-prefetch-distance\" xreflabel=\"max_wal_prefetch_distance\">\n> > + <term><varname>max_wal_prefetch_distance</varname> (<type>integer</type>)\n> > + <indexterm>\n> > + <primary><varname>max_wal_prefetch_distance</varname> configuration parameter</primary>\n> > + </indexterm>\n> > + </term>\n> > + <listitem>\n> > + <para>\n> > + The maximum distance to look ahead in the WAL during recovery, to find\n> > + blocks to prefetch. Prefetching blocks that will soon be needed can\n> > + reduce I/O wait times. The number of concurrent prefetches is limited\n> > + by this setting as well as <xref linkend=\"guc-maintenance-io-concurrency\"/>.\n> > + If this value is specified without units, it is taken as bytes.\n> > + The default is -1, meaning that WAL prefetching is disabled.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Is it worth noting that a too large distance could hurt, because the\n> buffers might get evicted again?\n\nOK, I tried to explain that.\n\n> > + <varlistentry id=\"guc-wal-prefetch-fpw\" xreflabel=\"wal_prefetch_fpw\">\n> > + <term><varname>wal_prefetch_fpw</varname> (<type>boolean</type>)\n> > + <indexterm>\n> > + <primary><varname>wal_prefetch_fpw</varname> configuration parameter</primary>\n> > + </indexterm>\n> > + </term>\n> > + <listitem>\n> > + <para>\n> > + Whether to prefetch blocks with full page images during recovery.\n> > + Usually this doesn't help, since such blocks will not be read. 
However,\n> > + on file systems with a block size larger than\n> > + <productname>PostgreSQL</productname>'s, prefetching can avoid a costly\n> > + read-before-write when a blocks are later written.\n> > + This setting has no effect unless\n> > + <xref linkend=\"guc-max-wal-prefetch-distance\"/> is set to a positive number.\n> > + The default is off.\n> > + </para>\n> > + </listitem>\n> > + </varlistentry>\n>\n> Hm. I think this needs more details - it's not clear enough what this\n> actually controls. I assume it's about prefetching for WAL records that\n> contain the FPW, but it also could be read to be about not prefetching\n> any pages that had FPWs before, or such?\n\nOk, I have elaborated.\n\n> > </variablelist>\n> > </sect2>\n> > <sect2 id=\"runtime-config-wal-archiving\">\n> > diff --git a/doc/src/sgml/monitoring.sgml b/doc/src/sgml/monitoring.sgml\n> > index 987580d6df..df4291092b 100644\n> > --- a/doc/src/sgml/monitoring.sgml\n> > +++ b/doc/src/sgml/monitoring.sgml\n> > @@ -320,6 +320,13 @@ postgres 27093 0.0 0.0 30096 2752 ? Ss 11:34 0:00 postgres: ser\n> > </entry>\n> > </row>\n> >\n> > + <row>\n> > + <entry><structname>pg_stat_wal_prefetcher</structname><indexterm><primary>pg_stat_wal_prefetcher</primary></indexterm></entry>\n> > + <entry>Only one row, showing statistics about blocks prefetched during recovery.\n> > + See <xref linkend=\"pg-stat-wal-prefetcher-view\"/> for details.\n> > + </entry>\n> > + </row>\n> > +\n>\n> 'prefetcher' somehow sounds odd to me. I also suspect that we'll want to\n> have additional prefetching stat tables going forward. Perhaps\n> 'pg_stat_prefetch_wal'?\n\nWorks for me, though while thinking about this I realised that the\n\"WAL\" part was bothering me. It sounds like we're prefetching WAL\nitself, which would be a different thing. 
So I renamed this view to\npg_stat_prefetch_recovery.\n\nThen I renamed the main GUCs that control this thing to:\n\n max_recovery_prefetch_distance\n recovery_prefetch_fpw\n\n> > + <row>\n> > + <entry><structfield>distance</structfield></entry>\n> > + <entry><type>integer</type></entry>\n> > + <entry>How far ahead of recovery the WAL prefetcher is currently reading, in bytes</entry>\n> > + </row>\n> > + <row>\n> > + <entry><structfield>queue_depth</structfield></entry>\n> > + <entry><type>integer</type></entry>\n> > + <entry>How many prefetches have been initiated but are not yet known to have completed</entry>\n> > + </row>\n> > + </tbody>\n> > + </tgroup>\n> > + </table>\n>\n> Is there a way we could have a \"historical\" version of at least some of\n> these? An average queue depth, or such?\n\nOk, I added simple online averages for distance and queue depth that\ntake a sample every time recovery advances by 256kB.\n\n> It'd be useful to somewhere track the time spent initiating prefetch\n> requests. Otherwise it's quite hard to evaluate whether the queue is too\n> deep (and just blocks in the OS).\n\nI agree that that sounds useful, and I thought about various ways to\ndo that that involved new views, until I eventually found myself\nwondering: why isn't recovery's I/O already tracked via the existing\nstats views? For example, why can't I see blks_read, blks_hit,\nblk_read_time etc moving in pg_stat_database due to recovery activity?\n\nIt seems like if you made that work first, or created a new\npgstatio view for that, then you could add prefetching counters and\ntiming (if track_io_timing is on) to the existing machinery so that\nbufmgr.c would automatically capture it, and then not only recovery\nbut also stuff like bitmap heap scan could also be measured the same\nway.\n\nHowever, time is short, so I'm not attempting to do anything like that\nnow. 
You can measure the posix_fadvise() times with OS facilities in\nthe meantime.\n\n> I think it'd be good to have a 'reset time' column.\n\nDone, as stats_reset following other examples.\n\n> > + <para>\n> > + The <structname>pg_stat_wal_prefetcher</structname> view will contain only\n> > + one row. It is filled with nulls if recovery is not running or WAL\n> > + prefetching is not enabled. See <xref linkend=\"guc-max-wal-prefetch-distance\"/>\n> > + for more information. The counters in this view are reset whenever the\n> > + <xref linkend=\"guc-max-wal-prefetch-distance\"/>,\n> > + <xref linkend=\"guc-wal-prefetch-fpw\"/> or\n> > + <xref linkend=\"guc-maintenance-io-concurrency\"/> setting is changed and\n> > + the server configuration is reloaded.\n> > + </para>\n> > +\n>\n> So pg_stat_reset_shared() cannot be used? If so, why?\n\nHmm. OK, I made pg_stat_reset_shared('prefetch_recovery') work.\n\n> It sounds like the counters aren't persisted via the stats system - if\n> so, why?\n\nOk, I made it persist the simple counters by sending them to the stats\ncollector periodically. The view still shows data straight out of\nshmem though, not out of the stats file. Now I'm wondering if I\nshould have the view show it from the stats file, more like other\nthings, now that I understand that a bit better... hmm.\n\n> > @@ -7105,6 +7114,31 @@ StartupXLOG(void)\n> > /* Handle interrupt signals of startup process */\n> > HandleStartupProcInterrupts();\n> >\n> > + /*\n> > + * The first time through, or if any relevant settings or the\n> > + * WAL source changes, we'll restart the prefetching machinery\n> > + * as appropriate. This is simpler than trying to handle\n> > + * various complicated state changes.\n> > + */\n> > + if (unlikely(reset_wal_prefetcher))\n> > + {\n> > + /* If we had one already, destroy it. */\n> > + if (prefetcher)\n> > + {\n> > + XLogPrefetcherFree(prefetcher);\n> > + prefetcher = NULL;\n> > + }\n> > + /* If we want one, create it. 
*/\n> > + if (max_wal_prefetch_distance > 0)\n> > + prefetcher = XLogPrefetcherAllocate(xlogreader->ReadRecPtr,\n> > + currentSource == XLOG_FROM_STREAM);\n> > + reset_wal_prefetcher = false;\n> > + }\n>\n> Do we really need all of this code in StartupXLOG() itself? Could it be\n> in HandleStartupProcInterrupts() or at least a helper routine called\n> here?\n\nIt's now done differently, so that StartupXLOG() only has three new\nlines: XLogPrefetchBegin() before the loop, XLogPrefetch() in the\nloop, and XLogPrefetchEnd() after the loop.\n\n> > + /* Peform WAL prefetching, if enabled. */\n> > + if (prefetcher)\n> > + XLogPrefetcherReadAhead(prefetcher, xlogreader->ReadRecPtr);\n> > +\n> > /*\n> > * Pause WAL replay, if requested by a hot-standby session via\n> > * SetRecoveryPause().\n>\n> Personally, I'd rather have the if () be in\n> XLogPrefetcherReadAhead(). With an inline wrapper doing the check, if\n> the call bothers you (but I don't think it needs to).\n\nDone.\n\n> > +/*-------------------------------------------------------------------------\n> > + *\n> > + * xlogprefetcher.c\n> > + * Prefetching support for PostgreSQL write-ahead log manager\n> > + *\n>\n> An architectural overview here would be good.\n\nOK, added.\n\n> > +struct XLogPrefetcher\n> > +{\n> > + /* Reader and current reading state. */\n> > + XLogReaderState *reader;\n> > + XLogReadLocalOptions options;\n> > + bool have_record;\n> > + bool shutdown;\n> > + int next_block_id;\n> > +\n> > + /* Book-keeping required to avoid accessing non-existing blocks. */\n> > + HTAB *filter_table;\n> > + dlist_head filter_queue;\n> > +\n> > + /* Book-keeping required to limit concurrent prefetches. */\n> > + XLogRecPtr *prefetch_queue;\n> > + int prefetch_queue_size;\n> > + int prefetch_head;\n> > + int prefetch_tail;\n> > +\n> > + /* Details of last prefetch to skip repeats and seq scans. 
*/\n> > + SMgrRelation last_reln;\n> > + RelFileNode last_rnode;\n> > + BlockNumber last_blkno;\n>\n> Do you have a comment somewhere explaining why you want to avoid\n> seqscans (I assume it's about avoiding regressions in linux, but only\n> because I recall chatting with you about it).\n\nI've added a note to the new architectural comments.\n\n> > +/*\n> > + * On modern systems this is really just *counter++. On some older systems\n> > + * there might be more to it, due to inability to read and write 64 bit values\n> > + * atomically. The counters will only be written to by one process, and there\n> > + * is no ordering requirement, so there's no point in using higher overhead\n> > + * pg_atomic_fetch_add_u64().\n> > + */\n> > +static inline void inc_counter(pg_atomic_uint64 *counter)\n> > +{\n> > + pg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n> > +}\n>\n> Could be worthwhile to add to the atomics infrastructure itself - on the\n> platforms where this needs spinlocks this will lead to two acquisitions,\n> rather than one.\n\nOk, I added pg_atomic_unlocked_add_fetch_XXX(). 
(Could also be\n\"fetch_add\", I don't care, I don't use the result).\n\n> > +/*\n> > + * Create a prefetcher that is ready to begin prefetching blocks referenced by\n> > + * WAL that is ahead of the given lsn.\n> > + */\n> > +XLogPrefetcher *\n> > +XLogPrefetcherAllocate(XLogRecPtr lsn, bool streaming)\n> > +{\n> > + static HASHCTL hash_table_ctl = {\n> > + .keysize = sizeof(RelFileNode),\n> > + .entrysize = sizeof(XLogPrefetcherFilter)\n> > + };\n> > + XLogPrefetcher *prefetcher = palloc0(sizeof(*prefetcher));\n> > +\n> > + prefetcher->options.nowait = true;\n> > + if (streaming)\n> > + {\n> > + /*\n> > + * We're only allowed to read as far as the WAL receiver has written.\n> > + * We don't have to wait for it to be flushed, though, as recovery\n> > + * does, so that gives us a chance to get a bit further ahead.\n> > + */\n> > + prefetcher->options.read_upto_policy = XLRO_WALRCV_WRITTEN;\n> > + }\n> > + else\n> > + {\n> > + /* We're allowed to read as far as we can. */\n> > + prefetcher->options.read_upto_policy = XLRO_LSN;\n> > + prefetcher->options.lsn = (XLogRecPtr) -1;\n> > + }\n> > + prefetcher->reader = XLogReaderAllocate(wal_segment_size,\n> > + NULL,\n> > + read_local_xlog_page,\n> > + &prefetcher->options);\n> > + prefetcher->filter_table = hash_create(\"PrefetchFilterTable\", 1024,\n> > + &hash_table_ctl,\n> > + HASH_ELEM | HASH_BLOBS);\n> > + dlist_init(&prefetcher->filter_queue);\n> > +\n> > + /*\n> > + * The size of the queue is based on the maintenance_io_concurrency\n> > + * setting. 
In theory we might have a separate queue for each tablespace,\n> > + * but it's not clear how that should work, so for now we'll just use the\n> > + * general GUC to rate-limit all prefetching.\n> > + */\n> > + prefetcher->prefetch_queue_size = maintenance_io_concurrency;\n> > + prefetcher->prefetch_queue = palloc0(sizeof(XLogRecPtr) * prefetcher->prefetch_queue_size);\n> > + prefetcher->prefetch_head = prefetcher->prefetch_tail = 0;\n> > +\n> > + /* Prepare to read at the given LSN. */\n> > + ereport(LOG,\n> > + (errmsg(\"WAL prefetch started at %X/%X\",\n> > + (uint32) (lsn << 32), (uint32) lsn)));\n> > + XLogBeginRead(prefetcher->reader, lsn);\n> > +\n> > + XLogPrefetcherResetMonitoringStats();\n> > +\n> > + return prefetcher;\n> > +}\n> > +\n> > +/*\n> > + * Destroy a prefetcher and release all resources.\n> > + */\n> > +void\n> > +XLogPrefetcherFree(XLogPrefetcher *prefetcher)\n> > +{\n> > + double avg_distance = 0;\n> > + double avg_queue_depth = 0;\n> > +\n> > + /* Log final statistics. 
*/\n> > + if (prefetcher->samples > 0)\n> > + {\n> > + avg_distance = prefetcher->distance_sum / prefetcher->samples;\n> > + avg_queue_depth = prefetcher->queue_depth_sum / prefetcher->samples;\n> > + }\n> > + ereport(LOG,\n> > + (errmsg(\"WAL prefetch finished at %X/%X; \"\n> > + \"prefetch = \" UINT64_FORMAT \", \"\n> > + \"skip_hit = \" UINT64_FORMAT \", \"\n> > + \"skip_new = \" UINT64_FORMAT \", \"\n> > + \"skip_fpw = \" UINT64_FORMAT \", \"\n> > + \"skip_seq = \" UINT64_FORMAT \", \"\n> > + \"avg_distance = %f, \"\n> > + \"avg_queue_depth = %f\",\n> > + (uint32) (prefetcher->reader->EndRecPtr << 32),\n> > + (uint32) (prefetcher->reader->EndRecPtr),\n> > + pg_atomic_read_u64(&MonitoringStats->prefetch),\n> > + pg_atomic_read_u64(&MonitoringStats->skip_hit),\n> > + pg_atomic_read_u64(&MonitoringStats->skip_new),\n> > + pg_atomic_read_u64(&MonitoringStats->skip_fpw),\n> > + pg_atomic_read_u64(&MonitoringStats->skip_seq),\n> > + avg_distance,\n> > + avg_queue_depth)));\n> > + XLogReaderFree(prefetcher->reader);\n> > + hash_destroy(prefetcher->filter_table);\n> > + pfree(prefetcher->prefetch_queue);\n> > + pfree(prefetcher);\n> > +\n> > + XLogPrefetcherResetMonitoringStats();\n> > +}\n>\n> It's possibly overkill, but I think it'd be a good idea to do all the\n> allocations within a prefetch specific memory context. 
That makes\n> detecting potential leaks or such easier.\n\nI looked into that, but in fact it's already pretty clear how much\nmemory this thing is using, if you call\nMemoryContextStats(TopMemoryContext), because it's almost all in a\nnamed hash table:\n\nTopMemoryContext: 155776 total in 6 blocks; 18552 free (8 chunks); 137224 used\n XLogPrefetcherFilterTable: 16384 total in 2 blocks; 4520 free (3\nchunks); 11864 used\n SP-GiST temporary context: 8192 total in 1 blocks; 7928 free (0\nchunks); 264 used\n GiST temporary context: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n GIN recovery temporary context: 8192 total in 1 blocks; 7928 free (0\nchunks); 264 used\n Btree recovery temporary context: 8192 total in 1 blocks; 7928 free\n(0 chunks); 264 used\n RecoveryLockLists: 8192 total in 1 blocks; 2584 free (0 chunks); 5608 used\n PrivateRefCount: 8192 total in 1 blocks; 2584 free (0 chunks); 5608 used\n MdSmgr: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n Pending ops context: 8192 total in 1 blocks; 7928 free (0 chunks); 264 used\n LOCALLOCK hash: 8192 total in 1 blocks; 512 free (0 chunks); 7680 used\n Timezones: 104128 total in 2 blocks; 2584 free (0 chunks); 101544 used\n ErrorContext: 8192 total in 1 blocks; 7928 free (4 chunks); 264 used\nGrand total: 358208 bytes in 20 blocks; 86832 free (15 chunks); 271376 used\n\nThe XLogPrefetcher struct itself is not measured separately, but I\ndon't think that's a problem, it's small and there's only ever one at\na time. It's that XLogPrefetcherFilterTable that is of variable size\n(though it's often empty). While thinking about this, I made\nprefetch_queue into a flexible array rather than a pointer to palloc'd\nmemory, which seemed a bit tidier.\n\n> > + /* Can we drop any filters yet, due to problem records begin replayed? 
*/\n>\n> Odd grammar.\n\nRewritten.\n\n> > + XLogPrefetcherCompleteFilters(prefetcher, replaying_lsn);\n>\n> Hm, why isn't this part of the loop below?\n\nIt only needs to run when replaying_lsn has advanced (ie when records\nhave been replayed). I hope the new comment makes that clearer.\n\n> > + /* Main prefetch loop. */\n> > + for (;;)\n> > + {\n>\n> This kind of looks like a separate process' main loop. The name\n> indicates similar. And there's no architecture documentation\n> disinclining one from that view...\n\nOK, I have updated the comment.\n\n> The loop body is quite long. I think it should be split into a number of\n> helper functions. Perhaps one to ensure a block is read, one to maintain\n> stats, and then one to process block references?\n\nI've broken the function up. It's now:\n\nStartupXLOG()\n -> XLogPrefetch()\n -> XLogPrefetcherReadAhead()\n -> XLogPrefetcherScanRecords()\n -> XLogPrefetcherScanBlocks()\n\n> > + /*\n> > + * Scan the record for block references. We might already have been\n> > + * partway through processing this record when we hit maximum I/O\n> > + * concurrency, so start where we left off.\n> > + */\n> > + for (int i = prefetcher->next_block_id; i <= reader->max_block_id; ++i)\n> > + {\n>\n> Super pointless nitpickery: For a loop-body this big I'd rather name 'i'\n> 'blockid' or such.\n\nDone.",
"msg_date": "Wed, 8 Apr 2020 04:24:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 4:24 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks for all that feedback. It's been a strange couple of weeks,\n> but I finally have a new version that addresses most of that feedback\n> (but punts on a couple of suggestions for later development, due to\n> lack of time).\n\nHere's an executive summary of an off-list chat with Andres:\n\n* he withdrew his objection to the new definition of\nGetWalRcvWriteRecPtr() based on my argument that any external code\nwill fail to compile anyway\n\n* he doesn't like the naive code that detects sequential access and\nskips prefetching; I agreed to rip it out for now and revisit if/when\nwe have better evidence that that's worth bothering with; the code\npath that does that and the pg_stat_recovery_prefetch.skip_seq counter\nwill remain, but be used only to skip prefetching of repeated access\nto the *same* block for now\n\n* he gave some feedback on the read_local_xlog_page() modifications: I\nprobably need to reconsider the change to logical.c that passes NULL\ninstead of cxt to the read_page callback; and the switch statement in\nread_local_xlog_page() probably should have a case for the preexisting\nmode\n\n* he +1s the plan to commit with the feature enabled, and revisit before release\n\n* he thinks the idea of a variant of ReadBuffer() that takes a\nPrefetchBufferResult (as sketched by the v6 0006 patch) broadly makes\nsense as a stepping stone towards his asynchronous I/O proposal, but\nthere's no point in committing something like 0006 without a user\n\nI'm going to go and commit the first few patches in this series, and\ncome back in a bit with a new version of the main patch to fix the\nabove and a compiler warning reported by cfbot.\n\n\n",
"msg_date": "Wed, 8 Apr 2020 12:52:27 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 12:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> * he gave some feedback on the read_local_xlog_page() modifications: I\n> probably need to reconsider the change to logical.c that passes NULL\n> instead of cxt to the read_page callback; and the switch statement in\n> read_local_xlog_page() probably should have a case for the preexisting\n> mode\n\nSo... logical.c wants to give its LogicalDecodingContext to any\nXLogPageReadCB you give it, via \"private_data\"; that is, it really\nonly accepts XLogPageReadCB implementations that understand that (or\nignore it). What I want to do is give every XLogPageReadCB the chance\nto have its own state that it is in control of (to receive settings\nspecific to the implementation, or whatever), that you supply along\nwith it. We can't do both kinds of things with private_data, so I\nhave added a second member read_page_data to XLogReaderState. If you\npass in read_local_xlog_page as read_page, then you can optionally\ninstall a pointer to XLogReadLocalOptions as reader->read_page_data,\nto activate the new behaviours I added for prefetching purposes.\n\nWhile working on that, I realised the readahead XLogReader was\nbreaking a rule expressed in XLogReadDetermineTimeLine(). Timelines\nare really confusing and there were probably several subtle or not so\nsubtle bugs there. So I added an option to skip all of that logic,\nand just say \"I command you to read only from TLI X\". It reads the\nsame TLI as recovery is reading, until it hits the end of readable\ndata and that causes prefetching to shut down. Then the main recovery\nloop resets the prefetching module when it sees a TLI switch, so then\nit starts up again. This seems to work reliably, but I've obviously\nhad limited time to test. Does this scheme sound sane?\n\nI think this is basically committable (though of course I wish I had\nmore time to test and review). Ugh. Feature freeze in half an hour.",
"msg_date": "Wed, 8 Apr 2020 23:27:56 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Apr 8, 2020 at 11:27 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 8, 2020 at 12:52 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > * he gave some feedback on the read_local_xlog_page() modifications: I\n> > probably need to reconsider the change to logical.c that passes NULL\n> > instead of cxt to the read_page callback; and the switch statement in\n> > read_local_xlog_page() probably should have a case for the preexisting\n> > mode\n>\n> So... logical.c wants to give its LogicalDecodingContext to any\n> XLogPageReadCB you give it, via \"private_data\"; that is, it really\n> only accepts XLogPageReadCB implementations that understand that (or\n> ignore it). What I want to do is give every XLogPageReadCB the chance\n> to have its own state that it is in control of (to receive settings\n> specific to the implementation, or whatever), that you supply along\n> with it. We can't do both kinds of things with private_data, so I\n> have added a second member read_page_data to XLogReaderState. If you\n> pass in read_local_xlog_page as read_page, then you can optionally\n> install a pointer to XLogReadLocalOptions as reader->read_page_data,\n> to activate the new behaviours I added for prefetching purposes.\n>\n> While working on that, I realised the readahead XLogReader was\n> breaking a rule expressed in XLogReadDetermineTimeLine(). Timelines\n> are really confusing and there were probably several subtle or not so\n> subtle bugs there. So I added an option to skip all of that logic,\n> and just say \"I command you to read only from TLI X\". It reads the\n> same TLI as recovery is reading, until it hits the end of readable\n> data and that causes prefetching to shut down. Then the main recovery\n> loop resets the prefetching module when it sees a TLI switch, so then\n> it starts up again. This seems to work reliably, but I've obviously\n> had limited time to test. 
Does this scheme sound sane?\n>\n> I think this is basically committable (though of course I wish I had\n> more time to test and review). Ugh. Feature freeze in half an hour.\n\nOk, so the following parts of this work have been committed:\n\nb09ff536: Simplify the effective_io_concurrency setting.\nfc34b0d9: Introduce a maintenance_io_concurrency setting.\n3985b600: Support PrefetchBuffer() in recovery.\nd140f2f3: Rationalize GetWalRcv{Write,Flush}RecPtr().\n\nHowever, I didn't want to push the main patch into the tree at\n(literally) the last minute after doing so much work on it in the\nlast few days, without more review from recovery code experts and some\nindependent testing. Judging by the comments made in this thread and\nelsewhere, I think the feature is in demand so I hope there is a way\nwe could get it into 13 in the next couple of days, but I totally\naccept the release management team's prerogative on that.\n\n\n",
"msg_date": "Thu, 9 Apr 2020 00:12:13 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 4/8/20 8:12 AM, Thomas Munro wrote:\n> \n> Ok, so the following parts of this work have been committed:\n> \n> b09ff536: Simplify the effective_io_concurrency setting.\n> fc34b0d9: Introduce a maintenance_io_concurrency setting.\n> 3985b600: Support PrefetchBuffer() in recovery.\n> d140f2f3: Rationalize GetWalRcv{Write,Flush}RecPtr().\n> \n> However, I didn't want to push the main patch into the tree at\n> (literally) the last minute after doing so much work on it in the\n> last few days, without more review from recovery code experts and some\n> independent testing. \n\nI definitely think that was the right call.\n\n> Judging by the comments made in this thread and\n> elsewhere, I think the feature is in demand so I hope there is a way\n> we could get it into 13 in the next couple of days, but I totally\n> accept the release management team's prerogative on that.\n\nThat's up to the RMT, of course, but we did already have an extra week. \nMight be best to just get this in at the beginning of the PG14 cycle. \nFWIW, I do think the feature is really valuable.\n\nLooks like you'll need to rebase, so I'll move this to the next CF in \nWoA state.\n\nRegards,\n-- \n-David\ndavid@pgmasters.net\n\n\n",
"msg_date": "Wed, 8 Apr 2020 08:27:39 -0400",
"msg_from": "David Steele <david@pgmasters.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 9, 2020 at 12:27 AM David Steele <david@pgmasters.net> wrote:\n> On 4/8/20 8:12 AM, Thomas Munro wrote:\n> > Judging by the comments made in this thread and\n> > elsewhere, I think the feature is in demand so I hope there is a way\n> > we could get it into 13 in the next couple of days, but I totally\n> > accept the release management team's prerogative on that.\n>\n> That's up to the RMT, of course, but we did already have an extra week.\n> Might be best to just get this in at the beginning of the PG14 cycle.\n> FWIW, I do think the feature is really valuable.\n>\n> Looks like you'll need to rebase, so I'll move this to the next CF in\n> WoA state.\n\nThanks. Here's a rebase.",
"msg_date": "Thu, 9 Apr 2020 09:55:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "> On Thu, Apr 09, 2020 at 09:55:25AM +1200, Thomas Munro wrote:\n> Thanks. Here's a rebase.\n\nThanks for working on this patch, it seems like a great feature. I'm\nprobably a bit late to the party, but still want to make a couple of\ncommentaries.\n\nThe patch indeed looks good, I couldn't find any significant issues so\nfar and almost all my questions I had while reading it were actually\nanswered in this thread. I'm still busy with benchmarking, mostly to see\nhow prefetching would work with different workload distributions and how\nmuch the kernel will actually prefetch.\n\nIn the meantime I have a few questions:\n\n> On Wed, Feb 12, 2020 at 07:52:42PM +1300, Thomas Munro wrote:\n> > On Fri, Jan 3, 2020 at 7:10 AM Tomas Vondra\n> > <tomas.vondra@2ndquadrant.com> wrote:\n> > > Could we instead specify the number of blocks to prefetch? We'd probably\n> > > need to track additional details needed to determine number of blocks to\n> > > prefetch (essentially LSN for all prefetch requests).\n>\n> Here is a new WIP version of the patch set that does that. Changes:\n>\n> 1. It now uses effective_io_concurrency to control how many\n> concurrent prefetches to allow. It's possible that we should have a\n> different GUC to control \"maintenance\" users of concurrency I/O as\n> discussed elsewhere[1], but I'm staying out of that for now; if we\n> agree to do that for VACUUM etc, we can change it easily here. Note\n> that the value is percolated through the ComputeIoConcurrency()\n> function which I think we should discuss, but again that's off topic,\n> I just want to use the standard infrastructure here.\n\nThis totally makes sense, I believe the question \"how much to prefetch\"\neventually depends equally on a type of workload (correlates with how\nfar in WAL to read) and how much resources are available for prefetching\n(correlates with queue depth). 
But in the documentation it looks like\nmaintenance-io-concurrency is just an \"unimportant\" option, and I'm\nalmost sure it will be overlooked by many readers:\n\n The maximum distance to look ahead in the WAL during recovery, to find\n blocks to prefetch. Prefetching blocks that will soon be needed can\n reduce I/O wait times. The number of concurrent prefetches is limited\n by this setting as well as\n <xref linkend=\"guc-maintenance-io-concurrency\"/>. Setting it too high\n might be counterproductive, if it means that data falls out of the\n kernel cache before it is needed. If this value is specified without\n units, it is taken as bytes. A setting of -1 disables prefetching\n during recovery.\n\nMaybe it also makes sense to emphasize that maintenance-io-concurrency\ndirectly affects resource consumption and it's a \"primary control\"?\n\n> On Wed, Mar 18, 2020 at 06:18:44PM +1300, Thomas Munro wrote:\n>\n> Here's a new version that changes that part just a bit more, after a\n> brief chat with Andres about his async I/O plans. It seems clear that\n> returning an enum isn't very extensible, so I decided to try making\n> PrefetchBufferResult a struct whose contents can be extended in the\n> future. In this patch set it's still just used to distinguish 3 cases\n> (hit, miss, no file), but it's now expressed as a buffer and a flag to\n> indicate whether I/O was initiated. You could imagine that the second\n> thing might be replaced by a pointer to an async I/O handle you can\n> wait on or some other magical thing from the future.\n\nI like the idea of extensible PrefetchBufferResult. Just one commentary:\nif I understand correctly, the way it is being used together with\nprefetch_queue assumes one IO operation at a time. This limits potential\nextension of the underlying code, e.g. one can't implement some sort of\nbuffering of requests and submitting an iovec to a syscall, then\nprefetch_queue will no longer correctly represent inflight IO. 
Also,\ntaking into account that \"we don't have any awareness of when I/O really\ncompletes\", maybe in the future it makes sense to reconsider having the\nqueue in the prefetcher itself and rather ask for this information from\nthe underlying code?\n\n> On Wed, Apr 08, 2020 at 04:24:21AM +1200, Thomas Munro wrote:\n> > Is there a way we could have a \"historical\" version of at least some of\n> > these? An average queue depth, or such?\n>\n> Ok, I added simple online averages for distance and queue depth that\n> take a sample every time recovery advances by 256kB.\n\nMaybe it was discussed in the past in other threads. But if I understand\ncorrectly, this implementation weights all the samples equally. Since at\nthe moment it depends directly on replaying speed (so a lot of IO\ninvolved), couldn't a single outlier at the beginning skew this value,\nmaking it less useful? Does it make sense to decay old values?\n\n\n",
"msg_date": "Sun, 19 Apr 2020 13:48:20 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sun, Apr 19, 2020 at 11:46 PM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> Thanks for working on this patch, it seems like a great feature. I'm\n> probably a bit late to the party, but still want to make couple of\n> commentaries.\n\nHi Dmitry,\n\nThanks for your feedback and your interest in this work!\n\n> The patch indeed looks good, I couldn't find any significant issues so\n> far and almost all my questions I had while reading it were actually\n> answered in this thread. I'm still busy with benchmarking, mostly to see\n> how prefetching would work with different workload distributions and how\n> much the kernel will actually prefetch.\n\nCool.\n\nOne report I heard recently said that if you get rid of I/O stalls,\npread() becomes cheap enough that the much higher frequency lseek()\ncalls I've complained about elsewhere[1] become the main thing\nrecovery is doing, at least on some systems, but I haven't pieced\ntogether the conditions required yet. I'd be interested to know if\nyou see that.\n\n> In the meantime I have a few questions:\n>\n> > 1. It now uses effective_io_concurrency to control how many\n> > concurrent prefetches to allow. It's possible that we should have a\n> > different GUC to control \"maintenance\" users of concurrency I/O as\n> > discussed elsewhere[1], but I'm staying out of that for now; if we\n> > agree to do that for VACUUM etc, we can change it easily here. Note\n> > that the value is percolated through the ComputeIoConcurrency()\n> > function which I think we should discuss, but again that's off topic,\n> > I just want to use the standard infrastructure here.\n>\n> This totally makes sense, I believe the question \"how much to prefetch\"\n> eventually depends equally on a type of workload (correlates with how\n> far in WAL to read) and how much resources are available for prefetching\n> (correlates with queue depth). 
But in the documentation it looks like\n> maintenance-io-concurrency is just an \"unimportant\" option, and I'm\n> almost sure will be overlooked by many readers:\n>\n> The maximum distance to look ahead in the WAL during recovery, to find\n> blocks to prefetch. Prefetching blocks that will soon be needed can\n> reduce I/O wait times. The number of concurrent prefetches is limited\n> by this setting as well as\n> <xref linkend=\"guc-maintenance-io-concurrency\"/>. Setting it too high\n> might be counterproductive, if it means that data falls out of the\n> kernel cache before it is needed. If this value is specified without\n> units, it is taken as bytes. A setting of -1 disables prefetching\n> during recovery.\n>\n> Maybe it makes also sense to emphasize that maintenance-io-concurrency\n> directly affects resource consumption and it's a \"primary control\"?\n\nYou're right. I will add something in the next version to emphasise that.\n\n> > On Wed, Mar 18, 2020 at 06:18:44PM +1300, Thomas Munro wrote:\n> >\n> > Here's a new version that changes that part just a bit more, after a\n> > brief chat with Andres about his async I/O plans. It seems clear that\n> > returning an enum isn't very extensible, so I decided to try making\n> > PrefetchBufferResult a struct whose contents can be extended in the\n> > future. In this patch set it's still just used to distinguish 3 cases\n> > (hit, miss, no file), but it's now expressed as a buffer and a flag to\n> > indicate whether I/O was initiated. You could imagine that the second\n> > thing might be replaced by a pointer to an async I/O handle you can\n> > wait on or some other magical thing from the future.\n>\n> I like the idea of extensible PrefetchBufferResult. Just one commentary,\n> if I understand correctly the way how it is being used together with\n> prefetch_queue assumes one IO operation at a time. This limits potential\n> extension of the underlying code, e.g. 
one can't implement some sort of\n> buffering of requests and submitting an iovec to a syscall, then\n> prefetch_queue will no longer correctly represent inflight IO. Also,\n> taking into account that \"we don't have any awareness of when I/O really\n> completes\", maybe in the future it makes sense to reconsider having\n> the queue in the prefetcher itself and rather ask for this information\n> from the underlying code?\n\nYeah, you're right that it'd be good to be able to do some kind of\nbatching up of these requests to reduce system calls. Of course\nposix_fadvise() doesn't support that, but clearly in the AIO future[2]\nit would indeed make sense to buffer up a few of these and then make a\nsingle call to io_uring_enter() on Linux[3] or lio_listio() on a\nhypothetical POSIX AIO implementation[4]. (I'm not sure if there is a\nthing like that on Windows; at a glance, ReadFileScatter() is\nasynchronous (\"overlapped\") but works only on a single handle so it's\nlike a hypothetical POSIX aio_readv(), not like POSIX lio_listio()).\n\nPerhaps there could be an extra call PrefetchBufferSubmit() that you'd\ncall at appropriate times, but you obviously can't call it too\ninfrequently.\n\nAs for how to make the prefetch queue a reusable component, rather\nthan having a custom thing like that for each part of our system that\nwants to support prefetching: that's a really good question. I didn't\nsee how to do it, but maybe I didn't try hard enough. I looked at the\nthree users I'm aware of, namely this patch, a btree prefetching patch\nI haven't shared yet, and the existing bitmap heap scan code, and they\nall needed to have their own custom bookkeeping for this, and I\ncouldn't figure out how to share more infrastructure. 
In the case of\nthis patch, you currently need to do LSN-based bookkeeping to\nsimulate \"completion\", and that doesn't make sense for other users.\nMaybe it'll become clearer when we have support for completion\nnotification?\n\nSome related questions are why all these parts of our system that know\nhow to prefetch are allowed to do so independently without any kind of\nshared accounting, and why we don't give each tablespace (= our model\nof a device?) its own separate queue. I think it's OK to put these\nquestions off a bit longer until we have more infrastructure and\nexperience. Our current non-answer is at least consistent with our\nlack of an approach to system-wide memory and CPU accounting... I\npersonally think that a better XLogReader that can be used for\nprefetching AND recovery would be a higher priority than that.\n\n> > On Wed, Apr 08, 2020 at 04:24:21AM +1200, Thomas Munro wrote:\n> > > Is there a way we could have a \"historical\" version of at least some of\n> > > these? An average queue depth, or such?\n> >\n> > Ok, I added simple online averages for distance and queue depth that\n> > take a sample every time recovery advances by 256kB.\n>\n> Maybe it was discussed in the past in other threads. But if I understand\n> correctly, this implementation weights all the samples equally. Since at\n> the moment it depends directly on replaying speed (so a lot of IO\n> involved), couldn't a single outlier at the beginning skew this value,\n> making it less useful? Does it make sense to decay old values?\n\nHmm.\n\nI wondered about reporting one or perhaps three exponential moving\naverages (like Unix 1/5/15 minute load averages), but I didn't propose\nit because: (1) In crash recovery, you can't query it, you just get\nthe log message at the end, and an unweighted mean seems OK in that case,\nno? 
(you are not more interested in the I/O saturation at the end of\nthe recovery compared to the start of recovery, are you?), and (2) on a\nstreaming replica, if you want to sample the instantaneous depth and\ncompute an exponential moving average or some more exotic statistical\nconcoction in your monitoring tool, you're free to do so. I suppose\n(2) is an argument for removing the existing average completely from\nthe stat view; I put it in there at Andres's suggestion, but I'm not\nsure I really believe in it. Where is our average replication lag,\nand why don't we compute the stddev of X, Y or Z? I think we should\nprovide primary measurements and let people compute derived statistics\nfrom those.\n\nI suppose the reason for this request was the analogy with Linux\niostat -x's \"aqu-sz\", which is the primary way that people understand\ndevice queue depth on that OS. This number is actually computed by\niostat, not the kernel, so by analogy I could argue that a\nhypothetical pg_iostat program could compute that for you from raw\ningredients. AFAIK iostat computes the *unweighted* average queue\ndepth during the time between output lines, by observing changes in\nthe \"aveq\" (\"the sum of how long all requests have spent in flight, in\nmilliseconds\") and \"use\" (\"how many milliseconds there has been at\nleast one IO in flight\") fields of /proc/diskstats. But it's OK that\nit's unweighted, because it computes a new value for every line it\noutputs (ie every 5 seconds or whatever you asked for). It's not too\nclear how to do something like that here, but all suggestions are\nwelcome.\n\nOr maybe we'll have something more general that makes this more\nspecific thing irrelevant, in future AIO infrastructure work.\n\nOn a more superficial note, one thing I don't like about the last\nversion of the patch is the difference in the ordering of the words in\nthe GUC recovery_prefetch_distance and the view\npg_stat_prefetch_recovery. 
Hrmph.\n\n[1] https://www.postgresql.org/message-id/CA%2BhUKG%2BNPZeEdLXAcNr%2Bw0YOZVb0Un0_MwTBpgmmVDh7No2jbg%40mail.gmail.com\n[2] https://anarazel.de/talks/2020-01-31-fosdem-aio/aio.pdf\n[3] https://kernel.dk/io_uring.pdf\n[4] https://pubs.opengroup.org/onlinepubs/009695399/functions/lio_listio.html\n\n\n",
"msg_date": "Tue, 21 Apr 2020 17:26:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "> On Tue, Apr 21, 2020 at 05:26:52PM +1200, Thomas Munro wrote:\n>\n> One report I heard recently said that if you get rid of I/O stalls,\n> pread() becomes cheap enough that the much higher frequency lseek()\n> calls I've complained about elsewhere[1] become the main thing\n> recovery is doing, at least on some systems, but I haven't pieced\n> together the conditions required yet. I'd be interested to know if\n> you see that.\n\nAt the moment I've performed a couple of tests of replication in the case\nwhere almost everything is in memory (mostly by mistake, I was expecting\nthat a postgres replica within a badly memory-limited cgroup would cause\nmore IO, but it looks like the kernel does not evict pages anyway). Not\nsure if that's what you mean by getting rid of IO stalls, but in these\ntests profiling shows lseek & pread appearing in a similar number of\nsamples.\n\nIf I understand correctly, eventually one can measure prefetching\ninfluence by looking at redo function execution times (assuming that\nthe data they operate on is already prefetched, they should be\nfaster). 
I still have to clarify what is the exact reason, but even in\nthe situation described above (in memory) there is some visible\ndifference, e.g.\n\n # with prefetch\n Function = b'heap2_redo' [8064]\n nsecs : count distribution\n 4096 -> 8191 : 1213 | |\n 8192 -> 16383 : 66639 |****************************************|\n 16384 -> 32767 : 27846 |**************** |\n 32768 -> 65535 : 873 | |\n\n # without prefetch\n Function = b'heap2_redo' [17980]\n nsecs : count distribution\n 4096 -> 8191 : 1 | |\n 8192 -> 16383 : 66997 |****************************************|\n 16384 -> 32767 : 30966 |****************** |\n 32768 -> 65535 : 1602 | |\n\n # with prefetch\n Function = b'btree_redo' [8064]\n nsecs : count distribution\n 2048 -> 4095 : 0 | |\n 4096 -> 8191 : 246 |****************************************|\n 8192 -> 16383 : 5 | |\n 16384 -> 32767 : 2 | |\n\n # without prefetch\n Function = b'btree_redo' [17980]\n nsecs : count distribution\n 2048 -> 4095 : 0 | |\n 4096 -> 8191 : 82 |******************** |\n 8192 -> 16383 : 19 |**** |\n 16384 -> 32767 : 160 |****************************************|\n\nOf course it doesn't take into account time we spend doing extra\nsyscalls for prefetching, but still can give some interesting\ninformation.\n\n> > I like the idea of extensible PrefetchBufferResult. Just one commentary,\n> > if I understand correctly the way how it is being used together with\n> > prefetch_queue assumes one IO operation at a time. This limits potential\n> > extension of the underlying code, e.g. one can't implement some sort of\n> > buffering of requests and submitting an iovec to a sycall, then\n> > prefetch_queue will no longer correctly represent inflight IO. 
Also,\n> > taking into account that \"we don't have any awareness of when I/O really\n> > completes\", maybe in the future it makes to reconsider having queue in\n> > the prefetcher itself and rather ask for this information from the\n> > underlying code?\n>\n> Yeah, you're right that it'd be good to be able to do some kind of\n> batching up of these requests to reduce system calls. Of course\n> posix_fadvise() doesn't support that, but clearly in the AIO future[2]\n> it would indeed make sense to buffer up a few of these and then make a\n> single call to io_uring_enter() on Linux[3] or lio_listio() on a\n> hypothetical POSIX AIO implementation[4]. (I'm not sure if there is a\n> thing like that on Windows; at a glance, ReadFileScatter() is\n> asynchronous (\"overlapped\") but works only on a single handle so it's\n> like a hypothetical POSIX aio_readv(), not like POSIX lio_list()).\n>\n> Perhaps there could be an extra call PrefetchBufferSubmit() that you'd\n> call at appropriate times, but you obviously can't call it too\n> infrequently.\n>\n> As for how to make the prefetch queue a reusable component, rather\n> than having a custom thing like that for each part of our system that\n> wants to support prefetching: that's a really good question. I didn't\n> see how to do it, but maybe I didn't try hard enough. I looked at the\n> three users I'm aware of, namely this patch, a btree prefetching patch\n> I haven't shared yet, and the existing bitmap heap scan code, and they\n> all needed to have their own custom book keeping for this, and I\n> couldn't figure out how to share more infrastructure. 
In the case of\n> this patch, you currently need to do LSN based book keeping to\n> simulate \"completion\", and that doesn't make sense for other users.\n> Maybe it'll become clearer when we have support for completion\n> notification?\n\nYes, definitely.\n\n> Some related questions are why all these parts of our system that know\n> how to prefetch are allowed to do so independently without any kind of\n> shared accounting, and why we don't give each tablespace (= our model\n> of a device?) its own separate queue. I think it's OK to put these\n> questions off a bit longer until we have more infrastructure and\n> experience. Our current non-answer is at least consistent with our\n> lack of an approach to system-wide memory and CPU accounting... I\n> personally think that a better XLogReader that can be used for\n> prefetching AND recovery would be a higher priority than that.\n\nSure, this patch is quite valuable as it is, and those questions I've\nmentioned are targeting mostly future development.\n\n> > Maybe it was discussed in the past in other threads. But if I understand\n> > correctly, this implementation weights all the samples. Since at the\n> > moment it depends directly on replaying speed (so a lot of IO involved),\n> > couldn't it lead to a single outlier at the beginning skewing this value\n> > and make it less useful? Does it make sense to decay old values?\n>\n> Hmm.\n>\n> I wondered about a reporting one or perhaps three exponential moving\n> averages (like Unix 1/5/15 minute load averages), but I didn't propose\n> it because: (1) In crash recovery, you can't query it, you just get\n> the log message at the end, and mean unweighted seems OK in that case,\n> no? 
(you are not more interested in the I/O saturation at the end of\n> the recovery compared to the start of recovery are you?), and (2) on a\n> streaming replica, if you want to sample the instantaneous depth and\n> compute an exponential moving average or some more exotic statistical\n> concoction in your monitoring tool, you're free to do so. I suppose\n> (2) is an argument for removing the existing average completely from\n> the stat view; I put it in there at Andres's suggestion, but I'm not\n> sure I really believe in it. Where is our average replication lag,\n> and why don't we compute the stddev of X, Y or Z? I think we should\n> provide primary measurements and let people compute derived statistics\n> from those.\n\nFor once I disagree, since I believe this very approach, widely applied,\nleads to a slightly chaotic situation with monitoring. But of course\nyou're right, it has nothing to do with the patch itself. I also would\nbe in favour of removing the existing averages, unless Andres has more\narguments to keep it.\n\n\n",
"msg_date": "Sat, 25 Apr 2020 21:19:35 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "> On Sat, Apr 25, 2020 at 09:19:35PM +0200, Dmitry Dolgov wrote:\n> > On Tue, Apr 21, 2020 at 05:26:52PM +1200, Thomas Munro wrote:\n> >\n> > One report I heard recently said that if you get rid of I/O stalls,\n> > pread() becomes cheap enough that the much higher frequency lseek()\n> > calls I've complained about elsewhere[1] become the main thing\n> > recovery is doing, at least on some systems, but I haven't pieced\n> > together the conditions required yet. I'd be interested to know if\n> > you see that.\n>\n> At the moment I've performed a couple of tests of replication in the case\n> where almost everything is in memory (mostly by mistake, I was expecting\n> that a postgres replica within a badly memory-limited cgroup would cause\n> more IO, but it looks like the kernel does not evict pages anyway). Not\n> sure if that's what you mean by getting rid of IO stalls, but in these\n> tests profiling shows lseek & pread appearing in a similar number of\n> samples.\n>\n> If I understand correctly, eventually one can measure prefetching\n> influence by looking at redo function execution times (assuming that\n> the data they operate on is already prefetched, they should be\n> faster). I still have to clarify what is the exact reason, but even in\n> the situation described above (in memory) there is some visible\n> difference, e.g.\n\nI've finally performed a couple of tests involving more IO: a\nnot-that-big dataset of 1.5 GB for the replica, with memory allowing\n~ 1/6 of it to fit, default prefetching parameters and an update\nworkload with uniform distribution. 
Rather a small setup, but causes\nstable reading into the page cache on the replica and allows to see a\nvisible influence of the patch (more measurement samples tend to happen\nat lower latencies):\n\n # with patch\n Function = b'heap_redo' [206]\n nsecs : count distribution\n 1024 -> 2047 : 0 | |\n 2048 -> 4095 : 32833 |********************** |\n 4096 -> 8191 : 59476 |****************************************|\n 8192 -> 16383 : 18617 |************ |\n 16384 -> 32767 : 3992 |** |\n 32768 -> 65535 : 425 | |\n 65536 -> 131071 : 5 | |\n 131072 -> 262143 : 326 | |\n 262144 -> 524287 : 6 | |\n\n # without patch\n Function = b'heap_redo' [130]\n nsecs : count distribution\n 1024 -> 2047 : 0 | |\n 2048 -> 4095 : 20062 |*********** |\n 4096 -> 8191 : 70662 |****************************************|\n 8192 -> 16383 : 12895 |******* |\n 16384 -> 32767 : 9123 |***** |\n 32768 -> 65535 : 560 | |\n 65536 -> 131071 : 1 | |\n 131072 -> 262143 : 460 | |\n 262144 -> 524287 : 3 | |\n\nNot that there were any doubts, but at the same time it was surprising\nto me how good linux readahead works in this situation. 
The results\nabove are shown with disabled readahead for filesystem and device, and\nwithout that there was almost no difference, since a lot of IO was\navoided by readahead (which was in fact the majority of all reads):\n\n # with patch\n flags = Read\n usecs : count distribution\n 16 -> 31 : 0 | |\n 32 -> 63 : 1 |******** |\n 64 -> 127 : 5 |****************************************|\n\n flags = ReadAhead-Read\n usecs : count distribution\n 32 -> 63 : 0 | |\n 64 -> 127 : 131 |****************************************|\n 128 -> 255 : 12 |*** |\n 256 -> 511 : 6 |* |\n\n # without patch\n flags = Read\n usecs : count distribution\n 16 -> 31 : 0 | |\n 32 -> 63 : 0 | |\n 64 -> 127 : 4 |****************************************|\n\n flags = ReadAhead-Read\n usecs : count distribution\n 32 -> 63 : 0 | |\n 64 -> 127 : 143 |****************************************|\n 128 -> 255 : 20 |***** |\n\nNumbers of reads in this case were similar with and without patch, which\nmeans it couldn't be attributed to the situation when a page was read\ntoo early, then evicted and read again later.\n\n\n",
"msg_date": "Sat, 2 May 2020 17:14:23 +0200",
"msg_from": "Dmitry Dolgov <9erthalion6@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sun, May 3, 2020 at 3:12 AM Dmitry Dolgov <9erthalion6@gmail.com> wrote:\n> I've finally performed couple of tests involving more IO. The\n> not-that-big dataset of 1.5 GB for the replica with the memory allowing\n> fitting ~ 1/6 of it, default prefetching parameters and an update\n> workload with uniform distribution. Rather a small setup, but causes\n> stable reading into the page cache on the replica and allows to see a\n> visible influence of the patch (more measurement samples tend to happen\n> at lower latencies):\n\nThanks for these tests Dmitry. You didn't mention the details of the\nworkload, but one thing I'd recommend for a uniform/random workload\nthat's generating a lot of misses on the primary server using N\nbackends is to make sure that maintenance_io_concurrency is set to a\nnumber like N*2 or higher, and to look at the queue depth on both\nsystems with iostat -x 1. Then you can experiment with ALTER SYSTEM\nSET maintenance_io_concurrency = X; SELECT pg_reload_conf(); to try to\nunderstand the way it works; there is a point where you've set it high\nenough and the replica is able to handle the same rate of concurrent\nI/Os as the primary. The default of 10 is actually pretty low unless\nyou've only got ~4 backends generating random updates on the primary.\nThat's with full_page_writes=off; if you leave it on, it takes a while\nto get into a scenario where it has much effect.\n\nHere's a rebase, after the recent XLogReader refactoring.",
"msg_date": "Thu, 28 May 2020 23:12:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Thomas Munro escribió:\n\n> @@ -1094,8 +1103,16 @@ WALRead(XLogReaderState *state,\n> \t\t\tXLByteToSeg(recptr, nextSegNo, state->segcxt.ws_segsize);\n> \t\t\tstate->routine.segment_open(state, nextSegNo, &tli);\n> \n> -\t\t\t/* This shouldn't happen -- indicates a bug in segment_open */\n> -\t\t\tAssert(state->seg.ws_file >= 0);\n> +\t\t\t/* callback reported that there was no such file */\n> +\t\t\tif (state->seg.ws_file < 0)\n> +\t\t\t{\n> +\t\t\t\terrinfo->wre_errno = errno;\n> +\t\t\t\terrinfo->wre_req = 0;\n> +\t\t\t\terrinfo->wre_read = 0;\n> +\t\t\t\terrinfo->wre_off = startoff;\n> +\t\t\t\terrinfo->wre_seg = state->seg;\n> +\t\t\t\treturn false;\n> +\t\t\t}\n\nAh, this is what Michael was saying ... we need to fix WALRead so that\nit doesn't depend on segment_open always returning a good FD. This needs\na fix everywhere, not just here, plus an improved error reporting\ninterface.\n\nMaybe it does make sense to get it fixed in pg13 and avoid a break\nlater.\n\n-- \nÁlvaro Herrera https://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 28 May 2020 17:14:25 -0400",
"msg_from": "Alvaro Herrera <alvherre@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nI've spent some time testing this, mostly from the performance point of\nview. I've done a very simple thing, in order to have a reproducible test:\n\n1) I've initialized pgbench with scale 8000 (so ~120GB on a machine with\n   only 64GB of RAM)\n\n2) created a physical backup, enabled WAL archiving\n\n3) did 1h pgbench run with 32 clients\n\n4) disabled full-page writes and did another 1h pgbench run\n\nOnce I had this, I did a recovery using the physical backup and WAL\narchive, measuring how long it took to apply each WAL segment. First\nwithout any prefetching (current master), then twice with prefetching.\nFirst with default values (m_io_c=10, distance=256kB) and then with\nhigher values (100 + 2MB).\n\nI did this on two storage systems I have in the system - NVME SSD and\nSATA RAID (3 x 7.2k drives). So, a fast one and a slow one.\n\n\n1) NVME\n\nOn the NVME, this generates ~26k WAL segments (~400GB), and each of the\npgbench runs generates ~120M transactions (~33k tps). Of course, the\nvast majority of the WAL segments (~16k) comes from the first run,\nbecause there's a lot of FPI due to the random nature of the workload.\n\nI did not expect a significant improvement from the prefetching, as\nthe NVME is pretty good at handling random I/O. The total duration looks\nlike this:\n\n    no prefetch     prefetch    prefetch2\n          10618        10385         9403\n\nSo the default is a tiny bit faster, and the more aggressive config\nmakes it about 10% faster. Not bad, considering the expectations.\n\nAttached is a chart comparing the three runs. There are three clearly\nvisible parts - first the 1h run with f_p_w=on, with two checkpoints.\nThat's the first ~16k segments. Then there's a bit of a gap before the\nsecond pgbench run was started - I think it's mostly autovacuum etc. And\nthen at segment ~23k the second pgbench (f_p_w=off) starts.\n\nI think this shows the prefetching starts to help as the number of FPIs\ndecreases. 
It's subtle, but it's there.\n\n\n2) SATA\n\nOn SATA it's just ~550 segments (~8.5GB), and the pgbench runs generate\nonly about 1M transactions. Again, vast majority of the segments comes\nfrom the first run, due to FPI.\n\nIn this case, I don't have complete results, but after processing 542\nsegments (out of the ~550) it looks like this:\n\n no prefetch prefetch prefetch2\n 6644 6635 8282\n\nSo the no prefetch and \"default\" prefetch are roughly on par, but the\n\"aggressive\" prefetch is way slower. I'll get back to this shortly, but\nI'd like to point out this is entirely due to the \"no FPI\" pgbench,\nbecause after the first ~525 initial segments it looks like this:\n\n no prefetch prefetch prefetch2\n 58 65 57\n\nSo it goes very fast by the initial segments with plenty of FPIs, and\nthen we get to the \"no FPI\" segments and the prefetch either does not\nhelp or makes it slower.\n\nLooking at how long it takes to apply the last few segments, it looks\nlike this:\n\n no prefetch prefetch prefetch2\n 280 298 478\n\nwhich is not particularly great, I guess. There however seems to be\nsomething wrong, because with the prefetching I see this in the log:\n\nprefetch:\n2020-06-05 02:47:25.970 CEST 1591318045.970 [22961] LOG: recovery no\nlonger prefetching: unexpected pageaddr 108/E8000000 in log segment\n0000000100000108000000FF, offset 0\n\nprefetch2:\n2020-06-05 15:29:23.895 CEST 1591363763.895 [26676] LOG: recovery no\nlonger prefetching: unexpected pageaddr 108/E8000000 in log segment\n000000010000010900000001, offset 0\n\nWhich seems pretty suspicious, but I have no idea what's wrong. I admit\nthe archive/restore commands are a bit hacky, but I've only seen this\nwith prefetching on the SATA storage, while all other cases seem to be\njust fine. 
I haven't seen it on NVME (which processes much more WAL).\nAnd the SATA baseline (no prefetching) also worked fine.\n\nMoreover, the pageaddr value is the same in both cases, but the WAL\nsegments are different (but just one segment apart). Seems strange.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Fri, 5 Jun 2020 17:20:52 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Jun 05, 2020 at 05:20:52PM +0200, Tomas Vondra wrote:\n>\n> ...\n>\n>which is not particularly great, I guess. There however seems to be\n>something wrong, because with the prefetching I see this in the log:\n>\n>prefetch:\n>2020-06-05 02:47:25.970 CEST 1591318045.970 [22961] LOG: recovery no\n>longer prefetching: unexpected pageaddr 108/E8000000 in log segment\n>0000000100000108000000FF, offset 0\n>\n>prefetch2:\n>2020-06-05 15:29:23.895 CEST 1591363763.895 [26676] LOG: recovery no\n>longer prefetching: unexpected pageaddr 108/E8000000 in log segment\n>000000010000010900000001, offset 0\n>\n>Which seems pretty suspicious, but I have no idea what's wrong. I admit\n>the archive/restore commands are a bit hacky, but I've only seen this\n>with prefetching on the SATA storage, while all other cases seem to be\n>just fine. I haven't seen in on NVME (which processes much more WAL).\n>And the SATA baseline (no prefetching) also worked fine.\n>\n>Moreover, the pageaddr value is the same in both cases, but the WAL\n>segments are different (but just one segment apart). Seems strange.\n>\n\nI suspected it might be due to a somewhat hackish restore_command that\nprefetches some of the WAL segments, so I tried again with a much\nsimpler restore_command - essentially just:\n\n restore_command = 'cp /archive/%f %p.tmp && mv %p.tmp %p'\n\nwhich I think should be fine for testing purposes. And I got this:\n\n LOG: recovery no longer prefetching: unexpected pageaddr 108/57000000\n in log segment 0000000100000108000000FF, offset 0\n LOG: restored log file \"0000000100000108000000FF\" from archive\n\nwhich is the same segment as in the earlier examples, but with a\ndifferent pageaddr value. 
Of course, there's no such pageaddr in the WAL\nsegment (and recovery of that segment succeeds).\n\nSo I think there's something broken ...\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 5 Jun 2020 22:04:14 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Jun 05, 2020 at 10:04:14PM +0200, Tomas Vondra wrote:\n>On Fri, Jun 05, 2020 at 05:20:52PM +0200, Tomas Vondra wrote:\n>>\n>>...\n>>\n>>which is not particularly great, I guess. There however seems to be\n>>something wrong, because with the prefetching I see this in the log:\n>>\n>>prefetch:\n>>2020-06-05 02:47:25.970 CEST 1591318045.970 [22961] LOG: recovery no\n>>longer prefetching: unexpected pageaddr 108/E8000000 in log segment\n>>0000000100000108000000FF, offset 0\n>>\n>>prefetch2:\n>>2020-06-05 15:29:23.895 CEST 1591363763.895 [26676] LOG: recovery no\n>>longer prefetching: unexpected pageaddr 108/E8000000 in log segment\n>>000000010000010900000001, offset 0\n>>\n>>Which seems pretty suspicious, but I have no idea what's wrong. I admit\n>>the archive/restore commands are a bit hacky, but I've only seen this\n>>with prefetching on the SATA storage, while all other cases seem to be\n>>just fine. I haven't seen in on NVME (which processes much more WAL).\n>>And the SATA baseline (no prefetching) also worked fine.\n>>\n>>Moreover, the pageaddr value is the same in both cases, but the WAL\n>>segments are different (but just one segment apart). Seems strange.\n>>\n>\n>I suspected it might be due to a somewhat hackish restore_command that\n>prefetches some of the WAL segments, so I tried again with a much\n>simpler restore_command - essentially just:\n>\n> restore_command = 'cp /archive/%f %p.tmp && mv %p.tmp %p'\n>\n>which I think should be fine for testing purposes. And I got this:\n>\n> LOG: recovery no longer prefetching: unexpected pageaddr 108/57000000\n> in log segment 0000000100000108000000FF, offset 0\n> LOG: restored log file \"0000000100000108000000FF\" from archive\n>\n>which is the same segment as in the earlier examples, but with a\n>different pageaddr value. 
Of course, there's no such pageaddr in the WAL\n>segment (and recovery of that segment succeeds).\n>\n>So I think there's something broken ...\n>\n\nBTW in all three cases it happens right after the first restart point in\nthe WAL stream:\n\n LOG: restored log file \"0000000100000108000000FD\" from archive\n LOG: restartpoint starting: time\n LOG: restored log file \"0000000100000108000000FE\" from archive\n LOG: restartpoint complete: wrote 236092 buffers (22.5%); 0 WAL ...\n LOG: recovery restart point at 108/FC000028\n DETAIL: Last completed transaction was at log time 2020-06-04\n 15:27:00.95139+02.\n LOG: recovery no longer prefetching: unexpected pageaddr\n 108/57000000 in log segment 0000000100000108000000FF, offset 0\n LOG: restored log file \"0000000100000108000000FF\" from archive\n\nIt looks exactly like this in case of all 3 failures ...\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Fri, 5 Jun 2020 22:40:57 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Jun 6, 2020 at 8:41 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> BTW in all three cases it happens right after the first restart point in\n> the WAL stream:\n>\n> LOG: restored log file \"0000000100000108000000FD\" from archive\n> LOG: restartpoint starting: time\n> LOG: restored log file \"0000000100000108000000FE\" from archive\n> LOG: restartpoint complete: wrote 236092 buffers (22.5%); 0 WAL ...\n> LOG: recovery restart point at 108/FC000028\n> DETAIL: Last completed transaction was at log time 2020-06-04\n> 15:27:00.95139+02.\n> LOG: recovery no longer prefetching: unexpected pageaddr\n> 108/57000000 in log segment 0000000100000108000000FF, offset 0\n> LOG: restored log file \"0000000100000108000000FF\" from archive\n>\n> It looks exactly like this in case of all 3 failures ...\n\nHuh. Thanks! I'll try to reproduce this here.\n\n\n",
"msg_date": "Sat, 6 Jun 2020 09:15:14 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nI wonder if we can collect some stats to measure how effective the\nprefetching actually is. Ultimately we want something like cache hit\nratio, but we're only preloading into page cache, so we can't easily\nmeasure that. Perhaps we could measure I/O timings in redo, though?\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 6 Jun 2020 00:34:22 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> I wonder if we can collect some stats to measure how effective the\n> prefetching actually is. Ultimately we want something like cache hit\n> ratio, but we're only preloading into page cache, so we can't easily\n> measure that. Perhaps we could measure I/O timings in redo, though?\n\nThat would certainly be interesting, particularly as this optimization\nseems likely to be useful on some platforms (eg, zfs, where the\nfilesystem block size is larger than ours..) and less on others\n(traditional systems which have a smaller block size).\n\nThanks,\n\nStephen",
"msg_date": "Fri, 5 Jun 2020 20:36:32 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Jun 6, 2020 at 12:36 PM Stephen Frost <sfrost@snowman.net> wrote:\n> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> > I wonder if we can collect some stats to measure how effective the\n> > prefetching actually is. Ultimately we want something like cache hit\n> > ratio, but we're only preloading into page cache, so we can't easily\n> > measure that. Perhaps we could measure I/O timings in redo, though?\n>\n> That would certainly be interesting, particularly as this optimization\n> seems likely to be useful on some platforms (eg, zfs, where the\n> filesystem block size is larger than ours..) and less on others\n> (traditional systems which have a smaller block size).\n\nI know one way to get information about cache hit ratios without the\npage cache fuzz factor: if you combine this patch with Andres's\nstill-in-development AIO prototype and tell it to use direct IO, you\nget the undiluted truth about hits and misses by looking at the\n\"prefetch\" and \"skip_hit\" columns of the stats view. I'm hoping to\nhave a bit more to say about how this patch works as a client of that\nnew magic soon, but I also don't want to make this dependent on that\n(it's mostly orthogonal, apart from the \"how deep is the queue\" part\nwhich will improve with better information).\n\nFYI I am still trying to reproduce and understand the problem Tomas\nreported; more soon.\n\n\n",
"msg_date": "Thu, 2 Jul 2020 15:09:29 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Jul 02, 2020 at 03:09:29PM +1200, Thomas Munro wrote:\n>On Sat, Jun 6, 2020 at 12:36 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> * Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n>> > I wonder if we can collect some stats to measure how effective the\n>> > prefetching actually is. Ultimately we want something like cache hit\n>> > ratio, but we're only preloading into page cache, so we can't easily\n>> > measure that. Perhaps we could measure I/O timings in redo, though?\n>>\n>> That would certainly be interesting, particularly as this optimization\n>> seems likely to be useful on some platforms (eg, zfs, where the\n>> filesystem block size is larger than ours..) and less on others\n>> (traditional systems which have a smaller block size).\n>\n>I know one way to get information about cache hit ratios without the\n>page cache fuzz factor: if you combine this patch with Andres's\n>still-in-development AIO prototype and tell it to use direct IO, you\n>get the undiluted truth about hits and misses by looking at the\n>\"prefetch\" and \"skip_hit\" columns of the stats view. I'm hoping to\n>have a bit more to say about how this patch works as a client of that\n>new magic soon, but I also don't want to make this dependent on that\n>(it's mostly orthogonal, apart from the \"how deep is the queue\" part\n>which will improve with better information).\n>\n>FYI I am still trying to reproduce and understand the problem Tomas\n>reported; more soon.\n\nAny luck trying to reproduce thigs? Should I try again and collect some\nadditional debug info?\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 3 Aug 2020 17:46:59 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Tue, Aug 4, 2020 at 3:47 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Thu, Jul 02, 2020 at 03:09:29PM +1200, Thomas Munro wrote:\n> >FYI I am still trying to reproduce and understand the problem Tomas\n> >reported; more soon.\n>\n> Any luck trying to reproduce thigs? Should I try again and collect some\n> additional debug info?\n\nNo luck. I'm working on it now, and also trying to reduce the\noverheads so that we're not doing extra work when it doesn't help.\n\nBy the way, I also looked into recovery I/O stalls *other* than\nrelation buffer cache misses, and created\nhttps://commitfest.postgresql.org/29/2669/ to fix what I found. If\nyou avoid both kinds of stalls then crash recovery is finally CPU\nbound (to go faster after that we'll need parallel replay).\n\n\n",
"msg_date": "Thu, 6 Aug 2020 14:58:44 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Aug 06, 2020 at 02:58:44PM +1200, Thomas Munro wrote:\n>On Tue, Aug 4, 2020 at 3:47 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Thu, Jul 02, 2020 at 03:09:29PM +1200, Thomas Munro wrote:\n>> >FYI I am still trying to reproduce and understand the problem Tomas\n>> >reported; more soon.\n>>\n>> Any luck trying to reproduce thigs? Should I try again and collect some\n>> additional debug info?\n>\n>No luck. I'm working on it now, and also trying to reduce the\n>overheads so that we're not doing extra work when it doesn't help.\n>\n\nOK, I'll see if I can still reproduce it.\n\n>By the way, I also looked into recovery I/O stalls *other* than\n>relation buffer cache misses, and created\n>https://commitfest.postgresql.org/29/2669/ to fix what I found. If\n>you avoid both kinds of stalls then crash recovery is finally CPU\n>bound (to go faster after that we'll need parallel replay).\n\nYeah, I noticed. I'll take a look and do some testing in the next CF.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 6 Aug 2020 12:47:01 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Aug 6, 2020 at 10:47 PM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Thu, Aug 06, 2020 at 02:58:44PM +1200, Thomas Munro wrote:\n> >On Tue, Aug 4, 2020 at 3:47 AM Tomas Vondra\n> >> Any luck trying to reproduce thigs? Should I try again and collect some\n> >> additional debug info?\n> >\n> >No luck. I'm working on it now, and also trying to reduce the\n> >overheads so that we're not doing extra work when it doesn't help.\n>\n> OK, I'll see if I can still reproduce it.\n\nSince someone else ask me off-list, here's a rebase, with no\nfunctional changes. Soon I'll post a new improved version, but this\nversion just fixes the bitrot and hopefully turns cfbot green.",
"msg_date": "Thu, 13 Aug 2020 18:57:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "I have run some benchmarks for this patch. Overall it seems that there is a good improvement with the patch on recovery times:\r\n\r\nThe VMs I used have 32GB RAM, pgbench is initialized with a scale factor 3000(so it doesn’t fit to memory, ~45GB).\r\n\r\nIn order to avoid checkpoints during benchmark, max_wal_size(200GB) and checkpoint_timeout(200 mins) are set to a high value. \r\n\r\nThe run is cancelled when there is a reasonable amount of WAL ( > 25GB). The recovery times are measured from the REDO logs.\r\n\r\nI have tried combination of SSD, HDD, full_page_writes = on/off and max_io_concurrency = 10/50, the recovery times are as follows (in seconds):\r\n\r\n\t\t\t No prefetch\t | Default prefetch values |\t Default + max_io_concurrency = 50\r\nSSD, full_page_writes = on\t852\t\t301\t\t\t\t197\r\nSSD, full_page_writes = off\t1642\t\t1359\t\t\t\t1391\r\nHDD, full_page_writes = on\t6027\t\t6345\t\t\t\t6390\r\nHDD, full_page_writes = off\t738\t\t275\t\t\t\t192\r\n\r\nDefault prefetch values:\r\n-\tMax_recovery_prefetch_distance = 256KB\r\n-\tMax_io_concurrency = 10\r\n\r\nIt probably makes sense to compare each row separately as the size of WAL can be different.\r\n\r\nTalha.\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <thomas.munro@gmail.com> \r\nSent: Thursday, August 13, 2020 9:57 AM\r\nTo: Tomas Vondra <tomas.vondra@2ndquadrant.com>\r\nCc: Stephen Frost <sfrost@snowman.net>; Dmitry Dolgov <9erthalion6@gmail.com>; David Steele <david@pgmasters.net>; Andres Freund <andres@anarazel.de>; Alvaro Herrera <alvherre@2ndquadrant.com>; pgsql-hackers <pgsql-hackers@postgresql.org>\r\nSubject: [EXTERNAL] Re: WIP: WAL prefetch (another approach)\r\n\r\nOn Thu, Aug 6, 2020 at 10:47 PM Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\r\n> On Thu, Aug 06, 2020 at 02:58:44PM +1200, Thomas Munro wrote:\r\n> >On Tue, Aug 4, 2020 at 3:47 AM Tomas Vondra\r\n> >> Any luck trying to reproduce thigs? 
Should I try again and collect \r\n> >> some additional debug info?\r\n> >\r\n> >No luck. I'm working on it now, and also trying to reduce the \r\n> >overheads so that we're not doing extra work when it doesn't help.\r\n>\r\n> OK, I'll see if I can still reproduce it.\r\n\r\nSince someone else ask me off-list, here's a rebase, with no functional changes. Soon I'll post a new improved version, but this version just fixes the bitrot and hopefully turns cfbot green.\r\n",
"msg_date": "Wed, 26 Aug 2020 13:13:41 +0000",
"msg_from": "Sait Talha Nisanci <Sait.Nisanci@microsoft.com>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Aug 26, 2020 at 9:42 AM Sait Talha Nisanci\n<Sait.Nisanci@microsoft.com> wrote:\n> I have tried combination of SSD, HDD, full_page_writes = on/off and max_io_concurrency = 10/50, the recovery times are as follows (in seconds):\n>\n> No prefetch | Default prefetch values | Default + max_io_concurrency = 50\n> SSD, full_page_writes = on 852 301 197\n> SSD, full_page_writes = off 1642 1359 1391\n> HDD, full_page_writes = on 6027 6345 6390\n> HDD, full_page_writes = off 738 275 192\n\nThe regression on HDD with full_page_writes=on is interesting. I don't\nknow why that should happen, and I wonder if there is anything that\ncan be done to mitigate it.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 27 Aug 2020 09:51:56 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Sait Talha Nisanci (Sait.Nisanci@microsoft.com) wrote:\n> I have run some benchmarks for this patch. Overall it seems that there is a good improvement with the patch on recovery times:\n\nMaybe I missed it somewhere, but what's the OS/filesystem being used\nhere..? What's the filesystem block size..?\n\nThanks,\n\nStephen",
"msg_date": "Thu, 27 Aug 2020 09:55:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi Stephen,\n\nOS version is Ubuntu 18.04.5 LTS.\nFilesystem is ext4 and block size is 4KB.\n\nTalha.\n\n-----Original Message-----\nFrom: Stephen Frost <sfrost@snowman.net> \nSent: Thursday, August 27, 2020 4:56 PM\nTo: Sait Talha Nisanci <Sait.Nisanci@microsoft.com>\nCc: Thomas Munro <thomas.munro@gmail.com>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; Dmitry Dolgov <9erthalion6@gmail.com>; David Steele <david@pgmasters.net>; Andres Freund <andres@anarazel.de>; Alvaro Herrera <alvherre@2ndquadrant.com>; pgsql-hackers <pgsql-hackers@postgresql.org>\nSubject: Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)\n\nGreetings,\n\n* Sait Talha Nisanci (Sait.Nisanci@microsoft.com) wrote:\n> I have run some benchmarks for this patch. Overall it seems that there is a good improvement with the patch on recovery times:\n\nMaybe I missed it somewhere, but what's the OS/filesystem being used here..? What's the filesystem block size..?\n\nThanks,\n\nStephen\n\n\n",
"msg_date": "Thu, 27 Aug 2020 17:36:01 +0000",
"msg_from": "Sait Talha Nisanci <Sait.Nisanci@microsoft.com>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Sait Talha Nisanci (Sait.Nisanci@microsoft.com) wrote:\n> OS version is Ubuntu 18.04.5 LTS.\n> Filesystem is ext4 and block size is 4KB.\n\n[...]\n\n* Sait Talha Nisanci (Sait.Nisanci@microsoft.com) wrote:\n> I have run some benchmarks for this patch. Overall it seems that there is a good improvement with the patch on recovery times:\n> \n> The VMs I used have 32GB RAM, pgbench is initialized with a scale factor 3000(so it doesn’t fit to memory, ~45GB).\n> \n> In order to avoid checkpoints during benchmark, max_wal_size(200GB) and checkpoint_timeout(200 mins) are set to a high value. \n> \n> The run is cancelled when there is a reasonable amount of WAL ( > 25GB). The recovery times are measured from the REDO logs.\n> \n> I have tried combination of SSD, HDD, full_page_writes = on/off and max_io_concurrency = 10/50, the recovery times are as follows (in seconds):\n> \n> \t\t\t No prefetch\t | Default prefetch values |\t Default + max_io_concurrency = 50\n> SSD, full_page_writes = on\t852\t\t301\t\t\t\t197\n> SSD, full_page_writes = off\t1642\t\t1359\t\t\t\t1391\n> HDD, full_page_writes = on\t6027\t\t6345\t\t\t\t6390\n> HDD, full_page_writes = off\t738\t\t275\t\t\t\t192\n> \n> Default prefetch values:\n> -\tMax_recovery_prefetch_distance = 256KB\n> -\tMax_io_concurrency = 10\n> \n> It probably makes sense to compare each row separately as the size of WAL can be different.\n\nIs WAL FPW compression enabled..? I'm trying to figure out how, given\nwhat's been shared here, that replaying 25GB of WAL is being helped out\nby 2.5x thanks to prefetch in the SSD case. 
That prefetch is hurting in\nthe HDD case entirely makes sense to me- we're spending time reading\npages from the HDD, which is entirely pointless work given that we're\njust going to write over those pages entirely with FPWs.\n\nFurther, if there's 32GB of RAM, and WAL compression isn't enabled and\nthe WAL is only 25GB, then it's very likely that every page touched by\nthe WAL ends up in memory (shared buffers or fs cache), and with FPWs we\nshouldn't ever need to actually read from the storage to get those\npages, right? So how is prefetch helping so much..?\n\nI'm not sure that the 'full_page_writes = off' tests are very\ninteresting in this case, since you're going to get torn pages and\ntherefore corruption and hopefully no one is running with that\nconfiguration with this OS/filesystem.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 27 Aug 2020 14:26:42 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi, \n\nOn August 27, 2020 11:26:42 AM PDT, Stephen Frost <sfrost@snowman.net> wrote:\n>Is WAL FPW compression enabled..? I'm trying to figure out how, given\n>what's been shared here, that replaying 25GB of WAL is being helped out\n>by 2.5x thanks to prefetch in the SSD case. That prefetch is hurting\n>in\n>the HDD case entirely makes sense to me- we're spending time reading\n>pages from the HDD, which is entirely pointless work given that we're\n>just going to write over those pages entirely with FPWs.\n\nHm? At least earlier versions didn't do prefetching for records with an fpw, and only for subsequent records affecting the same or if not in s_b anymore.\n\nAndres\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 27 Aug 2020 11:40:28 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On August 27, 2020 11:26:42 AM PDT, Stephen Frost <sfrost@snowman.net> wrote:\n> >Is WAL FPW compression enabled..? I'm trying to figure out how, given\n> >what's been shared here, that replaying 25GB of WAL is being helped out\n> >by 2.5x thanks to prefetch in the SSD case. That prefetch is hurting\n> >in\n> >the HDD case entirely makes sense to me- we're spending time reading\n> >pages from the HDD, which is entirely pointless work given that we're\n> >just going to write over those pages entirely with FPWs.\n> \n> Hm? At least earlier versions didn't do prefetching for records with an fpw, and only for subsequent records affecting the same or if not in s_b anymore.\n\nWe don't actually read the page when we're replaying an FPW though..?\nIf we don't read it, and we entirely write the page from the FPW, how is\npre-fetching helping..? I understood how it could be helpful for\nfilesystems which have a larger block size than ours (eg: zfs w/ 16kb\nblock sizes where the kernel needs to get the whole 16kb block when we\nonly write 8kb to it), but that's apparently not the case here.\n\nSo- what is it that pre-fetching is doing to result in such an\nimprovement? Is there something lower level where the SSD physical\nblock size is coming into play, which is typically larger..? I wouldn't\nhave thought so, but perhaps that's the case..\n\nThanks,\n\nStephen",
"msg_date": "Thu, 27 Aug 2020 14:51:16 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 2:51 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > Hm? At least earlier versions didn't do prefetching for records with an fpw, and only for subsequent records affecting the same or if not in s_b anymore.\n>\n> We don't actually read the page when we're replaying an FPW though..?\n> If we don't read it, and we entirely write the page from the FPW, how is\n> pre-fetching helping..?\n\nSuppose there is a checkpoint. Then we replay a record with an FPW,\npre-fetching nothing. Then the buffer gets evicted from\nshared_buffers, and maybe the OS cache too. Then, before the next\ncheckpoint, we again replay a record for the same page. At this point,\npre-fetching should be helpful.\n\nAdmittedly, I don't quite understand whether that is what is happening\nin this test case, or why SDD vs. HDD should make any difference. But\nthere doesn't seem to be any reason why it doesn't make sense in\ntheory.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 27 Aug 2020 16:19:23 -0400",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Robert Haas (robertmhaas@gmail.com) wrote:\n> On Thu, Aug 27, 2020 at 2:51 PM Stephen Frost <sfrost@snowman.net> wrote:\n> > > Hm? At least earlier versions didn't do prefetching for records with an fpw, and only for subsequent records affecting the same or if not in s_b anymore.\n> >\n> > We don't actually read the page when we're replaying an FPW though..?\n> > If we don't read it, and we entirely write the page from the FPW, how is\n> > pre-fetching helping..?\n> \n> Suppose there is a checkpoint. Then we replay a record with an FPW,\n> pre-fetching nothing. Then the buffer gets evicted from\n> shared_buffers, and maybe the OS cache too. Then, before the next\n> checkpoint, we again replay a record for the same page. At this point,\n> pre-fetching should be helpful.\n\nSure- but if we're talking about 25GB of WAL, on a server that's got\n32GB, then why would those pages end up getting evicted from memory\nentirely? Particularly, enough of them to end up with such a huge\ndifference in replay time..\n\nI do agree that if we've got more outstanding WAL between checkpoints\nthan the system's got memory then that certainly changes things, but\nthat wasn't what I understood the case to be here.\n\n> Admittedly, I don't quite understand whether that is what is happening\n> in this test case, or why SDD vs. HDD should make any difference. But\n> there doesn't seem to be any reason why it doesn't make sense in\n> theory.\n\nI agree that this could be a reason, but it doesn't seem to quite fit in\nthis particular case given the amount of memory and WAL. I'm suspecting\nthat it's something else and I'd very much like to know if it's a\ngeneral \"this applies to all (most? a lot of?) SSDs because the\nhardware has a larger than 8KB page size and therefore the kernel has to\nread it\", or if it's something odd about this particular system and\ndoesn't apply generally.\n\nThanks,\n\nStephen",
"msg_date": "Thu, 27 Aug 2020 16:28:54 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Aug 27, 2020 at 04:28:54PM -0400, Stephen Frost wrote:\n>Greetings,\n>\n>* Robert Haas (robertmhaas@gmail.com) wrote:\n>> On Thu, Aug 27, 2020 at 2:51 PM Stephen Frost <sfrost@snowman.net> wrote:\n>> > > Hm? At least earlier versions didn't do prefetching for records with an fpw, and only for subsequent records affecting the same or if not in s_b anymore.\n>> >\n>> > We don't actually read the page when we're replaying an FPW though..?\n>> > If we don't read it, and we entirely write the page from the FPW, how is\n>> > pre-fetching helping..?\n>>\n>> Suppose there is a checkpoint. Then we replay a record with an FPW,\n>> pre-fetching nothing. Then the buffer gets evicted from\n>> shared_buffers, and maybe the OS cache too. Then, before the next\n>> checkpoint, we again replay a record for the same page. At this point,\n>> pre-fetching should be helpful.\n>\n>Sure- but if we're talking about 25GB of WAL, on a server that's got\n>32GB, then why would those pages end up getting evicted from memory\n>entirely? Particularly, enough of them to end up with such a huge\n>difference in replay time..\n>\n>I do agree that if we've got more outstanding WAL between checkpoints\n>than the system's got memory then that certainly changes things, but\n>that wasn't what I understood the case to be here.\n>\n\nI don't think it's very clear how much WAL there actually was in each\ncase - the message only said there was more than 25GB, but who knows how\nmany checkpoints that covers? In the cases with FPW=on this may easily\nbe much less than one checkpoint (because with scale 45GB an update to\nevery page will log 45GB of full-page images). It'd be interesting to\nsee some stats from pg_waldump etc.\n\n>> Admittedly, I don't quite understand whether that is what is happening\n>> in this test case, or why SDD vs. HDD should make any difference. 
But\n>> there doesn't seem to be any reason why it doesn't make sense in\n>> theory.\n>\n>I agree that this could be a reason, but it doesn't seem to quite fit in\n>this particular case given the amount of memory and WAL. I'm suspecting\n>that it's something else and I'd very much like to know if it's a\n>general \"this applies to all (most? a lot of?) SSDs because the\n>hardware has a larger than 8KB page size and therefore the kernel has to\n>read it\", or if it's something odd about this particular system and\n>doesn't apply generally.\n>\n\nNot sure. I doubt it has anything to do with the hardware page size,\nthat's mostly transparent to the kernel anyway. But it might be that the\nprefetching on a particular SSD has more overhead than what it saves.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 30 Aug 2020 00:14:50 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Thu, Aug 27, 2020 at 04:28:54PM -0400, Stephen Frost wrote:\n> >* Robert Haas (robertmhaas@gmail.com) wrote:\n> >>On Thu, Aug 27, 2020 at 2:51 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >>> > Hm? At least earlier versions didn't do prefetching for records with an fpw, and only for subsequent records affecting the same or if not in s_b anymore.\n> >>>\n> >>> We don't actually read the page when we're replaying an FPW though..?\n> >>> If we don't read it, and we entirely write the page from the FPW, how is\n> >>> pre-fetching helping..?\n> >>\n> >>Suppose there is a checkpoint. Then we replay a record with an FPW,\n> >>pre-fetching nothing. Then the buffer gets evicted from\n> >>shared_buffers, and maybe the OS cache too. Then, before the next\n> >>checkpoint, we again replay a record for the same page. At this point,\n> >>pre-fetching should be helpful.\n> >\n> >Sure- but if we're talking about 25GB of WAL, on a server that's got\n> >32GB, then why would those pages end up getting evicted from memory\n> >entirely? Particularly, enough of them to end up with such a huge\n> >difference in replay time..\n> >\n> >I do agree that if we've got more outstanding WAL between checkpoints\n> >than the system's got memory then that certainly changes things, but\n> >that wasn't what I understood the case to be here.\n> \n> I don't think it's very clear how much WAL there actually was in each\n> case - the message only said there was more than 25GB, but who knows how\n> many checkpoints that covers? In the cases with FPW=on this may easily\n> be much less than one checkpoint (because with scale 45GB an update to\n> every page will log 45GB of full-page images). 
It'd be interesting to\n> see some stats from pg_waldump etc.\n\nAlso in the message was this:\n\n--\nIn order to avoid checkpoints during benchmark, max_wal_size(200GB) and\ncheckpoint_timeout(200 mins) are set to a high value.\n--\n\nWhich lead me to suspect, at least, that this was much less than a\ncheckpoint, as you suggest. Also, given that the comment was 'run is\ncancelled when there is a reasonable amount of WAL (>25GB), seems likely\nthat it's at least *around* there.\n\nUltimately though, there just isn't enough information provided to\nreally be able to understand what's going on. I agree, pg_waldump stats\nwould be useful.\n\n> >>Admittedly, I don't quite understand whether that is what is happening\n> >>in this test case, or why SDD vs. HDD should make any difference. But\n> >>there doesn't seem to be any reason why it doesn't make sense in\n> >>theory.\n> >\n> >I agree that this could be a reason, but it doesn't seem to quite fit in\n> >this particular case given the amount of memory and WAL. I'm suspecting\n> >that it's something else and I'd very much like to know if it's a\n> >general \"this applies to all (most? a lot of?) SSDs because the\n> >hardware has a larger than 8KB page size and therefore the kernel has to\n> >read it\", or if it's something odd about this particular system and\n> >doesn't apply generally.\n> \n> Not sure. I doubt it has anything to do with the hardware page size,\n> that's mostly transparent to the kernel anyway. But it might be that the\n> prefetching on a particular SSD has more overhead than what it saves.\n\nRight- I wouldn't have thought the hardware page size would matter\neither, but it's entirely possible that assumption is wrong and that it\ndoes matter for some reason- perhaps with just some SSDs, or maybe with\na lot of them, or maybe there's something else entirely going on. 
About\nall I feel like I can say at the moment is that I'm very interested in\nways to make WAL replay go faster and it'd be great to get more\ninformation about what's going on here to see if there's something we\ncan do to generally improve WAL replay.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 30 Aug 2020 08:24:01 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nThe WAL size for \"SSD, full_page_writes=on\" was 36GB. I currently don't have the exact size for the other rows because my test VMs got auto-deleted. I can possibly redo the benchmark to get pg_waldump stats for each row.\n\nBest,\nTalha.\n\n\n-----Original Message-----\nFrom: Stephen Frost <sfrost@snowman.net> \nSent: Sunday, August 30, 2020 3:24 PM\nTo: Tomas Vondra <tomas.vondra@2ndquadrant.com>\nCc: Robert Haas <robertmhaas@gmail.com>; Andres Freund <andres@anarazel.de>; Sait Talha Nisanci <Sait.Nisanci@microsoft.com>; Thomas Munro <thomas.munro@gmail.com>; Dmitry Dolgov <9erthalion6@gmail.com>; David Steele <david@pgmasters.net>; Alvaro Herrera <alvherre@2ndquadrant.com>; pgsql-hackers <pgsql-hackers@postgresql.org>\nSubject: Re: [EXTERNAL] Re: WIP: WAL prefetch (another approach)\n\nGreetings,\n\n* Tomas Vondra (tomas.vondra@2ndquadrant.com) wrote:\n> On Thu, Aug 27, 2020 at 04:28:54PM -0400, Stephen Frost wrote:\n> >* Robert Haas (robertmhaas@gmail.com) wrote:\n> >>On Thu, Aug 27, 2020 at 2:51 PM Stephen Frost <sfrost@snowman.net> wrote:\n> >>> > Hm? At least earlier versions didn't do prefetching for records with an fpw, and only for subsequent records affecting the same or if not in s_b anymore.\n> >>>\n> >>> We don't actually read the page when we're replaying an FPW though..?\n> >>> If we don't read it, and we entirely write the page from the FPW, \n> >>> how is pre-fetching helping..?\n> >>\n> >>Suppose there is a checkpoint. Then we replay a record with an FPW, \n> >>pre-fetching nothing. Then the buffer gets evicted from \n> >>shared_buffers, and maybe the OS cache too. Then, before the next \n> >>checkpoint, we again replay a record for the same page. At this \n> >>point, pre-fetching should be helpful.\n> >\n> >Sure- but if we're talking about 25GB of WAL, on a server that's got \n> >32GB, then why would those pages end up getting evicted from memory \n> >entirely? 
Particularly, enough of them to end up with such a huge \n> >difference in replay time..\n> >\n> >I do agree that if we've got more outstanding WAL between checkpoints \n> >than the system's got memory then that certainly changes things, but \n> >that wasn't what I understood the case to be here.\n> \n> I don't think it's very clear how much WAL there actually was in each \n> case - the message only said there was more than 25GB, but who knows \n> how many checkpoints that covers? In the cases with FPW=on this may \n> easily be much less than one checkpoint (because with scale 45GB an \n> update to every page will log 45GB of full-page images). It'd be \n> interesting to see some stats from pg_waldump etc.\n\nAlso in the message was this:\n\n--\nIn order to avoid checkpoints during benchmark, max_wal_size(200GB) and\ncheckpoint_timeout(200 mins) are set to a high value.\n--\n\nWhich led me to suspect, at least, that this was much less than a checkpoint, as you suggest. Also, given that the comment was 'run is cancelled when there is a reasonable amount of WAL (>25GB)', seems likely that it's at least *around* there.\n\nUltimately though, there just isn't enough information provided to really be able to understand what's going on. I agree, pg_waldump stats would be useful.\n\n> >>Admittedly, I don't quite understand whether that is what is \n> >>happening in this test case, or why SSD vs. HDD should make any \n> >>difference. But there doesn't seem to be any reason why it doesn't \n> >>make sense in theory.\n> >\n> >I agree that this could be a reason, but it doesn't seem to quite fit \n> >in this particular case given the amount of memory and WAL. I'm \n> >suspecting that it's something else and I'd very much like to know if \n> >it's a general \"this applies to all (most? a lot of?) 
SSDs because \n> >the hardware has a larger than 8KB page size and therefore the kernel \n> >has to read it\", or if it's something odd about this particular \n> >system and doesn't apply generally.\n> \n> Not sure. I doubt it has anything to do with the hardware page size, \n> that's mostly transparent to the kernel anyway. But it might be that \n> the prefetching on a particular SSD has more overhead than what it saves.\n\nRight- I wouldn't have thought the hardware page size would matter either, but it's entirely possible that assumption is wrong and that it does matter for some reason- perhaps with just some SSDs, or maybe with a lot of them, or maybe there's something else entirely going on. About all I feel like I can say at the moment is that I'm very interested in ways to make WAL replay go faster and it'd be great to get more information about what's going on here to see if there's something we can do to generally improve WAL replay.\n\nThanks,\n\nStephen\n\n\n",
"msg_date": "Tue, 1 Sep 2020 06:14:52 +0000",
"msg_from": "Sait Talha Nisanci <Sait.Nisanci@microsoft.com>",
"msg_from_op": false,
"msg_subject": "RE: [EXTERNAL] Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Aug 13, 2020 at 06:57:20PM +1200, Thomas Munro wrote:\n>On Thu, Aug 6, 2020 at 10:47 PM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Thu, Aug 06, 2020 at 02:58:44PM +1200, Thomas Munro wrote:\n>> >On Tue, Aug 4, 2020 at 3:47 AM Tomas Vondra\n>> >> Any luck trying to reproduce thigs? Should I try again and collect some\n>> >> additional debug info?\n>> >\n>> >No luck. I'm working on it now, and also trying to reduce the\n>> >overheads so that we're not doing extra work when it doesn't help.\n>>\n>> OK, I'll see if I can still reproduce it.\n>\n>Since someone else ask me off-list, here's a rebase, with no\n>functional changes. Soon I'll post a new improved version, but this\n>version just fixes the bitrot and hopefully turns cfbot green.\n\nI've decided to do some tests with this patch version, but I immediately\nran into issues. What I did was initializing a 32GB pgbench database,\nbacked it up (shutdown + tar) and then ran 2h pgbench with archiving.\nAnd then I restored the backed-up data directory and instructed it to\nreplay WAL from the archive. There's about 16k WAL segments, so about\n256GB of WAL.\n\nUnfortunately, the very first thing that happens after starting the\nrecovery is this:\n\n LOG: starting archive recovery\n LOG: restored log file \"000000010000001600000080\" from archive\n LOG: consistent recovery state reached at 16/800000A0\n LOG: redo starts at 16/800000A0\n LOG: database system is ready to accept read only connections\n LOG: recovery started prefetching on timeline 1 at 0/800000A0\n LOG: recovery no longer prefetching: unexpected pageaddr 8/84000000 in log segment 000000010000001600000081, offset 0\n LOG: restored log file \"000000010000001600000081\" from archive\n LOG: restored log file \"000000010000001600000082\" from archive\n\nSo we start applying 000000010000001600000081 and it fails almost\nimmediately on the first segment. 
This is confirmed by prefetch stats,\nwhich look like this:\n\n -[ RECORD 1 ]---+-----------------------------\n stats_reset | 2020-09-01 15:02:31.18766+02\n prefetch | 1044\n skip_hit | 1995\n skip_new | 87\n skip_fpw | 2108\n skip_seq | 27\n distance | 0\n queue_depth | 0\n avg_distance | 135838.95\n avg_queue_depth | 8.852459\n\nSo we do a little bit of prefetching and then it gets disabled :-(\n\nThe segment looks perfectly fine when inspected using pg_waldump, see\nthe attached file.\n\nI've tested this applied on 6ca547cf75ef6e922476c51a3fb5e253eef5f1b6,\nand the failure seems fairly similar to what I reported before, except\nthat now it happened right at the very beginning.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Tue, 1 Sep 2020 15:14:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Sep 2, 2020 at 1:14 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> from the archive\n\nAhh, so perhaps that's the key.\n\n> I've tested this applied on 6ca547cf75ef6e922476c51a3fb5e253eef5f1b6,\n> and the failure seems fairly similar to what I reported before, except\n> that now it happened right at the very beginning.\n\nThanks, will see if I can work out why. My newer version probably has\nthe same problem.\n\n\n",
"msg_date": "Wed, 2 Sep 2020 02:05:10 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Sep 02, 2020 at 02:05:10AM +1200, Thomas Munro wrote:\n>On Wed, Sep 2, 2020 at 1:14 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> from the archive\n>\n>Ahh, so perhaps that's the key.\n>\n\nMaybe. For the record, the commands look like this:\n\narchive_command = 'gzip -1 -c %p > /mnt/raid/wal-archive/%f.gz'\n\nrestore_command = 'gunzip -c /mnt/raid/wal-archive/%f.gz > %p.tmp && mv %p.tmp %p'\n\n>> I've tested this applied on 6ca547cf75ef6e922476c51a3fb5e253eef5f1b6,\n>> and the failure seems fairly similar to what I reported before, except\n>> that now it happened right at the very beginning.\n>\n>Thanks, will see if I can work out why. My newer version probably has\n>the same problem.\n\nOK.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Tue, 1 Sep 2020 16:18:26 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Sep 2, 2020 at 2:18 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> On Wed, Sep 02, 2020 at 02:05:10AM +1200, Thomas Munro wrote:\n> >On Wed, Sep 2, 2020 at 1:14 AM Tomas Vondra\n> ><tomas.vondra@2ndquadrant.com> wrote:\n> >> from the archive\n> >\n> >Ahh, so perhaps that's the key.\n>\n> Maybe. For the record, the commands look like this:\n>\n> archive_command = 'gzip -1 -c %p > /mnt/raid/wal-archive/%f.gz'\n>\n> restore_command = 'gunzip -c /mnt/raid/wal-archive/%f.gz > %p.tmp && mv %p.tmp %p'\n\nYeah, sorry, I goofed here by not considering archive recovery\nproperly. I have special handling for crash recovery from files in\npg_wal (XLRO_END, means read until you run out of files) and streaming\nreplication (XLRO_WALRCV_WRITTEN, means read only as far as the wal\nreceiver has advertised as written in shared memory), as a way to\ncontrol the ultimate limit on how far ahead to read when\nmaintenance_io_concurrency and max_recovery_prefetch_distance don't\nlimit you first. But if you recover from a base backup with a WAL\narchive, it uses the XLRO_END policy which can run out of files just\nbecause a new file hasn't been restored yet, so it gives up\nprefetching too soon, as you're seeing. That doesn't cause any\ndamage, but it stops doing anything useful because the prefetcher\nthinks its job is finished.\n\nIt'd be possible to fix this somehow in the two-XLogReader design, but\nsince I'm testing a new version that has a unified\nXLogReader-with-read-ahead I'm not going to try to do that. I've\nadded a basebackup-with-archive recovery to my arsenal of test\nworkloads to make sure I don't forget about archive recovery mode\nagain, but I think it's actually harder to get this wrong in the new\ndesign. 
In the meantime, if you are still interested in studying the\npotential speed-up from WAL prefetching using the most recently shared\ntwo-XLogReader patch, you'll need to unpack all your archived WAL\nfiles into pg_wal manually beforehand.\n\n\n",
"msg_date": "Sat, 5 Sep 2020 12:05:52 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Sep 05, 2020 at 12:05:52PM +1200, Thomas Munro wrote:\n>On Wed, Sep 2, 2020 at 2:18 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> On Wed, Sep 02, 2020 at 02:05:10AM +1200, Thomas Munro wrote:\n>> >On Wed, Sep 2, 2020 at 1:14 AM Tomas Vondra\n>> ><tomas.vondra@2ndquadrant.com> wrote:\n>> >> from the archive\n>> >\n>> >Ahh, so perhaps that's the key.\n>>\n>> Maybe. For the record, the commands look like this:\n>>\n>> archive_command = 'gzip -1 -c %p > /mnt/raid/wal-archive/%f.gz'\n>>\n>> restore_command = 'gunzip -c /mnt/raid/wal-archive/%f.gz > %p.tmp && mv %p.tmp %p'\n>\n>Yeah, sorry, I goofed here by not considering archive recovery\n>properly. I have special handling for crash recovery from files in\n>pg_wal (XLRO_END, means read until you run out of files) and streaming\n>replication (XLRO_WALRCV_WRITTEN, means read only as far as the wal\n>receiver has advertised as written in shared memory), as a way to\n>control the ultimate limit on how far ahead to read when\n>maintenance_io_concurrency and max_recovery_prefetch_distance don't\n>limit you first. But if you recover from a base backup with a WAL\n>archive, it uses the XLRO_END policy which can run out of files just\n>because a new file hasn't been restored yet, so it gives up\n>prefetching too soon, as you're seeing. That doesn't cause any\n>damage, but it stops doing anything useful because the prefetcher\n>thinks its job is finished.\n>\n>It'd be possible to fix this somehow in the two-XLogReader design, but\n>since I'm testing a new version that has a unified\n>XLogReader-with-read-ahead I'm not going to try to do that. I've\n>added a basebackup-with-archive recovery to my arsenal of test\n>workloads to make sure I don't forget about archive recovery mode\n>again, but I think it's actually harder to get this wrong in the new\n>design. 
In the meantime, if you are still interested in studying the\n>potential speed-up from WAL prefetching using the most recently shared\n>two-XLogReader patch, you'll need to unpack all your archived WAL\n>files into pg_wal manually beforehand.\n\nOK, thanks for looking into this. I guess I'll wait for an updated patch\nbefore testing this further. The storage has limited capacity so I'd\nhave to either reduce the amount of data/WAL or juggle with the WAL\nsegments somehow. Doesn't seem worth it.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 9 Sep 2020 01:16:27 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Sep 9, 2020 at 11:16 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> OK, thanks for looking into this. I guess I'll wait for an updated patch\n> before testing this further. The storage has limited capacity so I'd\n> have to either reduce the amount of data/WAL or juggle with the WAL\n> segments somehow. Doesn't seem worth it.\n\nHere's a new WIP version that works for archive-based recovery in my tests.\n\nThe main change I have been working on is that there is now just a\nsingle XLogReaderState, so no more double-reading and double-decoding\nof the WAL. It provides XLogReadRecord(), as before, but now you can\nalso read further ahead with XLogReadAhead(). The user interface is\nmuch like before, except that the GUCs changed a bit. They are now:\n\n recovery_prefetch=on\n recovery_prefetch_fpw=off\n wal_decode_buffer_size=256kB\n maintenance_io_concurrency=10\n\nI recommend setting maintenance_io_concurrency and\nwal_decode_buffer_size much higher than those defaults.\n\nThere are a few TODOs and questions remaining. One issue I'm\nwondering about is whether it is OK that bulky FPI data is now\nmemcpy'd into the decode buffer, whereas before we avoided that\nsometimes, when it didn't happen to cross a page boundary; I have some\nideas on how to do better (basically two levels of ring buffer) but I\nhaven't looked into that yet. Another issue is the new 'nowait' API\nfor the page-read callback; I'm trying to figure out if that is\nsufficient, or something more sophisticated including perhaps a\ndifferent return value is required. Another thing I'm wondering about\nis whether I have timeline changes adequately handled.\n\nThis design opens up a lot of possibilities for future performance\nimprovements. Some example:\n\n1. 
By adding some workspace to decoded records, the prefetcher can\nleave breadcrumbs for XLogReadBufferForRedoExtended(), so that it\nusually avoids the need for a second buffer mapping table lookup.\nIncidentally this also skips the hot smgropen() calls that Jakub\ncomplained about. I have added an experimental patch like that,\nbut I need to look into the interlocking some more.\n\n2. By inspecting future records in the record->next chain, a redo\nfunction could merge work in various ways in quite a simple and\nlocalised way. A couple of examples:\n2.1. If there is a sequence of records of the same type touching the\nsame page, you could process all of them while you have the page lock.\n2.2. If there is a sequence of relation extensions (say, a sequence\nof multi-tuple inserts to the end of a relation, as commonly seen in\nbulk data loads) then instead of generating many pwrite(8KB of\nzeroes) syscalls record-by-record to extend the relation, a single\nposix_fallocate(1MB) could extend the file in one shot. Assuming the\nbgwriter is running and doing a good job, this would remove most of\nthe system calls from bulk-load-recovery.\n\n3. More sophisticated analysis could find records to merge that are a\nbit further apart, under carefully controlled conditions; for example\nif you have a sequence like heap-insert, btree-insert, heap-insert,\nbtree-insert, ... then a simple next-record system like 2 won't see\nthe opportunities, but something a teensy bit smarter could.\n\n4. Since the decoding buffer can be placed in shared memory (decoded\nrecords contain pointers but don't point to any other memory\nregion, with the exception of clearly marked oversized records), we\ncould begin to contemplate handing work off to other processes, given\na clever dependency analysis scheme and some more infrastructure.",
"msg_date": "Thu, 24 Sep 2020 11:38:45 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 11:38 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Sep 9, 2020 at 11:16 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> > OK, thanks for looking into this. I guess I'll wait for an updated patch\n> > before testing this further. The storage has limited capacity so I'd\n> > have to either reduce the amount of data/WAL or juggle with the WAL\n> > segments somehow. Doesn't seem worth it.\n>\n> Here's a new WIP version that works for archive-based recovery in my tests.\n\nRebased over recent merge conflicts in xlog.c. I also removed a stray\ndebugging message.\n\nOne problem the current patch has is that if you use something like\npg_standby, that is, a restore command that waits for more data, then\nit'll block waiting for WAL when it's trying to prefetch, which means\nthat replay is delayed. I'm not sure what to think about that yet.",
"msg_date": "Tue, 6 Oct 2020 18:04:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Sep 24, 2020 at 11:38:45AM +1200, Thomas Munro wrote:\n>On Wed, Sep 9, 2020 at 11:16 AM Tomas Vondra\n><tomas.vondra@2ndquadrant.com> wrote:\n>> OK, thanks for looking into this. I guess I'll wait for an updated patch\n>> before testing this further. The storage has limited capacity so I'd\n>> have to either reduce the amount of data/WAL or juggle with the WAL\n>> segments somehow. Doesn't seem worth it.\n>\n>Here's a new WIP version that works for archive-based recovery in my tests.\n>\n>The main change I have been working on is that there is now just a\n>single XLogReaderState, so no more double-reading and double-decoding\n>of the WAL. It provides XLogReadRecord(), as before, but now you can\n>also read further ahead with XLogReadAhead(). The user interface is\n>much like before, except that the GUCs changed a bit. They are now:\n>\n> recovery_prefetch=on\n> recovery_prefetch_fpw=off\n> wal_decode_buffer_size=256kB\n> maintenance_io_concurrency=10\n>\n>I recommend setting maintenance_io_concurrency and\n>wal_decode_buffer_size much higher than those defaults.\n>\n\nI think you've left the original GUC (replaced by the buffer size) in\nthe postgresql.conf.sample file. Confused me for a bit ;-)\n\nI've done a bit of testing and so far it seems to work with WAL archive,\nso I'll do more testing and benchmarking over the next couple days.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Wed, 7 Oct 2020 02:58:38 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nI repeated the same testing I did before - I started with a 32GB pgbench\ndatabase with archiving, run a pgbench for 1h to generate plenty of WAL,\nand then performed recovery from a snapshot + archived WAL on different\nstorage types. The instance was running on NVMe SSD, allowing it ro\ngenerate ~200GB of WAL in 1h.\n\nThe recovery was done on two storage types - SATA RAID0 with 3 x 7.2k\nspinning drives and NVMe SSD. On each storage I tested three configs -\ndisabled prefetching, defaults and increased values:\n\n wal_decode_buffer_size = 4MB (so 8x the default)\n maintenance_io_concurrency = 100 (so 10x the default)\n\nFWIW there's a bunch of issues with the GUCs - the .conf.sample file\ndoes not include e.g. recovery_prefetch, and instead includes\n#max_recovery_prefetch_distance which was however replaced by\nwal_decode_buffer_size. Another thing is that the actual default value\ndiffer from the docs - e.g. the docs say that wal_decode_buffer_size is\n256kB by default, when in fact it's 512kB.\n\nNow, some results ...\n\n1) NVMe\n\nFro the fast storage, there's a modest improvement. The time it took to\nrecover the ~13k WAL segments are these\n\n no prefetch: 5532s\n default: 4613s\n increased: 4549s\n\nSo the speedup from enabled prefetch is ~20% but increasing the values\nto make it more aggressive has little effect. Fair enough, the NVMe\nis probably fast enough to not benefig from longer I/O queues here.\n\nThis is a bit misleading though, because the effectivity of prfetching\nvery much depends on the fraction of FPI in the WAL stream - and right\nafter checkpoint that's most of the WAL, which makes the prefetching\nless efficient. We still have to parse the WAL etc. without actually\nprefetching anything, so it's pure overhead.\n\nSo I've also generated a chart showing time (in milliseconds) needed to\napply individual WAL segments. 
It clearly shows that there are 3\ncheckpoints, and that for each checkpoint it's initially very cheap\n(thanks to FPI) and as the fraction of FPIs drops the redo gets more\nexpensive. At which point the prefetch actually helps, by up to 30% in\nsome cases (so a bit more than the overall speedup). All of this is\nexpected, of course.\n\n\n2) 3 x 7.2k SATA RAID0\n\nFor the spinning rust, I had to make some compromises. It's not feasible\nto apply all the 200GB of WAL - it would take way too long. I only\napplied ~2600 segments for each configuration (so not even one whole\ncheckpoint), and even that took ~20h in each case.\n\nThe durations look like this:\n\n no prefetch: 72446s\n default: 73653s\n increased: 55409s\n\nSo in this case the default setting is way too low - it actually makes\nthe recovery a bit slower, while with increased values there's ~25%\nspeedup, which is nice. I assume that if a larger number of WAL segments\nwas applied (e.g. the whole checkpoint), the prefetch numbers would be\na bit better - the initial FPI part would play a smaller role.\n\n From the attached \"average per segment\" chart you can see that the basic\nbehavior is about the same as for NVMe - initially it's slower due to\nFPIs in the WAL stream, and then it gets ~30% faster.\n\n\nOverall I think it looks good. I haven't looked at the code very much,\nand I can't comment on the potential optimizations mentioned a couple\ndays ago yet.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services",
"msg_date": "Sat, 10 Oct 2020 13:29:35 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sun, Oct 11, 2020 at 12:29 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n> I repeated the same testing I did before - I started with a 32GB pgbench\n> database with archiving, run a pgbench for 1h to generate plenty of WAL,\n> and then performed recovery from a snapshot + archived WAL on different\n> storage types. The instance was running on NVMe SSD, allowing it ro\n> generate ~200GB of WAL in 1h.\n\nThanks for running these tests! And sorry for the delay in replying.\n\n> The recovery was done on two storage types - SATA RAID0 with 3 x 7.2k\n> spinning drives and NVMe SSD. On each storage I tested three configs -\n> disabled prefetching, defaults and increased values:\n>\n> wal_decode_buffer_size = 4MB (so 8x the default)\n> maintenance_io_concurrency = 100 (so 10x the default)\n>\n> FWIW there's a bunch of issues with the GUCs - the .conf.sample file\n> does not include e.g. recovery_prefetch, and instead includes\n> #max_recovery_prefetch_distance which was however replaced by\n> wal_decode_buffer_size. Another thing is that the actual default value\n> differ from the docs - e.g. the docs say that wal_decode_buffer_size is\n> 256kB by default, when in fact it's 512kB.\n\nOops. Fixed, and rebased.\n\n> Now, some results ...\n>\n> 1) NVMe\n>\n> Fro the fast storage, there's a modest improvement. The time it took to\n> recover the ~13k WAL segments are these\n>\n> no prefetch: 5532s\n> default: 4613s\n> increased: 4549s\n>\n> So the speedup from enabled prefetch is ~20% but increasing the values\n> to make it more aggressive has little effect. Fair enough, the NVMe\n> is probably fast enough to not benefig from longer I/O queues here.\n>\n> This is a bit misleading though, because the effectivity of prfetching\n> very much depends on the fraction of FPI in the WAL stream - and right\n> after checkpoint that's most of the WAL, which makes the prefetching\n> less efficient. We still have to parse the WAL etc. 
without actually\n> prefetching anything, so it's pure overhead.\n\nYeah. I've tried to reduce that overhead as much as possible,\ndecoding once and looking up the buffer only once. The extra overhead\ncaused by making posix_fadvise() calls is unfortunate (especially if\nthey aren't helping due to small shared buffers but huge page cache),\nbut should be fixed by switching to proper AIO, independently of this\npatch, which will batch those and remove the pread().\n\n> So I've also generated a chart showing time (in milliseconds) needed to\n> apply individual WAL segments. It clearly shows that there are 3\n> checkpoints, and that for each checkpoint it's initially very cheap\n> (thanks to FPI) and as the fraction of FPIs drops the redo gets more\n> expensive. At which point the prefetch actually helps, by up to 30% in\n> some cases (so a bit more than the overall speedup). All of this is\n> expected, of course.\n\nThat is a nice way to see the effect of FPI on recovery.\n\n> 2) 3 x 7.2k SATA RAID0\n>\n> For the spinning rust, I had to make some compromises. It's not feasible\n> to apply all the 200GB of WAL - it would take way too long. I only\n> applied ~2600 segments for each configuration (so not even one whole\n> checkpoint), and even that took ~20h in each case.\n>\n> The durations look like this:\n>\n> no prefetch: 72446s\n> default: 73653s\n> increased: 55409s\n>\n> So in this case the default setting is way too low - it actually makes\n> the recovery a bit slower, while with increased values there's ~25%\n> speedup, which is nice. I assume that if a larger number of WAL segments\n> was applied (e.g. the whole checkpoint), the prefetch numbers would be\n> a bit better - the initial FPI part would play a smaller role.\n\nHuh. Interesting.\n\n> From the attached \"average per segment\" chart you can see that the basic\n> behavior is about the same as for NVMe - initially it's slower due to\n> FPIs in the WAL stream, and then it gets ~30% faster.\n\nYeah. 
I expect that one day not too far away we'll figure out how to\nget rid of FPIs (through a good enough double-write log or\nO_ATOMIC)...\n\n> Overall I think it looks good. I haven't looked at the code very much,\n> and I can't comment on the potential optimizations mentioned a couple\n> days ago yet.\n\nThanks!\n\nI'm not really sure what to do about archive restore scripts that\nblock. That seems to be fundamentally incompatible with what I'm\ndoing here.",
"msg_date": "Fri, 13 Nov 2020 15:20:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 11/13/20 3:20 AM, Thomas Munro wrote:\n>\n> ...\n> \n> I'm not really sure what to do about achive restore scripts that\n> block. That seems to be fundamentally incompatible with what I'm\n> doing here.\n> \n\nIMHO we can't do much about that, except for documenting it - if the\nprefetch can't work because of blocking restore script, someone has to\nfix/improve the script. No way around that, I'm afraid.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 13 Nov 2020 11:45:51 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 11/13/20 3:20 AM, Thomas Munro wrote:\n> > I'm not really sure what to do about achive restore scripts that\n> > block. That seems to be fundamentally incompatible with what I'm\n> > doing here.\n> \n> IMHO we can't do much about that, except for documenting it - if the\n> prefetch can't work because of blocking restore script, someone has to\n> fix/improve the script. No way around that, I'm afraid.\n\nI'm a bit confused about what the issue here is- is the concern that a\nrestore_command is specified that isn't allowed to run concurrently but\nthis patch is intending to run more than one concurrently..? There's\nanother patch that I was looking at for doing pre-fetching of WAL\nsegments, so if this is also doing that we should figure out which\npatch we want..\n\nI don't know that it's needed, but it feels likely that we could provide\na better result if we consider making changes to the restore_command API\n(eg: have a way to say \"please fetch this many segments ahead, and you\ncan put them in this directory with these filenames\" or something). I\nwould think we'd be able to continue supporting the existing API and\naccept that it might not be as performant.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 13 Nov 2020 10:13:45 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Nov 14, 2020 at 4:13 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> > On 11/13/20 3:20 AM, Thomas Munro wrote:\n> > > I'm not really sure what to do about achive restore scripts that\n> > > block. That seems to be fundamentally incompatible with what I'm\n> > > doing here.\n> >\n> > IMHO we can't do much about that, except for documenting it - if the\n> > prefetch can't work because of blocking restore script, someone has to\n> > fix/improve the script. No way around that, I'm afraid.\n>\n> I'm a bit confused about what the issue here is- is the concern that a\n> restore_command is specified that isn't allowed to run concurrently but\n> this patch is intending to run more than one concurrently..? There's\n> another patch that I was looking at for doing pre-fetching of WAL\n> segments, so if this is also doing that we should figure out which\n> patch we want..\n\nThe problem is that the recovery loop tries to look further ahead in\nbetween applying individual records, which causes the restore script\nto run, and if that blocks, we won't apply records that we already\nhave, because we're waiting for the next WAL file to appear. This\nbehaviour is on by default with my patch, so pg_standby will introduce\na weird replay delays. We could think of some ways to fix that, with\nmeaningful return codes and periodic polling or something, I suppose,\nbut something feels a bit weird about it.\n\n> I don't know that it's needed, but it feels likely that we could provide\n> a better result if we consider making changes to the restore_command API\n> (eg: have a way to say \"please fetch this many segments ahead, and you\n> can put them in this directory with these filenames\" or something). I\n> would think we'd be able to continue supporting the existing API and\n> accept that it might not be as performant.\n\nHmm. 
Every time I try to think of a protocol change for the\nrestore_command API that would be acceptable, I go around the same\ncircle of thoughts about event flow and realise that what we really\nneed for this is ... a WAL receiver...\n\nHere's a rebase over the recent commit \"Get rid of the dedicated latch\nfor signaling the startup process.\" just to fix cfbot; no other\nchanges.",
"msg_date": "Wed, 18 Nov 2020 18:10:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Sat, Nov 14, 2020 at 4:13 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> > > On 11/13/20 3:20 AM, Thomas Munro wrote:\n> > > > I'm not really sure what to do about achive restore scripts that\n> > > > block. That seems to be fundamentally incompatible with what I'm\n> > > > doing here.\n> > >\n> > > IMHO we can't do much about that, except for documenting it - if the\n> > > prefetch can't work because of blocking restore script, someone has to\n> > > fix/improve the script. No way around that, I'm afraid.\n> >\n> > I'm a bit confused about what the issue here is- is the concern that a\n> > restore_command is specified that isn't allowed to run concurrently but\n> > this patch is intending to run more than one concurrently..? There's\n> > another patch that I was looking at for doing pre-fetching of WAL\n> > segments, so if this is also doing that we should figure out which\n> > patch we want..\n> \n> The problem is that the recovery loop tries to look further ahead in\n> between applying individual records, which causes the restore script\n> to run, and if that blocks, we won't apply records that we already\n> have, because we're waiting for the next WAL file to appear. This\n> behaviour is on by default with my patch, so pg_standby will introduce\n> a weird replay delays. We could think of some ways to fix that, with\n> meaningful return codes and periodic polling or something, I suppose,\n> but something feels a bit weird about it.\n\nAh, yeah, that's clearly an issue that should be addressed. 
There's a\nnearby thread which is talking about doing exactly that, so, perhaps\nthis doesn't need to be worried about here..?\n\n> > I don't know that it's needed, but it feels likely that we could provide\n> > a better result if we consider making changes to the restore_command API\n> > (eg: have a way to say \"please fetch this many segments ahead, and you\n> > can put them in this directory with these filenames\" or something). I\n> > would think we'd be able to continue supporting the existing API and\n> > accept that it might not be as performant.\n> \n> Hmm. Every time I try to think of a protocol change for the\n> restore_command API that would be acceptable, I go around the same\n> circle of thoughts about event flow and realise that what we really\n> need for this is ... a WAL receiver...\n\nA WAL receiver, or an independent process which goes out ahead and\nfetches WAL..?\n\nStill, I wonder about having a way to inform the command that's run by\nthe restore_command of what it is we really want, eg:\n\nrestore_command = 'somecommand --async=%a --target=%t --target-name=%n --target-xid=%x --target-lsn=%l --target-timeline=%i --dest-dir=%d'\n\nSuch that '%a' is either yes, or no, indicating if the restore command\nshould operate asynchronously and pre-fetch WAL, %t is either empty (or\nmaybe 'unset') or 'immediate', %n/%x/%l are similar to %t, %i is either\na specific timeline or 'immediate' (somecommand should be understanding\nof timelines and know how to parse history files to figure out the right\ntimeline to fetch along, based on the destination requested), and %d is\na directory for somecommand to place WAL files into (perhaps with an\nalternative naming scheme, if we feel we need one).\n\nThe amount of pre-fetching which 'somecommand' would do, and how many\nprocesses it would use to do so, could either be configured as part of\nthe options passed to 'somecommand', which we would just pass through,\nor through its own configuration file.\n\nA 
restore_command which is set but doesn't include a %a or %d or such\nwould be assumed to work in the same manner as today.\n\nFor my part, at least, I don't think this is really that much of a\nstretch, to expect a restore_command to be able to pre-populate a\ndirectory with WAL files- certainly there's at least one that already\ndoes this, even though it doesn't have all the information directly\npassed to it.. Would be nice if it did. :)\n\nThanks,\n\nStephen",
"msg_date": "Wed, 18 Nov 2020 16:00:39 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Nov 19, 2020 at 10:00 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Thomas Munro (thomas.munro@gmail.com) wrote:\n> > Hmm. Every time I try to think of a protocol change for the\n> > restore_command API that would be acceptable, I go around the same\n> > circle of thoughts about event flow and realise that what we really\n> > need for this is ... a WAL receiver...\n>\n> A WAL receiver, or an independent process which goes out ahead and\n> fetches WAL..?\n\nWhat I really meant was: why would you want this over streaming rep?\nI just noticed this thread proposing to retire pg_standby on that\nbasis:\n\nhttps://www.postgresql.org/message-id/flat/20201029024412.GP5380%40telsasoft.com\n\nI'd be happy to see that land, to fix this problem with my plan. But\nare there other people writing restore scripts that block that would\nexpect them to work on PG14?\n\n\n",
"msg_date": "Wed, 25 Nov 2020 16:57:47 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> On Thu, Nov 19, 2020 at 10:00 AM Stephen Frost <sfrost@snowman.net> wrote:\n> > * Thomas Munro (thomas.munro@gmail.com) wrote:\n> > > Hmm. Every time I try to think of a protocol change for the\n> > > restore_command API that would be acceptable, I go around the same\n> > > circle of thoughts about event flow and realise that what we really\n> > > need for this is ... a WAL receiver...\n> >\n> > A WAL receiver, or an independent process which goes out ahead and\n> > fetches WAL..?\n> \n> What I really meant was: why would you want this over streaming rep?\n\nI have to admit to being pretty confused as to this question and maybe\nI'm just not understanding. Why wouldn't change patch be helpful for\nstreaming replication too..?\n\nIf I follow correctly, this patch will scan ahead in the WAL and let\nthe kernel know that certain blocks will be needed soon. Ideally,\nthough I don't think it does yet, we'd only do that for blocks that\naren't already in shared buffers, and only for non-FPIs (even better if\nwe can skip past pages for which we already, recently, passed an FPI).\n\nThe biggest caveat here, it seems to me anyway, is that for this to\nactually help you need to be running with checkpoints that are larger\nthan shared buffers, as otherwise all the pages we need will be in\nshared buffers already, thanks to FPIs bringing them in, except when\nrunning with hot standby, right?\n\nIn the hot standby case, other random pages could be getting pulled in\nto answer user queries and therefore this would be quite helpful to\nminimize the amount of time required to replay WAL, I would think.\nNaturally, this isn't very interesting if we're just always able to\nkeep up with the primary, but that's certainly not always the case.\n\n> I just noticed this thread proposing to retire pg_standby on that\n> basis:\n> \n> 
https://www.postgresql.org/message-id/flat/20201029024412.GP5380%40telsasoft.com\n> \n> I'd be happy to see that land, to fix this problem with my plan. But\n> are there other people writing restore scripts that block that would\n> expect them to work on PG14?\n\nOk, I think I finally get the concern that you're raising here-\nbasically that if a restore command was written to sit around and wait\nfor WAL segments to arrive, instead of just returning to PG and saying\n\"WAL segment not found\", that this would be a problem if we are running\nout ahead of the applying process and asking for WAL.\n\nThe thing is- that's an outright broken restore command script in the\nfirst place. If PG is in standby mode, we'll ask again if we get an\nerror result indicating that the WAL file wasn't found. The restore\ncommand documentation is quite clear on this point:\n\nThe command will be asked for file names that are not present in the\narchive; it must return nonzero when so asked.\n\nThere's no \"it can wait around for the next file to show up if it wants\nto\" in there- it *must* return nonzero when asked for files that don't\nexist.\n\nSo, I don't think that we really need to stress over this. The fact\nthat pg_standby offers options to have it wait, instead of just\nreturning a non-zero error-code and letting the loop that we already\nhave in the core code handle the retry, seems like it's really just a\nlegacy thing from before we were doing that and probably should have\nbeen ripped out long ago... Even more reason to get rid of pg_standby\ntho, imv, we haven't been properly adjusting it when we've been making\nchanges to the core code, it seems.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 4 Dec 2020 13:27:38 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-04 13:27:38 -0500, Stephen Frost wrote:\n> If I follow correctly, this patch will scan ahead in the WAL and let\n> the kernel know that certain blocks will be needed soon. Ideally,\n> though I don't think it does yet, we'd only do that for blocks that\n> aren't already in shared buffers, and only for non-FPIs (even better if\n> we can skip past pages for which we already, recently, passed an FPI).\n\nThe patch uses PrefetchSharedBuffer(), which only initiates a prefetch\nif the page isn't already in s_b.\n\nAnd once we have AIO, it can actually initiate IO into s_b at that\npoint, rather than fetching it just into the kernel page cache.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Fri, 4 Dec 2020 10:51:57 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2020-12-04 13:27:38 -0500, Stephen Frost wrote:\n> > If I follow correctly, this patch will scan ahead in the WAL and let\n> > the kernel know that certain blocks will be needed soon. Ideally,\n> > though I don't think it does yet, we'd only do that for blocks that\n> > aren't already in shared buffers, and only for non-FPIs (even better if\n> > we can skip past pages for which we already, recently, passed an FPI).\n> \n> The patch uses PrefetchSharedBuffer(), which only initiates a prefetch\n> if the page isn't already in s_b.\n\nGreat, glad that's already been addressed in this, that's certainly\ngood. I think I knew that and forgot it while composing that response\nover the past rather busy week. :)\n\n> And once we have AIO, it can actually initiate IO into s_b at that\n> point, rather than fetching it just into the kernel page cache.\n\nSure.\n\nThanks,\n\nStephen",
"msg_date": "Fri, 4 Dec 2020 14:01:45 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Thomas wrote:\r\n \r\n> Here's a rebase over the recent commit \"Get rid of the dedicated latch for\r\n> signaling the startup process.\" just to fix cfbot; no other changes.\r\n\r\nI wanted to contribute my findings - after dozens of various lengthy runs here - so far with WAL (asynchronous) recovery performance in the hot-standby case. TL;DR; this patch is awesome even on NVMe 😉\r\n\r\nThis email is a little bit larger topic than prefetching patch itself, but I did not want to loose context. Maybe it'll help somebody in operations or just to add to the general pool of knowledge amongst hackers here, maybe all of this stuff was already known to you. My plan is to leave it here like that as I'm probably lacking understanding, time, energy and ideas how to tweak it more.\r\n\r\nSETUP AND TEST:\r\n--------------- \r\nThere might be many different workloads, however I've only concentrated on single one namely - INSERT .. SELECT 100 rows - one that was predictible enough for me, quite generic and allows to uncover some deterministic hotspots. The result is that in such workload it is possible to replicate ~750Mbit/s of small rows traffic in stable conditions (catching-up is a different matter).\r\n\r\n- two i3.4xlarge AWS VMs with 14devel, see [0] for specs. 
14devel already contains major optimizations of reducing lseeks() and SLRU CLOG flushing[1]\r\n- WIP WAL prefetching [2] by Thomas Munro applied, v14_000[12345] patches, especially v14_0005 is important here as it reduces dynahash calls.\r\n- FPWs were disabled to avoid hitting >2.5Gbps traffic spikes\r\n- hash_search_with_hash_value_memcmpopt() is my very poor man's copycat optimization of dynahash.c's hash_search_with_hash_value() to avoid indirect function calls of calling match() [3] \r\n- VDSO clock_gettime() just in case fix on AWS, tsc for clocksource0 instead of \"xen\" OR one could use track_io_timing=off to reduce syscalls\r\n\r\nPrimary tuning:\r\nin order to reliably measure standby WAL recovery performance, one needs to set up a *STABLE* generator over time/size, on primary. In my case it was 2 indexes and 1 table: pgbench -n -f inserts.pgb -P 1 -T 86400 -c 16 -j 2 -R 4000 --latency-limit=50 db.\r\n\r\n\r\nVFS-CACHE-FITTING WORKLOAD @ 4k TPS:\r\n------------------------------------\r\n\r\ncreate sequence s1;\r\ncreate table tid (id bigint primary key, j bigint not null, blah text not null) partition by hash (id);\r\ncreate index j_tid on tid (j); -- to put some more realistic stress\r\ncreate table tid_h1 partition of tid FOR VALUES WITH (MODULUS 16, REMAINDER 0);\r\n[..]\r\ncreate table tid_h16 partition of tid FOR VALUES WITH (MODULUS 16, REMAINDER 15);\r\n\r\nThe clients (-c 16) need to be aligned with the hash-partitioning to avoid LWLock/BufferContent. inserts.pgb was looking like:\r\ninsert into tid select nextval('s1'), g, 'some garbage text' from generate_series(1,100) g. \r\nThe sequence is of the key importance here. 
\"g\" is more or less randomly hitting here (the j_tid btree might quite grow on standby too).\r\n\r\nAdditionally due to drops on primary, I've disabled fsync as a stopgap measure because at least what to my understanding I was affected by global freezes of my insertion workload due to Lock/extends as one of the sessions was always in: mdextend() -> register_dirty_segment() -> RegisterSyncRequest() (fsync pg_usleep 0.01s), which caused frequent dips of performance even at the begginig (visible thanks to pgbench -P 1) and I wanted something completely linear. The fsync=off was simply a shortcut just in order to measure stuff properly on the standby (I needed this deterministic \"producer\").\r\n\r\nThe WAL recovery is not really single threaded thanks to prefetches with posix_fadvises() - performed by other (?) CPUs/kernel threads I suppose, CLOG flushing by checkpointer and the bgwriter itself. The walsender/walreciever were not the bottlenecks, but bgwriter and checkpointer needs to be really tuned on *standby* side too.\r\n\r\nSo, the above workload is CPU bound on the standby side for long time. I would classify it as \"standby-recovery-friendly\" as the IO-working-set of the main redo loop does NOT degrade over time/dbsize that much, so there is no lag till certain point. In order to classify the startup/recovery process one could use recent pidstat(1) -d \"iodelay\" metric. If one gets stable >= 10 centiseconds over more than few seconds, then one has probably I/O driven bottleneck. If iodelay==0 then it is completely VFS-cached I/O workload. \r\n\r\nIn such setup, primary can generate - without hiccups - 6000-6500 TPS (insert 100 rows) @ ~25% CPU util using 16 DB sessions. Of course it could push more, but we are using pgbench throttling. Standby can follow up to @ ~4000 TPS on the primary, without lag (@ 4500 TPS was having some lag even at start). The startup/recovering gets into CPU 95% utilization territory with ~300k (?) 
hash_search_with_hash_value_memcmpopt() executions per second (measured using perf-probe). The shorter the WAL record the more CPU-bound the WAL recovery performance is going to be. In my case ~220k WAL records @ WAL segment 16MB and I was running at a stable 750Mbit/s. What is important - at least on my HW - due to dynahash there's a hard limit of this ~300..400 k WAL records/s (perf probe/stat reports that I'm having 300k of hash_search_with_hash_value_memcmpopt() / s, while my workload is 4k [rate] * 100 [rows] * 3 [table + 2 indexes] = 400k/s and no lag, a discrepancy that I admit I do not understand, maybe it's Thomas's recent_buffer_fastpath from the v14_0005 prefetcher). On some other OLTP production systems I've seen that there's 10k..120k WAL records/16MB segment. The perf picture looks like the one in [4]. The \"tidseq-*\" graphs are about this scenario.\r\n\r\nOne could say that with a lesser amount of bigger rows one could push more on the network and that's true, however unrealistic in real-world systems (again with FPW=off, I was able to push up to @ 2.5Gbit/s stable without lag, but at twice less rate and much bigger rows - ~270 WAL records/16MB segment and primary being the bottleneck). The top#1 CPU function was quite unexpectedly again the BufTableLookup() -> hash_search_with_hash_value_memcmpopt() even at such a relatively low records rate, which illustrates that even with a lot of bigger memcpy()s being done by recovery, those are not the problem as one would typically expect.\r\n\r\nVFS-CACHE-MISSES WORKLOAD @ 1.5k TPS:\r\n-------------------------------------\r\n\r\nInteresting behavior is that for the very similar data-loading scheme as described above, but for uuid PK and uuid_generate_v4() *random* UUIDs (pretty common pattern amongst developers), instead of a bigint sequence, so something very similar to above like:\r\ncreate table trandomuuid (id uuid primary key , j bigint not null, t text not null) partition by hash (id);\r\n... 
picture radically changes if the active-working-I/O-set doesn't fit the VFS cache and it's I/O bound on the recovery side (again this is with prefetching already). This can be checked via iodelay: if it goes let's say >= 10-20 centiseconds or BCC's cachetop(1) shows \"relatively low\" READ_HIT% for recovering (poking at it was ~40-45% in my case when recovery started to be really I/O heavy):\r\n\r\nDBsize@112GB , 1s sample:\r\n13:00:16 Buffers MB: 200 / Cached MB: 88678 / Sort: HITS / Order: descending\r\n PID UID CMD HITS MISSES DIRTIES READ_HIT% WRITE_HIT%\r\n 1849 postgres postgres 160697 67405 65794 41.6% 1.2% -- recovering\r\n 1853 postgres postgres 37011 36864 24576 16.8% 16.6% -- walreceiver\r\n 1851 postgres postgres 15961 13968 14734 4.1% 0.0% -- bgwriter\r\n\r\nOn 128GB RAM, when DB size gets near the ~80-90GB boundary (128-32 for huge pages - $binaries - $kernel - $etc =~ 90GB free page cache) SOMETIMES in my experiments it started getting lag, but also at the same time even the primary cannot keep at a rate of 1500TPS (IO/DataFileRead|Write may happen or still Lock/extend) and struggles; of course this is well-known behavior [5]. Also in this almost-pathological-INSERT-rate case pgstat_bgwriter.buffers_backend was like 90% of buffers_alloc and I couldn't do much of anything about it (small s_b on primary, tuning bgwriter settings to the max, even with the bgwriter_delay=0 hack, BM_MAX_USAGE_COUNT=1). Any suggestion on how to make such a $workload deterministic after a certain DBsize under pgbench -P1 is welcome :)\r\n\r\nSo in order to deterministically - in multiple runs - demonstrate the impact of WAL prefetching by Thomas in such a scenario (where primary was the bottleneck itself), see the \"trandomuuid-*\" graphs, one of the graphs has the same commentary as here:\r\n- the system is running with WAL prefetching disabled (maintenance_io_concurrency=0)\r\n- once the DBsize >85-90GB primary cannot keep up, so there's a drop of data produced - rxNET KB/s. 
At this stage I did echo 3 > /proc/sys/vm/drop_caches, to shock the system (there's very little jump of Lag, but it goes to 0 again -- good, standby can still manage)\r\n- once the DBsize got near ~275GB standby couldn't follow even-the-choked-primary (lag starts rising to >3000s, IOdelay indicates that startup/recovering is wasting like 70% of its time on synchronous preads())\r\n- at DBsize ~315GB I set maintenance_io_concurrency=10 (enable the WAL prefetching/posix_fadvise()), lag starts dropping, and IOdelay is reduced to ~53, %CPU (not %sys) of the process jumps from 28% -> 48% (efficiency grows)\r\n- at DBsize ~325GB I set maintenance_io_concurrency=128 (give the kernel more time to pre-read for us), lag starts dropping even faster, and IOdelay is reduced to ~30, the %CPU part (not %sys) of the process jumps from 48% -> 70% (its efficiency grows again, 2.5x more from baseline)\r\n\r\nAnother interesting observation is that standby's bgwriter is much more stressed and important than the recovery itself and several times more active than the one on primary. I've rechecked using Tomas Vondra's sequential-uuids extension [6] and of course the problem doesn't exist if the UUIDs are not that random (much more localized, so this small workload adjustment makes it behave like in the \"VFS-CACHE-fitting\" scenario).\r\n\r\nAlso, just in case for the patch review process: I can confirm that data inserted on primary and standby did match on multiple occasions (sums of columns) after those tests (some of those were run up to the 3TB mark).\r\n\r\nRandom thoughts:\r\n----------------\r\n1) Even with all those optimizations, I/O prefetching (posix_fadvise()) or even IO_URING in future, there's going to be the BufTableLookup()->dynahash single-threaded CPU limitation bottleneck. It may be that with IO_URING in future and proper HW, all workloads will start to be CPU-bound on standby ;) I do not see a simple way to optimize such a fundamental pillar - other than parallelizing it ? 
I hope I'm wrong.\r\n\r\n1b) With the above patches I need to disappoint Alvaro Herrera, I was unable to reproduce the top#1 smgropen() -> hash_search_with_hash_value() in any way as I think right now v14_0005 simply kind of solves the problem.\r\n\r\n2) I'm kind of thinking that flushing dirty pages on standby should be much more aggressive than on primary, in order to unlock the startup/recovering potential. What I'm trying to say is that it might be even beneficial to spot if FlushBuffer() is happening too fast from inside the main redo recovery loop, and if it is then issue a LOG/HINT from time to time (similar to the famous \"checkpoints are occurring too frequently\") to tune the background writer on the standby or investigate the workload itself on primary. Generally speaking those \"bgwriter/checkpointer\" GUCs might be kind of artificial during the standby-processing scenario.\r\n\r\n3) The WAL recovery could (?) have some protection from noisy neighboring backends. As the hot standby is often used in read offload configurations it could be important to protect its VFS cache (active, freshly replicated data needed for WAL recovery) from being polluted by some other backends issuing random SQL SELECTs.\r\n\r\n4) Even for scenarios with COPY/heap_multi_insert()-based statements it emits a lot of interleaved Btree/INSERT_LEAF records that are CPU heavy if the table is indexed.\r\n\r\n6) I don't think walsender/walreceiver are in any danger right now, as at least in my case they had plenty of headroom (even @ 2.5Gbps walreceiver was ~30-40% CPU) while issuing I/O writes of 8kB (but this was with fsync=off and on NVMe). Walsender was even in better shape mainly due to sendto(128kB). YMMV.\r\n\r\n7) As the uuid-osp extension is present in contrib and T.V.'s sequential-uuids is unfortunately NOT, developers, more often than not, might run into those pathological scenarios. 
Same applies to any cloud-hosted database where one cannot deploy their own extensions.\r\n\r\nWhat was not tested and what are further research questions:\r\n-----------------------------------------------------------\r\na) Impact of vacuum WAL records: I suspect it might be the additional vacuum-generated workload added to the mix during the VFS-cache-fitting workload that overwhelmed the recovery loop and made it start catching lag.\r\n\r\nb) Impact of the noisy-neighboring-SQL queries on hot-standby:\r\nb1) research the impact of contention on LWLock buffer_mappings between readers and recovery itself.\r\nb2) research/experiments maybe with cgroups2 VFS-cache memory isolation for processes.\r\n\r\nc) Impact of WAL prefetching's \"maintenance_io_concurrency\" VS iodelay for startup/recovering preads() is also unknown. The key question there is how far ahead to issue those posix_fadvise() so that pread() is nearly free. Some I/O calibration tool to set maintenance_io_concurrency would be nice.\r\n\r\n-J.\r\n\r\n[0] - specs: 2x AWS i3.4xlarge (1s8c16t, 128GB RAM, Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz), 2xNVMe in lvm striped VG, ext4. Tuned parameters: bgwriter_*, s_b=24GB with huge pages, checkpoint_completion_target=0.9, commit_delay=100000, commit_siblings=20, synchronous_commit=off, fsync=off, max_wal_size=40GB, recovery_prefetch=on, track_io_timing=on , wal_block_size=8192 (default), wal_decode_buffer_size=512kB (default WIP WAL prefetching), wal_buffers=256MB. Schema was always 16-way hash-partitioned to avoid LWLock/BufferContent waits.\r\n\r\n[1] - https://www.postgresql.org/message-id/flat/CA%2BhUKGLJ%3D84YT%2BNvhkEEDAuUtVHMfQ9i-N7k_o50JmQ6Rpj_OQ%40mail.gmail.com\r\n\r\n[2] - https://commitfest.postgresql.org/31/2410/\r\n\r\n[3] - hash_search_with_hash_value() spends a lot of time near \"callq *%r14\" in tight loop assembly in my case (indirect call to the hash comparison function). 
This hash_search_with_hash_value_memcmpopt() is just copycat function and instead directly calls memcmp() where it matters (smgr.c, buf_table.c). Blind shot at gcc's -flto also didn't help to gain a lot there (I was thinking it could optimize it by building many instances of hash_search_with_hash_value of per-match() use, but no). I did not quantify the benefit, I think it just failed optimization experiment, as it is still top#1 in my profiles, it could be even noise.\r\n\r\n[4] - 10s perf image of CPU-bound 14devel with all the mentioned patches:\r\n\r\n 17.38% postgres postgres [.] hash_search_with_hash_value_memcmpopt\r\n ---hash_search_with_hash_value_memcmpopt\r\n |--11.16%--BufTableLookup\r\n | |--9.44%--PrefetchSharedBuffer\r\n | | XLogPrefetcherReadAhead\r\n | | StartupXLOG\r\n | --1.72%--ReadBuffer_common\r\n | ReadBufferWithoutRelcache\r\n | XLogReadBufferExtended\r\n | --1.29%--XLogReadBufferForRedoExtended\r\n | --0.64%--XLogInitBufferForRedo\r\n |--3.86%--smgropen\r\n | |--2.79%--XLogPrefetcherReadAhead\r\n | | StartupXLOG\r\n | --0.64%--XLogReadBufferExtended\r\n --2.15%--XLogPrefetcherReadAhead\r\n StartupXLOG\r\n\r\n 10.30% postgres postgres [.] MarkBufferDirty\r\n ---MarkBufferDirty\r\n |--5.58%--btree_xlog_insert\r\n | btree_redo\r\n | StartupXLOG\r\n --4.72%--heap_xlog_insert\r\n\r\n 6.22% postgres postgres [.] ReadPageInternal\r\n ---ReadPageInternal\r\n XLogReadRecordInternal\r\n XLogReadAhead\r\n XLogPrefetcherReadAhead\r\n StartupXLOG\r\n\r\n 5.36% postgres postgres [.] hash_bytes\r\n ---hash_bytes\r\n |--3.86%--hash_search_memcmpopt\r\n\r\n[5] - \r\nhttps://www.2ndquadrant.com/en/blog/on-the-impact-of-full-page-writes/\r\nhttps://www.2ndquadrant.com/en/blog/sequential-uuid-generators/\r\nhttps://www.2ndquadrant.com/en/blog/sequential-uuid-generators-ssd/\r\n\r\n[6] - https://github.com/tvondra/sequential-uuids",
"msg_date": "Fri, 11 Dec 2020 12:24:29 +0000",
"msg_from": "Jakub Wartak <Jakub.Wartak@tomtom.com>",
"msg_from_op": false,
"msg_subject": "RE: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Dec 12, 2020 at 1:24 AM Jakub Wartak <Jakub.Wartak@tomtom.com> wrote:\n> I wanted to contribute my findings - after dozens of various lengthy runs here - so far with WAL (asynchronous) recovery performance in the hot-standby case. TL;DR; this patch is awesome even on NVMe\n\nThanks Jakub! Some interesting, and nice, results.\n\n> The startup/recovering gets into CPU 95% utilization territory with ~300k (?) hash_search_with_hash_value_memcmpopt() executions per second (measured using perf-probe).\n\nI suppose it's possible that this is caused by memory stalls that\ncould be improved by teaching the prefetching pipeline to prefetch the\nrelevant cachelines of memory (but it seems like it should be a pretty\nmicroscopic concern compared to the I/O).\n\n> [3] - hash_search_with_hash_value() spends a lot of time near \"callq *%r14\" in tight loop assembly in my case (indirect call to hash comparision function). This hash_search_with_hash_value_memcmpopt() is just copycat function and instead directly calls memcmp() where it matters (smgr.c, buf_table.c). Blind shot at gcc's -flto also didn't help to gain a lot there (I was thinking it could optimize it by building many instances of hash_search_with_hash_value of per-match() use, but no). I did not quantify the benefit, I think it just failed optimization experiment, as it is still top#1 in my profiles, it could be even noise.\n\nNice. A related specialisation is size (key and object). Of course,\nsimplehash.h already does that, but it also makes some other choices\nthat make it unusable for the buffer mapping table. So I think that\nwe should either figure out how to fix that, or consider specialising\nthe dynahash lookup path with a similar template scheme.\n\nRebase attached.",
"msg_date": "Thu, 24 Dec 2020 16:06:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2020-12-24 16:06:38 +1300, Thomas Munro wrote:\n> From 85187ee6a1dd4c68ba70cfbce002a8fa66c99925 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Sat, 28 Mar 2020 11:42:59 +1300\n> Subject: [PATCH v15 1/6] Add pg_atomic_unlocked_add_fetch_XXX().\n> \n> Add a variant of pg_atomic_add_fetch_XXX with no barrier semantics, for\n> cases where you only want to avoid the possibility that a concurrent\n> pg_atomic_read_XXX() sees a torn/partial value. On modern\n> architectures, this is simply value++, but there is a fallback to\n> spinlock emulation.\n\nWouldn't it be sufficient to implement this as one function implemented as\n pg_atomic_write_u32(val, pg_atomic_read_u32(val) + 1)\nthen we'd not need any ifdefs?\n\n\n\n> + * pg_atomic_unlocked_add_fetch_u32 - atomically add to variable\n\nIt's really not adding \"atomically\"...\n\n\n> + * Like pg_atomic_unlocked_write_u32, guarantees only that partial values\n> + * cannot be observed.\n\nMaybe add a note saying that that in particularly means that\nmodifications could be lost when used concurrently?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 29 Dec 2020 19:57:36 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Dec 5, 2020 at 7:27 AM Stephen Frost <sfrost@snowman.net> wrote:\n> * Thomas Munro (thomas.munro@gmail.com) wrote:\n> > I just noticed this thread proposing to retire pg_standby on that\n> > basis:\n> >\n> > https://www.postgresql.org/message-id/flat/20201029024412.GP5380%40telsasoft.com\n> >\n> > I'd be happy to see that land, to fix this problem with my plan. But\n> > are there other people writing restore scripts that block that would\n> > expect them to work on PG14?\n>\n> Ok, I think I finally get the concern that you're raising here-\n> basically that if a restore command was written to sit around and wait\n> for WAL segments to arrive, instead of just returning to PG and saying\n> \"WAL segment not found\", that this would be a problem if we are running\n> out ahead of the applying process and asking for WAL.\n>\n> The thing is- that's an outright broken restore command script in the\n> first place. If PG is in standby mode, we'll ask again if we get an\n> error result indicating that the WAL file wasn't found. The restore\n> command documentation is quite clear on this point:\n>\n> The command will be asked for file names that are not present in the\n> archive; it must return nonzero when so asked.\n>\n> There's no \"it can wait around for the next file to show up if it wants\n> to\" in there- it *must* return nonzero when asked for files that don't\n> exist.\n\nWell the manual does actually describe how to write your own version\nof pg_standby, referred to as a \"waiting restore script\":\n\nhttps://www.postgresql.org/docs/13/log-shipping-alternative.html\n\nI've now poked that other thread threatening to commit the removal of\npg_standby, and while I was there, also to remove the section on how\nto write your own (it's possible that I missed some other reference to\nthe concept elsewhere, I'll need to take another look).\n\n> So, I don't think that we really need to stress over this. 
The fact\n> that pg_standby offers options to have it wait instead of just returning\n> a non-zero error-code and letting the loop that we already do in the\n> core code seems like it's really just a legacy thing from before we were\n> doing that and probably should have been ripped out long ago... Even\n> more reason to get rid of pg_standby tho, imv, we haven't been properly\n> adjusting it when we've been making changes to the core code, it seems.\n\nSo far I haven't heard from anyone who thinks we should keep this old\nfacility (as useful as it was back then when it was the only way), so\nI hope we can now quietly drop it. It's not strictly an obstacle to\nthis recovery prefetching work, but it'd interact confusingly in hard\nto describe ways, and it seems strange to perpetuate something that\nmany were already proposing to drop due to obsolescence. Thanks for\nthe comments/sanity check.\n\n\n",
"msg_date": "Wed, 27 Jan 2021 17:34:22 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nI did a bunch of tests on v15, mostly to assess how much the\nprefetching could help. The most interesting test I did was this:\n\n1) primary instance on a box with 16/32 cores, 64GB RAM, NVMe SSD\n\n2) replica on a small box with 4 cores, 8GB RAM, SSD RAID\n\n3) pause replication on the replica (pg_wal_replay_pause)\n\n4) initialize pgbench scale 2000 (fits into RAM on primary, while on\nreplica it's about 4x RAM)\n\n5) run 1h pgbench: pgbench -N -c 16 -j 4 -T 3600 test\n\n6) resume replication (pg_wal_replay_resume)\n\n7) measure how long it takes to catch up, monitor lag\n\nThis is a nicely reproducible test case; it eliminates the influence of\nnetwork speed and so on.\n\nAttached is a chart showing the lag with and without the prefetching. In\nboth cases we start with ~140GB of redo lag, and the chart shows how\nquickly the replica applies that. The \"waves\" are checkpoints, where\nright after a checkpoint the redo gets much faster thanks to FPIs and\nthen slows down as it gets to parts without them (having to do\nsynchronous random reads).\n\nWith master, it'd take ~16000 seconds to catch up. I don't have the\nexact number, because I got tired of waiting, but the estimate is likely\naccurate (judging by other tests and how regular the progress is).\n\nWith WAL prefetching enabled (I bumped up the buffer to 2MB, and\nprefetch limit to 500, but that was mostly just an arbitrary choice), it\nfinishes in ~3200 seconds. This includes replication of the pgbench\ninitialization, which took ~200 seconds and where prefetching is mostly\nuseless. 
That's a damn pretty improvement, I guess!\n\nIn a way, this means the tiny replica would be able to keep up with a\nmuch larger machine, where everything is in memory.\n\n\nOne comment about the patch - the postgresql.conf.sample change says:\n\n#recovery_prefetch = on # whether to prefetch pages logged with FPW\n#recovery_prefetch_fpw = off # whether to prefetch pages logged with FPW\n\nbut clearly that comment is only for recovery_prefetch_fpw, the first\nGUC enables prefetching in general.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 4 Feb 2021 01:40:26 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Feb 4, 2021 at 1:40 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> With master, it'd take ~16000 seconds to catch up. I don't have the\n> exact number, because I got tired of waiting, but the estimate is likely\n> accurate (judging by other tests and how regular the progress is).\n>\n> With WAL prefetching enabled (I bumped up the buffer to 2MB, and\n> prefetch limit to 500, but that was mostly just arbitrary choice), it\n> finishes in ~3200 seconds. This includes replication of the pgbench\n> initialization, which took ~200 seconds and where prefetching is mostly\n> useless. That's a damn pretty improvement, I guess!\n\nHi Tomas,\n\nSorry for my slow response -- I've been catching up after some\nvacation time. Thanks very much for doing all this testing work!\nThose results are very good, and it's nice to see such compelling\ncases even with FPI enabled.\n\nI'm hoping to commit this in the next few weeks. There are a few\nlittle todos to tidy up, and I need to do some more review/testing of\nthe error handling and edge cases. Any ideas on how to battle test it\nare very welcome. I'm also currently testing how it interacts with\nsome other patches that are floating around. More soon.\n\n> #recovery_prefetch = on # whether to prefetch pages logged with FPW\n> #recovery_prefetch_fpw = off # whether to prefetch pages logged with FPW\n>\n> but clearly that comment is only for recovery_prefetch_fpw, the first\n> GUC enables prefetching in general.\n\nAck, thanks.\n\n\n",
"msg_date": "Wed, 10 Feb 2021 21:26:49 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Thomas Munro (thomas.munro@gmail.com) wrote:\n> Rebase attached.\n\n> Subject: [PATCH v15 4/6] Prefetch referenced blocks during recovery.\n> diff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\n> index 4b60382778..ac27392053 100644\n> --- a/doc/src/sgml/config.sgml\n> +++ b/doc/src/sgml/config.sgml\n> @@ -3366,6 +3366,64 @@ include_dir 'conf.d'\n[...]\n> + <varlistentry id=\"guc-recovery-prefetch-fpw\" xreflabel=\"recovery_prefetch_fpw\">\n> + <term><varname>recovery_prefetch_fpw</varname> (<type>boolean</type>)\n> + <indexterm>\n> + <primary><varname>recovery_prefetch_fpw</varname> configuration parameter</primary>\n> + </indexterm>\n> + </term>\n> + <listitem>\n> + <para>\n> + Whether to prefetch blocks that were logged with full page images,\n> + during recovery. Often this doesn't help, since such blocks will not\n> + be read the first time they are needed and might remain in the buffer\n\nThe \"might\" above seems slightly confusing- such blocks will remain in\nshared buffers until/unless they're forced out, right?\n\n> + pool after that. However, on file systems with a block size larger\n> + than\n> + <productname>PostgreSQL</productname>'s, prefetching can avoid a\n> + costly read-before-write when a blocks are later written.\n> + The default is off.\n\n\"when a blocks\" above doesn't sound quite right, maybe reword this as:\n\n\"prefetching can avoid a costly read-before-write when WAL replay\nreaches the block that needs to be written.\"\n\n> diff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml\n> index d1c3893b14..c51c431398 100644\n> --- a/doc/src/sgml/wal.sgml\n> +++ b/doc/src/sgml/wal.sgml\n> @@ -720,6 +720,23 @@\n> <acronym>WAL</acronym> call being logged to the server log. 
This\n> option might be replaced by a more general mechanism in the future.\n> </para>\n> +\n> + <para>\n> + The <xref linkend=\"guc-recovery-prefetch\"/> parameter can\n> + be used to improve I/O performance during recovery by instructing\n> + <productname>PostgreSQL</productname> to initiate reads\n> + of disk blocks that will soon be needed but are not currently in\n> + <productname>PostgreSQL</productname>'s buffer pool.\n> + The <xref linkend=\"guc-maintenance-io-concurrency\"/> and\n> + <xref linkend=\"guc-wal-decode-buffer-size\"/> settings limit prefetching\n> + concurrency and distance, respectively. The\n> + prefetching mechanism is most likely to be effective on systems\n> + with <varname>full_page_writes</varname> set to\n> + <varname>off</varname> (where that is safe), and where the working\n> + set is larger than RAM. By default, prefetching in recovery is enabled\n> + on operating systems that have <function>posix_fadvise</function>\n> + support.\n> + </para>\n> </sect1>\n\n\n\n> diff --git a/src/backend/access/transam/xlog.c b/src/backend/access/transam/xlog.c\n\n> @@ -3697,7 +3699,6 @@ XLogFileRead(XLogSegNo segno, int emode, TimeLineID tli,\n> \t\t\tsnprintf(activitymsg, sizeof(activitymsg), \"waiting for %s\",\n> \t\t\t\t\t xlogfname);\n> \t\t\tset_ps_display(activitymsg);\n> -\n> \t\t\trestoredFromArchive = RestoreArchivedFile(path, xlogfname,\n> \t\t\t\t\t\t\t\t\t\t\t\t\t \"RECOVERYXLOG\",\n> \t\t\t\t\t\t\t\t\t\t\t\t\t wal_segment_size,\n\n> @@ -12566,6 +12585,7 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n> \t\t\t\t\t\telse\n> \t\t\t\t\t\t\thavedata = false;\n> \t\t\t\t\t}\n> +\n> \t\t\t\t\tif (havedata)\n> \t\t\t\t\t{\n> \t\t\t\t\t\t/*\n\nRandom whitespace change hunks..?\n\n> diff --git a/src/backend/access/transam/xlogprefetch.c b/src/backend/access/transam/xlogprefetch.c\n\n> +\t * The size of the queue is based on the maintenance_io_concurrency\n> +\t * setting. 
In theory we might have a separate queue for each tablespace,\n> +\t * but it's not clear how that should work, so for now we'll just use the\n> +\t * general GUC to rate-limit all prefetching. The queue has space for up\n> +\t * the highest possible value of the GUC + 1, because our circular buffer\n> +\t * has a gap between head and tail when full.\n\nSeems like \"to\" is missing- \"The queue has space for up *to* the highest\npossible value of the GUC + 1\" ? Maybe also \"between the head and the\ntail when full\".\n\n> +/*\n> + * Scan the current record for block references, and consider prefetching.\n> + *\n> + * Return true if we processed the current record to completion and still have\n> + * queue space to process a new record, and false if we saturated the I/O\n> + * queue and need to wait for recovery to advance before we continue.\n> + */\n> +static bool\n> +XLogPrefetcherScanBlocks(XLogPrefetcher *prefetcher)\n> +{\n> +\tDecodedXLogRecord *record = prefetcher->record;\n> +\n> +\tAssert(!XLogPrefetcherSaturated(prefetcher));\n> +\n> +\t/*\n> +\t * We might already have been partway through processing this record when\n> +\t * our queue became saturated, so we need to start where we left off.\n> +\t */\n> +\tfor (int block_id = prefetcher->next_block_id;\n> +\t\t block_id <= record->max_block_id;\n> +\t\t ++block_id)\n> +\t{\n> +\t\tDecodedBkpBlock *block = &record->blocks[block_id];\n> +\t\tPrefetchBufferResult prefetch;\n> +\t\tSMgrRelation reln;\n> +\n> +\t\t/* Ignore everything but the main fork for now. */\n> +\t\tif (block->forknum != MAIN_FORKNUM)\n> +\t\t\tcontinue;\n> +\n> +\t\t/*\n> +\t\t * If there is a full page image attached, we won't be reading the\n> +\t\t * page, so you might think we should skip it. 
However, if the\n> +\t\t * underlying filesystem uses larger logical blocks than us, it\n> +\t\t * might still need to perform a read-before-write some time later.\n> +\t\t * Therefore, only prefetch if configured to do so.\n> +\t\t */\n> +\t\tif (block->has_image && !recovery_prefetch_fpw)\n> +\t\t{\n> +\t\t\tpg_atomic_unlocked_add_fetch_u64(&Stats->skip_fpw, 1);\n> +\t\t\tcontinue;\n> +\t\t}\n\nFPIs in the stream aren't going to just avoid reads when the\nfilesystem's block size matches PG's- they're also going to avoid\nsubsequent modifications to the block, provided we don't end up pushing\nthat block out of shared buffers, right?\n\nThat is, if you have an empty shared buffers and see:\n\nBlock 5 FPI\nBlock 6 FPI\nBlock 5 Update\nBlock 6 Update\n\nit seems like, with this patch, we're going to Prefetch Block 5 & 6,\neven though we almost certainly won't actually need them.\n\n> +\t\t/* Fast path for repeated references to the same relation. */\n> +\t\tif (RelFileNodeEquals(block->rnode, prefetcher->last_rnode))\n> +\t\t{\n> +\t\t\t/*\n> +\t\t\t * If this is a repeat access to the same block, then skip it.\n> +\t\t\t *\n> +\t\t\t * XXX We could also check for last_blkno + 1 too, and also update\n> +\t\t\t * last_blkno; it's not clear if the kernel would do a better job\n> +\t\t\t * of sequential prefetching.\n> +\t\t\t */\n> +\t\t\tif (block->blkno == prefetcher->last_blkno)\n> +\t\t\t{\n> +\t\t\t\tpg_atomic_unlocked_add_fetch_u64(&Stats->skip_seq, 1);\n> +\t\t\t\tcontinue;\n> +\t\t\t}\n\nI'm sure this will help with some cases, but it wouldn't help with the\ncase that I mention above, as I understand it.\n\n> +\t\t{\"recovery_prefetch\", PGC_SIGHUP, WAL_SETTINGS,\n> +\t\t\tgettext_noop(\"Prefetch referenced blocks during recovery\"),\n> +\t\t\tgettext_noop(\"Read ahead of the currenty replay position to find uncached blocks.\")\n\nextra 'y' at the end of 'current', and \"find uncached blocks\" might be\nmisleading, maybe:\n\n\"Read out ahead of the current 
replay position and prefetch blocks.\"\n\n> diff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\n> index b7fb2ec1fe..4288f2f37f 100644\n> --- a/src/backend/utils/misc/postgresql.conf.sample\n> +++ b/src/backend/utils/misc/postgresql.conf.sample\n> @@ -234,6 +234,12 @@\n> #checkpoint_flush_after = 0\t\t# measured in pages, 0 disables\n> #checkpoint_warning = 30s\t\t# 0 disables\n> \n> +# - Prefetching during recovery -\n> +\n> +#wal_decode_buffer_size = 512kB\t\t# lookahead window used for prefetching\n> +#recovery_prefetch = on\t\t\t# whether to prefetch pages logged with FPW\n> +#recovery_prefetch_fpw = off\t\t# whether to prefetch pages logged with FPW\n\nThink this was already mentioned, but the above comments shouldn't be\nthe same. :)\n\n> From 2f6d690cefc0cad8cbd8b88dbed4d688399c6916 Mon Sep 17 00:00:00 2001\n> From: Thomas Munro <thomas.munro@gmail.com>\n> Date: Mon, 14 Sep 2020 23:20:55 +1200\n> Subject: [PATCH v15 5/6] WIP: Avoid extra buffer lookup when prefetching WAL\n> blocks.\n> \n> Provide a some workspace in decoded WAL records, so that we can remember\n> which buffer recently contained we found a block cached in, for later\n> use when replaying the record. Provide a new way to look up a\n> recently-known buffer and check if it's still valid and has the right\n> tag.\n\n\"Provide a place in decoded WAL records to remember which buffer we\nfound a block cached in, to hopefully avoid having to look it up again\nwhen we replay the record. Provide a way to look up a recently-known\nbuffer and check if it's still valid and has the right tag.\"\n\n> XXX Needs review to figure out if it's safe or steamrolling over subtleties\n\n... that's a great question. :) Not sure that I can really answer it\nconclusively, but I can't think of any reason, given the buffer tag\ncheck that's included, that it would be an issue. 
I'm glad to see this\nthough since it addresses some of the concern about this patch slowing\ndown replay in cases where there are FPIs and checkpoints are less than\nthe size of shared buffers, which seems much more common than cases\nwhere FPIs have been disabled and/or checkpoints are larger than SB.\nFurther effort to avoid having likely-unnecessary prefetching done for\nblocks which recently had an FPI would further reduce the risk of this\nchange slowing down replay for common deployments, though I'm not sure\nhow much of an impact that likely has or what the cost would be to avoid\nthe prefetching (and it's complicated by hot standby, I imagine...).\n\nThanks,\n\nStephen",
"msg_date": "Wed, 10 Feb 2021 16:50:33 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 2/10/21 10:50 PM, Stephen Frost wrote:\n >\n> ...\n >\n>> +/*\n>> + * Scan the current record for block references, and consider prefetching.\n>> + *\n>> + * Return true if we processed the current record to completion and still have\n>> + * queue space to process a new record, and false if we saturated the I/O\n>> + * queue and need to wait for recovery to advance before we continue.\n>> + */\n>> +static bool\n>> +XLogPrefetcherScanBlocks(XLogPrefetcher *prefetcher)\n>> +{\n>> +\tDecodedXLogRecord *record = prefetcher->record;\n>> +\n>> +\tAssert(!XLogPrefetcherSaturated(prefetcher));\n>> +\n>> +\t/*\n>> +\t * We might already have been partway through processing this record when\n>> +\t * our queue became saturated, so we need to start where we left off.\n>> +\t */\n>> +\tfor (int block_id = prefetcher->next_block_id;\n>> +\t\t block_id <= record->max_block_id;\n>> +\t\t ++block_id)\n>> +\t{\n>> +\t\tDecodedBkpBlock *block = &record->blocks[block_id];\n>> +\t\tPrefetchBufferResult prefetch;\n>> +\t\tSMgrRelation reln;\n>> +\n>> +\t\t/* Ignore everything but the main fork for now. */\n>> +\t\tif (block->forknum != MAIN_FORKNUM)\n>> +\t\t\tcontinue;\n>> +\n>> +\t\t/*\n>> +\t\t * If there is a full page image attached, we won't be reading the\n>> +\t\t * page, so you might think we should skip it. 
However, if the\n>> +\t\t * underlying filesystem uses larger logical blocks than us, it\n>> +\t\t * might still need to perform a read-before-write some time later.\n>> +\t\t * Therefore, only prefetch if configured to do so.\n>> +\t\t */\n>> +\t\tif (block->has_image && !recovery_prefetch_fpw)\n>> +\t\t{\n>> +\t\t\tpg_atomic_unlocked_add_fetch_u64(&Stats->skip_fpw, 1);\n>> +\t\t\tcontinue;\n>> +\t\t}\n> \n> FPIs in the stream aren't going to just avoid reads when the\n> filesystem's block size matches PG's- they're also going to avoid\n> subsequent modifications to the block, provided we don't end up pushing\n> that block out of shared buffers, rights?\n> \n> That is, if you have an empty shared buffers and see:\n> \n> Block 5 FPI\n> Block 6 FPI\n> Block 5 Update\n> Block 6 Update\n> \n> it seems like, with this patch, we're going to Prefetch Block 5 & 6,\n> even though we almost certainly won't actually need them.\n> \n\nYeah, that's a good point. I think it'd make sense to keep track of \nrecent FPIs and skip prefetching such blocks. But how exactly should we \nimplement that, how many blocks do we need to track? If you get an FPI, \nhow long should we skip prefetching of that block?\n\nI don't think the history needs to be very long, for two reasons. \nFirstly, the usual pattern is that we have FPI + several changes for \nthat block shortly after it. Secondly, maintenance_io_concurrency limits \nthis naturally - after crossing that, redo should place the FPI into \nshared buffers, allowing us to skip the prefetch.\n\nSo I think using maintenance_io_concurrency is sufficient. We might \ntrack more buffers to allow skipping prefetches of blocks that were \nevicted from shared buffers, but that seems like an overkill.\n\nHowever, maintenance_io_concurrency can be quite high, so just a simple \nqueue is not very suitable - searching it linearly for each block would \nbe too expensive. 
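To make the idea concrete before spelling it out, here is a minimal, compilable sketch of a fixed-size table with exactly one slot per hash value (all names, sizes and the hash function are illustrative only, not from any posted patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: a fixed-size table with one slot per hash value.
 * A collision simply overwrites the slot, which can only cause an extra
 * (harmless) prefetch, never incorrect replay.
 */
#define ILLUSTRATIVE_IO_CONCURRENCY 10
#define FPI_TABLE_SIZE (2 * ILLUSTRATIVE_IO_CONCURRENCY)

typedef struct FpiEntry
{
	uint32_t	relfilenode;	/* zero-initialized slots never match */
	uint32_t	block;
	uint64_t	lsn;
} FpiEntry;

static FpiEntry fpi_table[FPI_TABLE_SIZE];

static unsigned
fpi_hash(uint32_t relfilenode, uint32_t block)
{
	return (relfilenode * 2654435761u ^ block) % FPI_TABLE_SIZE;
}

/* Record that "block" of "relfilenode" saw an FPI (or a prefetch) at "lsn". */
static void
fpi_remember(uint32_t relfilenode, uint32_t block, uint64_t lsn)
{
	FpiEntry   *e = &fpi_table[fpi_hash(relfilenode, block)];

	e->relfilenode = relfilenode;
	e->block = block;
	e->lsn = lsn;
}

/*
 * Skip the prefetch only if the single slot this block hashes to still
 * holds the same (relfilenode, block) and the remembered LSN is recent.
 */
static int
fpi_should_skip(uint32_t relfilenode, uint32_t block,
				uint64_t now_lsn, uint64_t max_age)
{
	FpiEntry   *e = &fpi_table[fpi_hash(relfilenode, block)];

	return e->relfilenode == relfilenode && e->block == block &&
		now_lsn - e->lsn <= max_age;
}
```

Since each key maps to exactly one slot, both insert and lookup are O(1) with no allocations and no linear search, at the cost of occasional false negatives after a collision.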
But I think we can use a simple hash table, tracking \n(relfilenode, block, LSN), over-sized to minimize collisions.\n\nImagine it's a simple array with (2 * maintenance_io_concurrency) \nelements, and whenever we prefetch a block or find an FPI, we simply add \nthe block to the array as determined by hash(relfilenode, block)\n\n hashtable[hash(...)] = {relfilenode, block, LSN}\n\nand then when deciding whether to prefetch a block, we look at that one \nposition. If the (relfilenode, block) match, we check the LSN and skip \nthe prefetch if it's sufficiently recent. Otherwise we prefetch.\n\nWe may issue some extra prefetches due to collisions, but that's fine I \nthink. There should not be very many of them, thanks to having the hash \ntable oversized.\n\nThe good thing is this is a quite simple, fixed-size data structure; \nthere's no need for allocations etc.\n\n\n\n>> +\t\t/* Fast path for repeated references to the same relation. */\n>> +\t\tif (RelFileNodeEquals(block->rnode, prefetcher->last_rnode))\n>> +\t\t{\n>> +\t\t\t/*\n>> +\t\t\t * If this is a repeat access to the same block, then skip it.\n>> +\t\t\t *\n>> +\t\t\t * XXX We could also check for last_blkno + 1 too, and also update\n>> +\t\t\t * last_blkno; it's not clear if the kernel would do a better job\n>> +\t\t\t * of sequential prefetching.\n>> +\t\t\t */\n>> +\t\t\tif (block->blkno == prefetcher->last_blkno)\n>> +\t\t\t{\n>> +\t\t\t\tpg_atomic_unlocked_add_fetch_u64(&Stats->skip_seq, 1);\n>> +\t\t\t\tcontinue;\n>> +\t\t\t}\n> \n> I'm sure this will help with some cases, but it wouldn't help with the\n> case that I mention above, as I understand it.\n> \n\nIt won't, but it's a pretty effective check. I've done some experiments \nrecently, and with random pgbench this eliminates ~15% of prefetches.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Feb 2021 00:42:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-02-12 00:42:04 +0100, Tomas Vondra wrote:\n> Yeah, that's a good point. I think it'd make sense to keep track of recent\n> FPIs and skip prefetching such blocks. But how exactly should we implement\n> that, how many blocks do we need to track? If you get an FPI, how long\n> should we skip prefetching of that block?\n> \n> I don't think the history needs to be very long, for two reasons. Firstly,\n> the usual pattern is that we have FPI + several changes for that block\n> shortly after it. Secondly, maintenance_io_concurrency limits this naturally\n> - after crossing that, redo should place the FPI into shared buffers,\n> allowing us to skip the prefetch.\n> \n> So I think using maintenance_io_concurrency is sufficient. We might track\n> more buffers to allow skipping prefetches of blocks that were evicted from\n> shared buffers, but that seems like an overkill.\n> \n> However, maintenance_io_concurrency can be quite high, so just a simple\n> queue is not very suitable - searching it linearly for each block would be\n> too expensive. But I think we can use a simple hash table, tracking\n> (relfilenode, block, LSN), over-sized to minimize collisions.\n> \n> Imagine it's a simple array with (2 * maintenance_io_concurrency) elements,\n> and whenever we prefetch a block or find an FPI, we simply add the block to\n> the array as determined by hash(relfilenode, block)\n> \n> hashtable[hash(...)] = {relfilenode, block, LSN}\n> \n> and then when deciding whether to prefetch a block, we look at that one\n> position. If the (relfilenode, block) match, we check the LSN and skip the\n> prefetch if it's sufficiently recent. Otherwise we prefetch.\n\nI'm a bit doubtful this is really needed at this point. Yes, the\nprefetching will do a buffer table lookup - but it's a lookup that\nalready happens today. 
And the patch already avoids doing a second\nlookup after prefetching (by optimistically caching the last Buffer id,\nand re-checking).\n\nI think there's potential for some significant optimization going\nforward, but I think it's basically optimization over what we're doing\ntoday. As this is already a nontrivial patch, I'd argue for doing so\nseparately.\n\nRegards,\n\nAndres\n\n\n",
"msg_date": "Thu, 11 Feb 2021 20:46:14 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "\n\nOn 2/12/21 5:46 AM, Andres Freund wrote:\n> Hi,\n> \n> On 2021-02-12 00:42:04 +0100, Tomas Vondra wrote:\n>> Yeah, that's a good point. I think it'd make sense to keep track of recent\n>> FPIs and skip prefetching such blocks. But how exactly should we implement\n>> that, how many blocks do we need to track? If you get an FPI, how long\n>> should we skip prefetching of that block?\n>>\n>> I don't think the history needs to be very long, for two reasons. Firstly,\n>> the usual pattern is that we have FPI + several changes for that block\n>> shortly after it. Secondly, maintenance_io_concurrency limits this naturally\n>> - after crossing that, redo should place the FPI into shared buffers,\n>> allowing us to skip the prefetch.\n>>\n>> So I think using maintenance_io_concurrency is sufficient. We might track\n>> more buffers to allow skipping prefetches of blocks that were evicted from\n>> shared buffers, but that seems like an overkill.\n>>\n>> However, maintenance_io_concurrency can be quite high, so just a simple\n>> queue is not very suitable - searching it linearly for each block would be\n>> too expensive. But I think we can use a simple hash table, tracking\n>> (relfilenode, block, LSN), over-sized to minimize collisions.\n>>\n>> Imagine it's a simple array with (2 * maintenance_io_concurrency) elements,\n>> and whenever we prefetch a block or find an FPI, we simply add the block to\n>> the array as determined by hash(relfilenode, block)\n>>\n>> hashtable[hash(...)] = {relfilenode, block, LSN}\n>>\n>> and then when deciding whether to prefetch a block, we look at that one\n>> position. If the (relfilenode, block) match, we check the LSN and skip the\n>> prefetch if it's sufficiently recent. Otherwise we prefetch.\n> \n> I'm a bit doubtful this is really needed at this point. Yes, the\n> prefetching will do a buffer table lookup - but it's a lookup that\n> already happens today. 
And the patch already avoids doing a second\n> lookup after prefetching (by optimistically caching the last Buffer id,\n> and re-checking).\n> \n> I think there's potential for some significant optimization going\n> forward, but I think it's basically optimization over what we're doing\n> today. As this is already a nontrivial patch, I'd argue for doing so\n> separately.\n> \n\nI agree with treating this as an improvement - it's not something that \nneeds to be solved in the first version. OTOH I think Stephen has a point \nthat just skipping FPIs like we do now has limited effect, because the \nWAL usually contains additional changes to the same block.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 12 Feb 2021 18:53:03 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Andres Freund (andres@anarazel.de) wrote:\n> On 2021-02-12 00:42:04 +0100, Tomas Vondra wrote:\n> > Yeah, that's a good point. I think it'd make sense to keep track of recent\n> > FPIs and skip prefetching such blocks. But how exactly should we implement\n> > that, how many blocks do we need to track? If you get an FPI, how long\n> > should we skip prefetching of that block?\n> > \n> > I don't think the history needs to be very long, for two reasons. Firstly,\n> > the usual pattern is that we have FPI + several changes for that block\n> > shortly after it. Secondly, maintenance_io_concurrency limits this naturally\n> > - after crossing that, redo should place the FPI into shared buffers,\n> > allowing us to skip the prefetch.\n> > \n> > So I think using maintenance_io_concurrency is sufficient. We might track\n> > more buffers to allow skipping prefetches of blocks that were evicted from\n> > shared buffers, but that seems like an overkill.\n> > \n> > However, maintenance_io_concurrency can be quite high, so just a simple\n> > queue is not very suitable - searching it linearly for each block would be\n> > too expensive. But I think we can use a simple hash table, tracking\n> > (relfilenode, block, LSN), over-sized to minimize collisions.\n> > \n> > Imagine it's a simple array with (2 * maintenance_io_concurrency) elements,\n> > and whenever we prefetch a block or find an FPI, we simply add the block to\n> > the array as determined by hash(relfilenode, block)\n> > \n> > hashtable[hash(...)] = {relfilenode, block, LSN}\n> > \n> > and then when deciding whether to prefetch a block, we look at that one\n> > position. If the (relfilenode, block) match, we check the LSN and skip the\n> > prefetch if it's sufficiently recent. Otherwise we prefetch.\n> \n> I'm a bit doubtful this is really needed at this point. Yes, the\n> prefetching will do a buffer table lookup - but it's a lookup that\n> already happens today. 
And the patch already avoids doing a second\n> lookup after prefetching (by optimistically caching the last Buffer id,\n> and re-checking).\n\nI agree that when a page is looked up, and found, in the buffer table\nthat the subsequent caching of the buffer id in the WAL records does a\ngood job of avoiding having to re-do that lookup. However, that isn't\nthe case which was being discussed here or what Tomas's suggestion was\nintended to address.\n\nWhat I pointed out up-thread and what's being discussed here is what\nhappens when the WAL contains a few FPIs and a few regular WAL records\nwhich are mixed up and not in ideal order. When that happens, with this\npatch, the FPIs will be ignored, the regular WAL records will reference\nblocks which aren't found in shared buffers (yet) and then we'll both\nissue pre-fetches for those and end up having spent effort doing a\nbuffer lookup that we'll later re-do.\n\nTo address the unnecessary syscalls we really just need to keep track of\nany FPIs that we've seen between the point where the prefetching\nis happening and the point where the replay is being done- once replay\nhas replayed an FPI, our buffer lookup will succeed and we'll cache the\nbuffer that the FPI is at- in other words, only wal_decode_buffer_size\namount of WAL needs to be considered.\n\nWe could further leverage this tracking of FPIs, to skip the prefetch\nsyscalls, by caching what later records address the blocks that have\nFPIs earlier in the queue with the FPI record and then when replay hits\nthe FPI and loads it into shared_buffers, it could update the other WAL\nrecords in the queue with the buffer id of the page, allowing us to very\nlikely avoid having to do another lookup later on.\n\n> I think there's potential for some significant optimization going\n> forward, but I think it's basically optimization over what we're doing\n> today. 
As this is already a nontrivial patch, I'd argue for doing so\n> separately.\n\nThis seems like a great optimization, albeit a fair bit of code, for a\nrelatively uncommon use-case, specifically where full page writes are\ndisabled or very large checkpoints. As that's the case though, I would\nthink it's reasonable to ask that it go out of its way to avoid slowing\ndown the more common configurations, particularly since it's proposed to\nhave it on by default (which I agree with, provided it ends up improving\nthe common cases, which I think the suggestions above would certainly\nmake it more likely to do).\n\nPerhaps this already improves the common cases and is worth the extra\ncode on that basis, but I don't recall seeing much in the way of\nbenchmarking in this thread for that case- that is, where FPIs are\nenabled and checkpoints are smaller than shared buffers. Jakub's\ntesting was done with FPWs disabled and Tomas's testing used checkpoints\nwhich were much larger than the size of shared buffers on the system\ndoing the replay. While it's certainly good that this patch improves\nthose cases, we should also be looking out for the worst case and make\nsure that the patch doesn't degrade performance in that case.\n\nThanks,\n\nStephen",
"msg_date": "Sat, 13 Feb 2021 16:39:30 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "\n\nOn 2/13/21 10:39 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Andres Freund (andres@anarazel.de) wrote:\n>> On 2021-02-12 00:42:04 +0100, Tomas Vondra wrote:\n>>> Yeah, that's a good point. I think it'd make sense to keep track of recent\n>>> FPIs and skip prefetching such blocks. But how exactly should we implement\n>>> that, how many blocks do we need to track? If you get an FPI, how long\n>>> should we skip prefetching of that block?\n>>>\n>>> I don't think the history needs to be very long, for two reasons. Firstly,\n>>> the usual pattern is that we have FPI + several changes for that block\n>>> shortly after it. Secondly, maintenance_io_concurrency limits this naturally\n>>> - after crossing that, redo should place the FPI into shared buffers,\n>>> allowing us to skip the prefetch.\n>>>\n>>> So I think using maintenance_io_concurrency is sufficient. We might track\n>>> more buffers to allow skipping prefetches of blocks that were evicted from\n>>> shared buffers, but that seems like an overkill.\n>>>\n>>> However, maintenance_io_concurrency can be quite high, so just a simple\n>>> queue is not very suitable - searching it linearly for each block would be\n>>> too expensive. But I think we can use a simple hash table, tracking\n>>> (relfilenode, block, LSN), over-sized to minimize collisions.\n>>>\n>>> Imagine it's a simple array with (2 * maintenance_io_concurrency) elements,\n>>> and whenever we prefetch a block or find an FPI, we simply add the block to\n>>> the array as determined by hash(relfilenode, block)\n>>>\n>>> hashtable[hash(...)] = {relfilenode, block, LSN}\n>>>\n>>> and then when deciding whether to prefetch a block, we look at that one\n>>> position. If the (relfilenode, block) match, we check the LSN and skip the\n>>> prefetch if it's sufficiently recent. Otherwise we prefetch.\n>>\n>> I'm a bit doubtful this is really needed at this point. 
Yes, the\n>> prefetching will do a buffer table lookup - but it's a lookup that\n>> already happens today. And the patch already avoids doing a second\n>> lookup after prefetching (by optimistically caching the last Buffer id,\n>> and re-checking).\n> \n> I agree that when a page is looked up, and found, in the buffer table\n> that the subsequent cacheing of the buffer id in the WAL records does a\n> good job of avoiding having to re-do that lookup. However, that isn't\n> the case which was being discussed here or what Tomas's suggestion was\n> intended to address.\n> \n> What I pointed out up-thread and what's being discussed here is what\n> happens when the WAL contains a few FPIs and a few regular WAL records\n> which are mixed up and not in ideal order. When that happens, with this\n> patch, the FPIs will be ignored, the regular WAL records will reference\n> blocks which aren't found in shared buffers (yet) and then we'll both\n> issue pre-fetches for those and end up having spent effort doing a\n> buffer lookup that we'll later re-do.\n> \n\nThe question is how common this pattern actually is - I don't know. As \nnoted, the non-FPI would have to be fairly close to the FPI, i.e. 
within \nthe wal_decode_buffer_size, to actually cause measurable harm.\n\n> To address the unnecessary syscalls we really just need to keep track of\n> any FPIs that we've seen between the point where the prefetching\n> is happening and the point where the replay is being done- once replay\n> has replayed an FPI, our buffer lookup will succeed and we'll cache the\n> buffer that the FPI is at- in other words, only wal_decode_buffer_size\n> amount of WAL needs to be considered.\n> \n\nYeah, that's essentially what I proposed.\n\n> We could further leverage this tracking of FPIs, to skip the prefetch\n> syscalls, by cacheing what later records address the blocks that have\n> FPIs earlier in the queue with the FPI record and then when replay hits\n> the FPI and loads it into shared_buffers, it could update the other WAL\n> records in the queue with the buffer id of the page, allowing us to very\n> likely avoid having to do another lookup later on.\n> \n\nThis seems like over-engineering, at least for v1.\n\n>> I think there's potential for some significant optimization going\n>> forward, but I think it's basically optimization over what we're doing\n>> today. As this is already a nontrivial patch, I'd argue for doing so\n>> separately.\n> \n> This seems like a great optimization, albeit a fair bit of code, for a\n> relatively uncommon use-case, specifically where full page writes are\n> disabled or very large checkpoints. As that's the case though, I would\n> think it's reasonable to ask that it go out of its way to avoid slowing\n> down the more common configurations, particularly since it's proposed to\n> have it on by default (which I agree with, provided it ends up improving\n> the common cases, which I think the suggestions above would certainly\n> make it more likely to do).\n> \n\nI'm OK to do some benchmarking, but it's not quite clear to me why does \nit matter if the checkpoints are smaller than shared buffers? 
IMO what \nmatters is how \"localized\" the updates are, i.e. how likely it is to hit \nthe same page repeatedly (in a short amount of time). Regular pgbench is \nnot very suitable for that, but some non-uniform distribution should do \nthe trick, I think.\n\n> Perhaps this already improves the common cases and is worth the extra\n> code on that basis, but I don't recall seeing much in the way of\n> benchmarking in this thread for that case- that is, where FPIs are\n> enabled and checkpoints are smaller than shared buffers. Jakub's\n> testing was done with FPWs disabled and Tomas's testing used checkpoints\n> which were much larger than the size of shared buffers on the system\n> doing the replay. While it's certainly good that this patch improves\n> those cases, we should also be looking out for the worst case and make\n> sure that the patch doesn't degrade performance in that case.\n> \n\nI'm with Andres on this. It's fine to leave some possible optimizations \non the table for the future. And even if some workloads are affected \nnegatively, it's still possible to disable the prefetching.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sun, 14 Feb 2021 23:38:01 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
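The fixed-size table Tomas proposes above (an over-sized array of 2 * maintenance_io_concurrency slots, indexed by hash(relfilenode, block), each holding {relfilenode, block, LSN}) can be sketched as follows. PostgreSQL itself would do this in C inside the recovery prefetcher; this is just an illustrative Python model, and the class name and the `recency_window` cutoff are assumptions, not anything from the patch:

```python
# Sketch of the proposed "recent blocks" table: a single-probe,
# over-sized hash array. Collisions simply overwrite the slot - at
# worst we issue a prefetch we could have skipped, never the reverse.

class RecentBlockTable:
    def __init__(self, maintenance_io_concurrency, recency_window):
        self.size = 2 * maintenance_io_concurrency
        self.slots = [None] * self.size
        self.recency_window = recency_window  # LSN distance, illustrative

    def _slot(self, relfilenode, block):
        return hash((relfilenode, block)) % self.size

    def remember(self, relfilenode, block, lsn):
        """Record a prefetch we issued, or an FPI we saw, for this block."""
        self.slots[self._slot(relfilenode, block)] = (relfilenode, block, lsn)

    def should_prefetch(self, relfilenode, block, lsn):
        """Look at exactly one slot; skip only on a recent exact match."""
        entry = self.slots[self._slot(relfilenode, block)]
        if entry is None:
            return True
        rfn, blk, old_lsn = entry
        if (rfn, blk) != (relfilenode, block):
            return True  # hash collision: prefetch anyway
        return lsn - old_lsn > self.recency_window  # stale entry: prefetch
```

The point of the single-probe design is that the check stays O(1) regardless of how high maintenance_io_concurrency is set, which is exactly the concern about a linearly-searched queue raised above.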
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> On 2/13/21 10:39 PM, Stephen Frost wrote:\n> >* Andres Freund (andres@anarazel.de) wrote:\n> >>On 2021-02-12 00:42:04 +0100, Tomas Vondra wrote:\n> >>>Yeah, that's a good point. I think it'd make sense to keep track of recent\n> >>>FPIs and skip prefetching such blocks. But how exactly should we implement\n> >>>that, how many blocks do we need to track? If you get an FPI, how long\n> >>>should we skip prefetching of that block?\n> >>>\n> >>>I don't think the history needs to be very long, for two reasons. Firstly,\n> >>>the usual pattern is that we have FPI + several changes for that block\n> >>>shortly after it. Secondly, maintenance_io_concurrency limits this naturally\n> >>>- after crossing that, redo should place the FPI into shared buffers,\n> >>>allowing us to skip the prefetch.\n> >>>\n> >>>So I think using maintenance_io_concurrency is sufficient. We might track\n> >>>more buffers to allow skipping prefetches of blocks that were evicted from\n> >>>shared buffers, but that seems like an overkill.\n> >>>\n> >>>However, maintenance_io_concurrency can be quite high, so just a simple\n> >>>queue is not very suitable - searching it linearly for each block would be\n> >>>too expensive. But I think we can use a simple hash table, tracking\n> >>>(relfilenode, block, LSN), over-sized to minimize collisions.\n> >>>\n> >>>Imagine it's a simple array with (2 * maintenance_io_concurrency) elements,\n> >>>and whenever we prefetch a block or find an FPI, we simply add the block to\n> >>>the array as determined by hash(relfilenode, block)\n> >>>\n> >>> hashtable[hash(...)] = {relfilenode, block, LSN}\n> >>>\n> >>>and then when deciding whether to prefetch a block, we look at that one\n> >>>position. If the (relfilenode, block) match, we check the LSN and skip the\n> >>>prefetch if it's sufficiently recent. 
Otherwise we prefetch.\n> >>\n> >>I'm a bit doubtful this is really needed at this point. Yes, the\n> >>prefetching will do a buffer table lookup - but it's a lookup that\n> >>already happens today. And the patch already avoids doing a second\n> >>lookup after prefetching (by optimistically caching the last Buffer id,\n> >>and re-checking).\n> >\n> >I agree that when a page is looked up, and found, in the buffer table\n> >that the subsequent cacheing of the buffer id in the WAL records does a\n> >good job of avoiding having to re-do that lookup. However, that isn't\n> >the case which was being discussed here or what Tomas's suggestion was\n> >intended to address.\n> >\n> >What I pointed out up-thread and what's being discussed here is what\n> >happens when the WAL contains a few FPIs and a few regular WAL records\n> >which are mixed up and not in ideal order. When that happens, with this\n> >patch, the FPIs will be ignored, the regular WAL records will reference\n> >blocks which aren't found in shared buffers (yet) and then we'll both\n> >issue pre-fetches for those and end up having spent effort doing a\n> >buffer lookup that we'll later re-do.\n> \n> The question is how common this pattern actually is - I don't know. As\n> noted, the non-FPI would have to be fairly close to the FPI, i.e. within the\n> wal_decode_buffer_size, to actually cause measurable harm.\n\nYeah, so it'll depend on how big wal_decode_buffer_size is. 
Increasing\nthat would certainly help to show if there ends up being a degradation\nwith this patch due to the extra prefetching being done.\n\n> >To address the unnecessary syscalls we really just need to keep track of\n> >any FPIs that we've seen between the point where the prefetching\n> >is happening and the point where the replay is being done- once replay\n> >has replayed an FPI, our buffer lookup will succeed and we'll cache the\n> >buffer that the FPI is at- in other words, only wal_decode_buffer_size\n> >amount of WAL needs to be considered.\n> \n> Yeah, that's essentially what I proposed.\n\nGlad I captured it correctly.\n\n> >We could further leverage this tracking of FPIs, to skip the prefetch\n> >syscalls, by cacheing what later records address the blocks that have\n> >FPIs earlier in the queue with the FPI record and then when replay hits\n> >the FPI and loads it into shared_buffers, it could update the other WAL\n> >records in the queue with the buffer id of the page, allowing us to very\n> >likely avoid having to do another lookup later on.\n> \n> This seems like over-engineering, at least for v1.\n\nPerhaps, though it didn't seem like it'd be very hard to do with the\nalready proposed changes to stash the buffer id in the WAL records.\n\n> >>I think there's potential for some significant optimization going\n> >>forward, but I think it's basically optimization over what we're doing\n> >>today. As this is already a nontrivial patch, I'd argue for doing so\n> >>separately.\n> >\n> >This seems like a great optimization, albeit a fair bit of code, for a\n> >relatively uncommon use-case, specifically where full page writes are\n> >disabled or very large checkpoints. 
As that's the case though, I would\n> >think it's reasonable to ask that it go out of its way to avoid slowing\n> >down the more common configurations, particularly since it's proposed to\n> >have it on by default (which I agree with, provided it ends up improving\n> >the common cases, which I think the suggestions above would certainly\n> >make it more likely to do).\n> \n> I'm OK to do some benchmarking, but it's not quite clear to me why does it\n> matter if the checkpoints are smaller than shared buffers? IMO what matters\n> is how \"localized\" the updates are, i.e. how likely it is to hit the same\n> page repeatedly (in a short amount of time). Regular pgbench is not very\n> suitable for that, but some non-uniform distribution should do the trick, I\n> think.\n\nI suppose strictly speaking it'd be\nMin(wal_decode_buffer_size,checkpoint_size), but yes, you're right that\nit's more about the wal_decode_buffer_size than the checkpoint's size.\nApologies for the confusion. As suggested above, one way to benchmark\nthis to really see if there's any issue would be to increase\nwal_decode_buffer_size to some pretty big size and then compare the\nperformance vs. unpatched. I'd think that could even be done with\npgbench, so you're not having to arrange for the same pages to get\nupdated over and over.\n\n> >Perhaps this already improves the common cases and is worth the extra\n> >code on that basis, but I don't recall seeing much in the way of\n> >benchmarking in this thread for that case- that is, where FPIs are\n> >enabled and checkpoints are smaller than shared buffers. Jakub's\n> >testing was done with FPWs disabled and Tomas's testing used checkpoints\n> >which were much larger than the size of shared buffers on the system\n> >doing the replay. 
While it's certainly good that this patch improves\n> >those cases, we should also be looking out for the worst case and make\n> >sure that the patch doesn't degrade performance in that case.\n> \n> I'm with Andres on this. It's fine to leave some possible optimizations on\n> the table for the future. And even if some workloads are affected\n> negatively, it's still possible to disable the prefetching.\n\nWhile I'm generally in favor of this argument, that a feature is\nparticularly important and that it's worth slowing down the common cases\nto enable it, I dislike that it's applied inconsistently. I'd certainly\nfeel better about it if we had actual performance numbers to consider.\nI don't doubt the possibility that the extra prefetches just don't\namount to enough to matter but I have a hard time seeing them as not\nhaving some cost and without actually measuring it, it's hard to say\nwhat that cost is.\n\nWithout looking farther back than the last record, we could end up\nrepeatedly asking for the same blocks to be prefetched too-\n\nFPI for block 1\nFPI for block 2\nWAL record for block 1\nWAL record for block 2\nWAL record for block 1\nWAL record for block 2\nWAL record for block 1\nWAL record for block 2\n\n... etc.\n\nEntirely possible my math is off, but seems like the worst case\nsituation right now might end up with some 4500 unnecessary prefetch\nsyscalls even with the proposed default wal_decode_buffer_size of\n512k and 56-byte WAL records ((524,288 - 16,384) / 56 / 2 = ~4534).\n\nIssuing unnecessary prefetches for blocks we've already sent a prefetch\nfor is arguably a concern even if FPWs are off but the benefit of doing\nthe prefetching almost certainly will outweigh that and mean that\nfinding a way to address it is something we could certainly do later as\na future improvement. I wouldn't have any issue with that. Just\ndoesn't seem as clear-cut to me when thinking about the FPW-enabled\ncase. 
Ultimately, if you, Andres and Munro are all not concerned about\nit and no one else speaks up then I'm not going to pitch a fuss over it\nbeing committed, but, as you said above, it seemed like a good point to\nraise for everyone to consider.\n\nThanks,\n\nStephen",
"msg_date": "Sun, 14 Feb 2021 18:18:15 -0500",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
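For what it's worth, the back-of-the-envelope arithmetic in Stephen's worst-case estimate above checks out (the 16,384 bytes subtracted and the 56-byte record size are taken directly from his message, not from the patch):

```python
# Stephen's worst-case estimate: with the proposed default
# wal_decode_buffer_size of 512kB and 56-byte WAL records, roughly
# half the records in the decode window could each trigger an
# unnecessary prefetch syscall.
wal_decode_buffer_size = 524_288  # 512kB
already_replayed = 16_384         # bytes assumed consumed, per the estimate
record_size = 56                  # assumed average WAL record size

unnecessary_prefetches = (wal_decode_buffer_size - already_replayed) // record_size // 2
print(unnecessary_prefetches)  # 4534
```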
{
"msg_contents": "On 2/15/21 12:18 AM, Stephen Frost wrote:\n> Greetings,\n> \n> ...\n>\n>>>> I think there's potential for some significant optimization going\n>>>> forward, but I think it's basically optimization over what we're doing\n>>>> today. As this is already a nontrivial patch, I'd argue for doing so\n>>>> separately.\n>>>\n>>> This seems like a great optimization, albeit a fair bit of code, for a\n>>> relatively uncommon use-case, specifically where full page writes are\n>>> disabled or very large checkpoints. As that's the case though, I would\n>>> think it's reasonable to ask that it go out of its way to avoid slowing\n>>> down the more common configurations, particularly since it's proposed to\n>>> have it on by default (which I agree with, provided it ends up improving\n>>> the common cases, which I think the suggestions above would certainly\n>>> make it more likely to do).\n>>\n>> I'm OK to do some benchmarking, but it's not quite clear to me why does it\n>> matter if the checkpoints are smaller than shared buffers? IMO what matters\n>> is how \"localized\" the updates are, i.e. how likely it is to hit the same\n>> page repeatedly (in a short amount of time). Regular pgbench is not very\n>> suitable for that, but some non-uniform distribution should do the trick, I\n>> think.\n> \n> I suppose strictly speaking it'd be\n> Min(wal_decode_buffer_size,checkpoint_size), but yes, you're right that\n> it's more about the wal_decode_buffer_size than the checkpoint's size.\n> Apologies for the confusion. As suggested above, one way to benchmark\n> this to really see if there's any issue would be to increase\n> wal_decode_buffer_size to some pretty big size and then compare the\n> performance vs. unpatched. I'd think that could even be done with\n> pgbench, so you're not having to arrange for the same pages to get\n> updated over and over.\n> \n\nWhat exactly would be the point of such benchmark? 
I don't think the\npatch does prefetching based on wal_decode_buffer_size, that just says\nhow far ahead we decode - the prefetch distance is defined by\nmaintenance_io_concurrency.\n\nBut it's not clear to me what exactly the result would say about the\nnecessity of the optimization at hand (skipping prefetches for blocks\nwith recent FPI). If the maintenance_io_concurrency is very high,\nthe probability that a block is evicted prematurely grows, making the\nprefetch useless in general. How does this say anything about the\nproblem at hand? Sure, we'll do unnecessary I/O, causing issues, but\nthat's a bit like complaining the engine gets very hot when driving on a\nhighway in reverse.\n\nAFAICS to measure the worst case, you'd need a workload with a lot of\nFPIs, and very little actual I/O. That means a data set that fits into\nmemory (either shared buffers or RAM), and short checkpoints. But that's\nexactly the case where you don't need prefetching ...\n\n>>> Perhaps this already improves the common cases and is worth the extra\n>>> code on that basis, but I don't recall seeing much in the way of\n>>> benchmarking in this thread for that case- that is, where FPIs are\n>>> enabled and checkpoints are smaller than shared buffers. Jakub's\n>>> testing was done with FPWs disabled and Tomas's testing used checkpoints\n>>> which were much larger than the size of shared buffers on the system\n>>> doing the replay. While it's certainly good that this patch improves\n>>> those cases, we should also be looking out for the worst case and make\n>>> sure that the patch doesn't degrade performance in that case.\n>>\n>> I'm with Andres on this. It's fine to leave some possible optimizations on\n>> the table for the future. 
And even if some workloads are affected\n>> negatively, it's still possible to disable the prefetching.\n> \n> While I'm generally in favor of this argument, that a feature is\n> particularly important and that it's worth slowing down the common cases\n> to enable it, I dislike that it's applied inconsistently. I'd certainly\n\nIf you have a workload where this happens to cause issues, you can just\ndisable that. IMHO that's a perfectly reasonable engineering approach,\nwhere we get something that significantly improves 80% of the cases,\nallow disabling it for cases where it might cause issues, and then\nimprove it in the next version.\n\n\n> feel better about it if we had actual performance numbers to consider.\n> I don't doubt the possibility that the extra prefetch's just don't\n> amount to enough to matter but I have a hard time seeing them as not\n> having some cost and without actually measuring it, it's hard to say\n> what that cost is.\n> \n> Without looking farther back than the last record, we could end up\n> repeatedly asking for the same blocks to be prefetched too-\n> \n> FPI for block 1\n> FPI for block 2\n> WAL record for block 1\n> WAL record for block 2\n> WAL record for block 1\n> WAL record for block 2\n> WAL record for block 1\n> WAL record for block 2\n> \n> ... etc.\n> \n> Entirely possible my math is off, but seems like the worst case\n> situation right now might end up with some 4500 unnecessary prefetch\n> syscalls even with the proposed default wal_decode_buffer_size of\n> 512k and 56-byte WAL records ((524,288 - 16,384) / 56 / 2 = ~4534).\n> \n\nWell, that's a bit extreme workload, I guess. If you really have such\nlong streaks of WAL records touching the same small set of blocks, you\ndon't need WAL prefetching at all and you can just disable it. Easy.\n\nIf you have workload with small active set, frequent checkpoint etc.\nthen just don't enable WAL prefetching. 
What's wrong with that?\n\n> Issuing unnecessary prefetches for blocks we've already sent a prefetch\n> for is arguably a concern even if FPWs are off but the benefit of doing\n> the prefetching almost certainly will outweight that and mean that\n> finding a way to address it is something we could certainly do later as\n> a future improvement. I wouldn't have any issue with that. Just\n> doesn't seem as clear-cut to me when thinking about the FPW-enabled\n> case. Ultimately, if you, Andres and Munro are all not concerned about\n> it and no one else speaks up then I'm not going to pitch a fuss over it\n> being committed, but, as you said above, it seemed like a good point to\n> raise for everyone to consider.\n> \n\nRight, I was just going to point out the FPIs are not necessary - what\nmatters is the presence of long streaks of WAL records touching the same\nset of blocks. But people with workloads where this is common likely\ndon't need the WAL prefetching at all - the replica can keep up just\nfine, because it doesn't need to do much I/O anyway (and if it can't\nthen prefetching won't help much anyway). So just don't enable the\nprefetching, and there'll be no overhead.\n\n\nIf it was up to me, I'd just get the patch committed as is. Delaying the\nfeature because of concerns that it might have some negative effect in\nsome cases, when that can be simply mitigated by disabling the feature,\nis not really beneficial for our users.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 17 Mar 2021 17:12:17 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\n* Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n> Right, I was just going to point out the FPIs are not necessary - what\n> matters is the presence of long streaks of WAL records touching the same\n> set of blocks. But people with workloads where this is common likely\n> don't need the WAL prefetching at all - the replica can keep up just\n> fine, because it doesn't need to do much I/O anyway (and if it can't\n> then prefetching won't help much anyway). So just don't enable the\n> prefetching, and there'll be no overhead.\n\nIsn't this exactly the common case though..? Checkpoints happening\nevery 5 minutes, the replay of the FPI happens first and then the record\nis updated and everything's in SB for the later changes? You mentioned\nelsewhere that this would improve 80% of cases but that doesn't seem to\nbe backed up by anything and certainly doesn't seem likely to be the\ncase if we're talking about across all PG deployments. I also disagree\nthat asking the kernel to go do random I/O for us, even as a prefetch,\nis entirely free simply because we won't actually need those pages. At\nthe least, it potentially pushes out pages that we might need shortly\nfrom the filesystem cache, no?\n\n> If it was up to me, I'd just get the patch committed as is. 
Delaying the\n> feature because of concerns that it might have some negative effect in\n> some cases, when that can be simply mitigated by disabling the feature,\n> is not really beneficial for our users.\n\nI don't know that we actually know how many cases it might have a\nnegative effect on or what the actual amount of such negative cases there\nmight be- that's really why we should probably try to actually benchmark\nit and get real numbers behind it, particularly when the chances of\nrunning into such a negative effect with the default configuration (that\nis, FPWs enabled) on the more typical platforms (as in, not ZFS) is more\nlikely to occur in the field than the cases where FPWs are disabled and\nsomeone's running on ZFS.\n\nPerhaps more to the point, it'd be nice to see how this change actually\nimproves the cases where PG is running with more-or-less the defaults on\nthe more commonly deployed filesystems. If it doesn't then maybe it\nshouldn't be the default..? Surely the folks running on ZFS and running\nwith FPWs disabled would be able to manage to enable it if they\nwished to and we could avoid entirely the question of if this has a\nnegative impact on the more common cases.\n\nGuess I'm just not a fan of pushing out a change that will impact\neveryone by default, in a possibly negative way (or positive, though\nthat doesn't seem terribly likely, but who knows), without actually\nmeasuring what that impact will look like in those more common cases.\nShowing that it's a great win when you're on ZFS or running with FPWs\ndisabled is good and the expected best case, but we should be\nconsidering the worst case too when it comes to performance\nimprovements.\n\nAnyhow, ultimately I don't know that there's much more to discuss on\nthis thread with regard to this particular topic, at least. 
As I said\nbefore, if everyone else is on board and not worried about it then so be\nit; I feel that at least the concern that I raised has been heard.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 17 Mar 2021 17:43:31 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 3/17/21 10:43 PM, Stephen Frost wrote:\n> Greetings,\n> \n> * Tomas Vondra (tomas.vondra@enterprisedb.com) wrote:\n>> Right, I was just going to point out the FPIs are not necessary - what\n>> matters is the presence of long streaks of WAL records touching the same\n>> set of blocks. But people with workloads where this is common likely\n>> don't need the WAL prefetching at all - the replica can keep up just\n>> fine, because it doesn't need to do much I/O anyway (and if it can't\n>> then prefetching won't help much anyway). So just don't enable the\n>> prefetching, and there'll be no overhead.\n> \n> Isn't this exactly the common case though..? Checkpoints happening\n> every 5 minutes, the replay of the FPI happens first and then the record\n> is updated and everything's in SB for the later changes?\n\nWell, as I said before, the FPIs are not very significant - you'll have\nmostly the same issue with any repeated changes to the same block. It\ndoes not matter much if you do\n\n FPI for block 1\n WAL record for block 2\n WAL record for block 1\n WAL record for block 2\n WAL record for block 1\n\nor just\n\n WAL record for block 1\n WAL record for block 2\n WAL record for block 1\n WAL record for block 2\n WAL record for block 1\n\nIn both cases some of the prefetches are probably unnecessary. But the\nfrequency of checkpoints does not really matter, the important bit is\nrepeated changes to the same block(s).\n\nIf you have active set much larger than RAM, this is quite unlikely. And\nwe know from the pgbench tests that prefetching has a huge positive\neffect in this case.\n\nOn smaller active sets, with frequent updates to the same block, we may\nissue unnecessary prefetches - that's true. But (a) you have not shown\nany numbers suggesting this is actually an issue, and (b) those cases\ndon't really need prefetching because all the data is already either in\nshared buffers or in page cache. 
So if it happens to be an issue, the\nuser can simply disable it.\n\nSo what exactly would a problematic workload look like?\n\n> You mentioned elsewhere that this would improve 80% of cases but that\n> doesn't seem to be backed up by anything and certainly doesn't seem\n> likely to be the case if we're talking about across all PG\n> deployments.\n\nObviously, the 80% was just a figure of speech, illustrating my belief\nthat the proposed patch is beneficial for most users who currently have\nissues with replication lag. That is based on my experience with support\ncustomers who have such issues - it's almost invariably an OLTP workload\nwith large active set, and we know (from the benchmarks) that in these\ncases it helps.\n\nUsers who don't have issues with replication lag can disable (or not\nenable) the prefetching, and won't get any negative effects.\n\nPerhaps there are users with weird workloads that have replication lag\nissues but this patch won't help them - bummer, we can't solve\neverything in one go. Also, no one actually demonstrated such a workload\nin this thread so far.\n\nBut as you're suggesting we don't have data to support the claim that\nthis actually helps many users (with no risk to others), I'd point out\nyou have not actually provided any numbers showing that it actually is\nan issue in practice.\n\n\n> I also disagree that asking the kernel to go do random I/O for us, \n> even as a prefetch, is entirely free simply because we won't\n> actually need those pages. At the least, it potentially pushes out\n> pages that we might need shortly from the filesystem cache, no?\n\nWhere exactly did I say it's free? I said that workloads where this\nhappens a lot most likely don't need the prefetching at all, so it can\nbe simply disabled, eliminating all negative effects.\n\nMoreover, looking at a limited number of recently prefetched blocks\nwon't eliminate this problem anyway - imagine a random OLTP workload on a large\ndata set that nevertheless fits into RAM. 
After a while no read I/O needs to\nbe done, but you'd need a pretty much infinite list of prefetched blocks\nto eliminate that, and with smaller lists you'll still do 99% of the\nprefetches.\n\nJust disabling prefetching on such instances seems quite reasonable.\n\n\n>> If it was up to me, I'd just get the patch committed as is. Delaying the\n>> feature because of concerns that it might have some negative effect in\n>> some cases, when that can be simply mitigated by disabling the feature,\n>> is not really beneficial for our users.\n> \n> I don't know that we actually know how many cases it might have a\n> negative effect on or what the actual amount of such negative cases there\n> might be- that's really why we should probably try to actually benchmark\n> it and get real numbers behind it, particularly when the chances of\n> running into such a negative effect with the default configuration (that\n> is, FPWs enabled) on the more typical platforms (as in, not ZFS) is more\n> likely to occur in the field than the cases where FPWs are disabled and\n> someone's running on ZFS.\n> \n> Perhaps more to the point, it'd be nice to see how this change actually\n> improves the cases where PG is running with more-or-less the defaults on\n> the more commonly deployed filesystems. If it doesn't then maybe it\n> shouldn't be the default..? 
Surely the folks running on ZFS and running\n> with FPWs disabled would be able to manage to enable it if they\n> wished to and we could avoid entirely the question of if this has a\n> negative impact on the more common cases.\n> \n> Guess I'm just not a fan of pushing out a change that will impact\n> everyone by default, in a possibly negative way (or positive, though\n> that doesn't seem terribly likely, but who knows), without actually\n> measuring what that impact will look like in those more common cases.\n> Showing that it's a great win when you're on ZFS or running with FPWs\n> disabled is good and the expected best case, but we should be\n> considering the worst case too when it comes to performance\n> improvements.\n> \n\nWell, maybe it'll behave differently on systems with ZFS. I don't know,\nand I have no such machine to test that at the moment. My argument\nhowever remains the same - if it happens to be a problem, just don't\nenable (or disable) the prefetching, and you get the current behavior.\n\nFWIW I'm not sure there was a discussion or argument about what should\nbe the default setting (enabled or disabled). I'm fine with not enabling\nthis by default, so that people have to enable it explicitly.\n\nIn a way that'd be consistent with effective_io_concurrency being 1 by\ndefault, which almost disables regular prefetching.\n\n\n> Anyhow, ultimately I don't know that there's much more to discuss on\n> this thread with regard to this particular topic, at least. As I said\n> before, if everyone else is on board and not worried about it then so be\n> it; I feel that at least the concern that I raised has been heard.\n> \n\nOK, thanks for the discussions.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 18 Mar 2021 00:00:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
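The "regular prefetching" that effective_io_concurrency governs, and on which the WAL prefetcher builds, is issued on POSIX systems as a posix_fadvise(POSIX_FADV_WILLNEED) hint. A minimal stand-alone sketch of that call follows; the prefetch_block() helper and the 8 kB block size are invented for illustration and are not PostgreSQL's actual code:

```c
#define _XOPEN_SOURCE 600
#include <assert.h>
#include <fcntl.h>
#include <stdio.h>

/*
 * Hypothetical helper: ask the kernel to start reading one block into
 * the page cache ahead of time.  The call is purely advisory; a zero
 * return means the hint was accepted, not that any I/O was performed,
 * which is one reason the benefit is hard to measure on fast storage.
 */
static int
prefetch_block(int fd, off_t blkno, size_t blksz)
{
    return posix_fadvise(fd, blkno * (off_t) blksz, blksz,
                         POSIX_FADV_WILLNEED);
}
```

Because the hint is asynchronous and best-effort, issuing it too late (or for blocks already cached) simply wastes a system call, which is the trade-off debated in this thread.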
{
"msg_contents": "On Thu, Mar 18, 2021 at 12:00 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 3/17/21 10:43 PM, Stephen Frost wrote:\n> > Guess I'm just not a fan of pushing out a change that will impact\n> > everyone by default, in a possibly negative way (or positive, though\n> > that doesn't seem terribly likely, but who knows), without actually\n> > measuring what that impact will look like in those more common cases.\n> > Showing that it's a great win when you're on ZFS or running with FPWs\n> > disabled is good and the expected best case, but we should be\n> > considering the worst case too when it comes to performance\n> > improvements.\n> >\n>\n> Well, maybe it'll behave differently on systems with ZFS. I don't know,\n> and I have no such machine to test that at the moment. My argument\n> however remains the same - if if happens to be a problem, just don't\n> enable (or disable) the prefetching, and you get the current behavior.\n\nI see the road map for this feature being to get it working on every\nOS via the AIO patchset, in later work, hopefully not very far in the\nfuture (in the most portable mode, you get I/O worker processes doing\npread() or preadv() calls on behalf of recovery). So I'll be glad to\nget this infrastructure in, even though it's maybe only useful for\nsome people in the first release.\n\n> FWIW I'm not sure there was a discussion or argument about what should\n> be the default setting (enabled or disabled). I'm fine with not enabling\n> this by default, so that people have to enable it explicitly.\n>\n> In a way that'd be consistent with effective_io_concurrency being 1 by\n> default, which almost disables regular prefetching.\n\nYeah, I'm not sure but I'd be fine with disabling it by default in the\ninitial release. The current patch set has it enabled, but that's\nmostly for testing, it's not an opinion on how it should ship.\n\nI've attached a rebased patch set with a couple of small changes:\n\n1. 
I abandoned the patch that proposed\npg_atomic_unlocked_add_fetch_u{32,64}() and went for a simple function\nlocal to xlogprefetch.c that just does pg_atomic_write_u64(counter,\npg_atomic_read_u64(counter) + 1), in response to complaints from\nAndres[1].\n\n2. I fixed a bug in ReadRecentBuffer(), and moved it into its own\npatch for separate review.\n\nI'm now looking at Horiguchi-san and Heikki's patch[2] to remove\nXLogReader's callbacks, to try to understand how these two patch sets\nare related. I don't really like the way those callbacks work, and\nI'm afraid I had to make them more complicated. But I don't yet know\nvery much about that other patch set. More soon.\n\n[1] https://www.postgresql.org/message-id/20201230035736.qmyrtrpeewqbidfi%40alap3.anarazel.de\n[2] https://www.postgresql.org/message-id/flat/20190418.210257.43726183.horiguchi.kyotaro@lab.ntt.co.jp",
"msg_date": "Thu, 18 Mar 2021 13:54:18 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
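Point 1 above works because the prefetch statistics counters have exactly one updater (the startup process), so a plain read-then-write is enough and no locked read-modify-write instruction is needed. A rough stand-alone illustration of the same idea using C11 atomics; the names here are invented, and PostgreSQL's own pg_atomic_* API looks different:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/*
 * Sketch of an "unlocked" increment: the load and store are each atomic,
 * so concurrent readers always see an untorn 64-bit value, but the
 * increment as a whole is not atomic.  That is only correct when a
 * single process ever updates the counter, as with recovery statistics.
 */
typedef _Atomic uint64_t prefetch_counter;

static inline void
counter_increment(prefetch_counter *counter)
{
    uint64_t v = atomic_load_explicit(counter, memory_order_relaxed);

    atomic_store_explicit(counter, v + 1, memory_order_relaxed);
}

static inline uint64_t
counter_read(prefetch_counter *counter)
{
    return atomic_load_explicit(counter, memory_order_relaxed);
}
```

The appeal over a general pg_atomic_unlocked_add_fetch primitive is that the single-writer assumption stays local to the one module that depends on it.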
{
"msg_contents": "On 3/18/21 1:54 AM, Thomas Munro wrote:\n> On Thu, Mar 18, 2021 at 12:00 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 3/17/21 10:43 PM, Stephen Frost wrote:\n>>> Guess I'm just not a fan of pushing out a change that will impact\n>>> everyone by default, in a possibly negative way (or positive, though\n>>> that doesn't seem terribly likely, but who knows), without actually\n>>> measuring what that impact will look like in those more common cases.\n>>> Showing that it's a great win when you're on ZFS or running with FPWs\n>>> disabled is good and the expected best case, but we should be\n>>> considering the worst case too when it comes to performance\n>>> improvements.\n>>>\n>>\n>> Well, maybe it'll behave differently on systems with ZFS. I don't know,\n>> and I have no such machine to test that at the moment. My argument\n>> however remains the same - if if happens to be a problem, just don't\n>> enable (or disable) the prefetching, and you get the current behavior.\n> \n> I see the road map for this feature being to get it working on every\n> OS via the AIO patchset, in later work, hopefully not very far in the\n> future (in the most portable mode, you get I/O worker processes doing\n> pread() or preadv() calls on behalf of recovery). So I'll be glad to\n> get this infrastructure in, even though it's maybe only useful for\n> some people in the first release.\n> \n\n+1 to that\n\n\n>> FWIW I'm not sure there was a discussion or argument about what should\n>> be the default setting (enabled or disabled). I'm fine with not enabling\n>> this by default, so that people have to enable it explicitly.\n>>\n>> In a way that'd be consistent with effective_io_concurrency being 1 by\n>> default, which almost disables regular prefetching.\n> \n> Yeah, I'm not sure but I'd be fine with disabling it by default in the\n> initial release. 
The current patch set has it enabled, but that's\n> mostly for testing, it's not an opinion on how it should ship.\n> \n\n+1 to that too. Better to have it disabled by default than not at all.\n\n\n> I've attached a rebased patch set with a couple of small changes:\n> \n> 1. I abandoned the patch that proposed\n> pg_atomic_unlocked_add_fetch_u{32,64}() and went for a simple function\n> local to xlogprefetch.c that just does pg_atomic_write_u64(counter,\n> pg_atomic_read_u64(counter) + 1), in response to complaints from\n> Andres[1].\n> \n> 2. I fixed a bug in ReadRecentBuffer(), and moved it into its own\n> patch for separate review.\n> \n> I'm now looking at Horiguchi-san and Heikki's patch[2] to remove\n> XLogReader's callbacks, to try to understand how these two patch sets\n> are related. I don't really like the way those callbacks work, and\n> I'm afraid had to make them more complicated. But I don't yet know\n> very much about that other patch set. More soon.\n> \n\nOK. Do you think we should get both of those patches in, or do we need\nto commit them in a particular order? Or what is your concern?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 19 Mar 2021 02:29:04 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Mar 19, 2021 at 2:29 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 3/18/21 1:54 AM, Thomas Munro wrote:\n> > I'm now looking at Horiguchi-san and Heikki's patch[2] to remove\n> > XLogReader's callbacks, to try to understand how these two patch sets\n> > are related. I don't really like the way those callbacks work, and\n> > I'm afraid had to make them more complicated. But I don't yet know\n> > very much about that other patch set. More soon.\n>\n> OK. Do you think we should get both of those patches in, or do we need\n> to commit them in a particular order? Or what is your concern?\n\nI would like to commit the callback-removal patch first, and then the\nWAL decoder and prefetcher patches become simpler and cleaner on top\nof that. I will post the rebase and explanation shortly.\n\n\n",
"msg_date": "Fri, 2 Apr 2021 10:50:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Here's rebase, on top of Horiguchi-san's v19 patch set. My patches\nstart at 0007. Previously, there was a \"nowait\" flag that was passed\ninto all the callbacks so that XLogReader could wait for new WAL in\nsome cases but not others. This new version uses the proposed\nXLREAD_NEED_DATA protocol, and the caller deals with waiting for data\nto arrive when appropriate. This seems tidier to me.\n\nI made one other simplifying change: previously, the prefetch module\nwould read the WAL up to the \"written\" LSN (so, allowing itself to\nread data that had been written but not yet flushed to disk by the\nwalreceiver), though it still waited until a record's LSN was\n\"flushed\" before replaying. That allowed prefetching to happen\nconcurrently with the WAL flush, which was nice, but it felt a little\ntoo \"special\". I decided to remove that part for now, and I plan to\nlook into making standbys work more like primary servers, using WAL\nbuffers, the WAL writer and optionally the standard log-before-data\nrule.",
"msg_date": "Wed, 7 Apr 2021 23:24:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 4/7/21 1:24 PM, Thomas Munro wrote:\n> Here's rebase, on top of Horiguchi-san's v19 patch set. My patches\n> start at 0007. Previously, there was a \"nowait\" flag that was passed\n> into all the callbacks so that XLogReader could wait for new WAL in\n> some cases but not others. This new version uses the proposed\n> XLREAD_NEED_DATA protocol, and the caller deals with waiting for data\n> to arrive when appropriate. This seems tidier to me.\n> \n\nOK, seems reasonable.\n\n> I made one other simplifying change: previously, the prefetch module\n> would read the WAL up to the \"written\" LSN (so, allowing itself to\n> read data that had been written but not yet flushed to disk by the\n> walreceiver), though it still waited until a record's LSN was\n> \"flushed\" before replaying. That allowed prefetching to happen\n> concurrently with the WAL flush, which was nice, but it felt a little\n> too \"special\". I decided to remove that part for now, and I plan to\n> look into making standbys work more like primary servers, using WAL\n> buffers, the WAL writer and optionally the standard log-before-data\n> rule.\n> \n\nNot sure, but the removal seems unnecessary. I'm worried that this will\nsignificantly reduce the amount of data that we'll be able to prefetch.\nHow likely is it that we have data that is written but not flushed?\nLet's assume the replica is lagging and network bandwidth is not the\nbottleneck - how likely is it that this \"has to be flushed\" rule becomes\na limit for the prefetching?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 7 Apr 2021 17:27:43 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 8, 2021 at 3:27 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 4/7/21 1:24 PM, Thomas Munro wrote:\n> > I made one other simplifying change: previously, the prefetch module\n> > would read the WAL up to the \"written\" LSN (so, allowing itself to\n> > read data that had been written but not yet flushed to disk by the\n> > walreceiver), though it still waited until a record's LSN was\n> > \"flushed\" before replaying. That allowed prefetching to happen\n> > concurrently with the WAL flush, which was nice, but it felt a little\n> > too \"special\". I decided to remove that part for now, and I plan to\n> > look into making standbys work more like primary servers, using WAL\n> > buffers, the WAL writer and optionally the standard log-before-data\n> > rule.\n>\n> Not sure, but the removal seems unnecessary. I'm worried that this will\n> significantly reduce the amount of data that we'll be able to prefetch.\n> How likely it is that we have data that is written but not flushed?\n> Let's assume the replica is lagging and network bandwidth is not the\n> bottleneck - how likely is this \"has to be flushed\" a limit for the\n> prefetching?\n\nYeah, it would have been nice to include that but it'll have to be for\nv15 due to lack of time to convince myself that it was correct. I do\nintend to look into more concurrency of that kind for v15. I have\npushed these patches, updated to be disabled by default. I will look\ninto how I can run a BF animal that has it enabled during the recovery\ntests for coverage. Thanks very much to everyone on this thread for\nall the discussion and testing so far.\n\n\n",
"msg_date": "Thu, 8 Apr 2021 23:46:21 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "\n\nOn 4/8/21 1:46 PM, Thomas Munro wrote:\n> On Thu, Apr 8, 2021 at 3:27 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> On 4/7/21 1:24 PM, Thomas Munro wrote:\n>>> I made one other simplifying change: previously, the prefetch module\n>>> would read the WAL up to the \"written\" LSN (so, allowing itself to\n>>> read data that had been written but not yet flushed to disk by the\n>>> walreceiver), though it still waited until a record's LSN was\n>>> \"flushed\" before replaying. That allowed prefetching to happen\n>>> concurrently with the WAL flush, which was nice, but it felt a little\n>>> too \"special\". I decided to remove that part for now, and I plan to\n>>> look into making standbys work more like primary servers, using WAL\n>>> buffers, the WAL writer and optionally the standard log-before-data\n>>> rule.\n>>\n>> Not sure, but the removal seems unnecessary. I'm worried that this will\n>> significantly reduce the amount of data that we'll be able to prefetch.\n>> How likely it is that we have data that is written but not flushed?\n>> Let's assume the replica is lagging and network bandwidth is not the\n>> bottleneck - how likely is this \"has to be flushed\" a limit for the\n>> prefetching?\n> \n> Yeah, it would have been nice to include that but it'll have to be for\n> v15 due to lack of time to convince myself that it was correct. I do\n> intend to look into more concurrency of that kind for v15. I have\n> pushed these patches, updated to be disabled by default. I will look\n> into how I can run a BF animal that has it enabled during the recovery\n> tests for coverage. Thanks very much to everyone on this thread for\n> all the discussion and testing so far.\n> \n\nOK, understood. 
I'll rerun the benchmarks on this version, and if\nthere's a significant negative impact we can look into that during the\nstabilization phase.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 8 Apr 2021 14:39:18 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Here's some little language fixes.\n\nBTW, before beginning \"recovery\", PG syncs all the data dirs.\nThis can be slow, and it seems like the slowness is frequently due to file\nmetadata. For example, that's an obvious consequence of an OS crash, after\nwhich the page cache is empty. I've made a habit of running find /zfs -ls |wc\nto pre-warm it, which can take a little bit, but then the recovery process\nstarts moments later. I don't have any timing measurements, but I expect that\nstarting to stat() all data files as soon as possible would be a win.\n\ncommit cc9707de333fe8242607cde9f777beadc68dbf04\nAuthor: Justin Pryzby <pryzbyj@telsasoft.com>\nDate: Thu Apr 8 10:43:14 2021 -0500\n\n WIP: doc review: Optionally prefetch referenced data in recovery.\n \n 1d257577e08d3e598011d6850fd1025858de8c8c\n\ndiff --git a/doc/src/sgml/config.sgml b/doc/src/sgml/config.sgml\nindex bc4a8b2279..139dee7aa2 100644\n--- a/doc/src/sgml/config.sgml\n+++ b/doc/src/sgml/config.sgml\n@@ -3621,7 +3621,7 @@ include_dir 'conf.d'\n pool after that. However, on file systems with a block size larger\n than\n <productname>PostgreSQL</productname>'s, prefetching can avoid a\n- costly read-before-write when a blocks are later written.\n+ costly read-before-write when blocks are later written.\n The default is off.\n </para>\n </listitem>\ndiff --git a/doc/src/sgml/wal.sgml b/doc/src/sgml/wal.sgml\nindex 24cf567ee2..36e00c92c2 100644\n--- a/doc/src/sgml/wal.sgml\n+++ b/doc/src/sgml/wal.sgml\n@@ -816,9 +816,7 @@\n prefetching mechanism is most likely to be effective on systems\n with <varname>full_page_writes</varname> set to\n <varname>off</varname> (where that is safe), and where the working\n- set is larger than RAM. By default, prefetching in recovery is enabled\n- on operating systems that have <function>posix_fadvise</function>\n- support.\n+ set is larger than RAM. 
By default, prefetching in recovery is disabled.\n </para>\n </sect1>\n \ndiff --git a/src/backend/access/transam/xlogprefetch.c b/src/backend/access/transam/xlogprefetch.c\nindex 28764326bc..363c079964 100644\n--- a/src/backend/access/transam/xlogprefetch.c\n+++ b/src/backend/access/transam/xlogprefetch.c\n@@ -31,7 +31,7 @@\n * stall; this is counted with \"skip_fpw\".\n *\n * The only way we currently have to know that an I/O initiated with\n- * PrefetchSharedBuffer() has that recovery will eventually call ReadBuffer(),\n+ * PrefetchSharedBuffer() has that recovery will eventually call ReadBuffer(), XXX: what ??\n * and perform a synchronous read. Therefore, we track the number of\n * potentially in-flight I/Os by using a circular buffer of LSNs. When it's\n * full, we have to wait for recovery to replay records so that the queue\n@@ -660,7 +660,7 @@ XLogPrefetcherScanBlocks(XLogPrefetcher *prefetcher)\n \t\t\t/*\n \t\t\t * I/O has possibly been initiated (though we don't know if it was\n \t\t\t * already cached by the kernel, so we just have to assume that it\n-\t\t\t * has due to lack of better information). Record this as an I/O\n+\t\t\t * was due to lack of better information). 
Record this as an I/O\n \t\t\t * in progress until eventually we replay this LSN.\n \t\t\t */\n \t\t\tXLogPrefetchIncrement(&SharedStats->prefetch);\ndiff --git a/src/backend/utils/misc/guc.c b/src/backend/utils/misc/guc.c\nindex 090abdad8b..8c72ba1f1a 100644\n--- a/src/backend/utils/misc/guc.c\n+++ b/src/backend/utils/misc/guc.c\n@@ -2774,7 +2774,7 @@ static struct config_int ConfigureNamesInt[] =\n \t{\n \t\t{\"wal_decode_buffer_size\", PGC_POSTMASTER, WAL_ARCHIVE_RECOVERY,\n \t\t\tgettext_noop(\"Maximum buffer size for reading ahead in the WAL during recovery.\"),\n-\t\t\tgettext_noop(\"This controls the maximum distance we can read ahead n the WAL to prefetch referenced blocks.\"),\n+\t\t\tgettext_noop(\"This controls the maximum distance we can read ahead in the WAL to prefetch referenced blocks.\"),\n \t\t\tGUC_UNIT_BYTE\n \t\t},\n \t\t&wal_decode_buffer_size,\n\n\n",
"msg_date": "Thu, 8 Apr 2021 22:37:04 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
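The xlogprefetch.c comment quoted in the patch above describes tracking potentially in-flight I/Os with a circular buffer of LSNs: when the buffer fills up, prefetching pauses until replay catches up past the oldest entry. A toy sketch of that bookkeeping, in which all names and the queue size are invented rather than taken from the real module:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define QUEUE_SIZE 8            /* arbitrary for illustration */

typedef uint64_t XLogRecPtrSketch;

/* Ring buffer of LSNs for prefetches whose reads may still be in flight. */
typedef struct
{
    XLogRecPtrSketch lsns[QUEUE_SIZE];
    int         head;           /* next slot to fill */
    int         tail;           /* oldest in-flight entry */
    int         count;
} PrefetchQueue;

static bool
queue_push(PrefetchQueue *q, XLogRecPtrSketch lsn)
{
    if (q->count == QUEUE_SIZE)
        return false;           /* full: caller must wait for replay */
    q->lsns[q->head] = lsn;
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count++;
    return true;
}

/* Retire entries whose record LSN has now been replayed. */
static void
queue_complete_upto(PrefetchQueue *q, XLogRecPtrSketch replayed_lsn)
{
    while (q->count > 0 && q->lsns[q->tail] <= replayed_lsn)
    {
        q->tail = (q->tail + 1) % QUEUE_SIZE;
        q->count--;
    }
}
```

Since LSNs are replayed in order, only the tail ever needs to be examined, which keeps the "how many I/Os are outstanding" question O(1).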
{
"msg_contents": "On Fri, Apr 9, 2021 at 3:37 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Here's some little language fixes.\n\nThanks! Done. I rewrote the gibberish comment that made you say\n\"XXX: what?\". Pushed.\n\n> BTW, before beginning \"recovery\", PG syncs all the data dirs.\n> This can be slow, and it seems like the slowness is frequently due to file\n> metadata. For example, that's an obvious consequence of an OS crash, after\n> which the page cache is empty. I've made a habit of running find /zfs -ls |wc\n> to pre-warm it, which can take a little bit, but then the recovery process\n> starts moments later. I don't have any timing measurements, but I expect that\n> starting to stat() all data files as soon as possible would be a win.\n\nDid you see commit 61752afb, \"Provide\nrecovery_init_sync_method=syncfs\"? Actually I believe it's safe to\nskip that phase completely and do a tiny bit more work during\nrecovery, which I'd like to work on for v15[1].\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKG%2B8Wm8TSfMWPteMEHfh194RytVTBNoOkggTQT1p5NTY7Q%40mail.gmail.com\n\n\n",
"msg_date": "Sat, 10 Apr 2021 08:27:42 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Apr 10, 2021 at 08:27:42AM +1200, Thomas Munro wrote:\n> On Fri, Apr 9, 2021 at 3:37 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> > Here's some little language fixes.\n> \n> Thanks! Done. I rewrote the gibberish comment that made you say\n> \"XXX: what?\". Pushed.\n> \n> > BTW, before beginning \"recovery\", PG syncs all the data dirs.\n> > This can be slow, and it seems like the slowness is frequently due to file\n> > metadata. For example, that's an obvious consequence of an OS crash, after\n> > which the page cache is empty. I've made a habit of running find /zfs -ls |wc\n> > to pre-warm it, which can take a little bit, but then the recovery process\n> > starts moments later. I don't have any timing measurements, but I expect that\n> > starting to stat() all data files as soon as possible would be a win.\n> \n> Did you see commit 61752afb, \"Provide\n> recovery_init_sync_method=syncfs\"? Actually I believe it's safe to\n> skip that phase completely and do a tiny bit more work during\n> recovery, which I'd like to work on for v15[1].\n\nYes, I have it in my list for v14 deployment. Thanks for that.\n\nDid you see this?\nhttps://www.postgresql.org/message-id/GV0P278MB0483490FEAC879DCA5ED583DD2739%40GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM\n\nI meant to mail you so you could include it in the same commit, but forgot\nuntil now.\n\n-- \nJustin\n\n\n",
"msg_date": "Fri, 9 Apr 2021 15:36:58 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Apr 10, 2021 at 8:37 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> Did you see this?\n> https://www.postgresql.org/message-id/GV0P278MB0483490FEAC879DCA5ED583DD2739%40GV0P278MB0483.CHEP278.PROD.OUTLOOK.COM\n>\n> I meant to mail you so you could include it in the same commit, but forgot\n> until now.\n\nDone, thanks.\n\n\n",
"msg_date": "Sat, 10 Apr 2021 08:45:50 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi, \r\n\r\nThank you for developing a great feature. I tested this feature and checked the documentation.\r\nCurrently, the documentation for the pg_stat_prefetch_recovery view is included in the description for the pg_stat_subscription view.\r\n\r\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION\r\n\r\nIt is also not displayed in the list of \"28.2. The Statistics Collector\".\r\nhttps://www.postgresql.org/docs/devel/monitoring.html\r\n\r\nThe attached patch modifies the pg_stat_prefetch_recovery view to appear as a separate view.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro [mailto:thomas.munro@gmail.com] \r\nSent: Saturday, April 10, 2021 5:46 AM\r\nTo: Justin Pryzby <pryzby@telsasoft.com>\r\nCc: Tomas Vondra <tomas.vondra@enterprisedb.com>; Stephen Frost <sfrost@snowman.net>; Andres Freund <andres@anarazel.de>; Jakub Wartak <Jakub.Wartak@tomtom.com>; Alvaro Herrera <alvherre@2ndquadrant.com>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; Dmitry Dolgov <9erthalion6@gmail.com>; David Steele <david@pgmasters.net>; pgsql-hackers <pgsql-hackers@postgresql.org>\r\nSubject: Re: WIP: WAL prefetch (another approach)\r\n\r\nOn Sat, Apr 10, 2021 at 8:37 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n> Did you see this?\r\n> INVALID URI REMOVED\r\n> 278MB0483490FEAC879DCA5ED583DD2739*40GV0P278MB0483.CHEP278.PROD.OUTLOO\r\n> K.COM__;JQ!!NpxR!wcPrhiB2CaHRtywGoh9Ap0M-kH1m07hGI37-ycYRGCPgCqGs30lRS\r\n> KicsXacduEXHxI$\r\n>\r\n> I meant to mail you so you could include it in the same commit, but \r\n> forgot until now.\r\n\r\nDone, thanks.",
"msg_date": "Tue, 13 Apr 2021 02:33:12 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Apr 10, 2021 at 2:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n\nIn commit 1d257577e08d3e598011d6850fd1025858de8c8c, there is a change\nin file format for stats, won't it require bumping\nPGSTAT_FILE_FORMAT_ID?\n\nActually, I came across this while working on my today's commit\nf5fc2f5b23 where I forgot to bump PGSTAT_FILE_FORMAT_ID. So, I thought\nmaybe we can bump it just once if required?\n\n-- \nWith Regards,\nAmit Kapila.\n\n\n",
"msg_date": "Fri, 16 Apr 2021 17:35:42 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
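The reason PGSTAT_FILE_FORMAT_ID must be bumped when the stats file layout changes is that readers use it to reject a file written with a different layout rather than silently misreading it. A simplified illustration of that guard follows; the struct, function names, and format value are all invented, not PostgreSQL's actual stats machinery:

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

#define STATS_FORMAT_ID 0x01A5BC04     /* bump whenever the layout changes */

typedef struct
{
    uint64_t    prefetch;
    uint64_t    skip_fpw;
} SketchStats;

/* Write a format header followed by the stats payload. */
static int
stats_write(FILE *f, const SketchStats *stats)
{
    uint32_t    id = STATS_FORMAT_ID;

    if (fwrite(&id, sizeof(id), 1, f) != 1)
        return -1;
    if (fwrite(stats, sizeof(*stats), 1, f) != 1)
        return -1;
    return 0;
}

/* Returns 0 on success, -1 if the file uses an unknown format. */
static int
stats_read(FILE *f, SketchStats *stats)
{
    uint32_t    id;

    if (fread(&id, sizeof(id), 1, f) != 1 || id != STATS_FORMAT_ID)
        return -1;              /* stale layout: discard, start fresh */
    if (fread(stats, sizeof(*stats), 1, f) != 1)
        return -1;
    return 0;
}
```

Bumping the identifier once per release is sufficient, which is why folding several layout changes under a single bump, as suggested above, works.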
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> Yeah, it would have been nice to include that but it'll have to be for\n> v15 due to lack of time to convince myself that it was correct. I do\n> intend to look into more concurrency of that kind for v15. I have\n> pushed these patches, updated to be disabled by default.\n\nI have a fairly bad feeling about these patches. I've already fixed\none critical bug (see 9e4114822), but I am still seeing random, hard\nto reproduce failures in WAL replay testing. It looks like sometimes\nthe \"decoded\" version of a WAL record doesn't match what I see in\nthe on-disk data, which I'm having no luck tracing down.\n\nAnother interesting failure I just came across is\n\n2021-04-21 11:32:14.280 EDT [14606] LOG: incorrect resource manager data checksum in record at F/438000A4\nTRAP: FailedAssertion(\"state->decoding\", File: \"xlogreader.c\", Line: 845, PID: 14606)\n2021-04-21 11:38:23.066 EDT [14603] LOG: startup process (PID 14606) was terminated by signal 6: Abort trap\n\nwith stack trace\n\n#0 0x90b669f0 in kill ()\n#1 0x90c01bfc in abort ()\n#2 0x0057a6a0 in ExceptionalCondition (conditionName=<value temporarily unavailable, due to optimizations>, errorType=<value temporarily unavailable, due to optimizations>, fileName=<value temporarily unavailable, due to optimizations>, lineNumber=<value temporarily unavailable, due to optimizations>) at assert.c:69\n#3 0x000f5cf4 in XLogDecodeOneRecord (state=0x1000640, allow_oversized=1 '\\001') at xlogreader.c:845\n#4 0x000f682c in XLogNextRecord (state=0x1000640, record=0xbfffba38, errormsg=0xbfffba9c) at xlogreader.c:466\n#5 0x000f695c in XLogReadRecord (state=<value temporarily unavailable, due to optimizations>, record=0xbfffba98, errormsg=<value temporarily unavailable, due to optimizations>) at xlogreader.c:352\n#6 0x000e61a0 in ReadRecord (xlogreader=0x1000640, emode=15, fetching_ckpt=0 '\\0') at xlog.c:4398\n#7 0x000ea320 in StartupXLOG () at xlog.c:7567\n#8 
0x00362218 in StartupProcessMain () at startup.c:244\n#9 0x000fc170 in AuxiliaryProcessMain (argc=<value temporarily unavailable, due to optimizations>, argv=<value temporarily unavailable, due to optimizations>) at bootstrap.c:447\n#10 0x0035c740 in StartChildProcess (type=StartupProcess) at postmaster.c:5439\n#11 0x00360f4c in PostmasterMain (argc=5, argv=0xa006a0) at postmaster.c:1406\n#12 0x0029737c in main (argc=<value temporarily unavailable, due to optimizations>, argv=<value temporarily unavailable, due to optimizations>) at main.c:209\n\n\nI am not sure whether the checksum failure itself is real or a variant\nof the seeming bad-reconstruction problem, but what I'm on about right\nat this moment is that the error handling logic for this case seems\nquite broken. Why is a checksum failure only worthy of a LOG message?\nWhy is ValidXLogRecord() issuing a log message for itself, rather than\nbeing tied into the report_invalid_record() mechanism? Why are we\nevidently still trying to decode records afterwards?\n\nIn general, I'm not too pleased with the apparent attitude in this\nthread that it's okay to push a patch that only mostly works on the\nlast day of the dev cycle and plan to stabilize it later.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Apr 2021 12:30:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 4/21/21 6:30 PM, Tom Lane wrote:\n> Thomas Munro <thomas.munro@gmail.com> writes:\n>> Yeah, it would have been nice to include that but it'll have to be for\n>> v15 due to lack of time to convince myself that it was correct. I do\n>> intend to look into more concurrency of that kind for v15. I have\n>> pushed these patches, updated to be disabled by default.\n> \n> I have a fairly bad feeling about these patches. I've already fixed\n> one critical bug (see 9e4114822), but I am still seeing random, hard\n> to reproduce failures in WAL replay testing. It looks like sometimes\n> the \"decoded\" version of a WAL record doesn't match what I see in\n> the on-disk data, which I'm having no luck tracing down.\n> \n> Another interesting failure I just came across is\n> \n> 2021-04-21 11:32:14.280 EDT [14606] LOG: incorrect resource manager data checksum in record at F/438000A4\n> TRAP: FailedAssertion(\"state->decoding\", File: \"xlogreader.c\", Line: 845, PID: 14606)\n> 2021-04-21 11:38:23.066 EDT [14603] LOG: startup process (PID 14606) was terminated by signal 6: Abort trap\n> \n> with stack trace\n> \n> #0 0x90b669f0 in kill ()\n> #1 0x90c01bfc in abort ()\n> #2 0x0057a6a0 in ExceptionalCondition (conditionName=<value temporarily unavailable, due to optimizations>, errorType=<value temporarily unavailable, due to optimizations>, fileName=<value temporarily unavailable, due to optimizations>, lineNumber=<value temporarily unavailable, due to optimizations>) at assert.c:69\n> #3 0x000f5cf4 in XLogDecodeOneRecord (state=0x1000640, allow_oversized=1 '\\001') at xlogreader.c:845\n> #4 0x000f682c in XLogNextRecord (state=0x1000640, record=0xbfffba38, errormsg=0xbfffba9c) at xlogreader.c:466\n> #5 0x000f695c in XLogReadRecord (state=<value temporarily unavailable, due to optimizations>, record=0xbfffba98, errormsg=<value temporarily unavailable, due to optimizations>) at xlogreader.c:352\n> #6 0x000e61a0 in ReadRecord (xlogreader=0x1000640, emode=15, 
fetching_ckpt=0 '\\0') at xlog.c:4398\n> #7 0x000ea320 in StartupXLOG () at xlog.c:7567\n> #8 0x00362218 in StartupProcessMain () at startup.c:244\n> #9 0x000fc170 in AuxiliaryProcessMain (argc=<value temporarily unavailable, due to optimizations>, argv=<value temporarily unavailable, due to optimizations>) at bootstrap.c:447\n> #10 0x0035c740 in StartChildProcess (type=StartupProcess) at postmaster.c:5439\n> #11 0x00360f4c in PostmasterMain (argc=5, argv=0xa006a0) at postmaster.c:1406\n> #12 0x0029737c in main (argc=<value temporarily unavailable, due to optimizations>, argv=<value temporarily unavailable, due to optimizations>) at main.c:209\n> \n> \n> I am not sure whether the checksum failure itself is real or a variant\n> of the seeming bad-reconstruction problem, but what I'm on about right\n> at this moment is that the error handling logic for this case seems\n> quite broken. Why is a checksum failure only worthy of a LOG message?\n> Why is ValidXLogRecord() issuing a log message for itself, rather than\n> being tied into the report_invalid_record() mechanism? Why are we\n> evidently still trying to decode records afterwards?\n> \n\nYeah, that seems suspicious.\n\n> In general, I'm not too pleased with the apparent attitude in this\n> thread that it's okay to push a patch that only mostly works on the\n> last day of the dev cycle and plan to stabilize it later.\n> \n\nWas there such attitude? I don't think people were arguing for pushing a\npatch that's not working correctly. The discussion was mostly about getting\nit committed even and leaving some optimizations for v15.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Wed, 21 Apr 2021 22:07:43 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 8:07 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> On 4/21/21 6:30 PM, Tom Lane wrote:\n> > Thomas Munro <thomas.munro@gmail.com> writes:\n> >> Yeah, it would have been nice to include that but it'll have to be for\n> >> v15 due to lack of time to convince myself that it was correct. I do\n> >> intend to look into more concurrency of that kind for v15. I have\n> >> pushed these patches, updated to be disabled by default.\n> >\n> > I have a fairly bad feeling about these patches. I've already fixed\n> > one critical bug (see 9e4114822), but I am still seeing random, hard\n> > to reproduce failures in WAL replay testing. It looks like sometimes\n> > the \"decoded\" version of a WAL record doesn't match what I see in\n> > the on-disk data, which I'm having no luck tracing down.\n\nUgh. Looking into this now. Also, this week I have been researching\na possible problem with eg ALTER TABLE SET TABLESPACE in the higher\nlevel patch, which I'll write about soon.\n\n> > I am not sure whether the checksum failure itself is real or a variant\n> > of the seeming bad-reconstruction problem, but what I'm on about right\n> > at this moment is that the error handling logic for this case seems\n> > quite broken. Why is a checksum failure only worthy of a LOG message?\n> > Why is ValidXLogRecord() issuing a log message for itself, rather than\n> > being tied into the report_invalid_record() mechanism? Why are we\n> > evidently still trying to decode records afterwards?\n>\n> Yeah, that seems suspicious.\n\nI may have invited trouble by deciding to rebase on the other proposal\nlate in the cycle. That interfaces around there.\n\n> > In general, I'm not too pleased with the apparent attitude in this\n> > thread that it's okay to push a patch that only mostly works on the\n> > last day of the dev cycle and plan to stabilize it later.\n>\n> Was there such attitude? 
I don't think people were arguing for pushing a\n> patch that's not working correctly. The discussion was mostly about getting\n> it committed and leaving some optimizations for v15.\n\nThat wasn't my plan, but I admit that the timing was non-ideal. In\nany case, I'll dig into these failures and then consider options.\nMore soon.\n\n\n",
"msg_date": "Thu, 22 Apr 2021 08:16:43 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 8:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> That wasn't my plan, but I admit that the timing was non-ideal. In\n> any case, I'll dig into these failures and then consider options.\n> More soon.\n\nYeah, this clearly needs more work. xlogreader.c is difficult to work\nwith and I think we need to keep trying to improve it, but I made a\nbad call here trying to combine this with other refactoring work up\nagainst a deadline and I made some dumb mistakes. I could of course\ndebug it in-tree, and I know that this has been an anticipated\nfeature. Personally I think the right thing to do now is to revert it\nand re-propose for 15 early in the cycle, supported with some better\ntesting infrastructure.\n\n\n",
"msg_date": "Thu, 22 Apr 2021 11:16:17 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greetings,\n\nOn Wed, Apr 21, 2021 at 19:17 Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Thu, Apr 22, 2021 at 8:16 AM Thomas Munro <thomas.munro@gmail.com>\n> wrote:\n> > That wasn't my plan, but I admit that the timing was non-ideal. In\n> > any case, I'll dig into these failures and then consider options.\n> > More soon.\n>\n> Yeah, this clearly needs more work. xlogreader.c is difficult to work\n> with and I think we need to keep trying to improve it, but I made a\n> bad call here trying to combine this with other refactoring work up\n> against a deadline and I made some dumb mistakes. I could of course\n> debug it in-tree, and I know that this has been an anticipated\n> feature. Personally I think the right thing to do now is to revert it\n> and re-propose for 15 early in the cycle, supported with some better\n> testing infrastructure.\n\n\nI tend to agree with the idea to revert it, perhaps a +0 on that, but if\nothers argue it should be fixed in-place, I wouldn’t complain about it.\n\nI very much encourage the idea of improving testing in this area and would\nbe happy to try and help do so in the 15 cycle.\n\nThanks,\n\nStephen",
"msg_date": "Wed, 21 Apr 2021 19:22:33 -0400",
"msg_from": "Stephen Frost <sfrost@snowman.net>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Stephen Frost <sfrost@snowman.net> writes:\n> On Wed, Apr 21, 2021 at 19:17 Thomas Munro <thomas.munro@gmail.com> wrote:\n>> ... Personally I think the right thing to do now is to revert it\n>> and re-propose for 15 early in the cycle, supported with some better\n>> testing infrastructure.\n\n> I tend to agree with the idea to revert it, perhaps a +0 on that, but if\n> others argue it should be fixed in-place, I wouldn’t complain about it.\n\nFWIW, I've so far only been able to see problems on two old PPC Macs,\none of which has been known to be a bit flaky in the past. So it's\npossible that what I'm looking at is a hardware glitch. But it's\nconsistent enough that I rather doubt that.\n\nWhat I'm doing is running the core regression tests with a single\nstandby (on the same machine) and wal_consistency_checking = all.\nFairly reproducibly (more than one run in ten), what I get on the\nslightly-flaky machine is consistency check failures like\n\n2021-04-21 17:42:56.324 EDT [42286] PANIC: inconsistent page found, rel 1663/354383/357033, forknum 0, blkno 9, byte offset 2069: replay 0x00 primary 0x03\n2021-04-21 17:42:56.324 EDT [42286] CONTEXT: WAL redo at 24/121C97B0 for Heap/INSERT: off 107 flags 0x00; blkref #0: rel 1663/354383/357033, blk 9 FPW\n2021-04-21 17:45:11.662 EDT [42284] LOG: startup process (PID 42286) was terminated by signal 6: Abort trap\n\n2021-04-21 11:25:30.091 EDT [38891] PANIC: inconsistent page found, rel 1663/229880/237980, forknum 0, blkno 108, byte offset 3845: replay 0x00 primary 0x99\n2021-04-21 11:25:30.091 EDT [38891] CONTEXT: WAL redo at 17/A99897FC for SPGist/ADD_LEAF: add leaf to page; off 241; headoff 171; parentoff 0; blkref #0: rel 1663/229880/237980, blk 108 FPW\n2021-04-21 11:26:59.371 EDT [38889] LOG: startup process (PID 38891) was terminated by signal 6: Abort trap\n\n2021-04-20 19:20:16.114 EDT [34405] PANIC: inconsistent page found, rel 1663/189216/197311, forknum 0, blkno 115, byte offset 6149: replay 
0x37 primary 0x03\n2021-04-20 19:20:16.114 EDT [34405] CONTEXT: WAL redo at 13/3CBFED00 for SPGist/ADD_LEAF: add leaf to page; off 241; headoff 171; parentoff 0; blkref #0: rel 1663/189216/197311, blk 115 FPW\n2021-04-20 19:21:54.421 EDT [34403] LOG: startup process (PID 34405) was terminated by signal 6: Abort trap\n\n2021-04-20 17:44:09.356 EDT [24106] FATAL: inconsistent page found, rel 1663/135419/143843, forknum 0, blkno 101, byte offset 6152: replay 0x40 primary 0x00\n2021-04-20 17:44:09.356 EDT [24106] CONTEXT: WAL redo at D/5107D8A8 for Gist/PAGE_UPDATE: ; blkref #0: rel 1663/135419/143843, blk 101 FPW\n\n(Note I modified checkXLogConsistency to PANIC on failure, so I could get\na core dump to analyze; and it's also printing the first-mismatch location.)\n\nI have not analyzed each one of these failures exhaustively, but on the\nones I have looked at closely, the replay_image_masked version of the page\nappears correct while the primary_image_masked version is *not*.\nMoreover, the primary_image_masked version does not match the full-page\nimage that I see in the on-disk WAL file. It did however seem to match\nthe in-memory WAL record contents that the decoder is operating on.\nSo unless you want to believe the buggy-hardware theory, something's\noccasionally messing up while loading WAL records from disk. All of the\ntrouble cases involve records that span across WAL pages (unsurprising\nsince they contain FPIs), so maybe there's something not quite right\nin there.\n\nIn the cases that I looked at closely, it appeared that there was a\nblock of 32 wrong bytes somewhere within the page image, with the data\nbefore and after that being correct. I'm not sure if that pattern\nholds in all cases though.\n\nBTW, if I restart the failed standby, it plows through the same data\njust fine, confirming that the on-disk WAL is not corrupt.\n\nThe other PPC machine (with no known history of trouble) is the one\nthat had the CRC failure I showed earlier. 
That one does seem to be\nactual bad data in the stored WAL, because the problem was also seen\nby pg_waldump, and trying to restart the standby got the same failure\nagain. I've not been able to duplicate the consistency-check failures\nthere. But because that machine is a laptop with a much inferior disk\ndrive, the speeds are enough different that it's not real surprising\nif it doesn't hit the same problem.\n\nI've also tried to reproduce on 32-bit and 64-bit Intel, without\nsuccess. So if this is real, maybe it's related to being big-endian\nhardware? But it's also quite sensitive to $dunno-what, maybe the\nhistory of WAL records that have already been replayed.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Apr 2021 21:21:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-21 21:21:05 -0400, Tom Lane wrote:\n> What I'm doing is running the core regression tests with a single\n> standby (on the same machine) and wal_consistency_checking = all.\n\nDo you run them over replication, or sequentially by storing data into\nan archive? Just curious, because its so painful to run that scenario in\nthe replication case due to the tablespace conflicting between\nprimary/standby, unless one disables the tablespace tests.\n\n\n> The other PPC machine (with no known history of trouble) is the one\n> that had the CRC failure I showed earlier. That one does seem to be\n> actual bad data in the stored WAL, because the problem was also seen\n> by pg_waldump, and trying to restart the standby got the same failure\n> again.\n\nIt seems like that could also indicate an xlogreader bug that is\nreliably hit? Once it gets confused about record lengths or such I'd\nexpect CRC failures...\n\nIf it were actually wrong WAL contents I don't think any of the\nxlogreader / prefetching changes could be responsible...\n\n\nHave you tried reproducing it on commits before the recent xlogreader\nchanges?\n\ncommit 1d257577e08d3e598011d6850fd1025858de8c8c\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: 2021-04-08 23:03:43 +1200\n\n Optionally prefetch referenced data in recovery.\n\ncommit f003d9f8721b3249e4aec8a1946034579d40d42c\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: 2021-04-08 23:03:34 +1200\n\n Add circular WAL decoding buffer.\n\n Discussion: https://postgr.es/m/CA+hUKGJ4VJN8ttxScUFM8dOKX0BrBiboo5uz1cq=AovOddfHpA@mail.gmail.com\n\ncommit 323cbe7c7ddcf18aaf24b7f6d682a45a61d4e31b\nAuthor: Thomas Munro <tmunro@postgresql.org>\nDate: 2021-04-08 23:03:23 +1200\n\n Remove read_page callback from XLogReader.\n\n\nTrying 323cbe7c7ddcf18aaf24b7f6d682a45a61d4e31b^ is probably the most\ninteresting bit.\n\n\n> I've not been able to duplicate the consistency-check failures\n> there. 
But because that machine is a laptop with a much inferior disk\n> drive, the speeds are enough different that it's not real surprising\n> if it doesn't hit the same problem.\n>\n> I've also tried to reproduce on 32-bit and 64-bit Intel, without\n> success. So if this is real, maybe it's related to being big-endian\n> hardware? But it's also quite sensitive to $dunno-what, maybe the\n> history of WAL records that have already been replayed.\n\nIt might just be disk speed influencing how long the tests take, which\nin turn increase the number of times checkpoints during the test,\nincreasing the number of FPIs?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 21 Apr 2021 18:34:11 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 1:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I've also tried to reproduce on 32-bit and 64-bit Intel, without\n> success. So if this is real, maybe it's related to being big-endian\n> hardware? But it's also quite sensitive to $dunno-what, maybe the\n> history of WAL records that have already been replayed.\n\nAh, that's interesting. There are a couple of sparc64 failures and a\nppc64 failure in the build farm, but I couldn't immediately spot what\nwas wrong with them or whether it might be related to this stuff.\n\nThanks for the clues. I'll see what unusual systems I can find to try\nthis on....\n\n\n",
"msg_date": "Thu, 22 Apr 2021 13:59:58 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-21 21:21:05 -0400, Tom Lane wrote:\n>> What I'm doing is running the core regression tests with a single\n>> standby (on the same machine) and wal_consistency_checking = all.\n\n> Do you run them over replication, or sequentially by storing data into\n> an archive? Just curious, because its so painful to run that scenario in\n> the replication case due to the tablespace conflicting between\n> primary/standby, unless one disables the tablespace tests.\n\nNo, live over replication. I've been skipping the tablespace test.\n\n> Have you tried reproducing it on commits before the recent xlogreader\n> changes?\n\nNope.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 21 Apr 2021 22:15:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-22 13:59:58 +1200, Thomas Munro wrote:\n> On Thu, Apr 22, 2021 at 1:21 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> > I've also tried to reproduce on 32-bit and 64-bit Intel, without\n> > success. So if this is real, maybe it's related to being big-endian\n> > hardware? But it's also quite sensitive to $dunno-what, maybe the\n> > history of WAL records that have already been replayed.\n> \n> Ah, that's interesting. There are a couple of sparc64 failures and a\n> ppc64 failure in the build farm, but I couldn't immediately spot what\n> was wrong with them or whether it might be related to this stuff.\n> \n> Thanks for the clues. I'll see what unusual systems I can find to try\n> this on....\n\nFWIW, I've run 32 and 64 bit x86 through several hundred regression\ncycles, without hitting an issue. For a lot of them I set\ncheckpoint_timeout to a lower value as I thought that might make it more\nlikely to reproduce an issue.\n\nTom, any chance you could check if your machine repros the issue before\nthese commits?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Apr 2021 09:32:45 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Tom, any chance you could check if your machine repros the issue before\n> these commits?\n\nWilco, but it'll likely take a little while to get results ...\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Apr 2021 12:45:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 29, 2021 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Tom, any chance you could check if your machine repros the issue before\n> > these commits?\n>\n> Wilco, but it'll likely take a little while to get results ...\n\nFWIW I also chewed through many megawatts trying to reproduce this on\na PowerPC system in 64 bit big endian mode, with an emulator. No\ncigar. However, it's so slow that I didn't make it to 10 runs...\n\n\n",
"msg_date": "Thu, 29 Apr 2021 10:46:14 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> FWIW I also chewed through many megawatts trying to reproduce this on\n> a PowerPC system in 64 bit big endian mode, with an emulator. No\n> cigar. However, it's so slow that I didn't make it to 10 runs...\n\nSpeaking of megawatts ... my G4 has now finished about ten cycles of\ninstallcheck-parallel without a failure, which isn't really enough\nto draw any conclusions yet. But I happened to notice the\naccumulated CPU time for the background processes:\n\nUSER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\ntgl 19048 0.0 4.4 229952 92196 ?? Ss 3:19PM 19:59.19 postgres: startup recovering 000000010000001400000022 \ntgl 19051 0.0 0.1 229656 1696 ?? Ss 3:19PM 27:09.14 postgres: walreceiver streaming 14/227D8F14 \ntgl 19052 0.0 0.1 229904 2516 ?? Ss 3:19PM 17:38.17 postgres: walsender tgl [local] streaming 14/227D8F14 \n\nIOW, we've spent over twice as many CPU cycles shipping data to the\nstandby as we did in applying the WAL on the standby. Is this\nexpected? I've got wal_consistency_checking = all, which is bloating\nthe WAL volume quite a bit, but still it seems like the walsender and\nwalreceiver have little excuse for spending more cycles per byte\nthan the startup process.\n\n(This is testing b3ee4c503, so if Thomas' WAL changes improved\nefficiency of the replay process at all, the discrepancy could be\neven worse in HEAD.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Apr 2021 19:24:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-28 19:24:53 -0400, Tom Lane wrote:\n> But I happened to notice the accumulated CPU time for the background\n> processes:\n> \n> USER PID %CPU %MEM VSZ RSS TT STAT STARTED TIME COMMAND\n> tgl 19048 0.0 4.4 229952 92196 ?? Ss 3:19PM 19:59.19 postgres: startup recovering 000000010000001400000022 \n> tgl 19051 0.0 0.1 229656 1696 ?? Ss 3:19PM 27:09.14 postgres: walreceiver streaming 14/227D8F14 \n> tgl 19052 0.0 0.1 229904 2516 ?? Ss 3:19PM 17:38.17 postgres: walsender tgl [local] streaming 14/227D8F14 \n> \n> IOW, we've spent over twice as many CPU cycles shipping data to the\n> standby as we did in applying the WAL on the standby. Is this\n> expected? I've got wal_consistency_checking = all, which is bloating\n> the WAL volume quite a bit, but still it seems like the walsender and\n> walreceiver have little excuse for spending more cycles per byte\n> than the startup process.\n\nI don't really know how the time calculation works on mac. Is there a\nchance it includes time spent doing IO? On the primary the WAL IO is\ndone by a lot of backends, but on the standby it's all going to be the\nwalreceiver. And the walreceiver does fsyncs in a not particularly\nefficient manner.\n\nFWIW, on my linux workstation no such difference is visible:\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\nandres 2910540 9.4 0.0 2237252 126680 ? Ss 16:55 0:20 postgres: dev assert standby: startup recovering 00000001000000020000003F\nandres 2910544 5.2 0.0 2236724 9260 ? Ss 16:55 0:11 postgres: dev assert standby: walreceiver streaming 2/3FDCF118\nandres 2910545 2.1 0.0 2237036 10672 ? Ss 16:55 0:04 postgres: dev assert: walsender andres [local] streaming 2/3FDCF118\n\n\n\n> (This is testing b3ee4c503, so if Thomas' WAL changes improved\n> efficiency of the replay process at all, the discrepancy could be\n> even worse in HEAD.)\n\nThe prefetching isn't enabled by default, so I'd not expect meaningful\ndifferences... 
And even with the prefetching enabled, our normal\nregression tests largely are resident in s_b, so there shouldn't be much\nprefetching.\n\n\nOh! I was about to ask how much shared buffers your primary / standby\nhave. And I think I may actually have reproduced a variant of the issue!\n\nI previously had played around with different settings that I thought\nmight increase the likelihood of reproducing the problem. But this time\nI set shared_buffers lower than before, and got:\n\n2021-04-28 17:03:22.174 PDT [2913840][] LOG: database system was shut down in recovery at 2021-04-28 17:03:11 PDT\n2021-04-28 17:03:22.174 PDT [2913840][] LOG: entering standby mode\n2021-04-28 17:03:22.178 PDT [2913840][1/0] LOG: redo starts at 2/416C6278\n2021-04-28 17:03:37.628 PDT [2913840][1/0] LOG: consistent recovery state reached at 4/7F5C3200\n2021-04-28 17:03:37.628 PDT [2913840][1/0] FATAL: invalid memory alloc request size 3053455757\n2021-04-28 17:03:37.628 PDT [2913839][] LOG: database system is ready to accept read only connections\n2021-04-28 17:03:37.636 PDT [2913839][] LOG: startup process (PID 2913840) exited with exit code 1\n\nThis reproduces across restarts. Yay, I guess.\n\nIsn't it odd that we get a \"database system is ready to accept read only\nconnections\"?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Apr 2021 17:12:24 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> On 2021-04-28 19:24:53 -0400, Tom Lane wrote:\n>> IOW, we've spent over twice as many CPU cycles shipping data to the\n>> standby as we did in applying the WAL on the standby.\n\n> I don't really know how the time calculation works on mac. Is there a\n> chance it includes time spent doing IO?\n\nI'd be pretty astonished if it did. This is basically a NetBSD system\nremember (in fact, this ancient macOS release is a good deal closer\nto those roots than modern versions). BSDen have never accounted for\ntime that way AFAIK. Also, the \"ps\" man page says specifically that\nthat column is CPU time.\n\n> Oh! I was about to ask how much shared buffers your primary / standby\n> have. And I think I may actually have reproduce a variant of the issue!\n\nDefault configurations, so 128MB each.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Apr 2021 20:24:43 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-28 20:24:43 -0400, Tom Lane wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > Oh! I was about to ask how much shared buffers your primary / standby\n> > have.\n> Default configurations, so 128MB each.\n\nI thought that possibly initdb would detect less or something...\n\n\nI assume this is 32bit? I did notice that a 32bit test took a lot longer\nthan a 64bit test. But didn't investigate so far.\n\n\n> And I think I may actually have reproduce a variant of the issue!\n\nUnfortunately I had not set up things in a way that the primary retains\nthe WAL, making it harder to compare whether it's the WAL that got\ncorrupted or whether it's a decoding bug.\n\nI can however say that pg_waldump on the standby's pg_wal does also\nfail. The failure as part of the backend is \"invalid memory alloc\nrequest size\", whereas in pg_waldump I get the much more helpful:\npg_waldump: fatal: error in WAL record at 4/7F5C31C8: record with incorrect prev-link 416200FF/FF000000 at 4/7F5C3200\n\nIn frontend code that allocation actually succeeds, because there is no\nsize check. But in backend code we run into the size check, and thus\ndon't even display a useful error.\n\nIn 13 the header is validated before allocating space for the\nrecord(except if header is spread across pages) - it seems inadvisable\nto turn that around?\n\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Apr 2021 17:59:22 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-28 17:59:22 -0700, Andres Freund wrote:\n> I can however say that pg_waldump on the standby's pg_wal does also\n> fail. The failure as part of the backend is \"invalid memory alloc\n> request size\", whereas in pg_waldump I get the much more helpful:\n> pg_waldump: fatal: error in WAL record at 4/7F5C31C8: record with incorrect prev-link 416200FF/FF000000 at 4/7F5C3200\n\nThere's definitely something broken around continuation records, in\nXLogFindNextRecord(). Which means that it's not the cause for the server\nside issue, but obviously still not good.\n\nThe conversion of XLogFindNextRecord() to be state machine based\nbasically only works in a narrow set of circumstances. Whenever the end\nof the first record read is on a different page than the start of the\nrecord, we'll endlessly loop.\n\nWe'll go into XLogFindNextRecord(), and return until we've successfully\nread the page header. Then we'll enter the second loop. Which will try\nto read until the end of the first record. But after returning the first\nloop will again ask for page header.\n\nEven if that's fixed, the second loop alone has the same problem: As\nXLogBeginRead() is called unconditionally we'll start read the start of\nthe record, discover that it needs data on a second page, return, and\ndo the same thing again.\n\nI think it needs something roughly like the attached.\n\nGreetings,\n\nAndres Freund",
"msg_date": "Wed, 28 Apr 2021 19:25:53 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-04-28 17:59:22 -0700, Andres Freund wrote:\n> I can however say that pg_waldump on the standby's pg_wal does also\n> fail. The failure as part of the backend is \"invalid memory alloc\n> request size\", whereas in pg_waldump I get the much more helpful:\n> pg_waldump: fatal: error in WAL record at 4/7F5C31C8: record with incorrect prev-link 416200FF/FF000000 at 4/7F5C3200\n> \n> In frontend code that allocation actually succeeds, because there is no\n> size check. But in backend code we run into the size check, and thus\n> don't even display a useful error.\n> \n> In 13 the header is validated before allocating space for the\n> record(except if header is spread across pages) - it seems inadvisable\n> to turn that around?\n\nI was now able to reproduce the problem again, and I'm afraid that the\nbug I hit is likely separate from Tom's. The allocation thing above is\nthe issue in my case:\n\nThe walsender connection ended (I restarted the primary), thus the\nstartup switches to replaying locally. For some reason the end of the\nWAL contains non-zero data (I think it's because walreceiver doesn't\nzero out pages - that's bad!). Because the allocation happen before the\nheader is validated, we reproducably end up in the mcxt.c ERROR path,\nfailing recovery.\n\nTo me it looks like a smaller version of the problem is present in < 14,\nalbeit only when the page header is at a record boundary. In that case\nwe don't validate the page header immediately, only once it's completely\nread. But we do believe the total size, and try to allocate\nthat.\n\nThere's a really crufty escape hatch (from 70b4f82a4b) to that:\n\n\t/*\n\t * Note that in much unlucky circumstances, the random data read from a\n\t * recycled segment can cause this routine to be called with a size\n\t * causing a hard failure at allocation. 
For a standby, this would cause\n\t * the instance to stop suddenly with a hard failure, preventing it to\n\t * retry fetching WAL from one of its sources which could allow it to move\n\t * on with replay without a manual restart. If the data comes from a past\n\t * recycled segment and is still valid, then the allocation may succeed\n\t * but record checks are going to fail so this would be short-lived. If\n\t * the allocation fails because of a memory shortage, then this is not a\n\t * hard failure either per the guarantee given by MCXT_ALLOC_NO_OOM.\n\t */\n\tif (!AllocSizeIsValid(newSize))\n\t\treturn false;\n\nbut it looks to me like that's pretty much the wrong fix, at least in\nthe case where we've not yet validated the rest of the header. We don't\nneed to allocate all that data before we've read the rest of the\n*fixed-size* header.\n\nIt also seems to me that 70b4f82a4b should also have changed walsender\nto pad out the received data to an 8KB boundary?\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 28 Apr 2021 20:14:09 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> I was now able to reproduce the problem again, and I'm afraid that the\n> bug I hit is likely separate from Tom's.\n\nYeah, I think so --- the symptoms seem quite distinct.\n\nMy score so far today on the G4 is:\n\n12 error-free regression test cycles on b3ee4c503\n\n(plus one more with shared_buffers set to 16MB, on the strength\nof your previous hunch --- didn't fail for me though)\n\nHEAD failed on the second run with the same symptom as before:\n\n2021-04-28 22:57:17.048 EDT [50479] FATAL: inconsistent page found, rel 1663/58183/69545, forknum 0, blkno 696\n2021-04-28 22:57:17.048 EDT [50479] CONTEXT: WAL redo at 4/B72D408 for Heap/INSERT: off 77 flags 0x00; blkref #0: rel 1663/58183/69545, blk 696 FPW\n\nThis seems to me to be pretty strong evidence that I'm seeing *something*\nreal. I'm currently trying to isolate a specific commit to pin it on.\nA straight \"git bisect\" isn't going to work because so many people had\nbroken so many different things right around that date :-(, so it may\ntake awhile to get a good answer.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 28 Apr 2021 23:39:38 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 29, 2021 at 3:14 PM Andres Freund <andres@anarazel.de> wrote:\n> To me it looks like a smaller version of the problem is present in < 14,\n> albeit only when the page header is at a record boundary. In that case\n> we don't validate the page header immediately, only once it's completely\n> read. But we do believe the total size, and try to allocate\n> that.\n>\n> There's a really crufty escape hatch (from 70b4f82a4b) to that:\n\nRight, I made that problem worse, and that could probably be changed\nto be no worse than 13 by reordering those operations.\n\nPS Sorry for my intermittent/slow responses on this thread this week,\nas I'm mostly away from the keyboard due to personal commitments.\nI'll be back in the saddle next week to tidy this up, most likely by\nreverting. The main thought I've been having about this whole area is\nthat, aside from the lack of general testing of recovery, which we\nshould definitely address[1], what it really needs is a decent test\nharness to drive it through all interesting scenarios and states at a\nlower level, independently.\n\n[1] https://www.postgresql.org/message-id/flat/CA%2BhUKGKpRWQ9SxdxxDmTBCJoR0YnFpMBe7kyzY8SUQk%2BHeskxg%40mail.gmail.com\n\n\n",
"msg_date": "Thu, 29 Apr 2021 16:27:36 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Thu, Apr 29, 2021 at 4:45 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Andres Freund <andres@anarazel.de> writes:\n>>> Tom, any chance you could check if your machine repros the issue before\n>>> these commits?\n\n>> Wilco, but it'll likely take a little while to get results ...\n\n> FWIW I also chewed through many megawatts trying to reproduce this on\n> a PowerPC system in 64 bit big endian mode, with an emulator. No\n> cigar. However, it's so slow that I didn't make it to 10 runs...\n\nSo I've expended a lot of kilowatt-hours over the past several days,\nand I've got results that are interesting but don't really get us\nany closer to a resolution.\n\nTo recap, the test lashup is:\n* 2003 PowerMac G4 (1.25GHz PPC 7455, 7200 rpm spinning-rust drive)\n* Standard debug build (--enable-debug --enable-cassert)\n* Out-of-the-box configuration, except add wal_consistency_checking = all\nand configure a wal-streaming standby on the same machine\n* Repeatedly run \"make installcheck-parallel\", but skip the tablespace\ntest to avoid issues with the standby trying to use the same directory\n* Delay long enough after each installcheck-parallel to let the \nstandby catch up (the run proper is ~24 min, plus 2 min for catchup)\n\nThe failures I'm seeing generally look like\n\n2021-05-01 15:33:10.968 EDT [8281] FATAL: inconsistent page found, rel 1663/58186/66338, forknum 0, blkno 19\n2021-05-01 15:33:10.968 EDT [8281] CONTEXT: WAL redo at 3/4CE905B8 for Gist/PAGE_UPDATE: ; blkref #0: rel 1663/58186/66338, blk 19 FPW\n\nwith a variety of WAL record types being named, so it doesn't seem\nto be specific to any particular record type. 
I've twice gotten the\nbogus-checksum-and-then-assertion-failure I reported before:\n\n2021-05-01 17:07:52.992 EDT [17464] LOG: incorrect resource manager data checksum in record at 3/E0073EA4\nTRAP: FailedAssertion(\"state->recordRemainLen > 0\", File: \"xlogreader.c\", Line: 567, PID: 17464)\n\nIn both of those cases, the WAL on disk was perfectly fine, and the same\nis true of most of the \"inconsistent page\" complaints. So the issue\ndefinitely seems to be about the startup process mis-reading data that\nwas correctly shipped over.\n\nAnyway, the new and interesting data concerns the relative failure rates\nof different builds:\n\n* Recent HEAD (from 4-28 and 5-1): 4 failures in 8 test cycles\n\n* Reverting 1d257577e: 1 failure in 8 test cycles\n\n* Reverting 1d257577e and f003d9f87: 3 failures in 28 cycles\n\n* Reverting 1d257577e, f003d9f87, and 323cbe7c7: 2 failures in 93 cycles\n\nThat last point means that there was some hard-to-hit problem even\nbefore any of the recent WAL-related changes. However, 323cbe7c7\n(Remove read_page callback from XLogReader) increased the failure\nrate by at least a factor of 5, and 1d257577e (Optionally prefetch\nreferenced data) seems to have increased it by another factor of 4.\nBut it looks like f003d9f87 (Add circular WAL decoding buffer)\ndidn't materially change the failure rate.\n\nConsidering that 323cbe7c7 was supposed to be just refactoring,\nand 1d257577e is allegedly disabled-by-default, these are surely\nnot the results I was expecting to get.\n\nIt seems like it's still an open question whether all this is\na real bug, or flaky hardware. I have seen occasional kernel\nfreezeups (or so I think -- machine stops responding to keyboard\nor network input) over the past year or two, so I cannot in good\nconscience rule out the flaky-hardware theory. But it doesn't\nsmell like that kind of problem to me. I think what we're looking\nat is a timing-sensitive bug that was there before (maybe long\nbefore?) 
and these commits happened to make it occur more often\non this particular hardware. This hardware is enough unlike\nanything made in the past decade that it's not hard to credit\nthat it'd show a timing problem that nobody else can reproduce.\n\n(I did try the time-honored ritual of reseating all the machine's\nRAM, partway through this. Doesn't seem to have changed anything.)\n\nAnyway, I'm not sure where to go from here. I'm for sure nowhere\nnear being able to identify the bug --- and if there really is\na bug that formerly had a one-in-fifty reproduction rate, I have\nzero interest in trying to identify where it started by bisecting.\nIt'd take at least a day per bisection step, and even that might\nnot be accurate enough. (But, if anyone has ideas of specific\ncommits to test, I'd be willing to try a few.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 01 May 2021 23:16:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 29, 2021 at 12:24 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Andres Freund <andres@anarazel.de> writes:\n> > On 2021-04-28 19:24:53 -0400, Tom Lane wrote:\n> >> IOW, we've spent over twice as many CPU cycles shipping data to the\n> >> standby as we did in applying the WAL on the standby.\n>\n> > I don't really know how the time calculation works on mac. Is there a\n> > chance it includes time spent doing IO?\n\nFor comparison, on a modern Linux system I see numbers like this,\nwhile running that 025_stream_rep_regress.pl test I posted in a nearby\nthread:\n\nUSER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND\ntmunro 2150863 22.5 0.0 55348 6752 ? Ss 12:59 0:07\npostgres: standby_1: startup recovering 00000001000000020000003C\ntmunro 2150867 17.5 0.0 55024 6364 ? Ss 12:59 0:05\npostgres: standby_1: walreceiver streaming 2/3C675D80\ntmunro 2150868 11.7 0.0 55296 7192 ? Ss 12:59 0:04\npostgres: primary: walsender tmunro [local] streaming 2/3C675D80\n\nThose ratios are better but it's still hard work, and perf shows the\nCPU time is all in page cache schlep:\n\n 22.44% postgres [kernel.kallsyms] [k] copy_user_enhanced_fast_string\n 20.12% postgres [kernel.kallsyms] [k] __add_to_page_cache_locked\n 7.30% postgres [kernel.kallsyms] [k] iomap_set_page_dirty\n\nThat was with all three patches reverted, so it's nothing new.\nDefinitely room for improvement... there have been a few discussions\nabout not using a buffered file for high-frequency data exchange and\nrelaxing various timing rules, which we should definitely look into,\nbut I wouldn't be at all surprised if HFS+ was just much worse at\nthis.\n\nThinking more about good old HFS+... 
I guess it's remotely possible\nthat there might have been coherency bugs in it that could be exposed by\nour usage pattern, but then that doesn't fit too well with the clues I\nhave from light reading: this is a non-SMP system, and it's said that\nHFS+ used to serialise pretty much everything on big filesystem locks\nanyway.\n\n\n",
"msg_date": "Mon, 3 May 2021 13:23:25 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sun, May 2, 2021 at 3:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> That last point means that there was some hard-to-hit problem even\n> before any of the recent WAL-related changes. However, 323cbe7c7\n> (Remove read_page callback from XLogReader) increased the failure\n> rate by at least a factor of 5, and 1d257577e (Optionally prefetch\n> referenced data) seems to have increased it by another factor of 4.\n> But it looks like f003d9f87 (Add circular WAL decoding buffer)\n> didn't materially change the failure rate.\n\nOh, wow. There are several surprising results there. Thanks for\nrunning those tests for so long so that we could see the rarest\nfailures.\n\nEven if there are somehow *two* causes of corruption, one preexisting\nand one added by the refactoring or decoding patches, I'm struggling\nto understand how the chance increases with 1d2575, since that only\nadds code that isn't reached when not enabled (though I'm going to\nre-review that).\n\n> Considering that 323cbe7c7 was supposed to be just refactoring,\n> and 1d257577e is allegedly disabled-by-default, these are surely\n> not the results I was expecting to get.\n\n+1\n\n> It seems like it's still an open question whether all this is\n> a real bug, or flaky hardware. I have seen occasional kernel\n> freezeups (or so I think -- machine stops responding to keyboard\n> or network input) over the past year or two, so I cannot in good\n> conscience rule out the flaky-hardware theory. But it doesn't\n> smell like that kind of problem to me. I think what we're looking\n> at is a timing-sensitive bug that was there before (maybe long\n> before?) and these commits happened to make it occur more often\n> on this particular hardware. This hardware is enough unlike\n> anything made in the past decade that it's not hard to credit\n> that it'd show a timing problem that nobody else can reproduce.\n\nHmm, yeah that does seem plausible. 
It would be nice to see a report\nfrom any other system though. I'm still trying, and reviewing...\n\n\n",
"msg_date": "Mon, 3 May 2021 17:42:35 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "\n\nOn 5/3/21 7:42 AM, Thomas Munro wrote:\n> On Sun, May 2, 2021 at 3:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> That last point means that there was some hard-to-hit problem even\n>> before any of the recent WAL-related changes. However, 323cbe7c7\n>> (Remove read_page callback from XLogReader) increased the failure\n>> rate by at least a factor of 5, and 1d257577e (Optionally prefetch\n>> referenced data) seems to have increased it by another factor of 4.\n>> But it looks like f003d9f87 (Add circular WAL decoding buffer)\n>> didn't materially change the failure rate.\n> \n> Oh, wow. There are several surprising results there. Thanks for\n> running those tests for so long so that we could see the rarest\n> failures.\n> \n> Even if there are somehow *two* causes of corruption, one preexisting\n> and one added by the refactoring or decoding patches, I'm struggling\n> to understand how the chance increases with 1d2575, since that only\n> adds code that isn't reached when not enabled (though I'm going to\n> re-review that).\n> \n>> Considering that 323cbe7c7 was supposed to be just refactoring,\n>> and 1d257577e is allegedly disabled-by-default, these are surely\n>> not the results I was expecting to get.\n> \n> +1\n> \n>> It seems like it's still an open question whether all this is\n>> a real bug, or flaky hardware. I have seen occasional kernel\n>> freezeups (or so I think -- machine stops responding to keyboard\n>> or network input) over the past year or two, so I cannot in good\n>> conscience rule out the flaky-hardware theory. But it doesn't\n>> smell like that kind of problem to me. I think what we're looking\n>> at is a timing-sensitive bug that was there before (maybe long\n>> before?) and these commits happened to make it occur more often\n>> on this particular hardware. 
This hardware is enough unlike\n>> anything made in the past decade that it's not hard to credit\n>> that it'd show a timing problem that nobody else can reproduce.\n> \n> Hmm, yeah that does seem plausible. It would be nice to see a report\n> from any other system though. I'm still trying, and reviewing...\n> \n\nFWIW I've run the test (make installcheck-parallel in a loop) on four \ndifferent machines - two x86_64 ones, and two rpi4. The x86 boxes did \n~1000 rounds each (and one of them had 5 local replicas) without any \nissue. The rpi4 machines did ~50 rounds each, also without failures.\n\nObviously, it's possible there's something that neither of those (very \ndifferent systems) triggers, but I'd say it might also be a hint that \nthis really is a hw issue on the old ppc macs. Or maybe something very \nspecific to that arch.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 4 May 2021 14:37:22 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@enterprisedb.com> writes:\n> On 5/3/21 7:42 AM, Thomas Munro wrote:\n>> Hmm, yeah that does seem plausible. It would be nice to see a report\n>> from any other system though. I'm still trying, and reviewing...\n\n> FWIW I've ran the test (make installcheck-parallel in a loop) on four \n> different machines - two x86_64 ones, and two rpi4. The x86 boxes did \n> ~1000 rounds each (and one of them had 5 local replicas) without any \n> issue. The rpi4 machines did ~50 rounds each, also without failures.\n\nYeah, I have also spent a fair amount of time trying to reproduce it\nelsewhere, without success so far. Notably, I've been trying on a\nPPC Mac laptop that has a fairly similar CPU to what's in the G4,\nthough a far slower disk drive. So that seems to exclude theories\nbased on it being PPC-specific.\n\nI suppose that if we're unable to reproduce it on at least one other box,\nwe have to write it off as hardware flakiness. I'm not entirely\ncomfortable with that answer, but I won't push for reversion of the WAL\npatches without more evidence that there's a real issue.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 May 2021 09:46:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "I wrote:\n> I suppose that if we're unable to reproduce it on at least one other box,\n> we have to write it off as hardware flakiness.\n\nBTW, that conclusion shouldn't distract us from the very real bug\nthat Andres identified. I was just scraping the buildfarm logs\nconcerning recent failures, and I found several recent cases\nthat match the symptom he reported:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=chipmunk&dt=2021-04-23%2022%3A27%3A41\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=hornet&dt=2021-04-21%2005%3A15%3A24\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2021-04-20%2002%3A03%3A08\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2021-05-04%2004%3A07%3A41\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=wrasse&dt=2021-04-20%2021%3A08%3A59\n\nThey all show the standby in recovery/019_replslot_limit.pl failing\nwith symptoms like\n\n2021-05-04 07:42:00.968 UTC [24707406:1] LOG: database system was shut down in recovery at 2021-05-04 07:41:39 UTC\n2021-05-04 07:42:00.968 UTC [24707406:2] LOG: entering standby mode\n2021-05-04 07:42:01.050 UTC [24707406:3] LOG: redo starts at 0/1C000D8\n2021-05-04 07:42:01.079 UTC [24707406:4] LOG: consistent recovery state reached at 0/1D00000\n2021-05-04 07:42:01.079 UTC [24707406:5] FATAL: invalid memory alloc request size 1476397045\n2021-05-04 07:42:01.080 UTC [13238274:3] LOG: database system is ready to accept read only connections\n2021-05-04 07:42:01.082 UTC [13238274:4] LOG: startup process (PID 24707406) exited with exit code 1\n\n(BTW, the behavior seen here where the failure occurs *immediately*\nafter reporting \"consistent recovery state reached\" is seen in the\nother reports as well, including Andres' version. I wonder if that\nmeans anything.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 04 May 2021 15:47:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-04 15:47:41 -0400, Tom Lane wrote:\n> BTW, that conclusion shouldn't distract us from the very real bug\n> that Andres identified. I was just scraping the buildfarm logs\n> concerning recent failures, and I found several recent cases\n> that match the symptom he reported:\n> [...]\n> They all show the standby in recovery/019_replslot_limit.pl failing\n> with symptoms like\n>\n> 2021-05-04 07:42:00.968 UTC [24707406:1] LOG: database system was shut down in recovery at 2021-05-04 07:41:39 UTC\n> 2021-05-04 07:42:00.968 UTC [24707406:2] LOG: entering standby mode\n> 2021-05-04 07:42:01.050 UTC [24707406:3] LOG: redo starts at 0/1C000D8\n> 2021-05-04 07:42:01.079 UTC [24707406:4] LOG: consistent recovery state reached at 0/1D00000\n> 2021-05-04 07:42:01.079 UTC [24707406:5] FATAL: invalid memory alloc request size 1476397045\n> 2021-05-04 07:42:01.080 UTC [13238274:3] LOG: database system is ready to accept read only connections\n> 2021-05-04 07:42:01.082 UTC [13238274:4] LOG: startup process (PID 24707406) exited with exit code 1\n\nYea, that's the pre-existing end-of-log-issue that got more likely as\nwell as more consequential (by accident) in Thomas' patch. It's easy to\nreach parity with the state in 13, it's just changing the order in one\nplace.\n\nBut I think we need to do something for all branches here. The bandaid\nthat was added to allocate_recordbuf() doesn't really seem\nsufficient to me. This is\n\ncommit 70b4f82a4b5cab5fc12ff876235835053e407155\nAuthor: Michael Paquier <michael@paquier.xyz>\nDate: 2018-06-18 10:43:27 +0900\n\n Prevent hard failures of standbys caused by recycled WAL segments\n\nIn <= 13 the current state is that we'll allocate effectively random\nbytes as long as the random number is below 1GB whenever we reach the\nend of the WAL with the record on a page boundary (because there we\ndon't validate the record header before allocating). That allocation is\nthen not freed for the lifetime of the\nxlogreader. 
And for FRONTEND uses of xlogreader we'll just happily\nallocate 4GB. The specific problem here is that we don't validate the\nrecord header before allocating when the record header is split across a\npage boundary - without much need as far as I can tell? Until we've read\nthe entire header, we actually don't need to allocate the record buffer?\n\nThis seems like an issue that needs to be fixed to be more robust in\ncrash recovery scenarios where obviously we could just have failed with\nhalf written records.\n\nBut the issue that 70b4f82a4b is trying to address seems bigger to\nme. The reason it's so easy to hit the issue is that walreceiver does <\n8KB writes into recycled WAL segments *without* zero-filling the tail\nend of the page - which will commonly be filled with random older\ncontents, because we'll use recycled segments. I think that\n*drastically* increases the likelihood of finding something that looks\nlike a valid record header compared to the situation on a primary where\nthe zeroing of pages before use makes that pretty unlikely.\n\n\n> (BTW, the behavior seen here where the failure occurs *immediately*\n> after reporting \"consistent recovery state reached\" is seen in the\n> other reports as well, including Andres' version. I wonder if that\n> means anything.)\n\nThat's to be expected, I think. There's not a lot of data that needs to\nbe replayed, and we'll always reach consistency before the end of the\nWAL unless you're dealing with starting from an in-progress base-backup\nthat hasn't yet finished or such. The test causes replication to fail\nshortly after that, so we'll always switch to doing recovery from\npg_wal, which then will hit the end of the WAL, hitting this issue with,\nI think, ~25% likelihood (data from recycled WAL segments is probably\n*roughly* evenly distributed, and any 4-byte value above 1GB will hit\nthis error in 14).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Tue, 4 May 2021 18:08:35 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-04 09:46:12 -0400, Tom Lane wrote:\n> Yeah, I have also spent a fair amount of time trying to reproduce it\n> elsewhere, without success so far. Notably, I've been trying on a\n> PPC Mac laptop that has a fairly similar CPU to what's in the G4,\n> though a far slower disk drive. So that seems to exclude theories\n> based on it being PPC-specific.\n>\n> I suppose that if we're unable to reproduce it on at least one other box,\n> we have to write it off as hardware flakiness.\n\nI wonder if there's a chance what we're seeing is an OS memory ordering\nbug, or a race between walreceiver writing data and the startup process\nreading it.\n\nWhen the startup process is able to keep up, there often will be a very\nsmall time delta between the startup process reading a page that the\nwalreceiver just wrote. And if the currently read page was the tail page\nwritten to by a 'w' message, it'll often be written to again in short\norder - potentially while the startup process is reading it.\n\nIt'd not terribly surprise me if an old OS version on an old processor\nhad some issues around that.\n\n\nWere there any cases of walsender terminating and reconnecting around\nthe failures?\n\n\nIt looks suspicious that XLogPageRead() does not invalidate the\nxlogreader state when retrying. Normally that's xlogreader's\nresponsibility, but there is that whole XLogReaderValidatePageHeader()\nbusiness. But I don't quite see how it'd actually cause problems.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 May 2021 02:50:38 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-05-04 18:08:35 -0700, Andres Freund wrote:\n> But the issue that 70b4f82a4b is trying to address seems bigger to\n> me. The reason it's so easy to hit the issue is that walreceiver does <\n> 8KB writes into recycled WAL segments *without* zero-filling the tail\n> end of the page - which will commonly be filled with random older\n> contents, because we'll use recycled segments. I think that\n> *drastically* increases the likelihood of finding something that looks\n> like a valid record header compared to the situation on a primary where\n> the zeroing of pages before use makes that pretty unlikely.\n\nI've written an experimental patch to deal with this and, as expected,\nit does make the end-of-wal detection a lot more predictable and\nreliable. There are only two types of possible errors outside of crashes:\nA record length of 0 (the end of WAL is within a page), and the page\nheader LSN mismatching (the end of WAL is at a page boundary).\n\nThis seems like a significant improvement.\n\nHowever: It's nontrivial to do this nicely and in a backpatchable way in\nXLogWalRcvWrite(). Or at least I haven't found a good way:\n- We can't extend the input buffer to XLogWalRcvWrite(), it's from\n libpq.\n- We don't want to copy the entire buffer (commonly 128KiB) to a new\n buffer that we then can extend by 0-BLCKSZ of zeroes to cover the\n trailing part of the last page.\n- In PG13+ we can do this utilizing pg_writev(), adding another IOV\n entry covering the trailing space to be padded.\n- It's nicer to avoid increasing the number of write() calls, but it's\n not as crucial as the earlier points.\n\nI'm also a bit uncomfortable with another aspect, although I can't\nreally see a problem: When we switch to receiving WAL via walreceiver, we\nalways start at a segment boundary, even if we had received most of that\nsegment before. 
Currently that won't end up with any trailing space that\nneeds to be zeroed, because the server will always send 128KB chunks,\nbut there's no formal guarantee for that. It seems a bit odd that we\ncould end up zeroing trailing space that already contains valid data,\njust to overwrite it with valid data again. But it ought to always be\nfine.\n\nThe least offensive way I could come up with is for XLogWalRcvWrite() to\nalways write partial pages in a separate pg_pwrite(). When writing a\npartial page, and the previous write position was not already on that\nsame page, copy the buffer into a local XLOG_BLCKSZ-sized buffer\n(although we'll never use more than XLOG_BLCKSZ-1 I think), and (re)zero\nout the trailing part. One thing this does not yet handle is a partial\nwrite - we would not notice again that we need to pad the\nend of the page.\n\nDoes anybody have a better idea?\n\nI really wish we had a version of pg_p{read,write}[v] that internally\nhandled partial IOs, retrying as long as they see > 0 bytes written.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Thu, 6 May 2021 13:23:43 -0700",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Apr 22, 2021 at 11:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n> On Wed, Apr 21, 2021 at 19:17 Thomas Munro <thomas.munro@gmail.com> wrote:\n>> On Thu, Apr 22, 2021 at 8:16 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> ... Personally I think the right thing to do now is to revert it\n>> and re-propose for 15 early in the cycle, supported with some better\n>> testing infrastructure.\n>\n> I tend to agree with the idea to revert it, perhaps a +0 on that, but if others argue it should be fixed in-place, I wouldn’t complain about it.\n\nReverted.\n\nNote: eelpout may return a couple of failures because it's set up to\nrun with recovery_prefetch=on (now an unknown GUC), and it'll be a few\nhours before I can access that machine to adjust that...\n\n> I very much encourage the idea of improving testing in this area and would be happy to try and help do so in the 15 cycle.\n\nCool. I'm going to try out some ideas.\n\n\n",
"msg_date": "Mon, 10 May 2021 16:11:08 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "> On 10 May 2021, at 06:11, Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Thu, Apr 22, 2021 at 11:22 AM Stephen Frost <sfrost@snowman.net> wrote:\n\n>> I tend to agree with the idea to revert it, perhaps a +0 on that, but if others argue it should be fixed in-place, I wouldn’t complain about it.\n> \n> Reverted.\n> \n> Note: eelpout may return a couple of failures because it's set up to\n> run with recovery_prefetch=on (now an unknown GUC), and it'll be a few\n> hours before I can access that machine to adjust that...\n> \n>> I very much encourage the idea of improving testing in this area and would be happy to try and help do so in the 15 cycle.\n> \n> Cool. I'm going to try out some ideas.\n\nSkimming this thread without all the context it's not entirely clear which\npatch the CF entry relates to (I assume it's the one from April 7 based on\nattached mail-id but there is a revert from May?), and the CF app and CF bot\nare also in disagreement which is the latest one.\n\nCould you post an updated version of the patch which is for review?\n\n--\nDaniel Gustafsson\t\thttps://vmware.com/\n\n\n\n",
"msg_date": "Mon, 15 Nov 2021 11:31:42 +0100",
"msg_from": "Daniel Gustafsson <daniel@yesql.se>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Mon, Nov 15, 2021 at 11:31 PM Daniel Gustafsson <daniel@yesql.se> wrote:\n> Could you post an updated version of the patch which is for review?\n\nSorry for taking so long to come back; I learned some new things that\nmade me want to restructure this code a bit (see below). Here is an\nupdated pair of patches that I'm currently testing.\n\nOld problems:\n\n1. Last time around, an infinite loop was reported in pg_waldump. I\nbelieve Horiguchi-san has fixed that[1], but I'm no longer depending\non that patch. I thought his patch set was a good idea, but it's\ncomplicated and there's enough going on here already... let's consider\nthat independently.\n\nThis version goes back to what I had earlier, though (I hope) it is\nbetter about how \"nonblocking\" states are communicated. In this\nversion, XLogPageRead() has a way to give up part way through a record\nif it doesn't have enough data and there are queued up records that\ncould be replayed right now. In that case, we'll go back to the\nbeginning of the record (and occasionally, back a WAL page) next time\nwe try. That's the cost of not maintaining intra-record decoding\nstate.\n\n2. Last time around, we could try to allocate a crazy amount of\nmemory when reading garbage past the end of the WAL. Fixed, by\nvalidating first, like in master.\n\nNew work:\n\nSince last time, I went away and worked on a \"real\" AIO version of\nthis feature. That's ongoing experimental work for a future proposal,\nbut I have a working prototype and I aim to share that soon, when that\nbranch is rebased to catch up with recent changes. In that version,\nthe prefetcher starts actual reads into the buffer pool, and recovery\nreceives already pinned buffers attached to the stream of records it's\nreplaying.\n\nThat inspired a couple of refactoring changes to this non-AIO version,\nto minimise the difference and anticipate the future work better:\n\n1. 
The logic for deciding which block to start prefetching next is\nmoved into a new callback function in a sort of standard form (this is\napproximately how all/most prefetching code looks in the AIO project,\nie sequential scans, bitmap heap scan, etc).\n\n2. The logic for controlling how many IOs are running and deciding\nwhen to call the above is in a separate component. In this non-AIO\nversion, it works using a simple ring buffer of LSNs to estimate the\nnumber of in flight I/Os, just like before. This part would be thrown\naway and replaced with the AIO branch's centralised \"streaming read\"\nmechanism which tracks I/O completions based on a stream of completion\nevents from the kernel (or I/O worker processes).\n\n3. In this version, the prefetcher still doesn't pin buffers, for\nsimplicity. That work did force me to study places where WAL streams\nneed prefetching \"barriers\", though, so in this patch you can\nsee that it's now a little more careful than it probably needs to be.\n(It doesn't really matter much if you call posix_fadvise() on a\nnon-existent file region, or the wrong file after OID wraparound and\nreuse, but it would matter if you actually read it into a buffer, and\nif an intervening record might be trying to drop something you have\npinned).\n\nSome other changes:\n\n1. I dropped the GUC recovery_prefetch_fpw. I think it was a\npossibly useful idea but it's a niche concern and not worth worrying\nabout for now.\n\n2. I simplified the stats. Coming up with a good running average\nsystem seemed like a problem for another day (the numbers before were\nhard to interpret). 
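The ring of LSNs in (2) is about as simple as it sounds. A toy version - hypothetical names and a made-up capacity, nothing from the actual patch - would be: remember the LSN at which each prefetch was issued, and retire entries once replay passes them:

```c
#include <assert.h>
#include <stdint.h>

/* Demo capacity; the real limit would come from maintenance_io_concurrency. */
#define MAX_IN_FLIGHT 8

typedef uint64_t DemoRecPtr;    /* stand-in for XLogRecPtr */

typedef struct
{
    DemoRecPtr lsns[MAX_IN_FLIGHT];
    int head;                   /* oldest outstanding prefetch */
    int count;                  /* number of outstanding prefetches */
} PrefetchRing;

/* Record that a prefetch was issued for a block referenced at this LSN.
 * The caller throttles so count never exceeds MAX_IN_FLIGHT. */
void
prefetch_issued(PrefetchRing *r, DemoRecPtr lsn)
{
    r->lsns[(r->head + r->count) % MAX_IN_FLIGHT] = lsn;
    r->count++;
}

/* Retire prefetches whose referencing record has since been replayed,
 * and return the estimated number of I/Os still in flight. */
int
io_depth(PrefetchRing *r, DemoRecPtr replayed_upto)
{
    while (r->count > 0 && r->lsns[r->head] <= replayed_upto)
    {
        r->head = (r->head + 1) % MAX_IN_FLIGHT;
        r->count--;
    }
    return r->count;
}
```

The estimate is conservative: an fadvised page only counts as done once replay has consumed the record that referenced it.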
The new stats are super simple counters and\ninstantaneous values:\n\npostgres=# select * from pg_stat_prefetch_recovery ;\n-[ RECORD 1 ]--+------------------------------\nstats_reset | 2021-11-10 09:02:08.590217+13\nprefetch | 13605674 <- times we called posix_fadvise()\nhit | 24185289 <- times we found pages already cached\nskip_init | 217215 <- times we did nothing because init, not read\nskip_new | 192347 <- times we skipped because relation too small\nskip_fpw | 27429 <- times we skipped because fpw, not read\nwal_distance | 10648 <- how far ahead in WAL bytes\nblock_distance | 134 <- how far ahead in block references\nio_depth | 50 <- fadvise() calls not yet followed by pread()\n\nI also removed the code to save and restore the stats via the stats\ncollector, for now. I figured that persistent stats could be a later\nfeature, perhaps after the shared memory stats stuff?\n\n3. I dropped the code that was caching an SMgrRelation pointer to\navoid smgropen() calls that showed up in some profiles. That probably\nlacked invalidation that could be done with some more WAL analysis,\nbut I decided to leave it out completely for now for simplicity.\n\n4. I dropped the verbose logging. I think it might make sense to\nintegrate with the new \"recovery progress\" system, but I think that\nshould be a separate discussion. If you want to see the counters\nafter crash recovery finishes, you can look at the stats view.\n\n[1] https://commitfest.postgresql.org/34/2113/",
"msg_date": "Tue, 23 Nov 2021 23:13:57 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nIt's great you posted a new version of this patch, so I took a\nbrief look at it. The code seems in pretty good shape; I haven't found\nany real issues - just two minor comments:\n\nThis seems a bit strange:\n\n #define DEFAULT_DECODE_BUFFER_SIZE 0x10000\n\nWhy not define this as a simple decimal value? Is there something\nspecial about this particular value, or is it arbitrary? I guess it's\nsimply the minimum for the wal_decode_buffer_size GUC, but why not use\nthe GUC for all places decoding WAL?\n\nFWIW I don't think we include updates to typedefs.list in patches.\n\n\nI also repeated the benchmarks I did at the beginning of the year [1].\nAttached is a chart with four different configurations:\n\n1) master (f79962d826)\n\n2) patched (with prefetching disabled)\n\n3) patched (with default configuration)\n\n4) patched (with I/O concurrency 256 and 2MB decode buffer)\n\nFor all configs the shared buffers were set to 64GB, checkpoints every\n20 minutes, etc.\n\nThe results are pretty good / similar to previous results. Replaying the\n1h worth of work on a smaller machine takes ~5:30h without prefetching\n(master or with prefetching disabled). With prefetching enabled this\ndrops to ~2h (default config) and ~1h (with tuning).\n\nregards\n\n\n[1]\nhttps://www.postgresql.org/message-id/c5d52837-6256-0556-ac8c-d6d3d558820a%40enterprisedb.com\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company",
"msg_date": "Thu, 25 Nov 2021 23:32:07 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Nov 26, 2021 at 11:32 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> The results are pretty good / similar to previous results. Replaying the\n> 1h worth of work on a smaller machine takes ~5:30h without prefetching\n> (master or with prefetching disabled). With prefetching enabled this\n> drops to ~2h (default config) and ~1h (with tuning).\n\nThanks for testing! Wow, that's a nice graph.\n\nThis has bit-rotted already due to Robert's work on ripping out\nglobals, so I'll post a rebase early next week, and incorporate your\ncode feedback.\n\n\n",
"msg_date": "Sat, 27 Nov 2021 10:16:33 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 11/26/21 22:16, Thomas Munro wrote:\n> On Fri, Nov 26, 2021 at 11:32 AM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> The results are pretty good / similar to previous results. Replaying the\n>> 1h worth of work on a smaller machine takes ~5:30h without prefetching\n>> (master or with prefetching disabled). With prefetching enabled this\n>> drops to ~2h (default config) and ~1h (with tuning).\n> \n> Thanks for testing! Wow, that's a nice graph.\n> \n> This has bit-rotted already due to Robert's work on ripping out\n> globals, so I'll post a rebase early next week, and incorporate your\n> code feedback.\n> \n\nOne thing that's not clear to me is what happened to the reasons why \nthis feature was reverted in the PG14 cycle?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 27 Nov 2021 00:34:21 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sat, Nov 27, 2021 at 12:34 PM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> One thing that's not clear to me is what happened to the reasons why\n> this feature was reverted in the PG14 cycle?\n\nReasons for reverting:\n\n1. A bug in commit 323cbe7c, \"Remove read_page callback from\nXLogReader.\". I couldn't easily revert just that piece. This new\nversion doesn't depend on that change anymore, to try to keep things\nsimple. (That particular bug has been fixed in a newer version of\nthat patch[1], which I still think was a good idea incidentally.)\n2. A bug where allocation for large records happened before\nvalidation. Concretely, you can see that this patch does\nXLogReadRecordAlloc() after validating the header (usually, same as\nmaster), but commit f003d9f8 did it first. (Though Andres pointed\nout[2] that more work is needed on that to make that logic more\nrobust, and I'm keen to look into that, but that's independent of this\nwork).\n3. A wild goose chase for bugs on Tom Lane's antique 32 bit PPC\nmachine. Tom eventually reproduced it with the patches reverted,\nwhich seemed to exonerate them but didn't leave a good feeling: what\nwas happening, and why did the patches hugely increase the likelihood\nof the failure mode? I have no new information on that, but I know\nthat several people spent a huge amount of time and effort trying to\nreproduce it on various types of systems, as did I, so despite not\nreaching a conclusion of a bug, this certainly contributed to a\nfeeling that the patch had run out of steam for the 14 cycle.\n\nThis week I'll have another crack at getting that TAP test I proposed\nthat runs the regression tests with a streaming replica to work on\nWindows. 
That does approximately what Tom was doing when he saw\nproblem #3, which I'd like to have as standard across the build farm.\n\n[1] https://www.postgresql.org/message-id/20211007.172820.1874635561738958207.horikyota.ntt%40gmail.com\n[2] https://www.postgresql.org/message-id/20210505010835.umylslxgq4a6rbwg%40alap3.anarazel.de\n\n\n",
"msg_date": "Sat, 27 Nov 2021 14:47:01 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n> On Sat, Nov 27, 2021 at 12:34 PM Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>> One thing that's not clear to me is what happened to the reasons why\n>> this feature was reverted in the PG14 cycle?\n\n> 3. A wild goose chase for bugs on Tom Lane's antique 32 bit PPC\n> machine. Tom eventually reproduced it with the patches reverted,\n> which seemed to exonerate them but didn't leave a good feeling: what\n> was happening, and why did the patches hugely increase the likelihood\n> of the failure mode? I have no new information on that, but I know\n> that several people spent a huge amount of time and effort trying to\n> reproduce it on various types of systems, as did I, so despite not\n> reaching a conclusion of a bug, this certainly contributed to a\n> feeling that the patch had run out of steam for the 14 cycle.\n\nYeah ... on the one hand, that machine has shown signs of\nhard-to-reproduce flakiness, so it's easy to write off the failures\nI saw as hardware issues. On the other hand, the flakiness I've\nseen has otherwise manifested as kernel crashes, which is nothing\nlike the consistent test failures I was seeing with the patch.\n\nAndres speculated that maybe we were seeing a kernel bug that\naffects consistency of concurrent reads and writes. That could\nbe an explanation; but it's just evidence-free speculation so far,\nso I don't feel real convinced by that idea either.\n\nAnyway, I hope to find time to see if the issue still reproduces\nwith Thomas' new patch set.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 26 Nov 2021 21:46:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi Thomas,\n\nI am unable to apply these new set of patches on HEAD. Can you please share\nthe rebased patch or if you have any work branch can you please point it\nout, I will refer to it for the changes.\n\n--\nWith Regards,\nAshutosh sharma.\n\nOn Tue, Nov 23, 2021 at 3:44 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> On Mon, Nov 15, 2021 at 11:31 PM Daniel Gustafsson <daniel@yesql.se>\n> wrote:\n> > Could you post an updated version of the patch which is for review?\n>\n> Sorry for taking so long to come back; I learned some new things that\n> made me want to restructure this code a bit (see below). Here is an\n> updated pair of patches that I'm currently testing.\n>\n> Old problems:\n>\n> 1. Last time around, an infinite loop was reported in pg_waldump. I\n> believe Horiguchi-san has fixed that[1], but I'm no longer depending\n> on that patch. I thought his patch set was a good idea, but it's\n> complicated and there's enough going on here already... let's consider\n> that independently.\n>\n> This version goes back to what I had earlier, though (I hope) it is\n> better about how \"nonblocking\" states are communicated. In this\n> version, XLogPageRead() has a way to give up part way through a record\n> if it doesn't have enough data and there are queued up records that\n> could be replayed right now. In that case, we'll go back to the\n> beginning of the record (and occasionally, back a WAL page) next time\n> we try. That's the cost of not maintaining intra-record decoding\n> state.\n>\n> 2. Last time around, we could try to allocate a crazy amount of\n> memory when reading garbage past the end of the WAL. Fixed, by\n> validating first, like in master.\n>\n> New work:\n>\n> Since last time, I went away and worked on a \"real\" AIO version of\n> this feature. 
That's ongoing experimental work for a future proposal,\n> but I have a working prototype and I aim to share that soon, when that\n> branch is rebased to catch up with recent changes. In that version,\n> the prefetcher starts actual reads into the buffer pool, and recovery\n> receives already pinned buffers attached to the stream of records it's\n> replaying.\n>\n> That inspired a couple of refactoring changes to this non-AIO version,\n> to minimise the difference and anticipate the future work better:\n>\n> 1. The logic for deciding which block to start prefetching next is\n> moved into a new callback function in a sort of standard form (this is\n> approximately how all/most prefetching code looks in the AIO project,\n> ie sequential scans, bitmap heap scan, etc).\n>\n> 2. The logic for controlling how many IOs are running and deciding\n> when to call the above is in a separate component. In this non-AIO\n> version, it works using a simple ring buffer of LSNs to estimate the\n> number of in flight I/Os, just like before. This part would be thrown\n> away and replaced with the AIO branch's centralised \"streaming read\"\n> mechanism which tracks I/O completions based on a stream of completion\n> events from the kernel (or I/O worker processes).\n>\n> 3. In this version, the prefetcher still doesn't pin buffers, for\n> simplicity. That work did force me to study places where WAL streams\n> need prefetching \"barriers\", though, so in this patch you can\n> see that it's now a little more careful than it probably needs to be.\n> (It doesn't really matter much if you call posix_fadvise() on a\n> non-existent file region, or the wrong file after OID wraparound and\n> reuse, but it would matter if you actually read it into a buffer, and\n> if an intervening record might be trying to drop something you have\n> pinned).\n>\n> Some other changes:\n>\n> 1. I dropped the GUC recovery_prefetch_fpw. 
I think it was a\n> possibly useful idea but it's a niche concern and not worth worrying\n> about for now.\n>\n> 2. I simplified the stats. Coming up with a good running average\n> system seemed like a problem for another day (the numbers before were\n> hard to interpret). The new stats are super simple counters and\n> instantaneous values:\n>\n> postgres=# select * from pg_stat_prefetch_recovery ;\n> -[ RECORD 1 ]--+------------------------------\n> stats_reset | 2021-11-10 09:02:08.590217+13\n> prefetch | 13605674 <- times we called posix_fadvise()\n> hit | 24185289 <- times we found pages already cached\n> skip_init | 217215 <- times we did nothing because init, not read\n> skip_new | 192347 <- times we skipped because relation too small\n> skip_fpw | 27429 <- times we skipped because fpw, not read\n> wal_distance | 10648 <- how far ahead in WAL bytes\n> block_distance | 134 <- how far ahead in block references\n> io_depth | 50 <- fadvise() calls not yet followed by pread()\n>\n> I also removed the code to save and restore the stats via the stats\n> collector, for now. I figured that persistent stats could be a later\n> feature, perhaps after the shared memory stats stuff?\n>\n> 3. I dropped the code that was caching an SMgrRelation pointer to\n> avoid smgropen() calls that showed up in some profiles. That probably\n> lacked invalidation that could be done with some more WAL analysis,\n> but I decided to leave it out completely for now for simplicity.\n>\n> 4. I dropped the verbose logging. I think it might make sense to\n> integrate with the new \"recovery progress\" system, but I think that\n> should be a separate discussion. If you want to see the counters\n> after crash recovery finishes, you can look at the stats view.\n>\n> [1] https://commitfest.postgresql.org/34/2113/",
"msg_date": "Fri, 10 Dec 2021 14:09:57 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Nov 26, 2021 at 9:47 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah ... on the one hand, that machine has shown signs of\n> hard-to-reproduce flakiness, so it's easy to write off the failures\n> I saw as hardware issues. On the other hand, the flakiness I've\n> seen has otherwise manifested as kernel crashes, which is nothing\n> like the consistent test failures I was seeing with the patch.\n>\n> Andres speculated that maybe we were seeing a kernel bug that\n> affects consistency of concurrent reads and writes. That could\n> be an explanation; but it's just evidence-free speculation so far,\n> so I don't feel real convinced by that idea either.\n>\n> Anyway, I hope to find time to see if the issue still reproduces\n> with Thomas' new patch set.\n\nHonestly, all the reasons that Thomas articulated for the revert seem\nrelatively unimpressive from my point of view. Perhaps they are\nsufficient justification for a revert so near to the end of the\ndevelopment cycle, but that's just an argument for committing things a\nlittle sooner so we have time to work out the kinks. This kind of work\nis too valuable to get hung up for a year or three because of a couple\nof minor preexisting bugs and/or preexisting maybe-bugs.\n\n-- \nRobert Haas\nEDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Mon, 13 Dec 2021 09:23:42 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, 26 Nov 2021 at 21:47, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Yeah ... on the one hand, that machine has shown signs of\n> hard-to-reproduce flakiness, so it's easy to write off the failures\n> I saw as hardware issues. On the other hand, the flakiness I've\n> seen has otherwise manifested as kernel crashes, which is nothing\n> like the consistent test failures I was seeing with the patch.\n\nHm. I asked around and found a machine I can use that can run PPC\nbinaries, but it's actually, well, confusing. I think this is an x86\nmachine running Leopard which uses JIT to transparently run PPC\nbinaries. I'm not sure this is really a good test.\n\nBut if you're interested and can explain the tests to run I can try to\nget the tests running on this machine:\n\nIBUILD:~ gsstark$ uname -a\nDarwin IBUILD.MIT.EDU 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15\n16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386\n\nIBUILD:~ gsstark$ sw_vers\nProductName: Mac OS X\nProductVersion: 10.5.8\nBuildVersion: 9L31a\n\n\n",
"msg_date": "Thu, 16 Dec 2021 21:36:37 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "The actual hardware of this machine is a Mac Mini Core 2 Duo. I'm not\nreally clear how the emulation is done and whether it makes a\nreasonable test environment or not.\n\n Hardware Overview:\n\n Model Name: Mac mini\n Model Identifier: Macmini2,1\n Processor Name: Intel Core 2 Duo\n Processor Speed: 2 GHz\n Number Of Processors: 1\n Total Number Of Cores: 2\n L2 Cache: 4 MB\n Memory: 2 GB\n Bus Speed: 667 MHz\n Boot ROM Version: MM21.009A.B00\n\n\n",
"msg_date": "Thu, 16 Dec 2021 22:09:58 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> But if you're interested and can explain the tests to run I can try to\n> get the tests running on this machine:\n\nI'm not sure that machine is close enough to prove much, but by all\nmeans give it a go if you wish. My test setup was explained in [1]:\n\n>> To recap, the test lashup is:\n>> * 2003 PowerMac G4 (1.25GHz PPC 7455, 7200 rpm spinning-rust drive)\n>> * Standard debug build (--enable-debug --enable-cassert)\n>> * Out-of-the-box configuration, except add wal_consistency_checking = all\n>> and configure a wal-streaming standby on the same machine\n>> * Repeatedly run \"make installcheck-parallel\", but skip the tablespace\n>> test to avoid issues with the standby trying to use the same directory\n>> * Delay long enough after each installcheck-parallel to let the \n>> standby catch up (the run proper is ~24 min, plus 2 min for catchup)\n\nRemember also that the code in question is not in HEAD; you'd\nneed to apply Munro's patches, or check out some commit from\naround 2021-04-22.\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/3502526.1619925367%40sss.pgh.pa.us\n\n\n",
"msg_date": "Thu, 16 Dec 2021 23:11:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "What tools and tool versions are you using to build? Is it just GCC for PPC?\n\nThere aren't any special build processes to make a fat binary involved?\n\nOn Thu, 16 Dec 2021 at 23:11, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Stark <stark@mit.edu> writes:\n> > But if you're interested and can explain the tests to run I can try to\n> > get the tests running on this machine:\n>\n> I'm not sure that machine is close enough to prove much, but by all\n> means give it a go if you wish. My test setup was explained in [1]:\n>\n> >> To recap, the test lashup is:\n> >> * 2003 PowerMac G4 (1.25GHz PPC 7455, 7200 rpm spinning-rust drive)\n> >> * Standard debug build (--enable-debug --enable-cassert)\n> >> * Out-of-the-box configuration, except add wal_consistency_checking = all\n> >> and configure a wal-streaming standby on the same machine\n> >> * Repeatedly run \"make installcheck-parallel\", but skip the tablespace\n> >> test to avoid issues with the standby trying to use the same directory\n> >> * Delay long enough after each installcheck-parallel to let the\n> >> standby catch up (the run proper is ~24 min, plus 2 min for catchup)\n>\n> Remember also that the code in question is not in HEAD; you'd\n> need to apply Munro's patches, or check out some commit from\n> around 2021-04-22.\n>\n> regards, tom lane\n>\n> [1] https://www.postgresql.org/message-id/3502526.1619925367%40sss.pgh.pa.us\n\n\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 17 Dec 2021 13:27:50 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> What tools and tool versions are you using to build? Is it just GCC for PPC?\n> There aren't any special build processes to make a fat binary involved?\n\nNope, just \"configure; make\" using that macOS version's regular gcc.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Dec 2021 13:59:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "I have\n\nIBUILD:postgresql gsstark$ ls /usr/bin/*gcc*\n/usr/bin/gcc\n/usr/bin/gcc-4.0\n/usr/bin/gcc-4.2\n/usr/bin/i686-apple-darwin9-gcc-4.0.1\n/usr/bin/i686-apple-darwin9-gcc-4.2.1\n/usr/bin/powerpc-apple-darwin9-gcc-4.0.1\n/usr/bin/powerpc-apple-darwin9-gcc-4.2.1\n\nI'm guessing I should do CC=/usr/bin/powerpc-apple-darwin9-gcc-4.2.1\nor maybe 4.0.1. What version is on your G4?\n\n\n",
"msg_date": "Fri, 17 Dec 2021 14:40:20 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> I'm guessing I should do CC=/usr/bin/powerpc-apple-darwin9-gcc-4.2.1\n> or maybe 4.0.1. What version is on your G4?\n\n$ gcc -v\nUsing built-in specs.\nTarget: powerpc-apple-darwin9\nConfigured with: /var/tmp/gcc/gcc-5493~1/src/configure --disable-checking -enable-werror --prefix=/usr --mandir=/share/man --enable-languages=c,objc,c++,obj-c++ --program-transform-name=/^[cg][^.-]*$/s/$/-4.0/ --with-gxx-include-dir=/include/c++/4.0.0 --with-slibdir=/usr/lib --build=i686-apple-darwin9 --program-prefix= --host=powerpc-apple-darwin9 --target=powerpc-apple-darwin9\nThread model: posix\ngcc version 4.0.1 (Apple Inc. build 5493)\n\nI see that gcc 4.2.1 is also present on this machine, but I've\nnever used it.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Dec 2021 14:53:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hm. I seem to have picked a bad checkout. I took the last one before\nthe revert (45aa88fe1d4028ea50ba7d26d390223b6ef78acc). Or there's some\nincompatibility with the emulation and the IPC stuff parallel workers\nuse.\n\n\n2021-12-17 17:51:51.688 EST [50955] LOG: background worker \"parallel\nworker\" (PID 54073) was terminated by signal 10: Bus error\n2021-12-17 17:51:51.688 EST [50955] DETAIL: Failed process was\nrunning: SELECT variance(unique1::int4), sum(unique1::int8),\nregr_count(unique1::float8, unique1::float8)\nFROM (SELECT * FROM tenk1\n UNION ALL SELECT * FROM tenk1\n UNION ALL SELECT * FROM tenk1\n UNION ALL SELECT * FROM tenk1) u;\n2021-12-17 17:51:51.690 EST [50955] LOG: terminating any other active\nserver processes\n2021-12-17 17:51:51.748 EST [54078] FATAL: the database system is in\nrecovery mode\n2021-12-17 17:51:51.761 EST [50955] LOG: all server processes\nterminated; reinitializing\n\n\n",
"msg_date": "Fri, 17 Dec 2021 17:56:22 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 12/17/21 23:56, Greg Stark wrote:\n> Hm. I seem to have picked a bad checkout. I took the last one before\n> the revert (45aa88fe1d4028ea50ba7d26d390223b6ef78acc). Or there's some\n> incompatibility with the emulation and the IPC stuff parallel workers\n> use.\n> \n> \n> 2021-12-17 17:51:51.688 EST [50955] LOG: background worker \"parallel\n> worker\" (PID 54073) was terminated by signal 10: Bus error\n> 2021-12-17 17:51:51.688 EST [50955] DETAIL: Failed process was\n> running: SELECT variance(unique1::int4), sum(unique1::int8),\n> regr_count(unique1::float8, unique1::float8)\n> FROM (SELECT * FROM tenk1\n> UNION ALL SELECT * FROM tenk1\n> UNION ALL SELECT * FROM tenk1\n> UNION ALL SELECT * FROM tenk1) u;\n> 2021-12-17 17:51:51.690 EST [50955] LOG: terminating any other active\n> server processes\n> 2021-12-17 17:51:51.748 EST [54078] FATAL: the database system is in\n> recovery mode\n> 2021-12-17 17:51:51.761 EST [50955] LOG: all server processes\n> terminated; reinitializing\n> \n\nInteresting. In my experience SIGBUS on PPC tends to be due to incorrect \nalignment, but I'm not sure how that works with the emulation. Can you \nget a backtrace?\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Sat, 18 Dec 2021 00:04:00 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> Hm. I seem to have picked a bad checkout. I took the last one before\n> the revert (45aa88fe1d4028ea50ba7d26d390223b6ef78acc).\n\nFWIW, I think that's the first one *after* the revert.\n\n> 2021-12-17 17:51:51.688 EST [50955] LOG: background worker \"parallel\n> worker\" (PID 54073) was terminated by signal 10: Bus error\n\nI'm betting on weird emulation issue. None of my real PPC machines\nshowed such things.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Dec 2021 18:40:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, 17 Dec 2021 at 18:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Greg Stark <stark@mit.edu> writes:\n> > Hm. I seem to have picked a bad checkout. I took the last one before\n> > the revert (45aa88fe1d4028ea50ba7d26d390223b6ef78acc).\n>\n> FWIW, I think that's the first one *after* the revert.\n\nDoh\n\nBut the bigger question is: are we really concerned about this flaky\nproblem? Is it worth investing time and money on? I can get money to\ngo buy a G4 or G5 and spend some time on it. It just seems a bit...\nniche. But if it's a real bug that represents something broken on\nother architectures that just happens to be easier to trigger here, it\nmight be worthwhile.\n\n-- \ngreg\n\n\n",
"msg_date": "Fri, 17 Dec 2021 18:55:50 -0500",
"msg_from": "Greg Stark <stark@mit.edu>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Greg Stark <stark@mit.edu> writes:\n> But the bigger question is. Are we really concerned about this flaky\n> problem? Is it worth investing time and money on? I can get money to\n> go buy a G4 or G5 and spend some time on it. It just seems a bit...\n> niche. But if it's a real bug that represents something broken on\n> other architectures that just happens to be easier to trigger here it\n> might be worthwhile.\n\nTBH, I don't know. There seem to be three plausible explanations:\n\n1. Flaky hardware in my unit.\n2. Ancient macOS bug, as Andres suggested upthread.\n3. Actual PG bug.\n\nIf it's #1 or #2 then we're just wasting our time here. I'm not\nsure how to estimate the relative probabilities, but I suspect\n#3 is the least likely of the lot.\n\nFWIW, I did just reproduce the problem on that machine with current HEAD:\n\n2021-12-17 18:40:40.293 EST [21369] FATAL: inconsistent page found, rel 1663/167772/2673, forknum 0, blkno 26\n2021-12-17 18:40:40.293 EST [21369] CONTEXT: WAL redo at C/3DE3F658 for Btree/INSERT_LEAF: off 208; blkref #0: rel 1663/167772/2673, blk 26 FPW\n2021-12-17 18:40:40.522 EST [21365] LOG: startup process (PID 21369) exited with exit code 1\n\nThat was after only five loops of the regression tests, so either\nI got lucky or the failure probability has increased again.\n\nIn any case, it seems clear that the problem exists independently of\nMunro's patches, so I don't really think this question should be\nconsidered a blocker for those.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 17 Dec 2021 19:38:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "[Replies to two emails]\n\nOn Fri, Dec 10, 2021 at 9:40 PM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> I am unable to apply these new set of patches on HEAD. Can you please share the rebased patch or if you have any work branch can you please point it out, I will refer to it for the changes.\n\nHi Ashutosh,\n\nSorry I missed this. Rebase attached, and I also have a public\nworking branch at\nhttps://github.com/macdice/postgres/tree/recovery-prefetch-ii .\n\nOn Fri, Nov 26, 2021 at 11:32 AM Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n> It's great you posted a new version of this patch, so I took a look a\n> brief look at it. The code seems in pretty good shape, I haven't found\n> any real issues - just two minor comments:\n>\n> This seems a bit strange:\n>\n> #define DEFAULT_DECODE_BUFFER_SIZE 0x10000\n>\n> Why not to define this as a simple decimal value?\n\nChanged to (64 * 1024).\n\n> Is there something\n> special about this particular value, or is it arbitrary?\n\nIt should be large enough for most records, without being ridiculously\nlarge. This means that typical users of XLogReader (pg_waldump, ...)\nare unlikely to fall back to the \"oversized\" code path for records\nthat don't fit in the decoding buffer. Comment added.\n\n> I guess it's\n> simply the minimum for wal_decode_buffer_size GUC, but why not to use\n> the GUC for all places decoding WAL?\n\nThe GUC is used only by xlog.c for replay (and has a larger default\nsince it can usefully see into the future), but frontend tools and\nother kinds of backend WAL decoding things (2PC, logical decoding)\ndon't or can't respect the GUC and it didn't seem worth choosing a\nnumber for each user, so I needed to pick a default.\n\n> FWIW I don't think we include updates to typedefs.list in patches.\n\nSeems pretty harmless? And useful to keep around in development\nbranches because I like to pgindent stuff...",
"msg_date": "Wed, 29 Dec 2021 17:29:52 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Thomas Munro <thomas.munro@gmail.com> writes:\n>> FWIW I don't think we include updates to typedefs.list in patches.\n\n> Seems pretty harmless? And useful to keep around in development\n> branches because I like to pgindent stuff...\n\nAs far as that goes, my habit is to pull down\nhttps://buildfarm.postgresql.org/cgi-bin/typedefs.pl\non a regular basis and pgindent against that. There have been\nsome discussions about formalizing that process a bit more,\nbut we've not come to any conclusions.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 29 Dec 2021 00:27:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2021-12-29 17:29:52 +1300, Thomas Munro wrote:\n> > FWIW I don't think we include updates to typedefs.list in patches.\n> \n> Seems pretty harmless? And useful to keep around in development\n> branches because I like to pgindent stuff...\n\nI think it's even helpful. As long as it's done with a bit of manual\noversight, I don't see a meaningful downside of doing so. One needs to be\ncareful to not remove platform dependant typedefs, but that's it. And\nespecially for long-lived feature branches it's much less work to keep the\ntypedefs.list changes in the tree, rather than coming up with them locally\nover and over / across multiple people working on a branch.\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 29 Dec 2021 11:41:42 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Dec 29, 2021 at 5:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> https://github.com/macdice/postgres/tree/recovery-prefetch-ii\n\nHere's a rebase. This mostly involved moving hunks over to the new\nxlogrecovery.c file. One thing that seemed a little strange to me\nwith the new layout is that xlogreader is now a global variable. I\nfollowed that pattern and made xlogprefetcher a global variable too,\nfor now.\n\nThere is one functional change: now I block readahead at records that\nmight change the timeline ID. This removes the need to think about\nscenarios where \"replay TLI\" and \"read TLI\" might differ. I don't\nknow of a concrete problem in that area with the previous version, but\nthe recent introduction of the variable(s) \"replayTLI\" and associated\ncomments in master made me realise I hadn't analysed the hazards here\nenough. Since timelines are tricky things and timeline changes are\nextremely infrequent, it seemed better to simplify matters by putting\nup a big road block there.\n\nI'm now starting to think about committing this soon.",
"msg_date": "Tue, 8 Mar 2022 18:15:43 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "\n\nOn 3/8/22 06:15, Thomas Munro wrote:\n> On Wed, Dec 29, 2021 at 5:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n>> https://github.com/macdice/postgres/tree/recovery-prefetch-ii\n> \n> Here's a rebase. This mostly involved moving hunks over to the new\n> xlogrecovery.c file. One thing that seemed a little strange to me\n> with the new layout is that xlogreader is now a global variable. I\n> followed that pattern and made xlogprefetcher a global variable too,\n> for now.\n> \n> There is one functional change: now I block readahead at records that\n> might change the timeline ID. This removes the need to think about\n> scenarios where \"replay TLI\" and \"read TLI\" might differ. I don't\n> know of a concrete problem in that area with the previous version, but\n> the recent introduction of the variable(s) \"replayTLI\" and associated\n> comments in master made me realise I hadn't analysed the hazards here\n> enough. Since timelines are tricky things and timeline changes are\n> extremely infrequent, it seemed better to simplify matters by putting\n> up a big road block there.\n> \n> I'm now starting to think about committing this soon.\n\n+1. I don't have the capacity/hardware to do more testing at the moment,\nbut all of this looks reasonable.\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 8 Mar 2022 14:48:54 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn 2022-03-08 18:15:43 +1300, Thomas Munro wrote:\n> I'm now starting to think about committing this soon.\n\n+1\n\nAre you thinking of committing both patches at once, or with a bit of\ndistance?\n\nI think something in the regression tests ought to enable\nrecovery_prefetch. 027_stream_regress or 001_stream_rep seem like the obvious\ncandidates?\n\n\n- Andres\n\n\n",
"msg_date": "Tue, 8 Mar 2022 13:17:16 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn Tue, Mar 08, 2022 at 06:15:43PM +1300, Thomas Munro wrote:\n> On Wed, Dec 29, 2021 at 5:29 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > https://github.com/macdice/postgres/tree/recovery-prefetch-ii\n>\n> Here's a rebase. This mostly involved moving hunks over to the new\n> xlogrecovery.c file. One thing that seemed a little strange to me\n> with the new layout is that xlogreader is now a global variable. I\n> followed that pattern and made xlogprefetcher a global variable too,\n> for now.\n\nI for now went through 0001, TL;DR the patch looks good to me. I have a few\nminor comments though, mostly to make things a bit clearer (at least to me).\n\ndiff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c\nindex 2340dc247b..c129df44ac 100644\n--- a/src/bin/pg_waldump/pg_waldump.c\n+++ b/src/bin/pg_waldump/pg_waldump.c\n@@ -407,10 +407,10 @@ XLogDumpRecordLen(XLogReaderState *record, uint32 *rec_len, uint32 *fpi_len)\n * add an accessor macro for this.\n */\n *fpi_len = 0;\n+ for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n {\n if (XLogRecHasBlockImage(record, block_id))\n- *fpi_len += record->blocks[block_id].bimg_len;\n+ *fpi_len += record->record->blocks[block_id].bimg_len;\n }\n(and similar in that file, xlogutils.c and xlogreader.c)\n\nThis could use XLogRecGetBlock? Note that this macro is for now never used.\n\nxlogreader.c also has some similar forgotten code that could use\nXLogRecMaxBlockId.\n\n\n+ * See if we can release the last record that was returned by\n+ * XLogNextRecord(), to free up space.\n+ */\n+void\n+XLogReleasePreviousRecord(XLogReaderState *state)\n\nThe comment seems a bit misleading, as I first understood it as it could be\noptional even if the record exists. Maybe something more like \"Release the\nlast record if any\"?\n\n\n+ * Remove it from the decoded record queue. It must be the oldest item\n+ * decoded, decode_queue_tail.\n+ */\n+ record = state->record;\n+ Assert(record == state->decode_queue_tail);\n+ state->record = NULL;\n+ state->decode_queue_tail = record->next;\n\nThe naming is a bit counter intuitive to me, as before reading the rest of the\ncode I wasn't expecting the item at the tail of the queue to have a next\nelement. Maybe just inverting tail and head would make it clearer?\n\n\n+DecodedXLogRecord *\n+XLogNextRecord(XLogReaderState *state, char **errormsg)\n+{\n[...]\n+ /*\n+ * state->EndRecPtr is expected to have been set by the last call to\n+ * XLogBeginRead() or XLogNextRecord(), and is the location of the\n+ * error.\n+ */\n+\n+ return NULL;\n\nThe comment should refer to XLogFindNextRecord, not XLogNextRecord?\nAlso, is it worth an assert (likely at the top of the function) for that?\n\n\n XLogRecord *\n XLogReadRecord(XLogReaderState *state, char **errormsg)\n+{\n[...]\n+ if (decoded)\n+ {\n+ /*\n+ * XLogReadRecord() returns a pointer to the record's header, not the\n+ * actual decoded record. The caller will access the decoded record\n+ * through the XLogRecGetXXX() macros, which reach the decoded\n+ * recorded as xlogreader->record.\n+ */\n+ Assert(state->record == decoded);\n+ return &decoded->header;\n\nI find it a bit weird to mention XLogReadRecord() as it's the current function.\n\n\n+/*\n+ * Allocate space for a decoded record. The only member of the returned\n+ * object that is initialized is the 'oversized' flag, indicating that the\n+ * decoded record wouldn't fit in the decode buffer and must eventually be\n+ * freed explicitly.\n+ *\n+ * Return NULL if there is no space in the decode buffer and allow_oversized\n+ * is false, or if memory allocation fails for an oversized buffer.\n+ */\n+static DecodedXLogRecord *\n+XLogReadRecordAlloc(XLogReaderState *state, size_t xl_tot_len, bool allow_oversized)\n\nIs it worth clearly stating that it's the reponsability of the caller to update\nthe decode_buffer_head (with the real size) after a successful decoding of this\nbuffer?\n\n\n+ if (unlikely(state->decode_buffer == NULL))\n+ {\n+ if (state->decode_buffer_size == 0)\n+ state->decode_buffer_size = DEFAULT_DECODE_BUFFER_SIZE;\n+ state->decode_buffer = palloc(state->decode_buffer_size);\n+ state->decode_buffer_head = state->decode_buffer;\n+ state->decode_buffer_tail = state->decode_buffer;\n+ state->free_decode_buffer = true;\n+ }\n\nMaybe change XLogReaderSetDecodeBuffer to also handle allocation and use it\nhere too? Otherwise XLogReaderSetDecodeBuffer should probably go in 0002 as\nthe only caller is the recovery prefetching.\n\n+ return decoded;\n+}\n\nI would find it a bit clearer to explicitly return NULL here.\n\n\n readOff = ReadPageInternal(state, targetPagePtr,\n Min(targetRecOff + SizeOfXLogRecord, XLOG_BLCKSZ));\n- if (readOff < 0)\n+ if (readOff == XLREAD_WOULDBLOCK)\n+ return XLREAD_WOULDBLOCK;\n+ else if (readOff < 0)\n\nReadPageInternal comment should be updated to mention the new XLREAD_WOULDBLOCK\npossible return value.\n\nIt's also not particulary obvious why XLogFindNextRecord() doesn't check for\nthis value. AFAICS callers don't (and should never) call it with a\nnonblocking == true state, maybe add an assert for that?\n\n\n@@ -468,7 +748,7 @@ restart:\n if (pageHeader->xlp_info & XLP_FIRST_IS_OVERWRITE_CONTRECORD)\n {\n state->overwrittenRecPtr = RecPtr;\n- ResetDecoder(state);\n+ //ResetDecoder(state);\n\nAFAICS this is indeed not necessary anymore, so it can be removed?\n\n\n static void\n ResetDecoder(XLogReaderState *state)\n {\n[...]\n+ /* Reset the decoded record queue, freeing any oversized records. */\n+ while ((r = state->decode_queue_tail))\n\nnit: I think it's better to explicitly check for the assignment being != NULL,\nand existing code is more frequently written this way AFAICS.\n\n\n+/* Return values from XLogPageReadCB. */\n+typedef enum XLogPageReadResultResult\n\ntypo\n\n\n",
"msg_date": "Wed, 9 Mar 2022 14:46:33 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Mar 9, 2022 at 7:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> I for now went through 0001, TL;DR the patch looks good to me. I have a few\n> minor comments though, mostly to make things a bit clearer (at least to me).\n\nHi Julien,\n\nThanks for your review of 0001! It gave me a few things to think\nabout and some good improvements.\n\n> diff --git a/src/bin/pg_waldump/pg_waldump.c b/src/bin/pg_waldump/pg_waldump.c\n> index 2340dc247b..c129df44ac 100644\n> --- a/src/bin/pg_waldump/pg_waldump.c\n> +++ b/src/bin/pg_waldump/pg_waldump.c\n> @@ -407,10 +407,10 @@ XLogDumpRecordLen(XLogReaderState *record, uint32 *rec_len, uint32 *fpi_len)\n> * add an accessor macro for this.\n> */\n> *fpi_len = 0;\n> + for (block_id = 0; block_id <= XLogRecMaxBlockId(record); block_id++)\n> {\n> if (XLogRecHasBlockImage(record, block_id))\n> - *fpi_len += record->blocks[block_id].bimg_len;\n> + *fpi_len += record->record->blocks[block_id].bimg_len;\n> }\n> (and similar in that file, xlogutils.c and xlogreader.c)\n>\n> This could use XLogRecGetBlock? Note that this macro is for now never used.\n\nYeah, I think that is a good idea for pg_waldump.c and xlogutils.c. Done.\n\n> xlogreader.c also has some similar forgotten code that could use\n> XLogRecMaxBlockId.\n\nThat is true, but I was thinking of it like this: most of the existing\ncode that interacts with xlogreader.c is working with the old model,\nwhere the XLogReader object holds only one \"current\" record. For that\nreason the XLogRecXXX() macros continue to work as before, implicitly\nreferring to the record that XLogReadRecord() most recently returned.\nFor xlogreader.c code, I prefer not to use the XLogRecXXX() macros,\neven when referring to the \"current\" record, since xlogreader.c has\nswitched to a new multi-record model. In other words, they're sort of\n'old API' accessors provided for continuity. Does this make sense?\n\n> + * See if we can release the last record that was returned by\n> + * XLogNextRecord(), to free up space.\n> + */\n> +void\n> +XLogReleasePreviousRecord(XLogReaderState *state)\n>\n> The comment seems a bit misleading, as I first understood it as it could be\n> optional even if the record exists. Maybe something more like \"Release the\n> last record if any\"?\n\nDone.\n\n> + * Remove it from the decoded record queue. It must be the oldest item\n> + * decoded, decode_queue_tail.\n> + */\n> + record = state->record;\n> + Assert(record == state->decode_queue_tail);\n> + state->record = NULL;\n> + state->decode_queue_tail = record->next;\n>\n> The naming is a bit counter intuitive to me, as before reading the rest of the\n> code I wasn't expecting the item at the tail of the queue to have a next\n> element. Maybe just inverting tail and head would make it clearer?\n\nYeah, after mulling this over for a day, I agree. I've flipped it around.\n\nExplanation: You're quite right, singly-linked lists traditionally\nhave a 'tail' that points to null, so it makes sense for new items to\nbe added there and older items to be consumed from the 'head' end, as\nyou expected. But... it's also typical (I think?) in ring buffers AKA\ncircular buffers to insert at the 'head', and remove from the 'tail'.\nThis code has both a linked-list (the chain of decoded records with a\n->next pointer), and the underlying storage, which is a circular\nbuffer of bytes. I didn't want them to use opposite terminology, and\nsince I started by writing the ring buffer part, that's where I\nfinished up... I agree that it's an improvement to flip them.\n\n> +DecodedXLogRecord *\n> +XLogNextRecord(XLogReaderState *state, char **errormsg)\n> +{\n> [...]\n> + /*\n> + * state->EndRecPtr is expected to have been set by the last call to\n> + * XLogBeginRead() or XLogNextRecord(), and is the location of the\n> + * error.\n> + */\n> +\n> + return NULL;\n>\n> The comment should refer to XLogFindNextRecord, not XLogNextRecord?\n\nNo, it does mean to refer to the XLogNextRecord() (ie the last time\nyou called XLogNextRecord and successfully dequeued a record, we put\nits end LSN there, so if there is a deferred error, that's the\ncorresponding LSN). Make sense?\n\n> Also, is it worth an assert (likely at the top of the function) for that?\n\nHow could I assert that EndRecPtr has the right value?\n\n> XLogRecord *\n> XLogReadRecord(XLogReaderState *state, char **errormsg)\n> +{\n> [...]\n> + if (decoded)\n> + {\n> + /*\n> + * XLogReadRecord() returns a pointer to the record's header, not the\n> + * actual decoded record. The caller will access the decoded record\n> + * through the XLogRecGetXXX() macros, which reach the decoded\n> + * recorded as xlogreader->record.\n> + */\n> + Assert(state->record == decoded);\n> + return &decoded->header;\n>\n> I find it a bit weird to mention XLogReadRecord() as it's the current function.\n\nChanged to \"This function ...\".\n\n> +/*\n> + * Allocate space for a decoded record. The only member of the returned\n> + * object that is initialized is the 'oversized' flag, indicating that the\n> + * decoded record wouldn't fit in the decode buffer and must eventually be\n> + * freed explicitly.\n> + *\n> + * Return NULL if there is no space in the decode buffer and allow_oversized\n> + * is false, or if memory allocation fails for an oversized buffer.\n> + */\n> +static DecodedXLogRecord *\n> +XLogReadRecordAlloc(XLogReaderState *state, size_t xl_tot_len, bool allow_oversized)\n>\n> Is it worth clearly stating that it's the reponsability of the caller to update\n> the decode_buffer_head (with the real size) after a successful decoding of this\n> buffer?\n\nComment added.\n\n> + if (unlikely(state->decode_buffer == NULL))\n> + {\n> + if (state->decode_buffer_size == 0)\n> + state->decode_buffer_size = DEFAULT_DECODE_BUFFER_SIZE;\n> + state->decode_buffer = palloc(state->decode_buffer_size);\n> + state->decode_buffer_head = state->decode_buffer;\n> + state->decode_buffer_tail = state->decode_buffer;\n> + state->free_decode_buffer = true;\n> + }\n>\n> Maybe change XLogReaderSetDecodeBuffer to also handle allocation and use it\n> here too? Otherwise XLogReaderSetDecodeBuffer should probably go in 0002 as\n> the only caller is the recovery prefetching.\n\nI don't think it matters much?\n\n> + return decoded;\n> +}\n>\n> I would find it a bit clearer to explicitly return NULL here.\n\nDone.\n\n> readOff = ReadPageInternal(state, targetPagePtr,\n> Min(targetRecOff + SizeOfXLogRecord, XLOG_BLCKSZ));\n> - if (readOff < 0)\n> + if (readOff == XLREAD_WOULDBLOCK)\n> + return XLREAD_WOULDBLOCK;\n> + else if (readOff < 0)\n>\n> ReadPageInternal comment should be updated to mention the new XLREAD_WOULDBLOCK\n> possible return value.\n\nYeah. Done.\n\n> It's also not particulary obvious why XLogFindNextRecord() doesn't check for\n> this value. AFAICS callers don't (and should never) call it with a\n> nonblocking == true state, maybe add an assert for that?\n\nFair point. I have now explicitly cleared that flag. (I don't much\nlike state->nonblocking, which might be better as an argument to\npage_read(), but in fact I don't like the fact that page_read\ncallbacks are blocking in the first place, which is why I liked\nHoriguchi-san's patch to get rid of that... but that can be a subject\nfor later work.)\n\n> @@ -468,7 +748,7 @@ restart:\n> if (pageHeader->xlp_info & XLP_FIRST_IS_OVERWRITE_CONTRECORD)\n> {\n> state->overwrittenRecPtr = RecPtr;\n> - ResetDecoder(state);\n> + //ResetDecoder(state);\n>\n> AFAICS this is indeed not necessary anymore, so it can be removed?\n\nOops, yeah I use C++ comments when there's something I intended to\nremove. Done.\n\n> static void\n> ResetDecoder(XLogReaderState *state)\n> {\n> [...]\n> + /* Reset the decoded record queue, freeing any oversized records. */\n> + while ((r = state->decode_queue_tail))\n>\n> nit: I think it's better to explicitly check for the assignment being != NULL,\n> and existing code is more frequently written this way AFAICS.\n\nI think it's perfectly normal idiomatic C, but if you think it's\nclearer that way, OK, done like that.\n\n> +/* Return values from XLogPageReadCB. */\n> +typedef enum XLogPageReadResultResult\n>\n> typo\n\nFixed.\n\nI realised that this version has broken -DWAL_DEBUG. I'll fix that\nshortly, but I wanted to post this update ASAP, so here's a new\nversion. The other thing I need to change is that I should turn on\nrecovery_prefetch for platforms that support it (ie Linux and maybe\nNetBSD only for now), in the tests. Right now you need to put\nrecovery_prefetch=on in a file and then run the tests with\n\"TEMP_CONFIG=path_to_that make -C src/test/recovery check\" to\nexcercise much of 0002.",
"msg_date": "Fri, 11 Mar 2022 18:31:13 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Mar 11, 2022 at 6:31 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Thanks for your review of 0001! It gave me a few things to think\n> about and some good improvements.\n\nAnd just in case it's useful, here's what changed between v21 and v22..",
"msg_date": "Fri, 11 Mar 2022 18:35:26 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "\n\nOn March 10, 2022 9:31:13 PM PST, Thomas Munro <thomas.munro@gmail.com> wrote:\n> The other thing I need to change is that I should turn on\n>recovery_prefetch for platforms that support it (ie Linux and maybe\n>NetBSD only for now), in the tests. \n\nCould a setting of \"try\" make sense?\n-- \nSent from my Android device with K-9 Mail. Please excuse my brevity.\n\n\n",
"msg_date": "Thu, 10 Mar 2022 22:03:18 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Mar 11, 2022 at 06:31:13PM +1300, Thomas Munro wrote:\n> On Wed, Mar 9, 2022 at 7:47 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> >\n> > This could use XLogRecGetBlock? Note that this macro is for now never used.\n> > xlogreader.c also has some similar forgotten code that could use\n> > XLogRecMaxBlockId.\n>\n> That is true, but I was thinking of it like this: most of the existing\n> code that interacts with xlogreader.c is working with the old model,\n> where the XLogReader object holds only one \"current\" record. For that\n> reason the XLogRecXXX() macros continue to work as before, implicitly\n> referring to the record that XLogReadRecord() most recently returned.\n> For xlogreader.c code, I prefer not to use the XLogRecXXX() macros,\n> even when referring to the \"current\" record, since xlogreader.c has\n> switched to a new multi-record model. In other words, they're sort of\n> 'old API' accessors provided for continuity. Does this make sense?\n\nAh I see, it does make sense. I'm wondering if there should be some comment\nsomewhere on the top of the file to mention it, as otherwise someone may be\ntempted to change it to avoid some record->record->xxx usage.\n\n> > +DecodedXLogRecord *\n> > +XLogNextRecord(XLogReaderState *state, char **errormsg)\n> > +{\n> > [...]\n> > + /*\n> > + * state->EndRecPtr is expected to have been set by the last call to\n> > + * XLogBeginRead() or XLogNextRecord(), and is the location of the\n> > + * error.\n> > + */\n> > +\n> > + return NULL;\n> >\n> > The comment should refer to XLogFindNextRecord, not XLogNextRecord?\n> \n> No, it does mean to refer to the XLogNextRecord() (ie the last time\n> you called XLogNextRecord and successfully dequeued a record, we put\n> its end LSN there, so if there is a deferred error, that's the\n> corresponding LSN). Make sense?\n\nIt does, thanks!\n\n> \n> > Also, is it worth an assert (likely at the top of the function) for that?\n> \n> How could I assert that EndRecPtr has the right value?\n\nSorry, I meant to assert that some value was assigned (!XLogRecPtrIsInvalid).\nIt can only make sure that the first call is done after XLogBeginRead /\nXLogFindNextRecord, but that's better than nothing and consistent with the top\ncomment.\n\n> > + if (unlikely(state->decode_buffer == NULL))\n> > + {\n> > + if (state->decode_buffer_size == 0)\n> > + state->decode_buffer_size = DEFAULT_DECODE_BUFFER_SIZE;\n> > + state->decode_buffer = palloc(state->decode_buffer_size);\n> > + state->decode_buffer_head = state->decode_buffer;\n> > + state->decode_buffer_tail = state->decode_buffer;\n> > + state->free_decode_buffer = true;\n> > + }\n> >\n> > Maybe change XLogReaderSetDecodeBuffer to also handle allocation and use it\n> > here too? Otherwise XLogReaderSetDecodeBuffer should probably go in 0002 as\n> > the only caller is the recovery prefetching.\n> \n> I don't think it matters much?\n\nThe thing is that for now the only caller to XLogReaderSetDecodeBuffer (in\n0002) only uses it to set the length, so a buffer is actually never passed to\nthat function. Since frontend code can rely on a palloc emulation, is there\nreally a use case to use e.g. some stack buffer there, or something in a\nspecific memory context? It seems to be the only use cases for having\nXLogReaderSetDecodeBuffer() rather than simply a\nXLogReaderSetDecodeBufferSize(). But overall I agree it doesn't matter much,\nso no objection to keep it as-is.\n\n> > It's also not particulary obvious why XLogFindNextRecord() doesn't check for\n> > this value. AFAICS callers don't (and should never) call it with a\n> > nonblocking == true state, maybe add an assert for that?\n> \n> Fair point. I have now explicitly cleared that flag. (I don't much\n> like state->nonblocking, which might be better as an argument to\n> page_read(), but in fact I don't like the fact that page_read\n> callbacks are blocking in the first place, which is why I liked\n> Horiguchi-san's patch to get rid of that... but that can be a subject\n> for later work.)\n\nAgreed.\n\n> > static void\n> > ResetDecoder(XLogReaderState *state)\n> > {\n> > [...]\n> > + /* Reset the decoded record queue, freeing any oversized records. */\n> > + while ((r = state->decode_queue_tail))\n> >\n> > nit: I think it's better to explicitly check for the assignment being != NULL,\n> > and existing code is more frequently written this way AFAICS.\n> \n> I think it's perfectly normal idiomatic C, but if you think it's\n> clearer that way, OK, done like that.\n\nThe thing I don't like about this form is that you can never be sure that an\nassignment was really meant unless you read the rest of the nearby code. Other\nthan that agreed, if perfectly normal idiomatic C.\n\n> I realised that this version has broken -DWAL_DEBUG. I'll fix that\n> shortly, but I wanted to post this update ASAP, so here's a new\n> version.\n\n+ * Returns XLREAD_WOULDBLOCK if he requested data can't be read without\n+ * waiting. This can be returned only if the installed page_read callback\n\ntypo: \"the\" requested data.\n\nOther than that it all looks good to me!\n\n> The other thing I need to change is that I should turn on\n> recovery_prefetch for platforms that support it (ie Linux and maybe\n> NetBSD only for now), in the tests. Right now you need to put\n> recovery_prefetch=on in a file and then run the tests with\n> \"TEMP_CONFIG=path_to_that make -C src/test/recovery check\" to\n> excercise much of 0002.\n\n+1 with Andres' idea to have a \"try\" setting.\n\n\n",
"msg_date": "Fri, 11 Mar 2022 16:27:37 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Mar 11, 2022 at 9:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > Also, is it worth an assert (likely at the top of the function) for that?\n> >\n> > How could I assert that EndRecPtr has the right value?\n>\n> Sorry, I meant to assert that some value was assigned (!XLogRecPtrIsInvalid).\n> It can only make sure that the first call is done after XLogBeginRead /\n> XLogFindNextRecord, but that's better than nothing and consistent with the top\n> comment.\n\nDone.\n\n> + * Returns XLREAD_WOULDBLOCK if he requested data can't be read without\n> + * waiting. This can be returned only if the installed page_read callback\n>\n> typo: \"the\" requested data.\n\nFixed.\n\n> Other than that it all looks good to me!\n\nThanks!\n\n> > The other thing I need to change is that I should turn on\n> > recovery_prefetch for platforms that support it (ie Linux and maybe\n> > NetBSD only for now), in the tests. Right now you need to put\n> > recovery_prefetch=on in a file and then run the tests with\n> > \"TEMP_CONFIG=path_to_that make -C src/test/recovery check\" to\n> > excercise much of 0002.\n>\n> +1 with Andres' idea to have a \"try\" setting.\n\nDone. The default is still \"off\" for now, but in\n027_stream_regress.pl I set it to \"try\".\n\nI also fixed the compile failure with -DWAL_DEBUG, and checked that\noutput looks sane with wal_debug=on.",
"msg_date": "Mon, 14 Mar 2022 18:15:59 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 06:15:59PM +1300, Thomas Munro wrote:\n> On Fri, Mar 11, 2022 at 9:27 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > > > Also, is it worth an assert (likely at the top of the function) for that?\n> > >\n> > > How could I assert that EndRecPtr has the right value?\n> >\n> > Sorry, I meant to assert that some value was assigned (!XLogRecPtrIsInvalid).\n> > It can only make sure that the first call is done after XLogBeginRead /\n> > XLogFindNextRecord, but that's better than nothing and consistent with the top\n> > comment.\n>\n> Done.\n\nJust a small detail: I would move that assert at the top of the function as it\nshould always be valid.\n>\n> I also fixed the compile failure with -DWAL_DEBUG, and checked that\n> output looks sane with wal_debug=on.\n\nGreat! I'm happy with 0001 and I think it's good to go!\n>\n> > > The other thing I need to change is that I should turn on\n> > > recovery_prefetch for platforms that support it (ie Linux and maybe\n> > > NetBSD only for now), in the tests. Right now you need to put\n> > > recovery_prefetch=on in a file and then run the tests with\n> > > \"TEMP_CONFIG=path_to_that make -C src/test/recovery check\" to\n> > > excercise much of 0002.\n> >\n> > +1 with Andres' idea to have a \"try\" setting.\n>\n> Done. The default is still \"off\" for now, but in\n> 027_stream_regress.pl I set it to \"try\".\n\nGreat too! Unless you want to commit both patches right now I'd like to review\n0002 too (this week), as I barely look into it for now.\n\n\n",
"msg_date": "Mon, 14 Mar 2022 15:17:13 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Mon, Mar 14, 2022 at 8:17 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> Great! I'm happy with 0001 and I think it's good to go!\n\nI'll push 0001 today to let the build farm chew on it for a few days\nbefore moving to 0002.\n\n\n",
"msg_date": "Fri, 18 Mar 2022 09:59:31 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Mar 18, 2022 at 9:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I'll push 0001 today to let the build farm chew on it for a few days\n> before moving to 0002.\n\nClearly 018_wal_optimize.pl is flapping and causing recoveryCheck to\nfail occasionally, but that predates the above commit. I didn't\nfollow the existing discussion on that, so I'll try to look into that\ntomorrow.\n\nHere's a rebase of the 0002 patch, now called 0001",
"msg_date": "Sun, 20 Mar 2022 17:36:38 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Sun, Mar 20, 2022 at 5:36 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> Clearly 018_wal_optimize.pl is flapping\n\nCorrection, 019_replslot_limit.pl, discussed at\nhttps://www.postgresql.org/message-id/flat/83b46e5f-2a52-86aa-fa6c-8174908174b8%40iki.fi\n.\n\n\n",
"msg_date": "Sun, 20 Mar 2022 20:52:04 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\n\nOn Sun, Mar 20, 2022 at 05:36:38PM +1300, Thomas Munro wrote:\n> On Fri, Mar 18, 2022 at 9:59 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> > I'll push 0001 today to let the build farm chew on it for a few days\n> > before moving to 0002.\n> \n> Clearly 018_wal_optimize.pl is flapping and causing recoveryCheck to\n> fail occasionally, but that predates the above commit. I didn't\n> follow the existing discussion on that, so I'll try to look into that\n> tomorrow.\n> \n> Here's a rebase of the 0002 patch, now called 0001\n\nSo I finally finished looking at this patch. Here again, AFAICS the feature is\nworking as expected and I didn't find any problem. I just have some minor\ncomments, like for the previous patch.\n\nFor the docs:\n\n+ Whether to try to prefetch blocks that are referenced in the WAL that\n+ are not yet in the buffer pool, during recovery. Valid values are\n+ <literal>off</literal> (the default), <literal>on</literal> and\n+ <literal>try</literal>. The setting <literal>try</literal> enables\n+ prefetching only if the operating system provides the\n+ <function>posix_fadvise</function> function, which is currently used\n+ to implement prefetching. Note that some operating systems provide the\n+ function, but don't actually perform any prefetching.\n\nIs there any reason not to change it to try? I'm wondering if some system says\nthat the function exists but simply raise an error if you actually try to use\nit. I think that at least WSL does that for some functions.\n\n+ <para>\n+ The <xref linkend=\"guc-recovery-prefetch\"/> parameter can\n+ be used to improve I/O performance during recovery by instructing\n+ <productname>PostgreSQL</productname> to initiate reads\n+ of disk blocks that will soon be needed but are not currently in\n+ <productname>PostgreSQL</productname>'s buffer pool.\n+ The <xref linkend=\"guc-maintenance-io-concurrency\"/> and\n+ <xref linkend=\"guc-wal-decode-buffer-size\"/> settings limit prefetching\n+ concurrency and distance, respectively.\n+ By default, prefetching in recovery is disabled.\n+ </para>\n\nI think that \"improving I/O performance\" is a bit misleading, maybe reduce I/O\nwait time or something like that? Also, I don't know if we need to be that\nprecise, but maybe we should say that it's the underlying kernel that will\n(asynchronously) initiate the reads, and postgres will simply notifies it.\n\n\n+ <para>\n+ The <structname>pg_stat_prefetch_recovery</structname> view will contain only\n+ one row. It is filled with nulls if recovery is not running or WAL\n+ prefetching is not enabled. See <xref linkend=\"guc-recovery-prefetch\"/>\n+ for more information.\n+ </para>\n\nThat's not the implemented behavior as far as I can see. It just prints whatever is in SharedStats\nregardless of the recovery state or the prefetch_wal setting (assuming that\nthere's no pending reset request). Similarly, there's a mention that\npg_stat_reset_shared('wal') will reset the stats, but I don't see anything\ncalling XLogPrefetchRequestResetStats().\n\nFinally, I think we should documented what are the cumulated counters in that\nview (that should get reset) and the dynamic counters (that shouldn't get\nreset).\n\nFor the code:\n\n bool\n XLogRecGetBlockTag(XLogReaderState *record, uint8 block_id,\n RelFileNode *rnode, ForkNumber *forknum, BlockNumber *blknum)\n+{\n+ return XLogRecGetBlockInfo(record, block_id, rnode, forknum, blknum, NULL);\n+}\n+\n+bool\n+XLogRecGetBlockInfo(XLogReaderState *record, uint8 block_id,\n+ RelFileNode *rnode, ForkNumber *forknum,\n+ BlockNumber *blknum,\n+ Buffer *prefetch_buffer)\n {\n\nIt's missing comments on that function. XLogRecGetBlockTag comments should\nprobably be reworded at the same time.\n\n+ReadRecord(XLogPrefetcher *xlogprefetcher, int emode,\n bool fetching_ckpt, TimeLineID replayTLI)\n {\n XLogRecord *record;\n+ XLogReaderState *xlogreader = XLogPrefetcherReader(xlogprefetcher);\n\nnit: maybe name it XLogPrefetcherGetReader()?\n\n * containing it (if not open already), and returns true. When end of standby\n * mode is triggered by the user, and there is no more WAL available, returns\n * false.\n+ *\n+ * If nonblocking is true, then give up immediately if we can't satisfy the\n+ * request, returning XLREAD_WOULDBLOCK instead of waiting.\n */\n-static bool\n+static XLogPageReadResult\n WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n\nThe comment still mentions a couple of time returning true/false rather than\nXLREAD_*, same for at least XLogPageRead().\n\n@@ -3350,6 +3392,14 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n */\n if (lastSourceFailed)\n {\n+ /*\n+ * Don't allow any retry loops to occur during nonblocking\n+ * readahead. 
Let the caller process everything that has been\n+ * decoded already first.\n+ */\n+ if (nonblocking)\n+ return XLREAD_WOULDBLOCK;\n\nIs that really enough? I'm wondering if the code path in ReadRecord() that\nforces lastSourceFailed to False while it actually failed when switching into\narchive recovery (xlogrecovery.c around line 3044) can be problematic here.\n\n\n\t\t{\"wal_decode_buffer_size\", PGC_POSTMASTER, WAL_ARCHIVE_RECOVERY,\n\t\t\tgettext_noop(\"Maximum buffer size for reading ahead in the WAL during recovery.\"),\n\t\t\tgettext_noop(\"This controls the maximum distance we can read ahead in the WAL to prefetch referenced blocks.\"),\n\t\t\tGUC_UNIT_BYTE\n\t\t},\n\t\t&wal_decode_buffer_size,\n\t\t512 * 1024, 64 * 1024, INT_MAX,\n\nShould the max be MaxAllocSize?\n\n\n+ /* Do we have a clue where the buffer might be already? */\n+ if (BufferIsValid(recent_buffer) &&\n+ mode == RBM_NORMAL &&\n+ ReadRecentBuffer(rnode, forknum, blkno, recent_buffer))\n+ {\n+ buffer = recent_buffer;\n+ goto recent_buffer_fast_path;\n+ }\n\nShould this increment (local|shared)_blks_hit, since ReadRecentBuffer doesn't?\n\nMissed in the previous patch: XLogDecodeNextRecord() isn't a trivial function,\nso some comments would be helpful.\n\n\nxlogprefetcher.c:\n\n+ * data. XLogRecBufferForRedo() cooperates uses information stored in the\n+ * decoded record to find buffers efficiently.\n\nI'm not sure what you wanted to say here. Also, I don't see any\nXLogRecBufferForRedo() anywhere, I'm assuming it's\nXLogReadBufferForRedo?\n\n+/*\n+ * A callback that reads ahead in the WAL and tries to initiate one IO.\n+ */\n+static LsnReadQueueNextStatus\n+XLogPrefetcherNextBlock(uintptr_t pgsr_private, XLogRecPtr *lsn)\n\nShould there be a bit more comments about what this function is supposed to\nenforce?\n\nI'm wondering if it's a bit overkill to implement this as a callback. Do you\nhave near future use cases in mind? 
For now no other code could use the\ninfrastructure at all as the lrq is private, so some changes will be needed to\nmake it truly configurable anyway.\n\nIf we keep it as a callback, I think it would make sense to extract some part,\nlike the main prefetch filters / global-limit logic, so other possible\nimplementations can use it if needed. It would also help to reduce this\nfunction a bit, as it's somewhat long.\n\nAlso, about those filters:\n\n+ if (rmid == RM_XLOG_ID)\n+ {\n+ if (record_type == XLOG_CHECKPOINT_SHUTDOWN ||\n+ record_type == XLOG_END_OF_RECOVERY)\n+ {\n+ /*\n+ * These records might change the TLI. Avoid potential\n+ * bugs if we were to allow \"read TLI\" and \"replay TLI\" to\n+ * differ without more analysis.\n+ */\n+ prefetcher->no_readahead_until = record->lsn;\n+ }\n+ }\n\nShould there be a note that it's still ok to process this record in the loop\njust after, as it won't contain any prefetchable data, or simply jump to the\nend of that loop?\n\n+/*\n+ * Increment a counter in shared memory. This is equivalent to *counter++ on a\n+ * plain uint64 without any memory barrier or locking, except on platforms\n+ * where readers can't read uint64 without possibly observing a torn value.\n+ */\n+static inline void\n+XLogPrefetchIncrement(pg_atomic_uint64 *counter)\n+{\n+ Assert(AmStartupProcess() || !IsUnderPostmaster);\n+ pg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n+}\n\nI'm curious about this one. 
Is it to avoid expensive locking on platforms that\ndon't have a lockless pg_atomic_fetch_add_u64?\n\nAlso, it's only correct because there can only be a single prefetcher, so you\ncan't have concurrent increment of the same counter right?\n\n+Datum\n+pg_stat_get_prefetch_recovery(PG_FUNCTION_ARGS)\n+{\n[...]\n\nThis function could use the new SetSingleFuncCall() function introduced in\n9e98583898c.\n\nAnd finally:\n\ndiff --git a/src/backend/utils/misc/postgresql.conf.sample b/src/backend/utils/misc/postgresql.conf.sample\nindex 4cf5b26a36..0a6c7bd83e 100644\n--- a/src/backend/utils/misc/postgresql.conf.sample\n+++ b/src/backend/utils/misc/postgresql.conf.sample\n@@ -241,6 +241,11 @@\n #max_wal_size = 1GB\n #min_wal_size = 80MB\n\n+# - Prefetching during recovery -\n+\n+#wal_decode_buffer_size = 512kB # lookahead window used for prefetching\n\nThis one should be documented as \"(change requires restart)\"\n\n\n",
"msg_date": "Mon, 21 Mar 2022 16:29:16 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Mon, Mar 21, 2022 at 9:29 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> So I finally finished looking at this patch. Here again, AFAICS the feature is\n> working as expected and I didn't find any problem. I just have some minor\n> comments, like for the previous patch.\n\nThanks very much for the review. I've attached a new version\naddressing most of your feedback, and also rebasing over the new\nWAL-logged CREATE DATABASE. I've also fixed a couple of bugs (see\nend).\n\n> For the docs:\n>\n> + Whether to try to prefetch blocks that are referenced in the WAL that\n> + are not yet in the buffer pool, during recovery. Valid values are\n> + <literal>off</literal> (the default), <literal>on</literal> and\n> + <literal>try</literal>. The setting <literal>try</literal> enables\n> + prefetching only if the operating system provides the\n> + <function>posix_fadvise</function> function, which is currently used\n> + to implement prefetching. Note that some operating systems provide the\n> + function, but don't actually perform any prefetching.\n>\n> Is there any reason not to change it to try? I'm wondering if some system says\n> that the function exists but simply raise an error if you actually try to use\n> it. I think that at least WSL does that for some functions.\n\nYeah, we could just default it to try. Whether we should ship that\nway is another question, but done for now.\n\nI don't think there are any supported systems that have a\nposix_fadvise() that fails with -1, or we'd know about it, because\nwe already use it in other places. 
We do support one OS that provides\na dummy function in libc that does nothing at all (Solaris/illumos),\nand at least a couple that enter the kernel but are known to do\nnothing at all for WILLNEED (AIX, FreeBSD).\n\n> + <para>\n> + The <xref linkend=\"guc-recovery-prefetch\"/> parameter can\n> + be used to improve I/O performance during recovery by instructing\n> + <productname>PostgreSQL</productname> to initiate reads\n> + of disk blocks that will soon be needed but are not currently in\n> + <productname>PostgreSQL</productname>'s buffer pool.\n> + The <xref linkend=\"guc-maintenance-io-concurrency\"/> and\n> + <xref linkend=\"guc-wal-decode-buffer-size\"/> settings limit prefetching\n> + concurrency and distance, respectively.\n> + By default, prefetching in recovery is disabled.\n> + </para>\n>\n> I think that \"improving I/O performance\" is a bit misleading, maybe reduce I/O\n> wait time or something like that? Also, I don't know if we need to be that\n> precise, but maybe we should say that it's the underlying kernel that will\n> (asynchronously) initiate the reads, and postgres will simply notifies it.\n\nUpdated with this new text:\n\n The <xref linkend=\"guc-recovery-prefetch\"/> parameter can be used to reduce\n I/O wait times during recovery by instructing the kernel to initiate reads\n of disk blocks that will soon be needed but are not currently in\n <productname>PostgreSQL</productname>'s buffer pool and will soon be read.\n\n> + <para>\n> + The <structname>pg_stat_prefetch_recovery</structname> view will contain only\n> + one row. It is filled with nulls if recovery is not running or WAL\n> + prefetching is not enabled. See <xref linkend=\"guc-recovery-prefetch\"/>\n> + for more information.\n> + </para>\n>\n> That's not the implemented behavior as far as I can see. It just prints whatever is in SharedStats\n> regardless of the recovery state or the prefetch_wal setting (assuming that\n> there's no pending reset request).\n\nYeah. 
Updated text: \"It is filled with nulls if recovery has not run\nor ...\".\n\n> Similarly, there's a mention that\n> pg_stat_reset_shared('wal') will reset the stats, but I don't see anything\n> calling XLogPrefetchRequestResetStats().\n\nIt's 'prefetch_recovery', not 'wal', but yeah, oops, it looks like I\ngot carried away between v18 and v19 while simplifying the stats and\nlost a hunk I should have kept. Fixed.\n\n> Finally, I think we should documented what are the cumulated counters in that\n> view (that should get reset) and the dynamic counters (that shouldn't get\n> reset).\n\nOK, done.\n\n> For the code:\n>\n> bool\n> XLogRecGetBlockTag(XLogReaderState *record, uint8 block_id,\n> RelFileNode *rnode, ForkNumber *forknum, BlockNumber *blknum)\n> +{\n> + return XLogRecGetBlockInfo(record, block_id, rnode, forknum, blknum, NULL);\n> +}\n> +\n> +bool\n> +XLogRecGetBlockInfo(XLogReaderState *record, uint8 block_id,\n> + RelFileNode *rnode, ForkNumber *forknum,\n> + BlockNumber *blknum,\n> + Buffer *prefetch_buffer)\n> {\n>\n> It's missing comments on that function. XLogRecGetBlockTag comments should\n> probably be reworded at the same time.\n\nNew comment added for XLogRecGetBlockInfo(). Wish I could come up\nwith a better name for that... Not quite sure what you thought I should\nchange about XLogRecGetBlockTag().\n\n> +ReadRecord(XLogPrefetcher *xlogprefetcher, int emode,\n> bool fetching_ckpt, TimeLineID replayTLI)\n> {\n> XLogRecord *record;\n> + XLogReaderState *xlogreader = XLogPrefetcherReader(xlogprefetcher);\n>\n> nit: maybe name it XLogPrefetcherGetReader()?\n\nOK.\n\n> * containing it (if not open already), and returns true. 
When end of standby\n> * mode is triggered by the user, and there is no more WAL available, returns\n> * false.\n> + *\n> + * If nonblocking is true, then give up immediately if we can't satisfy the\n> + * request, returning XLREAD_WOULDBLOCK instead of waiting.\n> */\n> -static bool\n> +static XLogPageReadResult\n> WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n>\n> The comment still mentions a couple of time returning true/false rather than\n> XLREAD_*, same for at least XLogPageRead().\n\nFixed.\n\n> @@ -3350,6 +3392,14 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n> */\n> if (lastSourceFailed)\n> {\n> + /*\n> + * Don't allow any retry loops to occur during nonblocking\n> + * readahead. Let the caller process everything that has been\n> + * decoded already first.\n> + */\n> + if (nonblocking)\n> + return XLREAD_WOULDBLOCK;\n>\n> Is that really enough? I'm wondering if the code path in ReadRecord() that\n> forces lastSourceFailed to False while it actually failed when switching into\n> archive recovery (xlogrecovery.c around line 3044) can be problematic here.\n\nI don't see the problem scenario, could you elaborate?\n\n> {\"wal_decode_buffer_size\", PGC_POSTMASTER, WAL_ARCHIVE_RECOVERY,\n> gettext_noop(\"Maximum buffer size for reading ahead in the WAL during recovery.\"),\n> gettext_noop(\"This controls the maximum distance we can read ahead in the WAL to prefetch referenced blocks.\"),\n> GUC_UNIT_BYTE\n> },\n> &wal_decode_buffer_size,\n> 512 * 1024, 64 * 1024, INT_MAX,\n>\n> Should the max be MaxAllocSize?\n\nHmm. OK, done.\n\n> + /* Do we have a clue where the buffer might be already? */\n> + if (BufferIsValid(recent_buffer) &&\n> + mode == RBM_NORMAL &&\n> + ReadRecentBuffer(rnode, forknum, blkno, recent_buffer))\n> + {\n> + buffer = recent_buffer;\n> + goto recent_buffer_fast_path;\n> + }\n>\n> Should this increment (local|shared)_blks_hit, since ReadRecentBuffer doesn't?\n\nHmm. 
I guess ReadRecentBuffer() should really do that. Done.\n\n> > Missed in the previous patch: XLogDecodeNextRecord() isn't a trivial function,\n> > so some comments would be helpful.\n>\n> OK, I'll come back to that.\n\n> xlogprefetcher.c:\n>\n> + * data. XLogRecBufferForRedo() cooperates uses information stored in the\n> + * decoded record to find buffers efficiently.\n>\n> I'm not sure what you wanted to say here. Also, I don't see any\n> XLogRecBufferForRedo() anywhere, I'm assuming it's\n> XLogReadBufferForRedo?\n\nYeah, typos. I rewrote that comment.\n\n> +/*\n> + * A callback that reads ahead in the WAL and tries to initiate one IO.\n> + */\n> +static LsnReadQueueNextStatus\n> +XLogPrefetcherNextBlock(uintptr_t pgsr_private, XLogRecPtr *lsn)\n>\n> Should there be a bit more comments about what this function is supposed to\n> enforce?\n\nI have added a comment to explain.\n\n> I'm wondering if it's a bit overkill to implement this as a callback. Do you\n> have near future use cases in mind? For now no other code could use the\n> infrastructure at all as the lrq is private, so some changes will be needed to\n> make it truly configurable anyway.\n\nYeah. Actually, in the next step I want to throw away the lrq part,\nand keep just the XLogPrefetcherNextBlock() function, with some small\nmodifications.\n\nAdmittedly the control flow is a little confusing, but the point of\nthis architecture is to separate \"how to prefetch one more thing\" from\n\"when to prefetch, considering I/O depth and related constraints\".\nThe first thing, \"how\", is represented by XLogPrefetcherNextBlock().\nThe second thing, \"when\", is represented here by the\nLsnReadQueue/lrq_XXX stuff that is private in this file for now, but\nlater I will propose to replace that second thing with the\npg_streaming_read facility of commitfest entry 38/3316. This is a way\nof getting there step by step. 
I also wrote briefly about that here:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGJ7OqpdnbSTq5oK%3DdjSeVW2JMnrVPSm8JC-_dbN6Y7bpw%40mail.gmail.com\n\n> If we keep it as a callback, I think it would make sense to extract some part,\n> like the main prefetch filters / global-limit logic, so other possible\n> implementations can use it if needed. It would also help to reduce this\n> function a bit, as it's somewhat long.\n\nI can't imagine reusing any of those filtering things anywhere else.\nI admit that the function is kinda long...\n\n> Also, about those filters:\n>\n> + if (rmid == RM_XLOG_ID)\n> + {\n> + if (record_type == XLOG_CHECKPOINT_SHUTDOWN ||\n> + record_type == XLOG_END_OF_RECOVERY)\n> + {\n> + /*\n> + * These records might change the TLI. Avoid potential\n> + * bugs if we were to allow \"read TLI\" and \"replay TLI\" to\n> + * differ without more analysis.\n> + */\n> + prefetcher->no_readahead_until = record->lsn;\n> + }\n> + }\n>\n> Should there be a note that it's still ok to process this record in the loop\n> just after, as it won't contain any prefetchable data, or simply jump to the\n> end of that loop?\n\nComment added.\n\n> +/*\n> + * Increment a counter in shared memory. This is equivalent to *counter++ on a\n> + * plain uint64 without any memory barrier or locking, except on platforms\n> + * where readers can't read uint64 without possibly observing a torn value.\n> + */\n> +static inline void\n> +XLogPrefetchIncrement(pg_atomic_uint64 *counter)\n> +{\n> + Assert(AmStartupProcess() || !IsUnderPostmaster);\n> + pg_atomic_write_u64(counter, pg_atomic_read_u64(counter) + 1);\n> +}\n>\n> I'm curious about this one. Is it to avoid expensive locking on platforms that\n> don't have a lockless pg_atomic_fetch_add_u64?\n\nMy goal here is only to make sure that systems without\nPG_HAVE_8BYTE_SINGLE_COPY_ATOMICITY don't see bogus/torn values. 
On\nmore typical systems, I just want plain old counter++, for the CPU to\nfeel free to reorder, without the overheads of LOCK XADD.\n\n> +Datum\n> +pg_stat_get_prefetch_recovery(PG_FUNCTION_ARGS)\n> +{\n> [...]\n>\n> This function could use the new SetSingleFuncCall() function introduced in\n> 9e98583898c.\n\nOh, yeah, that looks much nicer!\n\n> +# - Prefetching during recovery -\n> +\n> +#wal_decode_buffer_size = 512kB # lookahead window used for prefetching\n>\n> This one should be documented as \"(change requires restart)\"\n\nDone.\n\nOther changes:\n\n1. The logic for handling relations and blocks that don't exist\n(presumably, yet) wasn't quite right. The previous version could\nraise an error in smgrnblocks() if a referenced relation doesn't exist\nat all on disk. I don't know how to actually reach that case\n(considering the analysis this thing does of SMGR create etc to avoid\ntouching relations that haven't been created yet), but if it is\npossible somehow, then it will handle this gracefully.\n\nTo check for missing relations I use smgrexists(). To make that fast,\nI changed it to not close segments when in recovery, which is OK\nbecause recovery already closes SMGR relations when replaying anything\nthat would unlink files.\n\n2. The logic for filtering out access to an entire database wasn't\nquite right. In this new version, that's necessary only for\nfile-based CREATE DATABASE, since that does bulk creation of relations\nwithout any individual WAL records to analyse. This works by using\n{inv, dbNode, inv} as a key in the filter hash table, but I was trying\nto look things up by {spcNode, dbNode, inv}. Fixed.\n\n3. The handling for XLOG_SMGR_CREATE was firing for every fork, but\nit really only needed to fire for the main fork, for now. (There's no\nreason at all this thing shouldn't prefetch other forks, that's just\nleft for later).\n\n4. 
To make it easier to see the filtering logic at work, I added code\nto log messages about that if you #define XLOGPREFETCHER_DEBUG_LEVEL.\nCould be extended to show more internal state and events...\n\n5. While retesting various scenarios, it bothered me that big seq\nscan UPDATEs would repeatedly issue posix_fadvise() for the same block\n(because multiple rows in a page are touched by consecutive records,\nand the page doesn't make it into the buffer pool until a bit later).\nI resurrected the defences I had against that a few versions back\nusing a small window of recent prefetches, which I'd originally\ndeveloped as a way to avoid explicit prefetches of sequential scans\n(prefetch 1, 2, 3, ...). That turned out to be useless superstition\nbased on ancient discussions in this mailing list, but I think it's\nstill useful to avoid obviously stupid sequences of repeat system\ncalls (prefetch 1, 1, 1, ...). So now it has a little one-cache-line\nsized window of history, to avoid doing that.\n\nI need to re-profile a few workloads after these changes, and then\nthere are a couple of bikeshed-colour items:\n\n1. It's completely arbitrary that it limits its lookahead to\nmaintenance_io_concurrency * 4 blockrefs ahead in the WAL. I have no\nprincipled reason to choose 4. In the AIO version of this (to\nfollow), that number of blocks finishes up getting pinned at the same\ntime, so more thought might be needed on that, but that doesn't apply\nhere yet, so it's a bit arbitrary.\n\n2. Defaults for wal_decode_buffer_size and maintenance_io_concurrency\nare likewise arbitrary.\n\n3. At some point in this long thread I was convinced to name the view\npg_stat_prefetch_recovery, but the GUC is called recovery_prefetch.\nThat seems silly...",
"msg_date": "Thu, 31 Mar 2022 22:49:32 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, Mar 31, 2022 at 10:49:32PM +1300, Thomas Munro wrote:\n> On Mon, Mar 21, 2022 at 9:29 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> > So I finally finished looking at this patch. Here again, AFAICS the feature is\n> > working as expected and I didn't find any problem. I just have some minor\n> > comments, like for the previous patch.\n>\n> Thanks very much for the review. I've attached a new version\n> addressing most of your feedback, and also rebasing over the new\n> WAL-logged CREATE DATABASE. I've also fixed a couple of bugs (see\n> end).\n>\n> > For the docs:\n> >\n> > + Whether to try to prefetch blocks that are referenced in the WAL that\n> > + are not yet in the buffer pool, during recovery. Valid values are\n> > + <literal>off</literal> (the default), <literal>on</literal> and\n> > + <literal>try</literal>. The setting <literal>try</literal> enables\n> > + prefetching only if the operating system provides the\n> > + <function>posix_fadvise</function> function, which is currently used\n> > + to implement prefetching. Note that some operating systems provide the\n> > + function, but don't actually perform any prefetching.\n> >\n> > Is there any reason not to change it to try? I'm wondering if some system says\n> > that the function exists but simply raise an error if you actually try to use\n> > it. I think that at least WSL does that for some functions.\n>\n> Yeah, we could just default it to try. Whether we should ship that\n> way is another question, but done for now.\n\nShould there be an associated pg15 open item for that, when the patch will be\ncommitted? Note that in wal.sgml, the patch still says:\n\n+ [...] By default, prefetching in\n+ recovery is disabled.\n\nI guess this should be changed even if we eventually choose to disable it by\ndefault?\n\n> I don't think there are any supported systems that have a\n> posix_fadvise() that fails with -1, or we'd know about it, because\n> we already use it in other places. 
We do support one OS that provides\n> a dummy function in libc that does nothing at all (Solaris/illumos),\n> and at least a couple that enter the kernel but are known to do\n> nothing at all for WILLNEED (AIX, FreeBSD).\n\nAh, I didn't know that, thanks for the info!\n\n> > bool\n> > XLogRecGetBlockTag(XLogReaderState *record, uint8 block_id,\n> > RelFileNode *rnode, ForkNumber *forknum, BlockNumber *blknum)\n> > +{\n> > + return XLogRecGetBlockInfo(record, block_id, rnode, forknum, blknum, NULL);\n> > +}\n> > +\n> > +bool\n> > +XLogRecGetBlockInfo(XLogReaderState *record, uint8 block_id,\n> > + RelFileNode *rnode, ForkNumber *forknum,\n> > + BlockNumber *blknum,\n> > + Buffer *prefetch_buffer)\n> > {\n> >\n> > It's missing comments on that function. XLogRecGetBlockTag comments should\n> > probably be reworded at the same time.\n>\n> New comment added for XLogRecGetBlockInfo(). Wish I could come up\n> with a better name for that... Not quite sure what you thought I should\n> change about XLogRecGetBlockTag().\n\nSince XLogRecGetBlockTag is now a wrapper for XLogRecGetBlockInfo, I thought it\nwould be better to document only the specific behavior for this one (so no\nprefetch_buffer), rather than duplicating the whole description in both places.\nIt seems like a good recipe to miss one of the comments the next time something\nis changed there.\n\nFor the name, why not the usual XLogRecGetBlockTagExtended()?\n\n> > @@ -3350,6 +3392,14 @@ WaitForWALToBecomeAvailable(XLogRecPtr RecPtr, bool randAccess,\n> > */\n> > if (lastSourceFailed)\n> > {\n> > + /*\n> > + * Don't allow any retry loops to occur during nonblocking\n> > + * readahead. Let the caller process everything that has been\n> > + * decoded already first.\n> > + */\n> > + if (nonblocking)\n> > + return XLREAD_WOULDBLOCK;\n> >\n> > Is that really enough? 
I'm wondering if the code path in ReadRecord() that\n> > forces lastSourceFailed to False while it actually failed when switching into\n> > archive recovery (xlogrecovery.c around line 3044) can be problematic here.\n>\n> I don't see the problem scenario, could you elaborate?\n\nSorry, I missed that in standby mode ReadRecord would keep going until a record\nis found, so no problem indeed.\n\n> > + /* Do we have a clue where the buffer might be already? */\n> > + if (BufferIsValid(recent_buffer) &&\n> > + mode == RBM_NORMAL &&\n> > + ReadRecentBuffer(rnode, forknum, blkno, recent_buffer))\n> > + {\n> > + buffer = recent_buffer;\n> > + goto recent_buffer_fast_path;\n> > + }\n> >\n> > Should this increment (local|shared)_blks_hit, since ReadRecentBuffer doesn't?\n>\n> Hmm. I guess ReadRecentBuffer() should really do that. Done.\n\nAh, I also thought it'd be better there but was assuming that there was some\npossible usage where it's not wanted. Good then!\n\nShould ReadRecentBuffer comment be updated to mention that pgBufferUsage is\n
FWIW that's the first place I looked when checking\nif the stats would be incremented.\n\n> > Missed in the previous patch: XLogDecodeNextRecord() isn't a trivial function,\n> > so some comments would be helpful.\n>\n> OK, I'll come back to that.\n\nOk!\n\n>\n> > +/*\n> > + * A callback that reads ahead in the WAL and tries to initiate one IO.\n> > + */\n> > +static LsnReadQueueNextStatus\n> > +XLogPrefetcherNextBlock(uintptr_t pgsr_private, XLogRecPtr *lsn)\n> >\n> > Should there be a bit more comments about what this function is supposed to\n> > enforce?\n>\n> I have added a comment to explain.\n\nsmall typos:\n\n+ * Returns LRQ_NEXT_IO if the next block reference and it isn't in the buffer\n+ * pool, [...]\n\nI guess s/if the next block/if there's a next block/ or s/and it//.\n\n+ * Returns LRQ_NO_IO if we examined the next block reference and found that it\n+ * was already in the buffer pool.\n\nshould be LRQ_NEXT_NO_IO, and also this is returned if prefetching is disabled\nor it the next block isn't prefetchable.\n\n> > I'm wondering if it's a bit overkill to implement this as a callback. Do you\n> > have near future use cases in mind? For now no other code could use the\n> > infrastructure at all as the lrq is private, so some changes will be needed to\n> > make it truly configurable anyway.\n>\n> Yeah. 
Actually, in the next step I want to throw away the lrq part,\n> and keep just the XLogPrefetcherNextBlock() function, with some small\n> modifications.\n\nAh I see, that makes sense then.\n>\n> Admittedly the control flow is a little confusing, but the point of\n> this architecture is to separate \"how to prefetch one more thing\" from\n> \"when to prefetch, considering I/O depth and related constraints\".\n> The first thing, \"how\", is represented by XLogPrefetcherNextBlock().\n> The second thing, \"when\", is represented here by the\n> LsnReadQueue/lrq_XXX stuff that is private in this file for now, but\n> later I will propose to replace that second thing with the\n> pg_streaming_read facility of commitfest entry 38/3316. This is a way\n> of getting there step by step. I also wrote briefly about that here:\n>\n> https://www.postgresql.org/message-id/CA%2BhUKGJ7OqpdnbSTq5oK%3DdjSeVW2JMnrVPSm8JC-_dbN6Y7bpw%40mail.gmail.com\n\nI unsurprisingly didn't read the direct IO patch, and also joined the\nprefetching thread quite recently so I missed that mail. Thanks for the\npointer!\n\n>\n> > If we keep it as a callback, I think it would make sense to extract some part,\n> > like the main prefetch filters / global-limit logic, so other possible\n> > implementations can use it if needed. It would also help to reduce this\n> > function a bit, as it's somewhat long.\n>\n> I can't imagine reusing any of those filtering things anywhere else.\n> I admit that the function is kinda long...\n\nYeah, I thought your plan was to provide custom prefetching method or something\nlike that. As-is, apart from making the function less long it wouldn't do\nmuch.\n\n> Other changes:\n> [...]\n> 3. The handling for XLOG_SMGR_CREATE was firing for every fork, but\n> it really only needed to fire for the main fork, for now. (There's no\n> reason at all this thing shouldn't prefetch other forks, that's just\n> left for later).\n\nAh indeed. 
While at it, should there be some comments on top of the file\nmentioning that only the main fork is prefetched?\n\n> 4. To make it easier to see the filtering logic at work, I added code\n> to log messages about that if you #define XLOGPREFETCHER_DEBUG_LEVEL.\n> Could be extended to show more internal state and events...\n\nFTR I also tested the patch defining this. I will probably define it on my\nbuildfarm animal when the patch is committed to make sure it doesn't get\nbroken.\n\n> 5. While retesting various scenarios, it bothered me that big seq\n> scan UPDATEs would repeatedly issue posix_fadvise() for the same block\n> (because multiple rows in a page are touched by consecutive records,\n> and the page doesn't make it into the buffer pool until a bit later).\n> I resurrected the defences I had against that a few versions back\n> using a small window of recent prefetches, which I'd originally\n> developed as a way to avoid explicit prefetches of sequential scans\n> (prefetch 1, 2, 3, ...). That turned out to be useless superstition\n> based on ancient discussions in this mailing list, but I think it's\n> still useful to avoid obviously stupid sequences of repeat system\n> calls (prefetch 1, 1, 1, ...). So now it has a little one-cache-line\n> sized window of history, to avoid doing that.\n\nNice!\n\n+ * To detect repeat access to the same block and skip useless extra system\n+ * calls, we remember a small windows of recently prefetched blocks.\n\nShould it be \"repeated\" access, and small window (singular)?\n\nAlso, I'm wondering if the \"seq\" part of the related pieces is a bit too\nspecific, as there could be other workloads that lead to repeated update of the\nsame blocks. Maybe it's ok to use it for internal variables, but the new\nskip_seq field seems a bit too obscure for some user facing thing. 
Maybe\nskip_same, skip_repeated or something like that?\n\n> I need to re-profile a few workloads after these changes, and then\n> there are a couple of bikeshed-colour items:\n>\n> 1. It's completely arbitrary that it limits its lookahead to\n> maintenance_io_concurrency * 4 blockrefs ahead in the WAL. I have no\n> principled reason to choose 4. In the AIO version of this (to\n> follow), that number of blocks finishes up getting pinned at the same\n> time, so more thought might be needed on that, but that doesn't apply\n> here yet, so it's a bit arbitrary.\n\nYeah, I don't see that as a blocker for now. Maybe use some #define to make it\nmore obvious though, as it's a bit hidden in the code right now?\n\n> 3. At some point in this long thread I was convinced to name the view\n> pg_stat_prefetch_recovery, but the GUC is called recovery_prefetch.\n> That seems silly...\n\nFWIW I prefer recovery_prefetch to prefetch_recovery.\n\n\n",
"msg_date": "Mon, 4 Apr 2022 11:12:31 +0800",
"msg_from": "Julien Rouhaud <rjuju123@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
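The "small window of recent prefetches" discussed in point 5 of the exchange above can be sketched roughly as a tiny ring buffer of recently hinted block numbers. This is a simplified illustration only — the names (`PrefetchWindow`, `prefetch_window_seen`) are hypothetical and do not match the committed xlogprefetcher.c code:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: a small, fixed-size window of recently prefetched
 * block numbers, used to suppress repeated posix_fadvise() calls for the
 * same block (the "prefetch 1, 1, 1, ..." pattern described above). */
#define PREFETCH_SEEN_WINDOW 8

typedef struct PrefetchWindow
{
    uint32_t    blocks[PREFETCH_SEEN_WINDOW];
    int         next;               /* next slot to overwrite (ring buffer) */
    int         count;              /* number of valid entries so far */
} PrefetchWindow;

/* Return true if blkno was prefetched recently (caller should skip the
 * system call); otherwise remember it and return false. */
static bool
prefetch_window_seen(PrefetchWindow *w, uint32_t blkno)
{
    for (int i = 0; i < w->count; i++)
    {
        if (w->blocks[i] == blkno)
            return true;            /* duplicate: no second fadvise needed */
    }
    w->blocks[w->next] = blkno;
    w->next = (w->next + 1) % PREFETCH_SEEN_WINDOW;
    if (w->count < PREFETCH_SEEN_WINDOW)
        w->count++;
    return false;
}
```

With an 8-entry window the whole structure fits in roughly one cache line, which matches the "one-cache-line sized window of history" idea, while a linear scan over 8 entries stays far cheaper than a wasted system call.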
{
"msg_contents": "On Mon, Apr 4, 2022 at 3:12 PM Julien Rouhaud <rjuju123@gmail.com> wrote:\n> [review]\n\nThanks! I took almost all of your suggestions about renaming things,\ncomments, docs and moving a magic number into a macro.\n\nMinor changes:\n\n1. Rebased over the shmem stats changes and others that have just\nlanded today (woo!). The way my simple SharedStats object works and\nis reset looks a little primitive next to the shiny new stats\ninfrastructure, but I can always adjust that in a follow-up patch if\nrequired.\n\n2. It was a bit annoying that the pg_stat_recovery_prefetch view\nwould sometimes show stale numbers when waiting for WAL to be\nstreamed, since that happens at arbitrary points X bytes apart in the\nWAL. Now it also happens before sleeping/waiting and when recovery\nends.\n\n3. Last year, commit a55a9847 synchronised config.sgml with guc.c's\ncategories. A couple of hunks in there that modified the previous\nversion of this work before it all got reverted. So I've re-added the\nWAL_RECOVERY GUC category, to match the new section in config.sgml.\n\nAbout test coverage, the most interesting lines of xlogprefetcher.c\nthat stand out as unreached in a gcov report are in the special\nhandling for the new CREATE DATABASE in file-copy mode -- but that's\nprobably something to raise in the thread that introduced that new\nfunctionality without a test. 
I've tested that code locally; if you\ndefine XLOGPREFETCHER_DEBUG_LEVEL you'll see that it won't touch\nanything in the new database until recovery has replayed the\nfile-copy.\n\nAs for current CI-vs-buildfarm blind spots that recently bit me and\nothers, I also tested -m32 and -fsanitize=undefined,unaligned builds.\n\nI reran one of the quick pgbench/crash/drop-caches/recover tests I had\nlying around and saw a 17s -> 6s speedup with FPW off (you need much\nlonger tests to see speedup with them on, so this is a good way for\nquick sanity checks -- see Tomas V's results for long runs with FPWs\nand curved effects).\n\nWith that... I've finally pushed the 0002 patch and will be watching\nthe build farm.\n\n\n",
"msg_date": "Thu, 7 Apr 2022 19:45:23 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "The docs seem to be wrong about the default.\n\n+ are not yet in the buffer pool, during recovery. Valid values are\n+ <literal>off</literal> (the default), <literal>on</literal> and\n+ <literal>try</literal>. The setting <literal>try</literal> enables\n\n+ concurrency and distance, respectively. By default, it is set to\n+ <literal>try</literal>, which enabled the feature on systems where\n+ <function>posix_fadvise</function> is available.\n\nShould say \"which enables\".\n\n+ {\n+ {\"recovery_prefetch\", PGC_SIGHUP, WAL_RECOVERY,\n+ gettext_noop(\"Prefetch referenced blocks during recovery\"),\n+ gettext_noop(\"Look ahead in the WAL to find references to uncached data.\")\n+ },\n+ &recovery_prefetch,\n+ RECOVERY_PREFETCH_TRY, recovery_prefetch_options,\n+ check_recovery_prefetch, assign_recovery_prefetch, NULL\n+ },\n\nCuriously, I reported a similar issue last year.\n\nOn Thu, Apr 08, 2021 at 10:37:04PM -0500, Justin Pryzby wrote:\n> --- a/doc/src/sgml/wal.sgml\n> +++ b/doc/src/sgml/wal.sgml\n> @@ -816,9 +816,7 @@\n> prefetching mechanism is most likely to be effective on systems\n> with <varname>full_page_writes</varname> set to\n> <varname>off</varname> (where that is safe), and where the working\n> - set is larger than RAM. By default, prefetching in recovery is enabled\n> - on operating systems that have <function>posix_fadvise</function>\n> - support.\n> + set is larger than RAM. By default, prefetching in recovery is disabled.\n> </para>\n> </sect1>\n\n\n",
"msg_date": "Thu, 7 Apr 2022 07:55:55 -0500",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Fri, Apr 8, 2022 at 12:55 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\n> The docs seem to be wrong about the default.\n>\n> + are not yet in the buffer pool, during recovery. Valid values are\n> + <literal>off</literal> (the default), <literal>on</literal> and\n> + <literal>try</literal>. The setting <literal>try</literal> enables\n\nFixed.\n\n> + concurrency and distance, respectively. By default, it is set to\n> + <literal>try</literal>, which enabled the feature on systems where\n> + <function>posix_fadvise</function> is available.\n>\n> Should say \"which enables\".\n\nFixed.\n\n> Curiously, I reported a similar issue last year.\n\nSorry. I guess both times we only agreed on what the default should\nbe in the final review round before commit, and I let the docs get out\nof sync (well, the default is mentioned in two places and I apparently\nended my search too soon, changing only one). I also found another\nrecently obsoleted sentence: the one about showing nulls sometimes was\nno longer true. Removed.\n\n\n",
"msg_date": "Fri, 8 Apr 2022 13:46:49 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi, \r\nThank you for developing the great feature. I tested this feature and checked the documentation. Currently, the documentation for the pg_stat_prefetch_recovery view is included in the description for the pg_stat_subscription view.\r\n\r\nhttps://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION\r\n\r\nIt is also not displayed in the list of \"28.2. The Statistics Collector\".\r\nhttps://www.postgresql.org/docs/devel/monitoring.html\r\n\r\nThe attached patch modifies the pg_stat_prefetch_recovery view to appear as a separate view.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n-----Original Message-----\r\nFrom: Thomas Munro <thomas.munro@gmail.com> \r\nSent: Friday, April 8, 2022 10:47 AM\r\nTo: Justin Pryzby <pryzby@telsasoft.com>\r\nCc: Tomas Vondra <tomas.vondra@enterprisedb.com>; Stephen Frost <sfrost@snowman.net>; Andres Freund <andres@anarazel.de>; Jakub Wartak <Jakub.Wartak@tomtom.com>; Alvaro Herrera <alvherre@2ndquadrant.com>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; Dmitry Dolgov <9erthalion6@gmail.com>; David Steele <david@pgmasters.net>; pgsql-hackers <pgsql-hackers@postgresql.org>\r\nSubject: Re: WIP: WAL prefetch (another approach)\r\n\r\nOn Fri, Apr 8, 2022 at 12:55 AM Justin Pryzby <pryzby@telsasoft.com> wrote:\r\n> The docs seem to be wrong about the default.\r\n>\r\n> + are not yet in the buffer pool, during recovery. Valid values are\r\n> + <literal>off</literal> (the default), <literal>on</literal> and\r\n> + <literal>try</literal>. The setting <literal>try</literal> \r\n> + enables\r\n\r\nFixed.\r\n\r\n> + concurrency and distance, respectively. By default, it is set to\r\n> + <literal>try</literal>, which enabled the feature on systems where\r\n> + <function>posix_fadvise</function> is available.\r\n>\r\n> Should say \"which enables\".\r\n\r\nFixed.\r\n\r\n> Curiously, I reported a similar issue last year.\r\n\r\nSorry. 
I guess both times we only agreed on what the default should be in the final review round before commit, and I let the docs get out of sync (well, the default is mentioned in two places and I apparently ended my search too soon, changing only one). I also found another recently obsoleted sentence: the one about showing nulls sometimes was no longer true. Removed.",
"msg_date": "Tue, 12 Apr 2022 09:01:51 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Tue, Apr 12, 2022 at 9:03 PM Shinoda, Noriyoshi (PN Japan FSIP)\n<noriyoshi.shinoda@hpe.com> wrote:\n> Thank you for developing the great feature. I tested this feature and checked the documentation. Currently, the documentation for the pg_stat_prefetch_recovery view is included in the description for the pg_stat_subscription view.\n>\n> https://www.postgresql.org/docs/devel/monitoring-stats.html#MONITORING-PG-STAT-SUBSCRIPTION\n\nHi! Thanks. I had just committed a fix before I saw your message,\nbecause there was already another report here:\n\nhttps://www.postgresql.org/message-id/flat/CAKrAKeVk-LRHMdyT6x_p33eF6dCorM2jed5h_eHdRdv0reSYTA%40mail.gmail.com\n\n\n",
"msg_date": "Tue, 12 Apr 2022 21:28:26 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Hi,\r\nThank you for your reply. \r\nI missed the message, sorry.\r\n\r\nRegards,\r\nNoriyoshi Shinoda\r\n\r\n-----Original Message-----\r\nFrom: Thomas Munro <thomas.munro@gmail.com> \r\nSent: Tuesday, April 12, 2022 6:28 PM\r\nTo: Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com>\r\nCc: Justin Pryzby <pryzby@telsasoft.com>; Tomas Vondra <tomas.vondra@enterprisedb.com>; Stephen Frost <sfrost@snowman.net>; Andres Freund <andres@anarazel.de>; Jakub Wartak <Jakub.Wartak@tomtom.com>; Alvaro Herrera <alvherre@2ndquadrant.com>; Tomas Vondra <tomas.vondra@2ndquadrant.com>; Dmitry Dolgov <9erthalion6@gmail.com>; David Steele <david@pgmasters.net>; pgsql-hackers <pgsql-hackers@postgresql.org>\r\nSubject: Re: WIP: WAL prefetch (another approach)\r\n\r\nOn Tue, Apr 12, 2022 at 9:03 PM Shinoda, Noriyoshi (PN Japan FSIP) <noriyoshi.shinoda@hpe.com> wrote:\r\n> Thank you for developing the great feature. I tested this feature and checked the documentation. Currently, the documentation for the pg_stat_prefetch_recovery view is included in the description for the pg_stat_subscription view.\r\n>\r\n> INVALID URI REMOVED\r\n> toring-stats.html*MONITORING-PG-STAT-SUBSCRIPTION__;Iw!!NpxR!xRu7zc4Hc\r\n> ZppB-32Fp3YfESPqJ7B4AOP_RF7QuYP-kCWidoiJ5txu9CW8sX61TfwddE$\r\n\r\nHi! Thanks. I had just committed a fix before I saw your message, because there was already another report here:\r\n\r\nhttps://www.postgresql.org/message-id/flat/CAKrAKeVk-LRHMdyT6x_p33eF6dCorM2jed5h_eHdRdv0reSYTA@mail.gmail.com \r\n",
"msg_date": "Tue, 12 Apr 2022 09:32:16 +0000",
"msg_from": "\"Shinoda, Noriyoshi (PN Japan FSIP)\" <noriyoshi.shinoda@hpe.com>",
"msg_from_op": false,
"msg_subject": "RE: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Thu, 7 Apr 2022 at 08:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n\n> With that... I've finally pushed the 0002 patch and will be watching\n> the build farm.\n\nThis is a nice feature if it is safe to turn off full_page_writes.\n\nWhen is it safe to do that? On which platform?\n\nI am not aware of any released software that allows full_page_writes\nto be safely disabled. Perhaps something has been released recently\nthat allows this? I think we have substantial documentation about\nsafety of other settings, so we should carefully document things here\nalso.\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 12 Apr 2022 14:58:17 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 4/12/22 15:58, Simon Riggs wrote:\n> On Thu, 7 Apr 2022 at 08:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n> \n>> With that... I've finally pushed the 0002 patch and will be watching\n>> the build farm.\n> \n> This is a nice feature if it is safe to turn off full_page_writes.\n> \n> When is it safe to do that? On which platform?\n> \n> I am not aware of any released software that allows full_page_writes\n> to be safely disabled. Perhaps something has been released recently\n> that allows this? I think we have substantial documentation about\n> safety of other settings, so we should carefully document things here\n> also.\n> \n\nI don't see why/how would an async prefetch make FPW unnecessary. Did\nanyone claim that be the case?\n\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 12 Apr 2022 17:41:00 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Tue, 12 Apr 2022 at 16:41, Tomas Vondra\n<tomas.vondra@enterprisedb.com> wrote:\n>\n> On 4/12/22 15:58, Simon Riggs wrote:\n> > On Thu, 7 Apr 2022 at 08:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n> >\n> >> With that... I've finally pushed the 0002 patch and will be watching\n> >> the build farm.\n> >\n> > This is a nice feature if it is safe to turn off full_page_writes.\n> >\n> > When is it safe to do that? On which platform?\n> >\n> > I am not aware of any released software that allows full_page_writes\n> > to be safely disabled. Perhaps something has been released recently\n> > that allows this? I think we have substantial documentation about\n> > safety of other settings, so we should carefully document things here\n> > also.\n> >\n>\n> I don't see why/how would an async prefetch make FPW unnecessary. Did\n> anyone claim that be the case?\n\nOther way around. FPWs make prefetch unnecessary.\nTherefore you would only want prefetch with FPW=off, AFAIK.\n\nOr put this another way: when is it safe and sensible to use\nrecovery_prefetch != off?\n\n-- \nSimon Riggs http://www.EnterpriseDB.com/\n\n\n",
"msg_date": "Tue, 12 Apr 2022 16:46:39 +0100",
"msg_from": "Simon Riggs <simon.riggs@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Simon Riggs <simon.riggs@enterprisedb.com> writes:\n\n> On Thu, 7 Apr 2022 at 08:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n>\n>> With that... I've finally pushed the 0002 patch and will be watching\n>> the build farm.\n>\n> This is a nice feature if it is safe to turn off full_page_writes.\n>\n> When is it safe to do that? On which platform?\n>\n> I am not aware of any released software that allows full_page_writes\n> to be safely disabled. Perhaps something has been released recently\n> that allows this? I think we have substantial documentation about\n> safety of other settings, so we should carefully document things here\n> also.\n\nOur WAL reliability docs claim that ZFS is safe against torn pages:\n\nhttps://www.postgresql.org/docs/current/wal-reliability.html:\n\n If you have file-system software that prevents partial page writes\n (e.g., ZFS), you can turn off this page imaging by turning off the\n full_page_writes parameter.\n\n- ilmari\n\n\n",
"msg_date": "Tue, 12 Apr 2022 16:57:41 +0100",
"msg_from": "=?utf-8?Q?Dagfinn_Ilmari_Manns=C3=A5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On 4/12/22 17:46, Simon Riggs wrote:\n> On Tue, 12 Apr 2022 at 16:41, Tomas Vondra\n> <tomas.vondra@enterprisedb.com> wrote:\n>>\n>> On 4/12/22 15:58, Simon Riggs wrote:\n>>> On Thu, 7 Apr 2022 at 08:46, Thomas Munro <thomas.munro@gmail.com> wrote:\n>>>\n>>>> With that... I've finally pushed the 0002 patch and will be watching\n>>>> the build farm.\n>>>\n>>> This is a nice feature if it is safe to turn off full_page_writes.\n>>>\n>>> When is it safe to do that? On which platform?\n>>>\n>>> I am not aware of any released software that allows full_page_writes\n>>> to be safely disabled. Perhaps something has been released recently\n>>> that allows this? I think we have substantial documentation about\n>>> safety of other settings, so we should carefully document things here\n>>> also.\n>>>\n>>\n>> I don't see why/how would an async prefetch make FPW unnecessary. Did\n>> anyone claim that be the case?\n> \n> Other way around. FPWs make prefetch unnecessary.\n> Therefore you would only want prefetch with FPW=off, AFAIK.\n> \n> Or put this another way: when is it safe and sensible to use\n> recovery_prefetch != off?\n> \n\nThat assumes the FPI stays in memory until the next modification, and\nthat can be untrue for a number of reasons. Long checkpoint interval\nwith enough random accesses in between is a nice example. See the\nbenchmarks I did a year ago (regular pgbench).\n\nOr imagine a r/o replica used to run analytics queries, that access so\nmuch data it evicts the buffers initialized by the FPI records.\n\nregards\n\n-- \nTomas Vondra\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Tue, 12 Apr 2022 18:07:20 +0200",
"msg_from": "Tomas Vondra <tomas.vondra@enterprisedb.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "> Other way around. FPWs make prefetch unnecessary.\n> Therefore you would only want prefetch with FPW=off, AFAIK.\n>\nA few scenarios I can imagine page prefetch can help are, 1/ A DR replica\ninstance that is smaller instance size than primary. Page prefetch can\nbring the pages back into memory in advance when they are evicted. This\nspeeds up the replay and is cost effective. 2/ Allows larger\ncheckpoint_timeout for the same recovery SLA and perhaps improved\nperformance? 3/ WAL prefetch (not pages by itself) can improve replay by\nitself (not sure if it was measured in isolation, Tomas V can comment on\nit). 4/ Read replica running analytical workload scenario Tomas V mentioned\nearlier.\n\n\n>\n> Or put this another way: when is it safe and sensible to use\n> recovery_prefetch != off?\n>\nWhen checkpoint_timeout is set large and under heavy write activity, on a\nread replica that has working set higher than the memory and receiving\nconstant updates from primary. This covers 1 & 4 above.\n\n\n> --\n> Simon Riggs http://www.EnterpriseDB.com/\n>\n>\n>",
"msg_date": "Tue, 12 Apr 2022 11:04:41 -0700",
"msg_from": "SATYANARAYANA NARLAPURAM <satyanarlapuram@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 3:57 AM Dagfinn Ilmari Mannsåker\n<ilmari@ilmari.org> wrote:\n> Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > This is a nice feature if it is safe to turn off full_page_writes.\n\nAs other have said/shown, it does also help if a block with FPW is\nevicted and then read back in during one checkpoint cycle, in other\nwords if the working set is larger than shared buffers.\n\nThis also provides infrastructure for proposals in the next cycle, as\npart of commitfest #3316:\n* in direct I/O mode, I/O stalls become more likely due to lack of\nkernel prefetching/double-buffering, so prefetching becomes more\nessential\n* even in buffered I/O mode when benefiting from free\ndouble-buffering, the copy from kernel buffer to user space buffer can\nbe finished in the background instead of calling pread() when you need\nthe page, but you need to start it sooner\n* adjacent blocks accessed by nearby records can be merged into a\nsingle scatter-read, for example with preadv() in the background\n* repeated buffer lookups, pins, locks (and maybe eventually replay)\nto the same page can be consolidated\n\nPie-in-the-sky ideas:\n* someone might eventually want to be able to replay in parallel\n(hard, but certainly requires lookahead)\n* I sure hope we'll eventually use different techniques for torn-page\nprotection to avoid the high online costs of FPW\n\n> > When is it safe to do that? On which platform?\n> >\n> > I am not aware of any released software that allows full_page_writes\n> > to be safely disabled. Perhaps something has been released recently\n> > that allows this? 
I think we have substantial documentation about\n> > safety of other settings, so we should carefully document things here\n> > also.\n>\n> Our WAL reliability docs claim that ZFS is safe against torn pages:\n>\n> https://www.postgresql.org/docs/current/wal-reliability.html:\n>\n> If you have file-system software that prevents partial page writes\n> (e.g., ZFS), you can turn off this page imaging by turning off the\n> full_page_writes parameter.\n\nUnfortunately, posix_fadvise(WILLNEED) doesn't do anything on ZFS\nright now :-(. I have some patches to fix that on Linux[1] and\nFreeBSD and it seems like there's a good chance of getting them\ncommitted based on feedback, but it needs some more work on tests and\nmmap integration. If anyone's interested in helping get that landed\nfaster, please ping me off-list.\n\n[1] https://github.com/openzfs/zfs/pull/9807\n\n\n",
"msg_date": "Wed, 13 Apr 2022 08:05:11 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
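The prefetch primitive this subthread turns on can be illustrated with a minimal sketch. The helper name `hint_willneed` is hypothetical; it simply wraps the real POSIX call `posix_fadvise(POSIX_FADV_WILLNEED)`, which — as Thomas notes above for ZFS at the time — may succeed while having no effect on file systems that ignore the hint:

```c
#define _XOPEN_SOURCE 600
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Hypothetical helper: ask the kernel to start reading a file range into
 * the page cache asynchronously.  len = 0 means "to end of file".  Returns
 * 0 on success, -1 if the file cannot be opened; note that "success" only
 * means the hint was accepted, not that any I/O was actually started. */
static int
hint_willneed(const char *path, off_t offset, off_t len)
{
    int         fd = open(path, O_RDONLY);
    int         rc;

    if (fd < 0)
        return -1;
    rc = posix_fadvise(fd, offset, len, POSIX_FADV_WILLNEED);
    close(fd);
    return rc;                  /* 0, or an errno value from posix_fadvise */
}
```

This "hint then read later" shape is why a WILLNEED no-op is silent: recovery still works, it just gets no overlap between I/O and replay, which is the behavior the ZFS patches linked above aim to fix.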
{
"msg_contents": "I believe that the WAL prefetch patch probably accounts for the\nintermittent errors that buildfarm member topminnow has shown\nsince it went in, eg [1]:\n\ndiff -U3 /home/nm/ext4/HEAD/pgsql/contrib/pg_walinspect/expected/pg_walinspect.out /home/nm/ext4/HEAD/pgsql.build/contrib/pg_walinspect/results/pg_walinspect.out\n--- /home/nm/ext4/HEAD/pgsql/contrib/pg_walinspect/expected/pg_walinspect.out\t2022-04-10 03:05:15.972622440 +0200\n+++ /home/nm/ext4/HEAD/pgsql.build/contrib/pg_walinspect/results/pg_walinspect.out\t2022-04-25 05:09:49.861642059 +0200\n@@ -34,11 +34,7 @@\n (1 row)\n \n SELECT COUNT(*) >= 0 AS ok FROM pg_get_wal_records_info_till_end_of_wal(:'wal_lsn1');\n- ok \n-----\n- t\n-(1 row)\n-\n+ERROR: could not read WAL at 0/1903E40\n SELECT COUNT(*) >= 0 AS ok FROM pg_get_wal_stats(:'wal_lsn1', :'wal_lsn2');\n ok \n ----\n@@ -46,11 +42,7 @@\n (1 row)\n \n SELECT COUNT(*) >= 0 AS ok FROM pg_get_wal_stats_till_end_of_wal(:'wal_lsn1');\n- ok \n-----\n- t\n-(1 row)\n-\n+ERROR: could not read WAL at 0/1903E40\n -- ===================================================================\n -- Test for filtering out WAL records of a particular table\n -- ===================================================================\n\n\nI've reproduced this manually on that machine, and confirmed that the\nproximate cause is that XLogNextRecord() is returning NULL because\nstate->decode_queue_head == NULL, without bothering to provide an errormsg\n(which doesn't seem very well thought out in itself). 
I obtained the\ncontents of the xlogreader struct at failure:\n\n(gdb) p *xlogreader\n$1 = {routine = {page_read = 0x594270 <read_local_xlog_page_no_wait>, \n segment_open = 0x593b44 <wal_segment_open>, \n segment_close = 0x593d38 <wal_segment_close>}, system_identifier = 0, \n private_data = 0x0, ReadRecPtr = 26230672, EndRecPtr = 26230752, \n abortedRecPtr = 26230752, missingContrecPtr = 26230784, \n overwrittenRecPtr = 0, DecodeRecPtr = 26230672, NextRecPtr = 26230752, \n PrevRecPtr = 0, record = 0x0, decode_buffer = 0xf25428 \"\\240\", \n decode_buffer_size = 65536, free_decode_buffer = true, \n decode_buffer_head = 0xf25428 \"\\240\", decode_buffer_tail = 0xf25428 \"\\240\", \n decode_queue_head = 0x0, decode_queue_tail = 0x0, \n readBuf = 0xf173f0 \"\\020\\321\\005\", readLen = 0, segcxt = {\n ws_dir = '\\000' <repeats 1023 times>, ws_segsize = 16777216}, seg = {\n ws_file = 25, ws_segno = 0, ws_tli = 1}, segoff = 0, \n latestPagePtr = 26222592, latestPageTLI = 1, currRecPtr = 26230752, \n currTLI = 1, currTLIValidUntil = 0, nextTLI = 0, \n readRecordBuf = 0xf1b3f8 \"<\", readRecordBufSize = 40960, \n errormsg_buf = 0xef3270 \"\", errormsg_deferred = false, nonblocking = false}\n\nI don't have an intuition about where to look beyond that, any\nsuggestions?\n\nWhat I do know so far is that while the failure reproduces fairly\nreliably under \"make check\" (more than half the time, which squares\nwith topminnow's history), it doesn't reproduce at all under \"make\ninstallcheck\" (after removing NO_INSTALLCHECK), which seems odd.\nMaybe it's dependent on how much WAL history the installation has\naccumulated?\n\nIt could be that this is a bug in pg_walinspect or a fault in its\ntest case; hard to tell since that got committed at about the same\ntime as the prefetch changes.\n\n\t\t\tregards, tom lane\n\n[1] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=topminnow&dt=2022-04-25%2001%3A48%3A47\n\n\n",
"msg_date": "Mon, 25 Apr 2022 14:11:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "Oh, one more bit of data: here's an excerpt from pg_waldump output after\nthe failed test:\n\nrmgr: Btree len (rec/tot): 72/ 72, tx: 727, lsn: 0/01903BC8, prev 0/01903B70, desc: INSERT_LEAF off 111, blkref #0: rel 1663/16384/2673 blk 9\nrmgr: Btree len (rec/tot): 72/ 72, tx: 727, lsn: 0/01903C10, prev 0/01903BC8, desc: INSERT_LEAF off 141, blkref #0: rel 1663/16384/2674 blk 7\nrmgr: Standby len (rec/tot): 42/ 42, tx: 727, lsn: 0/01903C58, prev 0/01903C10, desc: LOCK xid 727 db 16384 rel 16391 \nrmgr: Transaction len (rec/tot): 437/ 437, tx: 727, lsn: 0/01903C88, prev 0/01903C58, desc: COMMIT 2022-04-25 20:16:03.374197 CEST; inval msgs: catcache 80 catcache 79 catcache 80 catcache 79 catcache 55 catcache 54 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 catcache 7 catcache 6 snapshot 2608 relcache 16391\nrmgr: Heap len (rec/tot): 59/ 59, tx: 728, lsn: 0/01903E40, prev 0/01903C88, desc: INSERT+INIT off 1 flags 0x00, blkref #0: rel 1663/16384/16391 blk 0\nrmgr: Heap len (rec/tot): 59/ 59, tx: 728, lsn: 0/01903E80, prev 0/01903E40, desc: INSERT off 2 flags 0x00, blkref #0: rel 1663/16384/16391 blk 0\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 728, lsn: 0/01903EC0, prev 0/01903E80, desc: COMMIT 2022-04-25 20:16:03.379323 CEST\nrmgr: Heap len (rec/tot): 59/ 59, tx: 729, lsn: 0/01903EE8, prev 0/01903EC0, desc: INSERT off 3 flags 0x00, blkref #0: rel 1663/16384/16391 blk 0\nrmgr: Heap len (rec/tot): 59/ 59, tx: 729, lsn: 0/01903F28, prev 0/01903EE8, desc: INSERT off 4 flags 0x00, blkref #0: rel 1663/16384/16391 blk 0\nrmgr: Transaction len (rec/tot): 34/ 34, tx: 729, lsn: 0/01903F68, prev 0/01903F28, desc: COMMIT 2022-04-25 20:16:03.381720 CEST\n\nThe error is complaining about not being able to read 0/01903E40,\nwhich AFAICT is from the first \"INSERT INTO sample_tbl\" command,\nwhich most certainly ought to be down to disk at this point.\n\nAlso, 
I modified the test script to see what WAL LSNs it thought\nit was dealing with, and got\n\n+\\echo 'wal_lsn1 = ' :wal_lsn1\n+wal_lsn1 = 0/1903E40\n+\\echo 'wal_lsn2 = ' :wal_lsn2\n+wal_lsn2 = 0/1903EE8\n\nconfirming that idea of where 0/01903E40 is in the WAL history.\nSo this is sure looking like a bug somewhere in xlogreader.c,\nnot in pg_walinspect.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 25 Apr 2022 14:31:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Tue, Apr 26, 2022 at 6:11 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> I believe that the WAL prefetch patch probably accounts for the\n> intermittent errors that buildfarm member topminnow has shown\n> since it went in, eg [1]:\n>\n> diff -U3 /home/nm/ext4/HEAD/pgsql/contrib/pg_walinspect/expected/pg_walinspect.out /home/nm/ext4/HEAD/pgsql.build/contrib/pg_walinspect/results/pg_walinspect.out\n\nHmm, maybe but I suspect not. I think I might see what's happening here.\n\n> +ERROR: could not read WAL at 0/1903E40\n\n> I've reproduced this manually on that machine, and confirmed that the\n> proximate cause is that XLogNextRecord() is returning NULL because\n> state->decode_queue_head == NULL, without bothering to provide an errormsg\n> (which doesn't seem very well thought out in itself). I obtained the\n\nThanks for doing that. After several hours of trying I also managed\nto reproduce it on that gcc23 system (not at all sure why it doesn't\nshow up elsewhere; MIPS 32 bit layout may be a factor), and added some\ntrace to get some more clues. Still looking into it, but here is the\ncurrent hypothesis I'm testing:\n\n1. The reason there's a messageless ERROR in this case is because\nthere is new read_page callback logic introduced for pg_walinspect,\ncalled via read_local_xlog_page_no_wait(), which is like the old\nread_local_xlog_page() except that it returns -1 if you try to read\npast the current \"flushed\" LSN, and we have no queued message. An\nerror is then reported by XLogReadRecord(), and appears to the user.\n\n2. The reason pg_walinspect tries to read WAL data past the flushed\nLSN is because its GetWALRecordsInfo() function keeps calling\nXLogReadRecord() until EndRecPtr >= end_lsn, where end_lsn is taken\nfrom a snapshot of the flushed LSN, but I don't see where it takes\ninto account that the flushed LSN might momentarily fall in the middle\nof a record. 
In that case, xlogreader.c will try to read the next\npage, which fails because it's past the flushed LSN (see point 1).\n\nI will poke some more tomorrow to try to confirm this and try to come\nup with a fix.\n\n\n",
"msg_date": "Tue, 26 Apr 2022 18:11:20 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
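The hypothesis in point 2 above — that a snapshotted "flushed" LSN can fall in the middle of a record — can be shown with a toy model. `count_decodable` is a hypothetical helper, not pg_walinspect code; it illustrates why a loop reading records up to a flushed-LSN snapshot must stop at the last record that *ends* at or before the snapshot, since decoding a straddling record would read WAL past the flushed point and fail:

```c
#include <stdint.h>

typedef uint64_t XLogRecPtr;    /* matching PostgreSQL's 64-bit WAL position */

/* Given the end LSN of each successive record, count how many records can
 * be decoded without reading past a snapshotted flushed LSN.  A record
 * whose end lies beyond the snapshot straddles the flushed point, so the
 * loop must stop before it even though the record *starts* earlier. */
static int
count_decodable(const XLogRecPtr *rec_ends, int nrecs, XLogRecPtr flushed)
{
    int         n = 0;

    for (int i = 0; i < nrecs; i++)
    {
        if (rec_ends[i] > flushed)
            break;              /* would need WAL beyond the flushed LSN */
        n++;
    }
    return n;
}
```

In the failing test, the equivalent of the third record here ends beyond the snapshot, and the no-wait page reader returns -1 with no queued message — producing the bare "could not read WAL at ..." error seen on topminnow.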
{
"msg_contents": "On Tue, Apr 26, 2022 at 6:11 PM Thomas Munro <thomas.munro@gmail.com> wrote:\n> I will poke some more tomorrow to try to confirm this and try to come\n> up with a fix.\n\nDone, and moved over to the pg_walinspect commit thread to reach the\nright eyeballs:\n\nhttps://www.postgresql.org/message-id/CA%2BhUKGLtswFk9ZO3WMOqnDkGs6dK5kCdQK9gxJm0N8gip5cpiA%40mail.gmail.com\n\n\n",
"msg_date": "Wed, 27 Apr 2022 12:10:40 +1200",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
},
{
"msg_contents": "On Wed, Apr 13, 2022 at 8:05 AM Thomas Munro <thomas.munro@gmail.com> wrote:\n> On Wed, Apr 13, 2022 at 3:57 AM Dagfinn Ilmari Mannsåker\n> <ilmari@ilmari.org> wrote:\n> > Simon Riggs <simon.riggs@enterprisedb.com> writes:\n> > > This is a nice feature if it is safe to turn off full_page_writes.\n\n> > > When is it safe to do that? On which platform?\n> > >\n> > > I am not aware of any released software that allows full_page_writes\n> > > to be safely disabled. Perhaps something has been released recently\n> > > that allows this? I think we have substantial documentation about\n> > > safety of other settings, so we should carefully document things here\n> > > also.\n> >\n> > Our WAL reliability docs claim that ZFS is safe against torn pages:\n> >\n> > https://www.postgresql.org/docs/current/wal-reliability.html:\n> >\n> > If you have file-system software that prevents partial page writes\n> > (e.g., ZFS), you can turn off this page imaging by turning off the\n> > full_page_writes parameter.\n>\n> Unfortunately, posix_fadvise(WILLNEED) doesn't do anything on ZFS\n> right now :-(.\n\nUpdate: OpenZFS now has this working in its master branch (Linux only\nfor now), so fingers crossed for the next release.\n\n\n",
"msg_date": "Sun, 25 Sep 2022 15:31:37 +1300",
"msg_from": "Thomas Munro <thomas.munro@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: WIP: WAL prefetch (another approach)"
}
] |
[
{
"msg_contents": "Hi All,\n\nWhen a ROW variable having NULL value is assigned to a RECORD\nvariable, it gives no structure to the RECORD type variable. Let's\nconsider the following example.\n\ncreate table t1(a int, b text);\n\ninsert into t1 values(1, 'str1');\n\ncreate or replace function f1() returns void as\n$$\ndeclare\n row t1%ROWTYPE;\n rec RECORD;\nbegin\n row := NULL;\n rec := row;\n raise info 'rec.a = %, rec.b = %', rec.a, rec.b;\nend;\n$$ language plpgsql;\n\nIn above example as 'row' variable is having NULL value, assigning\nthis to 'rec' didn't give any structure to it although 'row' is having\na predefined structure. Here is the error observed when above function\nis executed.\n\nselect f1();\nERROR: record \"rec\" is not assigned yet\n\nThis started happening from the following git commit onwards,\n\ncommit 4b93f57999a2ca9b9c9e573ea32ab1aeaa8bf496\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Feb 13 18:52:21 2018 -0500\n\n Make plpgsql use its DTYPE_REC code paths for composite-type variables.\n\nI know this is expected to happen considering the changes done in\nabove commit because from this commit onwards, NULL value assigned to\nany row variable represents a true NULL composite value before this\ncommit it used to be a tuple with each column having null value in it.\nBut, the point is, even if the row variable is having a NULL value it\nstill has a structure associated with it. Shouldn't that structure be\ntransferred to RECORD variable when it is assigned with a ROW type\nvariable ? Can we consider this behaviour change as a side effect of\nthe improvement done in the RECORD type of variable?\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 1 Jan 2020 21:19:44 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Assigning ROW variable having NULL value to RECORD type variable\n doesn't give any structure to the RECORD variable."
},
{
"msg_contents": "st 1. 1. 2020 v 16:50 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com>\nnapsal:\n\n> Hi All,\n>\n> When a ROW variable having NULL value is assigned to a RECORD\n> variable, it gives no structure to the RECORD type variable. Let's\n> consider the following example.\n>\n> create table t1(a int, b text);\n>\n> insert into t1 values(1, 'str1');\n>\n> create or replace function f1() returns void as\n> $$\n> declare\n> row t1%ROWTYPE;\n> rec RECORD;\n> begin\n> row := NULL;\n> rec := row;\n> raise info 'rec.a = %, rec.b = %', rec.a, rec.b;\n> end;\n> $$ language plpgsql;\n>\n> In above example as 'row' variable is having NULL value, assigning\n> this to 'rec' didn't give any structure to it although 'row' is having\n> a predefined structure. Here is the error observed when above function\n> is executed.\n>\n> select f1();\n> ERROR: record \"rec\" is not assigned yet\n>\n> This started happening from the following git commit onwards,\n>\n> commit 4b93f57999a2ca9b9c9e573ea32ab1aeaa8bf496\n> Author: Tom Lane <tgl@sss.pgh.pa.us>\n> Date: Tue Feb 13 18:52:21 2018 -0500\n>\n> Make plpgsql use its DTYPE_REC code paths for composite-type variables.\n>\n> I know this is expected to happen considering the changes done in\n> above commit because from this commit onwards, NULL value assigned to\n> any row variable represents a true NULL composite value before this\n> commit it used to be a tuple with each column having null value in it.\n> But, the point is, even if the row variable is having a NULL value it\n> still has a structure associated with it. Shouldn't that structure be\n> transferred to RECORD variable when it is assigned with a ROW type\n> variable ? Can we consider this behaviour change as a side effect of\n> the improvement done in the RECORD type of variable?\n>\n\n+1\n\nPavel\n\n\n> --\n> With Regards,\n> Ashutosh Sharma\n> EnterpriseDB:http://www.enterprisedb.com\n>\n>\n>\n\nst 1. 1. 
2020 v 16:50 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com> napsal:Hi All,\n\nWhen a ROW variable having NULL value is assigned to a RECORD\nvariable, it gives no structure to the RECORD type variable. Let's\nconsider the following example.\n\ncreate table t1(a int, b text);\n\ninsert into t1 values(1, 'str1');\n\ncreate or replace function f1() returns void as\n$$\ndeclare\n row t1%ROWTYPE;\n rec RECORD;\nbegin\n row := NULL;\n rec := row;\n raise info 'rec.a = %, rec.b = %', rec.a, rec.b;\nend;\n$$ language plpgsql;\n\nIn above example as 'row' variable is having NULL value, assigning\nthis to 'rec' didn't give any structure to it although 'row' is having\na predefined structure. Here is the error observed when above function\nis executed.\n\nselect f1();\nERROR: record \"rec\" is not assigned yet\n\nThis started happening from the following git commit onwards,\n\ncommit 4b93f57999a2ca9b9c9e573ea32ab1aeaa8bf496\nAuthor: Tom Lane <tgl@sss.pgh.pa.us>\nDate: Tue Feb 13 18:52:21 2018 -0500\n\n Make plpgsql use its DTYPE_REC code paths for composite-type variables.\n\nI know this is expected to happen considering the changes done in\nabove commit because from this commit onwards, NULL value assigned to\nany row variable represents a true NULL composite value before this\ncommit it used to be a tuple with each column having null value in it.\nBut, the point is, even if the row variable is having a NULL value it\nstill has a structure associated with it. Shouldn't that structure be\ntransferred to RECORD variable when it is assigned with a ROW type\nvariable ? Can we consider this behaviour change as a side effect of\nthe improvement done in the RECORD type of variable?+1 Pavel\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com",
"msg_date": "Wed, 1 Jan 2020 16:58:58 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assigning ROW variable having NULL value to RECORD type variable\n doesn't give any structure to the RECORD variable."
},
{
"msg_contents": "Further, if a table type (a.k.a. composite type or row type) having null\nvalue or holding no data in it is assigned to a record variable there is no\nstructure provided to the record variable. However when the same table\nhaving no data in it is assigned to the record variable, it does provide\nstructure to the record variable. I mean in both the cases we are assigning\nnull value to the record type so it looks a bit weird to see that in one\ncase we end up providing a proper structure to the record variable but not\nin the other case. Here is an example illustrating this scenario,\n\n\n\n\n\n\n\n\n\n\n\n\n*create table t1(a int, b text);do $$declare x t1; y record;begin --\ny := x; -- as mentioned earlier this doesn't\nprovide any structure to variable 'y'. --select * into y from t1; --\nthis does provide a structure to the variable 'y'. raise info 'y.a = %',\ny.a; -- this errors out for 1st statement (y := x) but not for the later\none (select ... into)end;$$ language plpgsql;*\n\nInvestigating this revealed that in later case i.e. in case of into clause,\nalthough there is no tuple returned by the select query still a tuple\ndescriptor is set by the query which provides the structure to the record\nvariable being written because having non-null tuple descriptor allows the\ncreation of an expanded object for the record variable eventually giving it\na structure.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\nOn Wed, Jan 1, 2020 at 9:29 PM Pavel Stehule <pavel.stehule@gmail.com>\nwrote:\n\n>\n>\n> st 1. 1. 2020 v 16:50 odesílatel Ashutosh Sharma <ashu.coek88@gmail.com>\n> napsal:\n>\n>> Hi All,\n>>\n>> When a ROW variable having NULL value is assigned to a RECORD\n>> variable, it gives no structure to the RECORD type variable. 
Let's\n>> consider the following example.\n>>\n>> create table t1(a int, b text);\n>>\n>> insert into t1 values(1, 'str1');\n>>\n>> create or replace function f1() returns void as\n>> $$\n>> declare\n>> row t1%ROWTYPE;\n>> rec RECORD;\n>> begin\n>> row := NULL;\n>> rec := row;\n>> raise info 'rec.a = %, rec.b = %', rec.a, rec.b;\n>> end;\n>> $$ language plpgsql;\n>>\n>> In above example as 'row' variable is having NULL value, assigning\n>> this to 'rec' didn't give any structure to it although 'row' is having\n>> a predefined structure. Here is the error observed when above function\n>> is executed.\n>>\n>> select f1();\n>> ERROR: record \"rec\" is not assigned yet\n>>\n>> This started happening from the following git commit onwards,\n>>\n>> commit 4b93f57999a2ca9b9c9e573ea32ab1aeaa8bf496\n>> Author: Tom Lane <tgl@sss.pgh.pa.us>\n>> Date: Tue Feb 13 18:52:21 2018 -0500\n>>\n>> Make plpgsql use its DTYPE_REC code paths for composite-type\n>> variables.\n>>\n>> I know this is expected to happen considering the changes done in\n>> above commit because from this commit onwards, NULL value assigned to\n>> any row variable represents a true NULL composite value before this\n>> commit it used to be a tuple with each column having null value in it.\n>> But, the point is, even if the row variable is having a NULL value it\n>> still has a structure associated with it. Shouldn't that structure be\n>> transferred to RECORD variable when it is assigned with a ROW type\n>> variable ? Can we consider this behaviour change as a side effect of\n>> the improvement done in the RECORD type of variable?\n>>\n>\n> +1\n>\n> Pavel\n>\n>\n>> --\n>> With Regards,\n>> Ashutosh Sharma\n>> EnterpriseDB:http://www.enterprisedb.com\n>>\n>>\n>>",
"msg_date": "Fri, 3 Jan 2020 10:32:17 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assigning ROW variable having NULL value to RECORD type variable\n doesn't give any structure to the RECORD variable."
},
{
"msg_contents": "On Wed, Jan 1, 2020 at 10:50 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> I know this is expected to happen considering the changes done in\n> above commit because from this commit onwards, NULL value assigned to\n> any row variable represents a true NULL composite value before this\n> commit it used to be a tuple with each column having null value in it.\n> But, the point is, even if the row variable is having a NULL value it\n> still has a structure associated with it. Shouldn't that structure be\n> transferred to RECORD variable when it is assigned with a ROW type\n> variable ? Can we consider this behaviour change as a side effect of\n> the improvement done in the RECORD type of variable?\n\nI'm not an expert on this topic. However, I *think* that you're trying\nto distinguish between two things that are actually the same. If it's\na \"true NULL,\" it has no structure; it's just NULL. If it has a\nstructure, then it's really a composite value with a NULL in each\ndefined column, i.e. (NULL, NULL, NULL, ...) for some row type rather\nthan just NULL.\n\nI have to admit that I've always found PL/pgsql to be a bit pedantic\nabout this whole thing. For instance:\n\nrhaas=# do $$declare x record; begin raise notice '%', x.a; end;$$\nlanguage plpgsql;\nERROR: record \"x\" is not assigned yet\nDETAIL: The tuple structure of a not-yet-assigned record is indeterminate.\nCONTEXT: SQL statement \"SELECT x.a\"\nPL/pgSQL function inline_code_block line 1 at RAISE\n\nBut maybe it should just make x.a evaluate to NULL. It's one thing if\nI have a record with columns 'a' and 'b' and I ask for column 'c'; I\nguess you could call that NULL, but it feels reasonably likely to be a\nprogramming error. 
But if we have no idea what the record columns are\nat all, perhaps we could just assume that whatever column the user is\nrequesting is intended to be one of them, and that since the whole\nthing is null, that column in particular is null.\n\nOn the other hand, maybe that would be too lenient and lead to subtle\nand hard-to-find bugs in plpgsql programs.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 13:56:48 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assigning ROW variable having NULL value to RECORD type variable\n doesn't give any structure to the RECORD variable."
},
{
"msg_contents": "pá 3. 1. 2020 v 19:57 odesílatel Robert Haas <robertmhaas@gmail.com> napsal:\n\n> On Wed, Jan 1, 2020 at 10:50 AM Ashutosh Sharma <ashu.coek88@gmail.com>\n> wrote:\n> > I know this is expected to happen considering the changes done in\n> > above commit because from this commit onwards, NULL value assigned to\n> > any row variable represents a true NULL composite value before this\n> > commit it used to be a tuple with each column having null value in it.\n> > But, the point is, even if the row variable is having a NULL value it\n> > still has a structure associated with it. Shouldn't that structure be\n> > transferred to RECORD variable when it is assigned with a ROW type\n> > variable ? Can we consider this behaviour change as a side effect of\n> > the improvement done in the RECORD type of variable?\n>\n> I'm not an expert on this topic. However, I *think* that you're trying\n> to distinguish between two things that are actually the same. If it's\n> a \"true NULL,\" it has no structure; it's just NULL. If it has a\n> structure, then it's really a composite value with a NULL in each\n> defined column, i.e. (NULL, NULL, NULL, ...) for some row type rather\n> than just NULL.\n>\n> I have to admit that I've always found PL/pgsql to be a bit pedantic\n> about this whole thing. For instance:\n>\n> rhaas=# do $$declare x record; begin raise notice '%', x.a; end;$$\n> language plpgsql;\n> ERROR: record \"x\" is not assigned yet\n> DETAIL: The tuple structure of a not-yet-assigned record is indeterminate.\n> CONTEXT: SQL statement \"SELECT x.a\"\n> PL/pgSQL function inline_code_block line 1 at RAISE\n>\n> But maybe it should just make x.a evaluate to NULL. It's one thing if\n> I have a record with columns 'a' and 'b' and I ask for column 'c'; I\n> guess you could call that NULL, but it feels reasonably likely to be a\n> programming error. 
But if we have no idea what the record columns are\n> at all, perhaps we could just assume that whatever column the user is\n> requesting is intended to be one of them, and that since the whole\n> thing is null, that column in particular is null.\n>\n\nI don't like this idea. We should not to invent record's fields created by\nreading or writing some field. At end it block any static code analyze and\nit can hide a errors. If we enhance a interface for json or jsonb, then\nthis dynamic work can be done with these types.\n\nWe should to distinguish between typend and untyped NULL - it has sense for\nme (what was proposed by Ashutosh Sharma), but I don't see any sense to go\nfar.\n\nRegards\n\nPavel\n\n\n\n> On the other hand, maybe that would be too lenient and lead to subtle\n> and hard-to-find bugs in plpgsql programs.\n>\n> --\n> Robert Haas\n> EnterpriseDB: http://www.enterprisedb.com\n> The Enterprise PostgreSQL Company\n>\n>\n>",
"msg_date": "Fri, 3 Jan 2020 20:39:25 +0100",
"msg_from": "Pavel Stehule <pavel.stehule@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Assigning ROW variable having NULL value to RECORD type variable\n doesn't give any structure to the RECORD variable."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Wed, Jan 1, 2020 at 10:50 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n>> I know this is expected to happen considering the changes done in\n>> above commit because from this commit onwards, NULL value assigned to\n>> any row variable represents a true NULL composite value before this\n>> commit it used to be a tuple with each column having null value in it.\n>> But, the point is, even if the row variable is having a NULL value it\n>> still has a structure associated with it. Shouldn't that structure be\n>> transferred to RECORD variable when it is assigned with a ROW type\n>> variable ? Can we consider this behaviour change as a side effect of\n>> the improvement done in the RECORD type of variable?\n\n> I'm not an expert on this topic. However, I *think* that you're trying\n> to distinguish between two things that are actually the same. If it's\n> a \"true NULL,\" it has no structure; it's just NULL. If it has a\n> structure, then it's really a composite value with a NULL in each\n> defined column, i.e. (NULL, NULL, NULL, ...) for some row type rather\n> than just NULL.\n\nYeah. In general, we can't do this, because a null value of type\nRECORD simply hasn't got any information about what specific rowtype\nmight be involved. In the case where the null is of a named composite\ntype, rather than RECORD, we could choose to act differently ... but\nI'm not really sure that such a change would be an improvement and not\njust a decrease in consistency.\n\nIn any case, plpgsql's prior behavior was an implementation artifact\nwith very little to recommend it. 
As a concrete example, consider\n\ncreate table t1(a int, b text);\n\ndo $$\ndeclare x t1; r record;\nbegin\n x := null;\n r := x;\n raise notice 'r.a = %', r.a;\nend $$;\n\ndo $$\ndeclare r record;\nbegin\n r := null::t1;\n raise notice 'r.a = %', r.a;\nend $$;\n\nI assert that in any sanely-defined semantics, these two examples\nshould give the same result. In v11 and up, they both give\n'record \"r\" is not assigned yet' ... but in prior versions, they\ngave different results. I do not want to go back to that.\n\nOn the other hand, we now have\n\ndo $$\ndeclare x t1; r record;\nbegin\n x := null;\n r := x;\n raise notice 'x.a = %', x.a;\n raise notice 'r.a = %', r.a;\nend $$;\n\nwhich gives\n\nNOTICE: x.a = <NULL>\nERROR: record \"r\" is not assigned yet\n\nwhich is certainly also inconsistent. The variable declared as\nbeing type t1 behaves, for this purpose, as if it contained\n\"row(null,null)\" not just a simple null. But if you print it,\nor assign it to something else as a whole, you'll find it just\ncontains a simple null. One way to see that these are different\nstates is to do\n\ndo $$ declare x t1; begin x := null; raise notice 'x = %', x; end$$;\nNOTICE: x = <NULL>\n\nversus\n\ndo $$ declare x t1; begin x := row(null,null); raise notice 'x = %', x; end$$;\nNOTICE: x = (,)\n\nAnd, if you assign a row of nulls to a record-type variable, that works:\n\ndo $$\ndeclare x t1; r record;\nbegin\n x := row(null,null);\n r := x;\n raise notice 'x.a = %', x.a;\n raise notice 'r.a = %', r.a;\nend $$;\n\nwhich gives\n\nNOTICE: x.a = <NULL>\nNOTICE: r.a = <NULL>\n\nIf we were to change this behavior, I think it would be tantamount\nto sometimes expanding a simple null to a row of nulls, and I'm\nnot sure that's a great idea.\n\nThe SQL standard is confusing in this respect, because it seems\nthat at least the \"x IS [NOT] NULL\" construct is defined to\nconsider both a \"simple NULL\" and ROW(NULL,NULL,...) 
as \"null\".\nBut we've concluded that other parts of the spec do allow for\na distinction (I'm too lazy to search the archives for relevant\ndiscussions, but there have been some). The two things are\ndefinitely different implementation-wise, so it would be hard\nto hide the difference completely.\n\nAnother fun fact is that right now, assignment of any null value\nto a composite plpgsql variable works the same: you can assign a simple\nnull of some other composite type, or even a scalar null, and behold you\nget a null composite value without any error. That's because\nexec_assign_value's DTYPE_REC case pays no attention to the declared\ntype of the source value once it's found to be null. Thus\n\ndo $$ declare x t1; begin x := 42; raise notice 'x = %', x; end$$;\nERROR: cannot assign non-composite value to a record variable\n\ndo $$ declare x t1; begin x := null::int; raise notice 'x = %', x; end$$;\nNOTICE: x = <NULL>\n\nThat's pretty bizarre, and I don't think I'd agree with adopting those\nsemantics if we were in a green field. But if we start paying attention\nto the specific type of a null source value, I bet we're going to break\nsome code that works today.\n\nAnyway, maybe this area could be improved, but I'm not fully convinced.\nI definitely do not subscribe to the theory that we need to make it\nwork like v10 again.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 15:39:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Assigning ROW variable having NULL value to RECORD type variable\n doesn't give any structure to the RECORD variable."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 2:09 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Wed, Jan 1, 2020 at 10:50 AM Ashutosh Sharma <ashu.coek88@gmail.com> wrote:\n> >> I know this is expected to happen considering the changes done in\n> >> above commit because from this commit onwards, NULL value assigned to\n> >> any row variable represents a true NULL composite value before this\n> >> commit it used to be a tuple with each column having null value in it.\n> >> But, the point is, even if the row variable is having a NULL value it\n> >> still has a structure associated with it. Shouldn't that structure be\n> >> transferred to RECORD variable when it is assigned with a ROW type\n> >> variable ? Can we consider this behaviour change as a side effect of\n> >> the improvement done in the RECORD type of variable?\n>\n> > I'm not an expert on this topic. However, I *think* that you're trying\n> > to distinguish between two things that are actually the same. If it's\n> > a \"true NULL,\" it has no structure; it's just NULL. If it has a\n> > structure, then it's really a composite value with a NULL in each\n> > defined column, i.e. (NULL, NULL, NULL, ...) for some row type rather\n> > than just NULL.\n>\n> Yeah. In general, we can't do this, because a null value of type\n> RECORD simply hasn't got any information about what specific rowtype\n> might be involved. In the case where the null is of a named composite\n> type, rather than RECORD, we could choose to act differently ... but\n> I'm not really sure that such a change would be an improvement and not\n> just a decrease in consistency.\n>\n> In any case, plpgsql's prior behavior was an implementation artifact\n> with very little to recommend it. 
As a concrete example, consider\n>\n> create table t1(a int, b text);\n>\n> do $$\n> declare x t1; r record;\n> begin\n> x := null;\n> r := x;\n> raise notice 'r.a = %', r.a;\n> end $$;\n>\n> do $$\n> declare r record;\n> begin\n> r := null::t1;\n> raise notice 'r.a = %', r.a;\n> end $$;\n>\n> I assert that in any sanely-defined semantics, these two examples\n> should give the same result. In v11 and up, they both give\n> 'record \"r\" is not assigned yet' ... but in prior versions, they\n> gave different results. I do not want to go back to that.\n>\n> On the other hand, we now have\n>\n> do $$\n> declare x t1; r record;\n> begin\n> x := null;\n> r := x;\n> raise notice 'x.a = %', x.a;\n> raise notice 'r.a = %', r.a;\n> end $$;\n>\n> which gives\n>\n> NOTICE: x.a = <NULL>\n> ERROR: record \"r\" is not assigned yet\n>\n> which is certainly also inconsistent. The variable declared as\n> being type t1 behaves, for this purpose, as if it contained\n> \"row(null,null)\" not just a simple null. But if you print it,\n> or assign it to something else as a whole, you'll find it just\n> contains a simple null. 
One way to see that these are different\n> states is to do\n>\n> do $$ declare x t1; begin x := null; raise notice 'x = %', x; end$$;\n> NOTICE: x = <NULL>\n>\n> versus\n>\n> do $$ declare x t1; begin x := row(null,null); raise notice 'x = %', x; end$$;\n> NOTICE: x = (,)\n>\n> And, if you assign a row of nulls to a record-type variable, that works:\n>\n> do $$\n> declare x t1; r record;\n> begin\n> x := row(null,null);\n> r := x;\n> raise notice 'x.a = %', x.a;\n> raise notice 'r.a = %', r.a;\n> end $$;\n>\n> which gives\n>\n> NOTICE: x.a = <NULL>\n> NOTICE: r.a = <NULL>\n>\n> If we were to change this behavior, I think it would be tantamount\n> to sometimes expanding a simple null to a row of nulls, and I'm\n> not sure that's a great idea.\n>\n> The SQL standard is confusing in this respect, because it seems\n> that at least the \"x IS [NOT] NULL\" construct is defined to\n> consider both a \"simple NULL\" and ROW(NULL,NULL,...) as \"null\".\n> But we've concluded that other parts of the spec do allow for\n> a distinction (I'm too lazy to search the archives for relevant\n> discussions, but there have been some). The two things are\n> definitely different implementation-wise, so it would be hard\n> to hide the difference completely.\n>\n> Another fun fact is that right now, assignment of any null value\n> to a composite plpgsql variable works the same: you can assign a simple\n> null of some other composite type, or even a scalar null, and behold you\n> get a null composite value without any error. That's because\n> exec_assign_value's DTYPE_REC case pays no attention to the declared\n> type of the source value once it's found to be null. 
Thus\n>\n> do $$ declare x t1; begin x := 42; raise notice 'x = %', x; end$$;\n> ERROR: cannot assign non-composite value to a record variable\n>\n> do $$ declare x t1; begin x := null::int; raise notice 'x = %', x; end$$;\n> NOTICE: x = <NULL>\n>\n> That's pretty bizarre, and I don't think I'd agree with adopting those\n> semantics if we were in a green field. But if we start paying attention\n> to the specific type of a null source value, I bet we're going to break\n> some code that works today.\n>\n> Anyway, maybe this area could be improved, but I'm not fully convinced.\n> I definitely do not subscribe to the theory that we need to make it\n> work like v10 again.\n\nOkay. Thanks for sharing your thoughts on this.\n\n--\nWith Regards,\nAshutosh Sharma\nEnterpriseDB:http://www.enterprisedb.com\n\n\n",
"msg_date": "Sat, 4 Jan 2020 07:32:11 +0530",
"msg_from": "Ashutosh Sharma <ashu.coek88@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: Assigning ROW variable having NULL value to RECORD type variable\n doesn't give any structure to the RECORD variable."
}
] |
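[Editor's note] The IS NULL conflation mandated by the SQL standard (mentioned in the thread above) can be seen directly. A minimal sketch, assuming a two-column composite type `t1` like the one implied by the `(,)` output in the examples upthread:

```sql
CREATE TYPE t1 AS (a int, b int);  -- assumed definition, matching the '(,)' output above

DO $$
DECLARE x t1;
BEGIN
  x := NULL;                               -- "simple" null: no row structure at all
  RAISE NOTICE 'simple null:  IS NULL = %', x IS NULL;
  x := ROW(NULL, NULL);                    -- a row whose every field is null
  RAISE NOTICE 'row of nulls: IS NULL = %', x IS NULL;
END $$;
```

Both notices should report true, even though `RAISE NOTICE '%', x` prints `<NULL>` for the first state and `(,)` for the second: the two states are distinguishable in output and in record-assignment behavior, but not via IS NULL.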
[
{
"msg_contents": "The RelationNeedsWAL() code block within _bt_delitems_delete() has had\nthe following comment for many years now:\n\n/*\n * We need the target-offsets array whether or not we store the whole\n * buffer, to allow us to find the latestRemovedXid on a standby\n * server.\n */\nXLogRegisterData((char *) itemnos, nitems * sizeof(OffsetNumber));\n\nHowever, we don't actually need to do it that way these days. We won't\ngo on to determine a latestRemovedXid on a standby as of commit\n558a9165e08 (that happens on the primary instead), so the comment\nseems wrong.\n\nRather than just changing the comment, I propose that we tweak the\nbehavior of _bt_delitems_delete() to match its sibling function\n_bt_delitems_vacuum(). That is, it should use XLogRegisterBufData(),\nnot XLogRegisterData(). This is cleaner, and ought to be a minor win.\n\nAttached patch shows what I have in mind. The new comment block has\nbeen copied from _bt_delitems_vacuum().\n\n--\nPeter Geoghegan",
"msg_date": "Wed, 1 Jan 2020 13:00:59 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "_bt_delitems_delete() should use XLogRegisterBufData(),\n not XLogRegisterData()"
},
{
"msg_contents": "On Wed, Jan 1, 2020 at 1:00 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> Attached patch shows what I have in mind. The new comment block has\n> been copied from _bt_delitems_vacuum().\n\nI also think that the WAL record and function signature of\n_bt_delitems_delete() should be brought closer to\n_bt_delitems_vacuum(). Attached patch does it that way.\n\nI intend to commit this in the next day or two, barring any objections.\n\n-- \nPeter Geoghegan",
"msg_date": "Thu, 2 Jan 2020 13:41:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": true,
"msg_subject": "Re: _bt_delitems_delete() should use XLogRegisterBufData(),\n not XLogRegisterData()"
}
] |
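[Editor's note] The change proposed in this thread can be sketched with the real xloginsert.h entry points. This fragment is illustrative only, not compilable in isolation; the variable names (`buf`, `page`, `itemnos`, `nitems`, `xlrec_delete`) are modeled on the existing nbtree code:

```c
/*
 * Sketch of the proposed _bt_delitems_delete() WAL logging, mirroring
 * _bt_delitems_vacuum().  Illustrative fragment only.
 */
XLogBeginInsert();
XLogRegisterBuffer(0, buf, REGBUF_STANDARD);

/* Header data that must always be present in the record: */
XLogRegisterData((char *) &xlrec_delete, SizeOfBtreeDelete);

/*
 * Before: the target-offsets array was registered as unattached record
 * data, so it was WAL-logged even when a full-page image of the buffer
 * (which already contains that information) was taken:
 *
 *     XLogRegisterData((char *) itemnos, nitems * sizeof(OffsetNumber));
 *
 * After: attach the array to block 0, letting XLogInsert() omit it
 * whenever a full-page image is included:
 */
XLogRegisterBufData(0, (char *) itemnos, nitems * sizeof(OffsetNumber));

recptr = XLogInsert(RM_BTREE_ID, XLOG_BTREE_DELETE);
PageSetLSN(page, recptr);
```

With the array attached to the registered buffer, redo reads it from the buffer data rather than the main record data, which is why the record can shrink when a full-page image is present.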
[
{
"msg_contents": "Hi,\n\nI am starting a new thread for some of the decisions for a parallel vacuum\nin the hope to get feedback from more people. There are mainly two points\nfor which we need some feedback.\n\n1. Tomas Vondra has pointed out on the main thread [1] that by default the\nparallel vacuum should be enabled similar to what we do for Create Index.\nAs proposed, the patch enables it only when the user specifies it (ex.\nVacuum (Parallel 2) <tbl_name>;). One of the arguments in favor of\nenabling it by default as mentioned by Tomas is \"It's pretty much the same\nthing we did with vacuum throttling - it's disabled for explicit vacuum by\ndefault, but you can enable it. If you're worried about VACUUM causing\nissues, you should set cost delay.\". Some of the arguments against\nenabling it are that it will lead to use of more resources (like CPU, I/O)\nwhich users might or might like.\n\nNow, if we want to enable it by default, we need a way to disable it as\nwell and along with that, we need a way for users to specify a parallel\ndegree. I have mentioned a few reasons why we need a parallel degree for\nthis operation in the email [2] on the main thread.\n\nIf parallel vacuum is **not** enabled by default, then I think the current\nway to enable is fine which is as follows:\nVacuum (Parallel 2) <tbl_name>;\n\nHere, if the user doesn't specify parallel_degree, then we internally\ndecide based on number of indexes that support a parallel vacuum with a\nmaximum of max_parallel_maintenance_workers.\n\nIf the parallel vacuum is enabled by default, then I could think of the\nfollowing ways:\n(a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel\n<parallel_degree>) <tbl_name>;\n(b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies\nparallel_degree as 0, then disable parallelism.\n(c) ... Any better ideas?\n\n2. 
The patch provides a FAST option (based on suggestion by Robert) for a\nparallel vacuum which will make it behave like vacuum_cost_delay = 0 which\nmeans it will disable throttling. So,\nVACUUM (PARALLEL n, FAST) <tbl_name> will allow the parallel vacuum to run\nwithout resource throttling. Tomas thinks that we don't need such an\noption as the same can be served by setting vacuum_cost_delay = 0 which is\na valid argument, but OTOH, providing an option to the user which can make\nhis life easier is not a bad idea either.\n\nThoughts?\n\n[1] -\nhttps://www.postgresql.org/message-id/20191229212354.tqivttn23lxjg2jz%40development\n[2] -\nhttps://www.postgresql.org/message-id/CAA4eK1%2B1o-BaPvJnK7BPThTryx3MRDS%2BmCf9eVVZT%3DSVJ8mwLg%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 2 Jan 2020 17:38:50 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "parallel vacuum options/syntax"
},
{
"msg_contents": "Le jeu. 2 janv. 2020 à 13:09, Amit Kapila <amit.kapila16@gmail.com> a\nécrit :\n\n> Hi,\n>\n> I am starting a new thread for some of the decisions for a parallel vacuum\n> in the hope to get feedback from more people. There are mainly two points\n> for which we need some feedback.\n>\n> 1. Tomas Vondra has pointed out on the main thread [1] that by default the\n> parallel vacuum should be enabled similar to what we do for Create Index.\n> As proposed, the patch enables it only when the user specifies it (ex.\n> Vacuum (Parallel 2) <tbl_name>;). One of the arguments in favor of\n> enabling it by default as mentioned by Tomas is \"It's pretty much the same\n> thing we did with vacuum throttling - it's disabled for explicit vacuum by\n> default, but you can enable it. If you're worried about VACUUM causing\n> issues, you should set cost delay.\". Some of the arguments against\n> enabling it are that it will lead to use of more resources (like CPU, I/O)\n> which users might or might like.\n>\n> Now, if we want to enable it by default, we need a way to disable it as\n> well and along with that, we need a way for users to specify a parallel\n> degree. I have mentioned a few reasons why we need a parallel degree for\n> this operation in the email [2] on the main thread.\n>\n> If parallel vacuum is **not** enabled by default, then I think the\n> current way to enable is fine which is as follows:\n> Vacuum (Parallel 2) <tbl_name>;\n>\n> Here, if the user doesn't specify parallel_degree, then we internally\n> decide based on number of indexes that support a parallel vacuum with a\n> maximum of max_parallel_maintenance_workers.\n>\n> If the parallel vacuum is enabled by default, then I could think of the\n> following ways:\n> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel\n> <parallel_degree>) <tbl_name>;\n> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies\n> parallel_degree as 0, then disable parallelism.\n> (c) ... 
Any better ideas?\n>\n>\nAFAICT, every parallel-able statement uses parallelisation by default, so it\nwouldn't be consistent if VACUUM behaves some other way.\n\nSo, (c) has my vote.\n\n2. The patch provides a FAST option (based on suggestion by Robert) for a\n> parallel vacuum which will make it behave like vacuum_cost_delay = 0 which\n> means it will disable throttling. So,\n> VACUUM (PARALLEL n, FAST) <tbl_name> will allow the parallel vacuum to run\n> without resource throttling. Tomas thinks that we don't need such an\n> option as the same can be served by setting vacuum_cost_delay = 0 which is\n> a valid argument, but OTOH, providing an option to the user which can make\n> his life easier is not a bad idea either.\n>\n>\nThe user already has an option (the vacuum_cost_delay GUC). So I kinda\nagree with Tomas on this.\n\n\n-- \nGuillaume.",
"msg_date": "Thu, 2 Jan 2020 14:39:20 +0100",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 5:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Hi,\n>\n> I am starting a new thread for some of the decisions for a parallel vacuum in the hope to get feedback from more people. There are mainly two points for which we need some feedback.\n>\n> 1. Tomas Vondra has pointed out on the main thread [1] that by default the parallel vacuum should be enabled similar to what we do for Create Index. As proposed, the patch enables it only when the user specifies it (ex. Vacuum (Parallel 2) <tbl_name>;). One of the arguments in favor of enabling it by default as mentioned by Tomas is \"It's pretty much the same thing we did with vacuum throttling - it's disabled for explicit vacuum by default, but you can enable it. If you're worried about VACUUM causing issues, you should set cost delay.\". Some of the arguments against enabling it are that it will lead to use of more resources (like CPU, I/O) which users might or might like.\n>\n> Now, if we want to enable it by default, we need a way to disable it as well and along with that, we need a way for users to specify a parallel degree. I have mentioned a few reasons why we need a parallel degree for this operation in the email [2] on the main thread.\n>\n> If parallel vacuum is *not* enabled by default, then I think the current way to enable is fine which is as follows:\n> Vacuum (Parallel 2) <tbl_name>;\n>\n> Here, if the user doesn't specify parallel_degree, then we internally decide based on number of indexes that support a parallel vacuum with a maximum of max_parallel_maintenance_workers.\n>\n> If the parallel vacuum is enabled by default, then I could think of the following ways:\n> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel <parallel_degree>) <tbl_name>;\n> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies parallel_degree as 0, then disable parallelism.\n> (c) ... 
Any better ideas?\n\nIMHO, it's better to keep the parallelism enabled by default. Because\nif the user is giving an explicit vacuum then better to keep it fast\nby default. However, I agree that we can provide an option for the\nuser to disable it and provide the parallel degree with the vacuum\ncommand something like option (b).\n>\n> 2. The patch provides a FAST option (based on suggestion by Robert) for a parallel vacuum which will make it behave like vacuum_cost_delay = 0 which means it will disable throttling. So,\n> VACUUM (PARALLEL n, FAST) <tbl_name> will allow the parallel vacuum to run without resource throttling. Tomas thinks that we don't need such an option as the same can be served by setting vacuum_cost_delay = 0 which is a valid argument, but OTOH, providing an option to the user which can make his life easier is not a bad idea either.\n\nI agree that there is already an option to run it without cost delay\nbut there is no harm in providing extra power to the user where he can\nrun a particular vacuum command without IO throttling. So +1 for the\nFAST option.\n\n-- \nRegards,\nDilip Kumar\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 08:50:42 +0530",
"msg_from": "Dilip Kumar <dilipbalaut@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 8:50 AM Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jan 2, 2020 at 5:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > If parallel vacuum is *not* enabled by default, then I think the current way to enable is fine which is as follows:\n> > Vacuum (Parallel 2) <tbl_name>;\n> >\n> > Here, if the user doesn't specify parallel_degree, then we internally decide based on number of indexes that support a parallel vacuum with a maximum of max_parallel_maintenance_workers.\n> >\n> > If the parallel vacuum is enabled by default, then I could think of the following ways:\n> > (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel <parallel_degree>) <tbl_name>;\n> > (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies parallel_degree as 0, then disable parallelism.\n> > (c) ... Any better ideas?\n>\n> IMHO, it's better to keep the parallelism enables by default. Because\n> if the user is giving an explicit vacuum then better to keep it fast\n> by default.\n\nOkay.\n\n> However, I agree that we can provide an option for the\n> user to disable it and provide the parallel degree with the vacuum\n> command something like option (b).\n>\n\nThe option (b) has some advantage over (a) that we don't need to\ninvent multiple options to enable/disable parallelism for vacuum.\nHowever, it might appear awkward to set parallel_degree as 0 (Vacuum\n(Parallel 0) tbl_name) to disable parallelism. Having said that, we\nalready have some precedence wherein if we set parameters like\nstatement_timeout, lock_timeout, etc to zero, it disables the timeout.\nSo, it won't be insane if we choose this option.\n\nDoes anyone else have any opinion on what makes sense here?\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 13:31:58 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 7:09 PM Guillaume Lelarge <guillaume@lelarge.info> wrote:\n>\n> Le jeu. 2 janv. 2020 à 13:09, Amit Kapila <amit.kapila16@gmail.com> a écrit :\n>>\n>> If parallel vacuum is *not* enabled by default, then I think the current way to enable is fine which is as follows:\n>> Vacuum (Parallel 2) <tbl_name>;\n>>\n>> Here, if the user doesn't specify parallel_degree, then we internally decide based on number of indexes that support a parallel vacuum with a maximum of max_parallel_maintenance_workers.\n>>\n>> If the parallel vacuum is enabled by default, then I could think of the following ways:\n>> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel <parallel_degree>) <tbl_name>;\n>> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies parallel_degree as 0, then disable parallelism.\n>> (c) ... Any better ideas?\n>>\n>\n> AFAICT, every parallel-able statement use parallelisation by default, so it wouldn't be consistent if VACUUM behaves some other way.\n>\n\nFair enough.\n\n> So, (c) has my vote.\n>\n\nI don't understand this. What do you mean by voting (c) option? Do\nyou mean that you didn't like any of (a) or (b)? If so, then feel\nfree to suggest something else. One more possibility could be to\nallow users to specify parallel degree or disable parallelism via guc\n'max_parallel_maintenance_workers'. Basically, if the user wants to\ndisable parallelism, it needs to set the value of guc\nmax_parallel_maintenance_workers as zero and if it wants to increase\nthe parallel degree than the default value (which is two), then it can\nset it via max_parallel_maintenance_workers before running vacuum\ncommand. Now, this can certainly work, but I feel setting/resetting a\nguc before a vacuum\ncommand can be a bit inconvenient for users, but if others prefer that\nway, then we can do that.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 13:35:54 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "Le ven. 3 janv. 2020 à 09:06, Amit Kapila <amit.kapila16@gmail.com> a\nécrit :\n\n> On Thu, Jan 2, 2020 at 7:09 PM Guillaume Lelarge <guillaume@lelarge.info>\n> wrote:\n> >\n> > Le jeu. 2 janv. 2020 à 13:09, Amit Kapila <amit.kapila16@gmail.com> a\n> écrit :\n> >>\n> >> If parallel vacuum is *not* enabled by default, then I think the\n> current way to enable is fine which is as follows:\n> >> Vacuum (Parallel 2) <tbl_name>;\n> >>\n> >> Here, if the user doesn't specify parallel_degree, then we internally\n> decide based on number of indexes that support a parallel vacuum with a\n> maximum of max_parallel_maintenance_workers.\n> >>\n> >> If the parallel vacuum is enabled by default, then I could think of the\n> following ways:\n> >> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel\n> <parallel_degree>) <tbl_name>;\n> >> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies\n> parallel_degree as 0, then disable parallelism.\n> >> (c) ... Any better ideas?\n> >>\n> >\n> > AFAICT, every parallel-able statement use parallelisation by default, so\n> it wouldn't be consistent if VACUUM behaves some other way.\n> >\n>\n> Fair enough.\n>\n> > So, (c) has my vote.\n> >\n>\n> I don't understand this. What do you mean by voting (c) option? Do\n> you mean that you didn't like any of (a) or (b)?\n\n\nI meant (b), sorry :)\n\n If so, then feel\n> free to suggest something else. One more possibility could be to\n> allow users to specify parallel degree or disable parallelism via guc\n> 'max_parallel_maintenance_workers'. Basically, if the user wants to\n> disable parallelism, it needs to set the value of guc\n> max_parallel_maintenance_workers as zero and if it wants to increase\n> the parallel degree than the default value (which is two), then it can\n> set it via max_parallel_maintenance_workers before running vacuum\n> command. 
Now, this can certainly work, but I feel setting/resetting a\n> guc before a vacuum\n> command can be a bit inconvenient for users, but if others prefer that\n> way, then we can do that.\n>\n>\n\n-- \nGuillaume.",
"msg_date": "Fri, 3 Jan 2020 09:08:13 +0100",
"msg_from": "Guillaume Lelarge <guillaume@lelarge.info>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Fri, 3 Jan 2020 at 08:51, Dilip Kumar <dilipbalaut@gmail.com> wrote:\n>\n> On Thu, Jan 2, 2020 at 5:39 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > Hi,\n> >\n> > I am starting a new thread for some of the decisions for a parallel vacuum in the hope to get feedback from more people. There are mainly two points for which we need some feedback.\n> >\n> > 1. Tomas Vondra has pointed out on the main thread [1] that by default the parallel vacuum should be enabled similar to what we do for Create Index. As proposed, the patch enables it only when the user specifies it (ex. Vacuum (Parallel 2) <tbl_name>;). One of the arguments in favor of enabling it by default as mentioned by Tomas is \"It's pretty much the same thing we did with vacuum throttling - it's disabled for explicit vacuum by default, but you can enable it. If you're worried about VACUUM causing issues, you should set cost delay.\". Some of the arguments against enabling it are that it will lead to use of more resources (like CPU, I/O) which users might or might like.\n> >\n> > Now, if we want to enable it by default, we need a way to disable it as well and along with that, we need a way for users to specify a parallel degree. 
I have mentioned a few reasons why we need a parallel degree for this operation in the email [2] on the main thread.\n> >\n> > If parallel vacuum is *not* enabled by default, then I think the current way to enable is fine which is as follows:\n> > Vacuum (Parallel 2) <tbl_name>;\n> >\n> > Here, if the user doesn't specify parallel_degree, then we internally decide based on number of indexes that support a parallel vacuum with a maximum of max_parallel_maintenance_workers.\n> >\n> > If the parallel vacuum is enabled by default, then I could think of the following ways:\n> > (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel <parallel_degree>) <tbl_name>;\n> > (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies parallel_degree as 0, then disable parallelism.\n> > (c) ... Any better ideas?\n>\n> IMHO, it's better to keep the parallelism enables by default. Because\n> if the user is giving an explicit vacuum then better to keep it fast\n> by default. However, I agree that we can provide an option for the\n> user to disable it and provide the parallel degree with the vacuum\n> command something like option (b).\n\n+1\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 15:25:04 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 9:09 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> Hi,\n>\n> I am starting a new thread for some of the decisions for a parallel vacuum in the hope to get feedback from more people. There are mainly two points for which we need some feedback.\n>\n> 1. Tomas Vondra has pointed out on the main thread [1] that by default the parallel vacuum should be enabled similar to what we do for Create Index. As proposed, the patch enables it only when the user specifies it (ex. Vacuum (Parallel 2) <tbl_name>;). One of the arguments in favor of enabling it by default as mentioned by Tomas is \"It's pretty much the same thing we did with vacuum throttling - it's disabled for explicit vacuum by default, but you can enable it. If you're worried about VACUUM causing issues, you should set cost delay.\". Some of the arguments against enabling it are that it will lead to use of more resources (like CPU, I/O) which users might or might like.\n>\n\nI'm a bit wary of making parallel vacuum enabled by default. Single\nprocess vacuum does sequential reads/writes on most of indexes but\nparallel vacuum does random access random reads/writes. I've tested\nparallel vacuum on HDD and confirmed the performance is good but I'm\nconcerned that it might be cause of more disk I/O than user expected.\n\n> Now, if we want to enable it by default, we need a way to disable it as well and along with that, we need a way for users to specify a parallel degree. 
I have mentioned a few reasons why we need a parallel degree for this operation in the email [2] on the main thread.\n>\n> If parallel vacuum is *not* enabled by default, then I think the current way to enable is fine which is as follows:\n> Vacuum (Parallel 2) <tbl_name>;\n>\n> Here, if the user doesn't specify parallel_degree, then we internally decide based on number of indexes that support a parallel vacuum with a maximum of max_parallel_maintenance_workers.\n>\n> If the parallel vacuum is enabled by default, then I could think of the following ways:\n> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel <parallel_degree>) <tbl_name>;\n> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user specifies parallel_degree as 0, then disable parallelism.\n> (c) ... Any better ideas?\n>\n\nIf parallel vacuum is enabled by default, I would prefer (b) but I\ndon't think it's a good idea to accept 0 as parallel degree. If we\nwant to disable parallel vacuum we should set\nmax_parallel_maintenance_workers to 0 instead.\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 08:54:15 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, Jan 05, 2020 at 08:54:15AM +0900, Masahiko Sawada wrote:\n>On Thu, Jan 2, 2020 at 9:09 PM Amit Kapila <amit.kapila16@gmail.com>\n>wrote:\n>>\n>> Hi,\n>>\n>> I am starting a new thread for some of the decisions for a parallel\n>> vacuum in the hope to get feedback from more people. There are\n>> mainly two points for which we need some feedback.\n>>\n>> 1. Tomas Vondra has pointed out on the main thread [1] that by\n>> default the parallel vacuum should be enabled similar to what we do\n>> for Create Index. As proposed, the patch enables it only when the\n>> user specifies it (ex. Vacuum (Parallel 2) <tbl_name>;). One of the\n>> arguments in favor of enabling it by default as mentioned by Tomas is\n>> \"It's pretty much the same thing we did with vacuum throttling - it's\n>> disabled for explicit vacuum by default, but you can enable it. If\n>> you're worried about VACUUM causing issues, you should set cost\n>> delay.\". Some of the arguments against enabling it are that it will\n>> lead to use of more resources (like CPU, I/O) which users might or\n>> might like.\n>>\n>\n>I'm a bit wary of making parallel vacuum enabled by default. Single\n>process vacuum does sequential reads/writes on most of indexes but\n>parallel vacuum does random access random reads/writes. I've tested\n>parallel vacuum on HDD and confirmed the performance is good but I'm\n>concerned that it might be cause of more disk I/O than user expected.\n>\n\nI understand the concern, but it's not clear to me why to apply this\ndefensive approach just to vacuum and not to all commands. Especially\nwhen we do have a way to throttle vacuum (unlike pretty much any other\ncommand) if I/O really is a scarce resource.\n\nAs the vacuum workers are separate processes, each generating requests\nwith a sequential pattern, so I'd expect readahead to kick in and keep\nthe efficiency of sequential access pattern.\n\n>> Now, if we want to enable it by default, we need a way to disable it\n>> as well and along with that, we need a way for users to specify a\n>> parallel degree. I have mentioned a few reasons why we need a\n>> parallel degree for this operation in the email [2] on the main\n>> thread.\n>>\n>> If parallel vacuum is *not* enabled by default, then I think the\n>> current way to enable is fine which is as follows: Vacuum (Parallel\n>> 2) <tbl_name>;\n>>\n>> Here, if the user doesn't specify parallel_degree, then we internally\n>> decide based on number of indexes that support a parallel vacuum with\n>> a maximum of max_parallel_maintenance_workers.\n>>\n>> If the parallel vacuum is enabled by default, then I could think of\n>> the following ways:\n>>\n>> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel\n>> <parallel_degree>) <tbl_name>;\n>>\n>> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user\n>> specifies parallel_degree as 0, then disable parallelism.\n>>\n>> (c) ... Any better ideas?\n>>\n>\n>If parallel vacuum is enabled by default, I would prefer (b) but I\n>don't think it's a good idea to accept 0 as parallel degree. If we want\n>to disable parallel vacuum we should max_parallel_maintenance_workers\n>to 0 instead.\n>\n\nIMO that just makes the interaction between vacuum options and the GUC\neven more complicated/confusing.\n\nIf we want to have a vacuum option to determine parallel degree, we\nshould probably have a vacuum option to disable parallelism using just a\nvacuum option. I don't think 0 is too bad, and disable_parallel seems a\nbit awkward. Maybe we could use NOPARALLEL (in addition to PARALLEL n).\nThat's what Oracle does, so it's not entirely without a precedent.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sun, 5 Jan 2020 02:10:34 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 6:40 AM Tomas Vondra\n<tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Jan 05, 2020 at 08:54:15AM +0900, Masahiko Sawada wrote:\n> >On Thu, Jan 2, 2020 at 9:09 PM Amit Kapila <amit.kapila16@gmail.com>\n> >wrote:\n> >>\n> >> Hi,\n> >>\n> >> I am starting a new thread for some of the decisions for a parallel\n> >> vacuum in the hope to get feedback from more people. There are\n> >> mainly two points for which we need some feedback.\n> >>\n> >> 1. Tomas Vondra has pointed out on the main thread [1] that by\n> >> default the parallel vacuum should be enabled similar to what we do\n> >> for Create Index. As proposed, the patch enables it only when the\n> >> user specifies it (ex. Vacuum (Parallel 2) <tbl_name>;). One of the\n> >> arguments in favor of enabling it by default as mentioned by Tomas is\n> >> \"It's pretty much the same thing we did with vacuum throttling - it's\n> >> disabled for explicit vacuum by default, but you can enable it. If\n> >> you're worried about VACUUM causing issues, you should set cost\n> >> delay.\". Some of the arguments against enabling it are that it will\n> >> lead to use of more resources (like CPU, I/O) which users might or\n> >> might like.\n> >>\n> >\n> >I'm a bit wary of making parallel vacuum enabled by default. Single\n> >process vacuum does sequential reads/writes on most of indexes but\n> >parallel vacuum does random access random reads/writes. I've tested\n> >parallel vacuum on HDD and confirmed the performance is good but I'm\n> >concerned that it might be cause of more disk I/O than user expected.\n> >\n>\n> I understand the concern, but it's not clear to me why to apply this\n> defensive approach just to vacuum and not to all commands. Especially\n> when we do have a way to throttle vacuum (unlike pretty much any other\n> command) if I/O really is a scarce resource.\n>\n> As the vacuum workers are separate processes, each generating requests\n> with a sequential pattern, so I'd expect readahead to kick in and keep\n> the efficiency of sequential access pattern.\n>\n\nRight, I also think so.\n\n> >> Now, if we want to enable it by default, we need a way to disable it\n> >> as well and along with that, we need a way for users to specify a\n> >> parallel degree. I have mentioned a few reasons why we need a\n> >> parallel degree for this operation in the email [2] on the main\n> >> thread.\n> >>\n> >> If parallel vacuum is *not* enabled by default, then I think the\n> >> current way to enable is fine which is as follows: Vacuum (Parallel\n> >> 2) <tbl_name>;\n> >>\n> >> Here, if the user doesn't specify parallel_degree, then we internally\n> >> decide based on number of indexes that support a parallel vacuum with\n> >> a maximum of max_parallel_maintenance_workers.\n> >>\n> >> If the parallel vacuum is enabled by default, then I could think of\n> >> the following ways:\n> >>\n> >> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel\n> >> <parallel_degree>) <tbl_name>;\n> >>\n> >> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user\n> >> specifies parallel_degree as 0, then disable parallelism.\n> >>\n> >> (c) ... Any better ideas?\n> >>\n> >\n> >If parallel vacuum is enabled by default, I would prefer (b) but I\n> >don't think it's a good idea to accept 0 as parallel degree. If we want\n> >to disable parallel vacuum we should max_parallel_maintenance_workers\n> >to 0 instead.\n> >\n>\n> IMO that just makes the interaction between vacuum options and the GUC\n> even more complicated/confusing.\n>\n\nYeah, I am also not sure if that will be a good idea.\n\n> If we want to have a vacuum option to determine parallel degree, we\n> should probably have a vacuum option to disable parallelism using just a\n> vacuum option. I don't think 0 is too bad, and disable_parallel seems a\n> bit awkward. Maybe we could use NOPARALLEL (in addition to PARALLEL n).\n> That's what Oracle does, so it's not entirely without a precedent.\n>\n\nWe can go either way (using 0 for parallel to indicate disable\nparallelism or by introducing a new option like NOPARALLEL). I think\ninitially we can avoid introducing more options and just go with\n'Parallel 0' and if we find a lot of people find it inconvenient, then\nwe can always introduce a new option later.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 Jan 2020 15:56:35 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, Jan 05, 2020 at 03:56:35PM +0530, Amit Kapila wrote:\n>>\n>> ...\n>>\n>> >If parallel vacuum is enabled by default, I would prefer (b) but I\n>> >don't think it's a good idea to accept 0 as parallel degree. If we want\n>> >to disable parallel vacuum we should max_parallel_maintenance_workers\n>> >to 0 instead.\n>> >\n>>\n>> IMO that just makes the interaction between vacuum options and the GUC\n>> even more complicated/confusing.\n>>\n>\n>Yeah, I am also not sure if that will be a good idea.\n>\n>> If we want to have a vacuum option to determine parallel degree, we\n>> should probably have a vacuum option to disable parallelism using just a\n>> vacuum option. I don't think 0 is too bad, and disable_parallel seems a\n>> bit awkward. Maybe we could use NOPARALLEL (in addition to PARALLEL n).\n>> That's what Oracle does, so it's not entirely without a precedent.\n>>\n>\n>We can go either way (using 0 for parallel to indicate disable\n>parallelism or by introducing a new option like NOPARALLEL). I think\n>initially we can avoid introducing more options and just go with\n>'Parallel 0' and if we find a lot of people find it inconvenient, then\n>we can always introduce a new option later.\n>\n\nI don't think starting with \"parallel 0\" and then maybe introducing\nNOPARALLEL sometime in the future is a good plan, because after adding\nNOPARALLEL we'd either have to remove \"parallel 0\" (breaking backwards\ncompatibility unnecessarily) or supporting both approaches.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 12:59:10 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 7:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 5, 2020 at 6:40 AM Tomas Vondra\n> <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> > On Sun, Jan 05, 2020 at 08:54:15AM +0900, Masahiko Sawada wrote:\n> > >On Thu, Jan 2, 2020 at 9:09 PM Amit Kapila <amit.kapila16@gmail.com>\n> > >wrote:\n> > >>\n> > >> Hi,\n> > >>\n> > >> I am starting a new thread for some of the decisions for a parallel\n> > >> vacuum in the hope to get feedback from more people. There are\n> > >> mainly two points for which we need some feedback.\n> > >>\n> > >> 1. Tomas Vondra has pointed out on the main thread [1] that by\n> > >> default the parallel vacuum should be enabled similar to what we do\n> > >> for Create Index. As proposed, the patch enables it only when the\n> > >> user specifies it (ex. Vacuum (Parallel 2) <tbl_name>;). One of the\n> > >> arguments in favor of enabling it by default as mentioned by Tomas is\n> > >> \"It's pretty much the same thing we did with vacuum throttling - it's\n> > >> disabled for explicit vacuum by default, but you can enable it. If\n> > >> you're worried about VACUUM causing issues, you should set cost\n> > >> delay.\". Some of the arguments against enabling it are that it will\n> > >> lead to use of more resources (like CPU, I/O) which users might or\n> > >> might like.\n> > >>\n> > >\n> > >I'm a bit wary of making parallel vacuum enabled by default. Single\n> > >process vacuum does sequential reads/writes on most of indexes but\n> > >parallel vacuum does random access random reads/writes. I've tested\n> > >parallel vacuum on HDD and confirmed the performance is good but I'm\n> > >concerned that it might be cause of more disk I/O than user expected.\n> > >\n> >\n> > I understand the concern, but it's not clear to me why to apply this\n> > defensive approach just to vacuum and not to all commands. Especially\n> > when we do have a way to throttle vacuum (unlike pretty much any other\n> > command) if I/O really is a scarce resource.\n> >\n> > As the vacuum workers are separate processes, each generating requests\n> > with a sequential pattern, so I'd expect readahead to kick in and keep\n> > the efficiency of sequential access pattern.\n> >\n>\n> Right, I also think so.\n\nOkay I understand.\n\n>\n> > >> Now, if we want to enable it by default, we need a way to disable it\n> > >> as well and along with that, we need a way for users to specify a\n> > >> parallel degree. I have mentioned a few reasons why we need a\n> > >> parallel degree for this operation in the email [2] on the main\n> > >> thread.\n> > >>\n> > >> If parallel vacuum is *not* enabled by default, then I think the\n> > >> current way to enable is fine which is as follows: Vacuum (Parallel\n> > >> 2) <tbl_name>;\n> > >>\n> > >> Here, if the user doesn't specify parallel_degree, then we internally\n> > >> decide based on number of indexes that support a parallel vacuum with\n> > >> a maximum of max_parallel_maintenance_workers.\n> > >>\n> > >> If the parallel vacuum is enabled by default, then I could think of\n> > >> the following ways:\n> > >>\n> > >> (a) Vacuum (disable_parallel) <tbl_name>; Vacuum (Parallel\n> > >> <parallel_degree>) <tbl_name>;\n> > >>\n> > >> (b) Vacuum (Parallel <parallel_degree>) <tbl_name>; If user\n> > >> specifies parallel_degree as 0, then disable parallelism.\n> > >>\n> > >> (c) ... Any better ideas?\n> > >>\n> > >\n> > >If parallel vacuum is enabled by default, I would prefer (b) but I\n> > >don't think it's a good idea to accept 0 as parallel degree. If we want\n> > >to disable parallel vacuum we should max_parallel_maintenance_workers\n> > >to 0 instead.\n> > >\n> >\n> > IMO that just makes the interaction between vacuum options and the GUC\n> > even more complicated/confusing.\n> >\n>\n> Yeah, I am also not sure if that will be a good idea.\n>\n> > If we want to have a vacuum option to determine parallel degree, we\n> > should probably have a vacuum option to disable parallelism using just a\n> > vacuum option. I don't think 0 is too bad, and disable_parallel seems a\n> > bit awkward. Maybe we could use NOPARALLEL (in addition to PARALLEL n).\n> > That's what Oracle does, so it's not entirely without a precedent.\n> >\n>\n> We can go either way (using 0 for parallel to indicate disable\n> parallelism or by introducing a new option like NOPARALLEL). I think\n> initially we can avoid introducing more options and just go with\n> 'Parallel 0' and if we find a lot of people find it inconvenient, then\n> we can always introduce a new option later.\n\nHmm I'm confused. Specifying NOPARALLEL or PARALLEL 0 is the same as\nsetting max_parallel_maintenance_workers to 0, right? We normally set\nmax_parallel_workers_per_gather to 0 to disable parallel queries on a\nquery. So I think that disabling parallel vacuum by setting\nmax_parallel_maintenance_workers to 0 is the same concept. Regarding\nproposed two options we already have storage parameter\nparallel_workers and it accepts 0 but PARALLEL 0 looks like\ncontradicted at a glance. And NOPARALLEL is inconsistent with existing\nDISABLE_XXX options and it's a bit awkward to specify like (NOPARALLEL\noff).\n\nRegards,\n\n--\nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 21:17:57 +0900",
"msg_from": "Masahiko Sawada <sawada.mshk@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, Jan 05, 2020 at 09:17:57PM +0900, Masahiko Sawada wrote:\n>On Sun, Jan 5, 2020 at 7:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>>\n>> ...\n>>\n>> > If we want to have a vacuum option to determine parallel degree, we\n>> > should probably have a vacuum option to disable parallelism using just a\n>> > vacuum option. I don't think 0 is too bad, and disable_parallel seems a\n>> > bit awkward. Maybe we could use NOPARALLEL (in addition to PARALLEL n).\n>> > That's what Oracle does, so it's not entirely without a precedent.\n>> >\n>>\n>> We can go either way (using 0 for parallel to indicate disable\n>> parallelism or by introducing a new option like NOPARALLEL). I think\n>> initially we can avoid introducing more options and just go with\n>> 'Parallel 0' and if we find a lot of people find it inconvenient, then\n>> we can always introduce a new option later.\n>\n>Hmm I'm confused. Specifying NOPARALLEL or PARALLEL 0 is the same as\n>setting max_parallel_maintenance_workers to 0, right? We normally set\n>max_parallel_workers_per_gather to 0 to disable parallel queries on a\n>query. So I think that disabling parallel vacuum by setting\n>max_parallel_maintenance_workers to 0 is the same concept. Regarding\n>proposed two options we already have storage parameter\n>parallel_workers and it accepts 0 but PARALLEL 0 looks like\n>contradicted at a glance. And NOPARALLEL is inconsistent with existing\n>DISABLE_XXX options and it's a bit awkward to specify like (NOPARALLEL\n>off).\n>\n\nMy understanding is the motivation for new vacuum options is a claim\nthat m_p_m_w is not sufficient/suitable for the vacuum case. I've\nexpressed my doubts about this, but let's assume it's the right\nsolution. To me it seems a bit confusing to just fall back to m_p_m_w\nwhen it comes to disabling the parallel vacuum.\n\nSo if we think we need an option to determine vacuum parallel degree, we\nshould have an option to disable parallelism too. I don't care much if\nit's called DISABLE_PARALLEL, NOPARALLEL or PARALLEL 0, as long as we\nmake our mind and don't unnecessarily break it in the next release.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 14:39:45 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, 5 Jan 2020 at 22:39, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n>\n> On Sun, Jan 05, 2020 at 09:17:57PM +0900, Masahiko Sawada wrote:\n> >On Sun, Jan 5, 2020 at 7:26 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >>\n> >> ...\n> >>\n> >> > If we want to have a vacuum option to determine parallel degree, we\n> >> > should probably have a vacuum option to disable parallelism using just a\n> >> > vacuum option. I don't think 0 is too bad, and disable_parallel seems a\n> >> > bit awkward. Maybe we could use NOPARALLEL (in addition to PARALLEL n).\n> >> > That's what Oracle does, so it's not entirely without a precedent.\n> >> >\n> >>\n> >> We can go either way (using 0 for parallel to indicate disable\n> >> parallelism or by introducing a new option like NOPARALLEL). I think\n> >> initially we can avoid introducing more options and just go with\n> >> 'Parallel 0' and if we find a lot of people find it inconvenient, then\n> >> we can always introduce a new option later.\n> >\n> >Hmm I'm confused. Specifying NOPARALLEL or PARALLEL 0 is the same as\n> >setting max_parallel_maintenance_workers to 0, right? We normally set\n> >max_parallel_workers_per_gather to 0 to disable parallel queries on a\n> >query. So I think that disabling parallel vacuum by setting\n> >max_parallel_maintenance_workers to 0 is the same concept. Regarding\n> >proposed two options we already have storage parameter\n> >parallel_workers and it accepts 0 but PARALLEL 0 looks like\n> >contradicted at a glance. And NOPARALLEL is inconsistent with existing\n> >DISABLE_XXX options and it's a bit awkward to specify like (NOPARALLEL\n> >off).\n> >\n>\n> My understanding is the motivation for new vacuum options is a claim\n> that m_p_m_w is not sufficient/suitable for the vacuum case. I've\n> expressed my doubts about this, but let's assume it's the right\n> solution. To me it seems a bit confusing to just fall back to m_p_m_w\n> when it comes to disabling the parallel vacuum.\n>\n> So if we think we need an option to determine vacuum parallel degree, we\n> should have an option to disable parallelism too. I don't care much if\n> it's called DISABLE_PARALLEL, NOPARALLEL or PARALLEL 0, as long as we\n> make our mind and don't unnecessarily break it in the next release.\n\nOkay I got your point. It's just an idea but how about controlling\nparallel vacuum using two options. That is, we have PARALLEL option\nthat takes a boolean value (true by default) and enables/disables\nparallel vacuum. And we have WORKERS option that takes an integer more\nthan 1 to specify the number of workers. Of course we should raise an\nerror if only WORKERS option is specified. WORKERS option is optional.\nIf WORKERS option is omitted the number of workers is determined based\non the number of indexes on the table.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 23:08:03 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 7:38 PM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 5 Jan 2020 at 22:39, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> >\n> >\n> > So if we think we need an option to determine vacuum parallel degree, we\n> > should have an option to disable parallelism too. I don't care much if\n> > it's called DISABLE_PARALLEL, NOPARALLEL or PARALLEL 0, as long as we\n> > make our mind and don't unnecessarily break it in the next release.\n> >\n\nFair point. I favor parallel 0 as that avoids adding more options and\nalso it is not very clear whether that is required at all. Till now,\nif I see most people who have shared their opinion seems to favor this\nas compared to another idea where we need to introduce more options.\n\n>\n> Okay I got your point. It's just an idea but how about controlling\n> parallel vacuum using two options. That is, we have PARALLEL option\n> that takes a boolean value (true by default) and enables/disables\n> parallel vacuum. And we have WORKERS option that takes an integer more\n> than 1 to specify the number of workers. Of course we should raise an\n> error if only WORKERS option is specified. WORKERS option is optional.\n> If WORKERS option is omitted the number of workers is determined based\n> on the number of indexes on the table.\n>\n\nI think this would add failure modes without serving any additional\npurpose. Sure, it might give the better feeling that we have separate\noptions to enable/disable parallelism and then specify the number of\nworkers with a separate option, but we already have various examples\nas shared by me previously where setting the value as zero means the\noption is disabled, so why to invent something new here?\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Sun, 5 Jan 2020 19:58:21 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Sun, 5 Jan 2020 at 23:28, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Sun, Jan 5, 2020 at 7:38 PM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Sun, 5 Jan 2020 at 22:39, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > >\n> > >\n> > > So if we think we need an option to determine vacuum parallel degree, we\n> > > should have an option to disable parallelism too. I don't care much if\n> > > it's called DISABLE_PARALLEL, NOPARALLEL or PARALLEL 0, as long as we\n> > > make our mind and don't unnecessarily break it in the next release.\n> > >\n>\n> Fair point. I favor parallel 0 as that avoids adding more options and\n> also it is not very clear whether that is required at all. Till now,\n> if I see most people who have shared their opinion seems to favor this\n> as compared to another idea where we need to introduce more options.\n>\n> >\n> > Okay I got your point. It's just an idea but how about controlling\n> > parallel vacuum using two options. That is, we have PARALLEL option\n> > that takes a boolean value (true by default) and enables/disables\n> > parallel vacuum. And we have WORKERS option that takes an integer more\n> > than 1 to specify the number of workers. Of course we should raise an\n> > error if only WORKERS option is specified. WORKERS option is optional.\n> > If WORKERS option is omitted the number of workers is determined based\n> > on the number of indexes on the table.\n> >\n>\n> I think this would add failure modes without serving any additional\n> purpose. Sure, it might give the better feeling that we have separate\n> options to enable/disable parallelism and then specify the number of\n> workers with a separate option, but we already have various examples\n> as shared by me previously where setting the value as zero means the\n> option is disabled, so why to invent something new here?\n\nI just felt it's not intuitive that specifying parallel degree to 0\nmeans to disable parallel vacuum. But since majority of hackers seem\nto agree with this syntax I'm not going to insist on that any further.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Mon, 6 Jan 2020 15:27:20 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Mon, 6 Jan 2020 at 15:27, Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Sun, 5 Jan 2020 at 23:28, Amit Kapila <amit.kapila16@gmail.com> wrote:\n> >\n> > On Sun, Jan 5, 2020 at 7:38 PM Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > On Sun, 5 Jan 2020 at 22:39, Tomas Vondra <tomas.vondra@2ndquadrant.com> wrote:\n> > > >\n> > > >\n> > > > So if we think we need an option to determine vacuum parallel degree, we\n> > > > should have an option to disable parallelism too. I don't care much if\n> > > > it's called DISABLE_PARALLEL, NOPARALLEL or PARALLEL 0, as long as we\n> > > > make our mind and don't unnecessarily break it in the next release.\n> > > >\n> >\n> > Fair point. I favor parallel 0 as that avoids adding more options and\n> > also it is not very clear whether that is required at all. Till now,\n> > if I see most people who have shared their opinion seems to favor this\n> > as compared to another idea where we need to introduce more options.\n> >\n> > >\n> > > Okay I got your point. It's just an idea but how about controlling\n> > > parallel vacuum using two options. That is, we have PARALLEL option\n> > > that takes a boolean value (true by default) and enables/disables\n> > > parallel vacuum. And we have WORKERS option that takes an integer more\n> > > than 1 to specify the number of workers. Of course we should raise an\n> > > error if only WORKERS option is specified. WORKERS option is optional.\n> > > If WORKERS option is omitted the number of workers is determined based\n> > > on the number of indexes on the table.\n> > >\n> >\n> > I think this would add failure modes without serving any additional\n> > purpose. Sure, it might give the better feeling that we have separate\n> > options to enable/disable parallelism and then specify the number of\n> > workers with a separate option, but we already have various examples\n> > as shared by me previously where setting the value as zero means the\n> > option is disabled, so why to invent something new here?\n>\n> I just felt it's not intuitive that specifying parallel degree to 0\n> means to disable parallel vacuum. But since majority of hackers seem\n> to agree with this syntax I'm not going to insist on that any further.\n>\n\nOkay I'm going to go with enabling parallel vacuum by default and\ndisabling it by specifying PARALLEL 0.\n\nFor combination of VACUUM command options, although parallel vacuum is\nenabled by default and VACUUM FULL doesn't support it yet, 'VACUUM\n(FULL)' would work. On the other hand 'VACUUM (FULL, PARALLEL)' and\n'VACUUM(FULL, PARALLEL 1)' would not work with error. And I think it\nis better if 'VACUUM (FULL, PARALLEL 0)' also work but I'd like to\nhear opinions.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:01:17 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Wed, Jan 8, 2020 at 11:31 AM Masahiko Sawada\n<masahiko.sawada@2ndquadrant.com> wrote:\n>\n> On Mon, 6 Jan 2020 at 15:27, Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > I just felt it's not intuitive that specifying parallel degree to 0\n> > means to disable parallel vacuum. But since majority of hackers seem\n> > to agree with this syntax I'm not going to insist on that any further.\n> >\n>\n> Okay I'm going to go with enabling parallel vacuum by default and\n> disabling it by specifying PARALLEL 0.\n>\n\nSounds fine to me. However, I have already started updating the patch\nfor that. I shall post the new version today or tomorrow. Is that\nfine with you?\n\n> For combination of VACUUM command options, although parallel vacuum is\n> enabled by default and VACUUM FULL doesn't support it yet, 'VACUUM\n> (FULL)' would work. On the other hand 'VACUUM (FULL, PARALLEL)' and\n> 'VACUUM(FULL, PARALLEL 1)' would not work with error. And I think it\n> is better if 'VACUUM (FULL, PARALLEL 0)' also work but I'd like to\n> hear opinions.\n>\n\nI agree with all these points.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Jan 2020 12:01:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Wed, 8 Jan 2020 at 15:31, Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n> On Wed, Jan 8, 2020 at 11:31 AM Masahiko Sawada\n> <masahiko.sawada@2ndquadrant.com> wrote:\n> >\n> > On Mon, 6 Jan 2020 at 15:27, Masahiko Sawada\n> > <masahiko.sawada@2ndquadrant.com> wrote:\n> > >\n> > > I just felt it's not intuitive that specifying parallel degree to 0\n> > > means to disable parallel vacuum. But since majority of hackers seem\n> > > to agree with this syntax I'm not going to insist on that any further.\n> > >\n> >\n> > Okay I'm going to go with enabling parallel vacuum by default and\n> > disabling it by specifying PARALLEL 0.\n> >\n>\n> Sounds fine to me. However, I have already started updating the patch\n> for that. I shall post the new version today or tomorrow. Is that\n> fine with you?\n\nYes, that's fine. Thanks.\n\nRegards,\n\n-- \nMasahiko Sawada http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Wed, 8 Jan 2020 16:34:51 +0900",
"msg_from": "Masahiko Sawada <masahiko.sawada@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: parallel vacuum options/syntax"
},
{
"msg_contents": "On Wed, Jan 8, 2020 at 12:01 PM Amit Kapila <amit.kapila16@gmail.com> wrote:\n>\n>\n> > For combination of VACUUM command options, although parallel vacuum is\n> > enabled by default and VACUUM FULL doesn't support it yet, 'VACUUM\n> > (FULL)' would work. On the other hand 'VACUUM (FULL, PARALLEL)' and\n> > 'VACUUM(FULL, PARALLEL 1)' would not work with error. And I think it\n> > is better if 'VACUUM (FULL, PARALLEL 0)' also work but I'd like to\n> > hear opinions.\n> >\n\nOn again thinking about whether we should allow VACUUM (FULL, PARALLEL\n0) case, I am not sure, so, for now, the patch [1] is throwing error\nfor that case, but we can modify it if we want.\n\n[1] - https://www.postgresql.org/message-id/CAA4eK1JxWAYTSM4NpTi7Tz%3DsPetbWBWZPpHKxLoEKb%3DgMi%3DGGA%40mail.gmail.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Jan 2020 18:49:41 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": true,
"msg_subject": "Re: parallel vacuum options/syntax"
}
] |
[
{
"msg_contents": "Hi all,\n\nHappy new year to all!\n\nAs we have entered in January, the commit fest for 2020-01 has\nofficially begun, and I have switched the status of this commit fest\nto \"In Progress\" a couple of minutes ago. Unfortunately, we still\nlack a commit fest manager.\n\nAre there any volunteers? Please note that I have taken care of the\nlast one, so this time I am out.\n\nThanks,\n--\nMichael",
"msg_date": "Thu, 2 Jan 2020 23:22:33 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Commit fest manager for 2020-01"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 11:22:33PM +0900, Michael Paquier wrote:\n>Hi all,\n>\n>Happy new year to all!\n>\n>As we have entered in January, the commit fest for 2020-01 has\n>officially begun, and I have switched the status of this commit fest\n>to \"In Progress\" a couple of minutes ago. Unfortunately, we still\n>lack a commit fest manager.\n>\n>Are there any volunteers? Please note that I have taken care of the\n>last one, so this time I am out.\n>\n\nIt's probably time I've done one of these, so if there are no other\nvolunteers I'll take care of it this one.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Thu, 2 Jan 2020 23:34:55 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2020-01"
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 11:34:55PM +0100, Tomas Vondra wrote:\n> It's probably time I've done one of these, so if there are no other\n> volunteers I'll take care of it this one.\n\nNobody has raised his/her hand yet, but let's see. If you take care\nof it, that would be great. Thanks!\n--\nMichael",
"msg_date": "Fri, 3 Jan 2020 15:51:32 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest manager for 2020-01"
},
{
"msg_contents": "On Fri, 3 Jan 2020 at 11:51, Michael Paquier <michael@paquier.xyz> wrote:\n\n> On Thu, Jan 02, 2020 at 11:34:55PM +0100, Tomas Vondra wrote:\n> > It's probably time I've done one of these, so if there are no other\n> > volunteers I'll take care of it this one.\n>\n> Nobody has raised his/her hand yet, but let's see. If you take care\n> of it, that would be great. Thanks!\n> --\n> Michael\n\n\n\n> I want to be this time. This is my first time doing this.\n-- \nIbrar Ahmed",
"msg_date": "Fri, 3 Jan 2020 13:29:23 +0500",
"msg_from": "Ibrar Ahmed <ibrar.ahmad@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2020-01"
},
{
"msg_contents": "On Fri, Jan 03, 2020 at 01:29:23PM +0500, Ibrar Ahmed wrote:\n>On Fri, 3 Jan 2020 at 11:51, Michael Paquier <michael@paquier.xyz> wrote:\n>\n>> On Thu, Jan 02, 2020 at 11:34:55PM +0100, Tomas Vondra wrote:\n>> > It's probably time I've done one of these, so if there are no other\n>> > volunteers I'll take care of it this one.\n>>\n>> Nobody has raised his/her hand yet, but let's see. If you take care\n>> of it, that would be great. Thanks!\n>> --\n>> Michael\n>\n> I want to be this time. This is my first time doing this.\n>-- \n>Ibrar Ahmed\n\nIt's not clear to me what's the process for picking a CFM when there are\nmultiple volunteers.\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services \n\n\n",
"msg_date": "Sat, 4 Jan 2020 20:04:25 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2020-01"
},
{
"msg_contents": "Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n> On Fri, Jan 03, 2020 at 01:29:23PM +0500, Ibrar Ahmed wrote:\n>> I want to be this time. This is my first time doing this.\n\n> It's not clear to me what's the process for picking a CFM when there are\n> multiple volunteers.\n\nUp to now we haven't needed a process for that ;-)\n\nI'd say you're it, since you volunteered first; and if you want\nto let Ibrar help out, that's fine, but it's your call.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 18:19:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2020-01"
},
{
"msg_contents": "On Sat, Jan 04, 2020 at 06:19:33PM -0500, Tom Lane wrote:\n>Tomas Vondra <tomas.vondra@2ndquadrant.com> writes:\n>> On Fri, Jan 03, 2020 at 01:29:23PM +0500, Ibrar Ahmed wrote:\n>>> I want to be this time. This is my first time doing this.\n>\n>> It's not clear to me what's the process for picking a CFM when there are\n>> multiple volunteers.\n>\n>Up to now we haven't needed a process for that ;-)\n>\n>I'd say you're it, since you volunteered first;\n\nOK\n\n>and if you want to let Ibrar help out, that's fine, but it's your call.\n>\n\nI'm not against doing that, but I don't know how to split the work.\n\n\nregards\n\n-- \nTomas Vondra http://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 04:55:18 +0100",
"msg_from": "Tomas Vondra <tomas.vondra@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: Commit fest manager for 2020-01"
},
{
"msg_contents": "On Sun, Jan 05, 2020 at 04:55:18AM +0100, Tomas Vondra wrote:\n> On Sat, Jan 04, 2020 at 06:19:33PM -0500, Tom Lane wrote:\n>> I'd say you're it, since you volunteered first;\n> \n> OK\n\nSounds like a plan then. \n\n> I'm not against doing that, but I don't know how to split the work.\n\nYou could also be two independent entities when it comes to review\npatches. There are so many entries that when it comes to\nclassification of the patches there is unlikely going to be a\nconflict. I'll try to do my own share of patches to look at, but\nthat's business as usual.\n--\nMichael",
"msg_date": "Mon, 6 Jan 2020 11:01:45 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": true,
"msg_subject": "Re: Commit fest manager for 2020-01"
}
] |
[
{
"msg_contents": "Is there any appetite for use of array initializer rather than memset, as in\nattached ? So far, I only looked for \"memset.*null\", and I can't see that any\nof these are hot paths, but saves a cycle or two and a line of code for each.\n\ngcc 4.9.2 with -O2 emits smaller code with array initializer than with inlined\ncall to memset.\n\n$ wc -l contrib/pageinspect/heapfuncs.S? \n 22159 contrib/pageinspect/heapfuncs.S0\n 22011 contrib/pageinspect/heapfuncs.S1\n\nAlso true of gcc 5.4. And 7.3:\n\n 25294 contrib/pageinspect/heapfuncs.S0\n 25234 contrib/pageinspect/heapfuncs.S1",
"msg_date": "Thu, 2 Jan 2020 10:38:35 -0600",
"msg_from": "Justin Pryzby <pryzby@telsasoft.com>",
"msg_from_op": true,
"msg_subject": "avoid some calls to memset with array initializer"
},
{
"msg_contents": "I think this proposal is the same as [1], so you might want to read that thread.\n\n\n\n1- https://www.postgresql.org/message-id/flat/201DD0641B056142AC8C6645EC1B5F62014B919631%40SYD1217\n\nOn Thu, Jan 2, 2020 at 5:38 PM Justin Pryzby <pryzby@telsasoft.com> wrote:\n>\n> Is there any appetite for use of array initializer rather than memset, as in\n> attached ? So far, I only looked for \"memset.*null\", and I can't see that any\n> of these are hot paths, but saves a cycle or two and a line of code for each.\n>\n> gcc 4.9.2 with -O2 emits smaller code with array initializer than with inlined\n> call to memset.\n>\n> $ wc -l contrib/pageinspect/heapfuncs.S?\n> 22159 contrib/pageinspect/heapfuncs.S0\n> 22011 contrib/pageinspect/heapfuncs.S1\n>\n> Also true of gcc 5.4. And 7.3:\n>\n> 25294 contrib/pageinspect/heapfuncs.S0\n> 25234 contrib/pageinspect/heapfuncs.S1\n\n\n\n-- \nRaúl Marín Rodríguez\ncarto.com\n\n\n",
"msg_date": "Thu, 2 Jan 2020 18:18:54 +0100",
"msg_from": "rmrodriguez@carto.com",
"msg_from_op": false,
"msg_subject": "Re: avoid some calls to memset with array initializer"
}
] |
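The two styles compared in this thread can be sketched in a standalone snippet (the function names below are invented for illustration; this is not the patch itself). Both zero a local `nulls` array; the array-initializer form lets the compiler emit the zeroing inline, which is where the smaller assembly output comes from.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NATTS 4  /* arbitrary column count for the sketch */

/* The style the patch removes: an explicit memset of a local array. */
static void fill_nulls_memset(bool nulls[NATTS])
{
    memset(nulls, 0, NATTS * sizeof(bool));
}

/* The style the patch introduces: a C99 array initializer. Elements
 * not explicitly listed are zero-initialized, so {0} zeroes it all.
 * This check confirms the two styles produce identical contents. */
static bool styles_agree(void)
{
    bool a[NATTS];
    bool b[NATTS] = {0};

    fill_nulls_memset(a);
    return memcmp(a, b, sizeof(a)) == 0;
}
```

The behavioural equivalence is guaranteed by the C standard's initialization rules, so the choice between the two is purely about generated code and readability.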
[
{
"msg_contents": "Hi hackers,\n\nWhile reading code and doing some testing, I found that if we create a\ntemporary table with same name as we created a normal(global) table, then\n\\d is showing only temporary table info. I think, ideally we should\ndisplay info of both the tables. Below is the example:\n\npostgres=# create table t (a int);\nCREATE TABLE\npostgres=# create temporary table t (a int);\nCREATE TABLE\npostgres=# \\d\n List of relations\n Schema | Name | Type | Owner\n-----------+------+-------+----------\n pg_temp_2 | t | table | mahendra\n(1 row)\n\n\n*Expected behavior:*\npostgres=# \\d\n List of relations\n Schema | Name | Type | Owner\n-----------+------+-------+----------\n pg_temp_2 | t | table | mahendra\n public | t | table | mahendra\n(2 rows)\n\n\nFor me, it looks like a bug.\n\nI debugged and found that due to the below code, we are showing only temp table\ninformation.\n\n /*\n * If it is in the path, it might still not be visible; it could be\n * hidden by another relation of the same name earlier in the path.\nSo\n * we must do a slow check for conflicting relations.\n */\n char *relname = NameStr(relform->relname);\n ListCell *l;\n\n visible = false;\n foreach(l, activeSearchPath)\n {\n Oid namespaceId = lfirst_oid(l);\n\n if (namespaceId == relnamespace)\n {\n /* Found it first in path */\n visible = true;\n break;\n }\n if (OidIsValid(get_relname_relid(relname, namespaceId)))\n {\n /* Found something else first in path */\n break;\n }\n }\n\npostgres=# select oid, relname , relnamespace , reltype from pg_class where\nrelname = 't';\n oid | relname | relnamespace | reltype\n-------+---------+--------------+---------\n 16384 | t | 2200 | 16386\n 16389 | t | 16387 | 16391\n(2 rows)\n\nFor the above example, we have 3 namespaceId in the activeSearchPath list\n(16387->temporary_table, 11, 2200->normal-table). As I can see that 16387\nis the 1st oid in the list, which corresponds to the temp table, we are\ndisplaying info of the temp table; but when we are checking visibility of the normal\ntable, we exit the list after comparing with the 1st oid, because 16387\nis the 1st in the list and that oid is valid.\n\nIf this is a bug, then please let me know. I will be happy to fix this.\n\nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com",
"msg_date": "Thu, 2 Jan 2020 23:29:20 +0530",
"msg_from": "Mahendra Singh <mahi6run@gmail.com>",
"msg_from_op": true,
"msg_subject": "\\d is not showing global(normal) table info if we create temporary\n table with same name as global table"
},
{
"msg_contents": "On Thu, Jan 2, 2020 at 12:59 PM Mahendra Singh <mahi6run@gmail.com> wrote:\n> While reading code and doing some testing, I found that if we create a temporary table with same name as we created a normal(global) table, then \\d is showing only temporary table info.\n\nThat's because the query that \\d issues to the backend includes:\n\n AND pg_catalog.pg_table_is_visible(c.oid)\n\nSo I'd say it's not a bug, because that bit of SQL didn't get included\nin the query by accident.\n\nWhether it is the behavior that everybody wants is debatable, but I\nthink it's been this way since 2002. See commit\n039cb479884abc28ee494f6cf6c5e7ec26b88fc8.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 2 Jan 2020 13:39:39 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \\d is not showing global(normal) table info if we create\n temporary table with same name as global table"
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> On Thu, Jan 2, 2020 at 12:59 PM Mahendra Singh <mahi6run@gmail.com> wrote:\n>> While reading code and doing some testing, I found that if we create a temporary table with same name as we created a normal(global) table, then \\d is showing only temporary table info.\n\n> That's because the query that \\d issues to the backend includes:\n> AND pg_catalog.pg_table_is_visible(c.oid)\n> So I'd say it's not a bug, because that bit of SQL didn't get included\n> in the query by accident.\n\nIt's also documented:\n\n Whenever the pattern parameter is omitted completely, the \\d commands\n display all objects that are visible in the current schema search path\n — this is equivalent to using * as the pattern. (An object is said to\n be visible if its containing schema is in the search path and no\n object of the same kind and name appears earlier in the search\n path. This is equivalent to the statement that the object can be\n referenced by name without explicit schema qualification.) To see all\n objects in the database regardless of visibility, use *.* as the\n pattern.\n\nPerhaps that's not clear enough, but the behavior is certainly as-intended.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 02 Jan 2020 14:10:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: \\d is not showing global(normal) table info if we create\n temporary table with same name as global table"
},
{
"msg_contents": "On Fri, 3 Jan 2020 at 00:40, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Robert Haas <robertmhaas@gmail.com> writes:\n> > On Thu, Jan 2, 2020 at 12:59 PM Mahendra Singh <mahi6run@gmail.com> wrote:\n> >> While reading code and doing some testing, I found that if we create a temporary table with same name as we created a normal(global) table, then \\d is showing only temporary table info.\n>\n> > That's because the query that \\d issues to the backend includes:\n> > AND pg_catalog.pg_table_is_visible(c.oid)\n> > So I'd say it's not a bug, because that bit of SQL didn't get included\n> > in the query by accident.\n>\n> It's also documented:\n>\n> Whenever the pattern parameter is omitted completely, the \\d commands\n> display all objects that are visible in the current schema search path\n> — this is equivalent to using * as the pattern. (An object is said to\n> be visible if its containing schema is in the search path and no\n> object of the same kind and name appears earlier in the search\n> path. This is equivalent to the statement that the object can be\n> referenced by name without explicit schema qualification.) To see all\n> objects in the database regardless of visibility, use *.* as the\n> pattern.\n>\n> Perhaps that's not clear enough, but the behavior is certainly as-intended.\n>\n> regards, tom lane\n\nThanks Robert and Tom for quick detailed response.\n\n-- \nThanks and Regards\nMahendra Singh Thalor\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Fri, 3 Jan 2020 23:55:49 +0530",
"msg_from": "Mahendra Singh Thalor <mahi6run@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: \\d is not showing global(normal) table info if we create\n temporary table with same name as global table"
}
] |
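The early-exit behaviour Mahendra describes can be reproduced outside the backend with a small standalone model of the quoted visibility loop (the `toy_` names and the toy catalog below are invented for illustration; this is not backend code): the walk over `activeSearchPath` stops at the first namespace that contains *any* relation of the given name, so a later `public` entry is never reached when a temp schema comes first.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef unsigned int Oid;

/* Toy catalog mirroring the pg_class rows shown in the first message:
 * both rows are relations named "t", one in public, one in pg_temp_2. */
typedef struct { Oid oid; Oid relnamespace; } ToyRel;
static const ToyRel toy_pg_class[] = {
    {16384, 2200},   /* public.t    */
    {16389, 16387},  /* pg_temp_2.t */
};

/* Stand-in for get_relname_relid(): does namespace nsp contain "t"? */
static Oid toy_get_relid(Oid nsp)
{
    for (size_t i = 0; i < sizeof(toy_pg_class) / sizeof(toy_pg_class[0]); i++)
        if (toy_pg_class[i].relnamespace == nsp)
            return toy_pg_class[i].oid;
    return 0;  /* InvalidOid */
}

/* The quoted loop from the backend, reduced to its decision logic. */
static bool toy_is_visible(Oid relnamespace, const Oid *path, size_t pathlen)
{
    for (size_t i = 0; i < pathlen; i++)
    {
        if (path[i] == relnamespace)
            return true;   /* found it first in path */
        if (toy_get_relid(path[i]) != 0)
            return false;  /* something else of the same name shadows it */
    }
    return false;
}
```

Running this with the search path from the example (`16387, 11, 2200`) shows the temp table reported visible and `public.t` reported shadowed, which is exactly the intended behaviour Tom and Robert point to.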
[
{
"msg_contents": "Add basic TAP tests for psql's tab-completion logic.\n\nUp to now, psql's tab-complete.c has had exactly no regression test\ncoverage. This patch is an experimental attempt to add some.\n\nThis needs Perl's IO::Pty module, which isn't installed everywhere,\nso the test script just skips all tests if that's not present.\nThere may be other portability gotchas too, so I await buildfarm\nresults with interest.\n\nSo far this just covers a few very basic keyword-completion and\nquery-driven-completion scenarios, which should be enough to let us\nget a feel for whether this is practical at all from a portability\nstandpoint. If it is, there's lots more that can be done.\n\nDiscussion: https://postgr.es/m/10967.1577562752@sss.pgh.pa.us\n\nBranch\n------\nmaster\n\nDetails\n-------\nhttps://git.postgresql.org/pg/commitdiff/7c015045b9141cc30272930ea88cfa5df47240b7\n\nModified Files\n--------------\nconfigure | 2 +\nconfigure.in | 1 +\nsrc/Makefile.global.in | 1 +\nsrc/bin/psql/.gitignore | 2 +-\nsrc/bin/psql/Makefile | 10 +++\nsrc/bin/psql/t/010_tab_completion.pl | 122 +++++++++++++++++++++++++++++++++++\nsrc/test/perl/PostgresNode.pm | 67 +++++++++++++++++++\n7 files changed, 204 insertions(+), 1 deletion(-)",
"msg_date": "Thu, 02 Jan 2020 20:02:29 +0000",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Tom Lane 2020-01-02 <E1in6ft-0004zR-6l@gemulon.postgresql.org>\n> Add basic TAP tests for psql's tab-completion logic.\n\nThe \\DRD test fails on Debian/unstable:\n\n# check case-sensitive keyword replacement\n# XXX the output here might vary across readline versions\ncheck_completion(\n \"\\\\DRD\\t\",\n \"\\\\DRD\\b\\b\\bdrds \",\n \"complete \\\\DRD<tab> to \\\\drds\");\n\n\n00:58:02 rm -rf '/<<PKGBUILDDIR>>/build/src/bin/psql'/tmp_check\n00:58:02 /bin/mkdir -p '/<<PKGBUILDDIR>>/build/src/bin/psql'/tmp_check\n00:58:02 cd /<<PKGBUILDDIR>>/build/../src/bin/psql && TESTDIR='/<<PKGBUILDDIR>>/build/src/bin/psql' PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/postgresql/13/bin:$PATH\" LD_LIBRARY_PATH=\"/<<PKGBUILDDIR>>/build/tmp_install/usr/lib/x86_64-linux-gnu\" PGPORT='65432' PG_REGRESS='/<<PKGBUILDDIR>>/build/src/bin/psql/../../../src/test/regress/pg_regress' REGRESS_SHLIB='/<<PKGBUILDDIR>>/build/src/test/regress/regress.so' /usr/bin/prove -I /<<PKGBUILDDIR>>/build/../src/test/perl/ -I /<<PKGBUILDDIR>>/build/../src/bin/psql t/*.pl\n00:58:09\n00:58:09 # Failed test 'complete \\DRD<tab> to \\drds'\n00:58:09 # at t/010_tab_completion.pl line 64.\n00:58:09 # Looks like you failed 1 test of 12.\n00:58:09 t/010_tab_completion.pl ..\n00:58:09 Dubious, test returned 1 (wstat 256, 0x100)\n00:58:09 Failed 1/12 subtests\n00:58:09\n00:58:09 Test Summary Report\n00:58:09 -------------------\n00:58:09 t/010_tab_completion.pl (Wstat: 256 Tests: 12 Failed: 1)\n00:58:09 Failed test: 11\n00:58:09 Non-zero exit status: 1\n00:58:09 Files=1, Tests=12, 7 wallclock secs ( 0.01 usr 0.01 sys + 0.77 cusr 0.23 csys = 1.02 CPU)\n00:58:09 Result: FAIL\n00:58:09 make[2]: *** [Makefile:87: check] Error 1\n\nhttps://pgdgbuild.dus.dg-i.net/job/postgresql-13-binaries/architecture=amd64,distribution=sid/444/console\n\n\nShouldn't this print some \"expected foo, got bar\" diagnostics instead\nof just dying?\n\nChristoph\n\n\n",
"msg_date": "Fri, 3 Jan 2020 12:01:29 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Thu, Jan 02, 2020 at 08:02:29PM +0000, Tom Lane wrote:\n> Add basic TAP tests for psql's tab-completion logic.\n> \n> Up to now, psql's tab-complete.c has had exactly no regression test\n> coverage. This patch is an experimental attempt to add some.\n> \n> This needs Perl's IO::Pty module, which isn't installed everywhere,\n> so the test script just skips all tests if that's not present.\n> There may be other portability gotchas too, so I await buildfarm\n> results with interest.\n\nReading through the commit logs, I am not a fan of this part:\n+if ($ENV{with_readline} ne 'yes')\n+{\n+ plan skip_all => 'readline is not supported by this build';\n+}\n+\n+# If we don't have IO::Pty, forget it, because IPC::Run depends on that\n+# to support pty connections\n+eval { require IO::Pty; };\n+if ($@)\n+{\n+ plan skip_all => 'IO::Pty is needed to run this test';\n+}\n\nThis has the disadvantage to have people never actually notice if the\ntests are running or not because this does not generate a dependency\nerror. Skipping things if libreadline is not around is perfectly fine\nIMO, but I think that we should harden things for IO::Pty by removing\nthis skipping part, and by adding a test in configure.in's\nAX_PROG_PERL_MODULES. That would be also more consistent with the\napproach we take with other tests.\n--\nMichael",
"msg_date": "Fri, 3 Jan 2020 20:46:07 +0900",
"msg_from": "Michael Paquier <michael@paquier.xyz>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane 2020-01-02 <E1in6ft-0004zR-6l@gemulon.postgresql.org>\n>> Add basic TAP tests for psql's tab-completion logic.\n\n> The \\DRD test fails on Debian/unstable:\n\nIndeed. It appears that recent libedit breaks tab-completion for\nwords involving a backslash, which is the fault of this upstream\ncommit:\n\nhttp://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libedit/filecomplete.c.diff?r1=1.52&r2=1.53\n\nBasically what that's doing is applying de-backslashing to EVERY\nword that completion is attempted on, whether it might be a filename\nor not. So what psql_complete sees in this test case is just \"DRD\"\nwhich of course it does not recognize as a possible psql backslash\ncommand.\n\nI found out while investigating this that the libedit version shipping\nwith buster (3.1-20181209) is differently broken for the same case:\ninstead of inappropriate forced de-escaping of the input of the\napplication-specific completion function, it applies inappropriate\nforced escaping to the output of said function, so that when we see\n\"\\DRD\" and return \"\\drds\", what comes out to the user is \"\\\\drds\".\nlibedit apparently needs a regression test suite even worse than we do.\n\nI was kind of despairing of fixing this last night, but in the light\nof morning it occurs to me that there's a possible workaround for the\nde-escape bug: we could make psql_completion ignore the passed \"text\"\nstring and look at the original input buffer, as\nget_previous_words() is already doing. I don't see any way to\ndodge buster's bug, though.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 08:52:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Shouldn't this print some \"expected foo, got bar\" diagnostics instead\n> of just dying?\n\nBTW, as far as that goes, we do: see for instance the tail end of\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2020-01-02%2020%3A04%3A03\n\nok 8 - offer multiple table choices\nok 9 - finish completion of one of multiple table choices\nok 10 - \\r works\nnot ok 11 - complete \\DRD<tab> to \\drds\n\n# Failed test 'complete \\DRD<tab> to \\drds'\n# at t/010_tab_completion.pl line 64.\n# Actual output was \"\\DRD\u0007\"\nok 12 - \\r works\n\nNot sure why you are not seeing the \"Actual output\" bit in your log.\nI used a \"note\" command to print it, maybe that's not best practice?\n\nAlso, while I'm asking for Perl advice: I can see in my editor that\nthere's a control-G bell character in that string, but this is far\nfrom obvious on the web page. I'd kind of like to get the report\nto escapify control characters so that what comes out is more like\n\n\t# Actual output was \"\\DRD^G\"\nor\n\t# Actual output was \"\\\\DRD\\007\"\n\nor some such. Anybody know an easy way to do that in Perl?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 09:03:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Michael Paquier <michael@paquier.xyz> writes:\n> This has the disadvantage to have people never actually notice if the\n> tests are running or not because this does not generate a dependency\n> error. Skipping things if libreadline is not around is perfectly fine\n> IMO, but I think that we should harden things for IO::Pty by removing\n> this skipping part, and by adding a test in configure.in's\n> AX_PROG_PERL_MODULES. That would be also more consistent with the\n> approach we take with other tests.\n\nI do not think that requiring IO::Pty is practical. It's not going\nto be present in Windows installations, for starters, because it's\nnonfunctional there. I've also found that it fails to compile on\nsome of my older buildfarm dinosaurs.\n\nIn the case of IPC::Run, having a hard dependency is sensible because\nthe TAP tests pretty much can't do anything at all without it.\nHowever, we don't need IO::Pty except for testing a few psql behaviors,\nso it's fine with me if some buildfarm members don't run those tests.\n\nThere's precedent, too: see for instance\nsrc/test/recovery/t/017_shm.pl\nwhich is where I stole this coding technique from.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 09:10:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Tom Lane 2020-01-03 <13708.1578059577@sss.pgh.pa.us>\n> I found out while investigating this that the libedit version shipping\n> with buster (3.1-20181209) is differently broken for the same case:\n\n(Fwiw this wasn't spotted before because we have this LD_PRELOAD hack\nthat replaces libedit with readline at psql runtime. I guess that\nmeans that the hack is pretty stable... Still, looking forward to the\nday that OpenSSL is finally relicensing so we can properly link to\nreadline.)\n\n\nRe: Tom Lane 2020-01-03 <14261.1578060227@sss.pgh.pa.us>\n> > Shouldn't this print some \"expected foo, got bar\" diagnostics instead\n> > of just dying?\n> \n> BTW, as far as that goes, we do: see for instance the tail end of\n> \n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=desmoxytes&dt=2020-01-02%2020%3A04%3A03\n> \n> ok 8 - offer multiple table choices\n> ok 9 - finish completion of one of multiple table choices\n> ok 10 - \\r works\n> not ok 11 - complete \\DRD<tab> to \\drds\n> \n> # Failed test 'complete \\DRD<tab> to \\drds'\n> # at t/010_tab_completion.pl line 64.\n> # Actual output was \"\\DRD\u0007\"\n> ok 12 - \\r works\n> \n> Not sure why you are not seeing the \"Actual output\" bit in your log.\n> I used a \"note\" command to print it, maybe that's not best practice?\n\nI think best practice is to use something like\n\nlike($out, qr/$pattern/, $annotation)\n\ninstead of plain \"ok()\" which doesn't know about the actual values\ncompared. The \"&& !$timer->is_expired\" condition can be dropped from\nthe test because all we care about is if the output matches.\n\nI never really grasped in which contexts TAP is supposed to print the\nfull test output (\"ok 10 -...\"). Apparently the way the testsuite is\ninvoked at package build time only prints the terse failure summary in\nwhich \"note\"s aren't included. Is there a switch to configure that?\n\n> Also, while I'm asking for Perl advice: I can see in my editor that\n> there's a control-G bell character in that string, but this is far\n> from obvious on the web page. I'd kind of like to get the report\n> to escapify control characters so that what comes out is more like\n> \n> \t# Actual output was \"\\DRD^G\"\n> or\n> \t# Actual output was \"\\\\DRD\\007\"\n> \n> or some such. Anybody know an easy way to do that in Perl?\n\nI don't know for note(), but maybe like() would do that automatically.\n\nChristoph\n\n\n",
"msg_date": "Fri, 3 Jan 2020 18:02:26 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Also, while I'm asking for Perl advice: I can see in my editor that\n> there's a control-G bell character in that string, but this is far\n> from obvious on the web page. I'd kind of like to get the report\n> to escapify control characters so that what comes out is more like\n>\n> \t# Actual output was \"\\DRD^G\"\n> or\n> \t# Actual output was \"\\\\DRD\\007\"\n>\n> or some such. Anybody know an easy way to do that in Perl?\n\nI was going to suggest using Test::More's like() function to do the\nregex check, but sadly that only escapes things that would break the TAP\nstream syntax, not non-printables in general. The next obvious thing is\nData::Dumper with the 'Useqq' option enabled, which makes it use\ndouble-quoted-string escapes (e.g. \"\\a\" for ^G).\n\nThe attached patch does that, and also bumps $Test::Builder::Level so the\ndiagnostic references the calling line, and uses diag() instead of\nnote(), so it shows even in non-verbose mode.\n\n- ilmari\n-- \n\"A disappointingly low fraction of the human race is,\n at any given time, on fire.\" - Stig Sandbeck Mathisen",
"msg_date": "Fri, 03 Jan 2020 17:12:09 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane 2020-01-03 <13708.1578059577@sss.pgh.pa.us>\n>> I found out while investigating this that the libedit version shipping\n>> with buster (3.1-20181209) is differently broken for the same case:\n\n> (Fwiw this wasn't spotted before because we have this LD_PRELOAD hack\n> that replaces libedit with readline at psql runtime.\n\nYou do? I went looking in the Debian package source repo just the\nother day for some evidence that that was true, and couldn't find\nany, so I concluded that it was only an urban legend. Where is that\ndone exactly?\n\nPerhaps more importantly, *why* is it done? It seems to me that it\ntakes a pretty fevered imagination to suppose that using libreadline\nthat way meets the terms of its license but just building against\nthe library normally would not. Certainly when I worked for Red Hat,\ntheir lawyers did not think there was any problem with building\nPostgres using both openssl and readline.\n\nThe reason I'm concerned about this is that there's a patch on the\ntable [1] that will probably not behave nicely at all if it's\ncompiled against libedit headers and then executed with libreadline,\nbecause it will draw the wrong conclusions about whether the\nfilename quoting hooks are available. So that hack is going to\nfail on you soon, especially after I add regression testing around\nthe filename completion stuff ;-)\n\n>> I used a \"note\" command to print it, maybe that's not best practice?\n\n> I think best practice is to use something like\n> like($out, qr/$pattern/, $annotation)\n\nI'll check into that, thanks!\n\n\t\t\tregards, tom lane\n\n[1] https://www.postgresql.org/message-id/flat/16059-8836946734c02b84@postgresql.org\n\n\n",
"msg_date": "Fri, 03 Jan 2020 12:35:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Tom Lane 2020-01-03 <26339.1578072930@sss.pgh.pa.us>\n> Christoph Berg <myon@debian.org> writes:\n> > Re: Tom Lane 2020-01-03 <13708.1578059577@sss.pgh.pa.us>\n> >> I found out while investigating this that the libedit version shipping\n> >> with buster (3.1-20181209) is differently broken for the same case:\n> \n> > (Fwiw this wasn't spotted before because we have this LD_PRELOAD hack\n> > that replaces libedit with readline at psql runtime.\n> \n> You do? I went looking in the Debian package source repo just the\n> other day for some evidence that that was true, and couldn't find\n> any, so I concluded that it was only an urban legend. Where is that\n> done exactly?\n\n/usr/share/postgresql-common/pg_wrapper\n\nhttps://salsa.debian.org/postgresql/postgresql-common/blob/master/pg_wrapper#L129-157\n\n> Perhaps more importantly, *why* is it done? It seems to me that it\n> takes a pretty fevered imagination to suppose that using libreadline\n\nTom, claiming that things are \"fevered\" just because you didn't like\nthem is not appropriate. It's not fun working with PostgreSQL when the\ntone is like that.\n\n> that way meets the terms of its license but just building against\n> the library normally would not. Certainly when I worked for Red Hat,\n> their lawyers did not think there was any problem with building\n> Postgres using both openssl and readline.\n\nI'm not starting that debate here, but Debian thinks otherwise:\n\nhttps://lwn.net/Articles/428111/\n\n> The reason I'm concerned about this is that there's a patch on the\n> table [1] that will probably not behave nicely at all if it's\n> compiled against libedit headers and then executed with libreadline,\n> because it will draw the wrong conclusions about whether the\n> filename quoting hooks are available. So that hack is going to\n> fail on you soon, especially after I add regression testing around\n> the filename completion stuff ;-)\n\nWell, so far, it worked well. (The biggest problem used to be that\nlibedit didn't have the history append function so it wasn't used even\nwith readline, but that got implemented ~2 years ago.)\n\nChristoph\n\n\n",
"msg_date": "Fri, 3 Jan 2020 18:48:39 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Anybody know an easy way to do that in Perl?\n\n> I was going to suggest using Test::More's like() function to do the\n> regex check, but sadly that only escapes things that would break the TAP\n> stream syntax, not non-printables in general.  The next obvious thing is\n> Data::Dumper with the 'Useqq' option enabled, which makes it use\n> double-quoted-string escapes (e.g. \"\\a\" for ^G).\n\n> The attached patch does that, and also bumps $Test::Builder::Level so the\n> diagnostic references the calling line, and uses diag() instead of\n> note(), so it shows even in non-verbose mode.\n\nLGTM, pushed (along with a fix to deal with what hopefully is the\nonly remaining obstacle for Andres' critters).\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 12:55:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 12:48 PM Christoph Berg <myon@debian.org> wrote:\n> > Perhaps more importantly, *why* is it done? It seems to me that it\n> > takes a pretty fevered imagination to suppose that using libreadline\n>\n> Tom, claiming that things are \"fevered\" just because you didn't like\n> them is not appropriate. It's not fun working with PostgreSQL when the\n> tone is like that.\n\n+1.\n\n> > that way meets the terms of its license but just building against\n> > the library normally would not. Certainly when I worked for Red Hat,\n> > their lawyers did not think there was any problem with building\n> > Postgres using both openssl and readline.\n>\n> I'm not starting that debate here, but Debian thinks otherwise:\n>\n> https://lwn.net/Articles/428111/\n\nI take no position on whether Debian is correct in its assessment of\nsuch things, but I reiterate my previous opposition to breaking it\njust because we don't agree with it, or because Tom specifically\ndoesn't. It's too mainstream a platform to arbitrarily break. And it\nwill probably just have the effect of increasing the number of patches\nthey're carrying against our sources, which will not make things\nbetter for anybody.\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Fri, 3 Jan 2020 12:59:12 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane 2020-01-03 <26339.1578072930@sss.pgh.pa.us>\n>> You do? I went looking in the Debian package source repo just the\n>> other day for some evidence that that was true, and couldn't find\n>> any, so I concluded that it was only an urban legend. Where is that\n>> done exactly?\n\n> /usr/share/postgresql-common/pg_wrapper\n> https://salsa.debian.org/postgresql/postgresql-common/blob/master/pg_wrapper#L129-157\n\nOh, so not in the Postgres package per se.\n\nWhat that means is that our regression tests will pass (as it's\njust a regular libedit install while they're running) but then filename\ncompletion will not work well for actual users. And there's not really\nanything I can do about that from this end.\n\n(On the other hand, filename completion is already kind of buggy,\nwhich is why that patch exists in the first place. So maybe it\nwon't get any worse. Hard to say.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 13:17:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 9:59 AM Robert Haas <robertmhaas@gmail.com> wrote:\n> I take no position on whether Debian is correct in its assessment of\n> such things, but I reiterate my previous opposition to breaking it\n> just because we don't agree with it, or because Tom specifically\n> doesn't. It's too mainstream a platform to arbitrarily break. And it\n> will probably just have the effect of increasing the number of patches\n> they're carrying against our sources, which will not make things\n> better for anybody.\n\nEven with commit 56a3921a, \"make check-world\" is broken on my Ubuntu\n18.04 workstation. This is now adversely impacting my work, so I hope\nit can be resolved soon.\n\nNot sure if the specifics matter, but FWIW \"make check-world\" ended\nwith the following failure just now:\n\nmake[2]: Entering directory '/code/postgresql/patch/build/src/bin/psql'\nrm -rf '/code/postgresql/patch/build/src/bin/psql'/tmp_check\n/bin/mkdir -p '/code/postgresql/patch/build/src/bin/psql'/tmp_check\ncd /code/postgresql/patch/build/../source/src/bin/psql &&\nTESTDIR='/code/postgresql/patch/build/src/bin/psql'\nPATH=\"/code/postgresql/patch/build/tmp_install/code/postgresql/patch/install/bin:$PATH\"\nLD_LIBRARY_PATH=\"/code/postgresql/patch/build/tmp_install/code/postgresql/patch/install/lib\"\n PGPORT='65432'\nPG_REGRESS='/code/postgresql/patch/build/src/bin/psql/../../../src/test/regress/pg_regress'\nREGRESS_SHLIB='/code/postgresql/patch/build/src/test/regress/regress.so'\n/usr/bin/prove -I\n/code/postgresql/patch/build/../source/src/test/perl/ -I\n/code/postgresql/patch/build/../source/src/bin/psql t/*.pl\nt/010_tab_completion.pl .. 8/?\n# Failed test 'offer multiple table choices'\n# at t/010_tab_completion.pl line 105.\n# Actual output was \"\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n\\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\nmytab\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n\\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\nmytab\"\n#\n# Looks like you failed 1 test of 12.\nt/010_tab_completion.pl .. Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/12 subtests\n\nTest Summary Report\n-------------------\nt/010_tab_completion.pl (Wstat: 256 Tests: 12 Failed: 1)\n  Failed test:  8\n  Non-zero exit status: 1\nFiles=1, Tests=12,  7 wallclock secs ( 0.02 usr  0.00 sys +  0.80 cusr\n 0.09 csys =  0.91 CPU)\nResult: FAIL\nMakefile:87: recipe for target 'check' failed\nmake[2]: *** [check] Error 1\nmake[2]: Leaving directory '/code/postgresql/patch/build/src/bin/psql'\nMakefile:41: recipe for target 'check-psql-recurse' failed\nmake[1]: *** [check-psql-recurse] Error 2\nmake[1]: Leaving directory '/code/postgresql/patch/build/src/bin'\nGNUmakefile:70: recipe for target 'check-world-src/bin-recurse' failed\nmake: *** [check-world-src/bin-recurse] Error 2\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 3 Jan 2020 17:32:10 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Not sure if the specifics matter, but FWIW \"make check-world\" ended\n> with the following failure just now:\n\n> # Failed test 'offer multiple table choices'\n> # at t/010_tab_completion.pl line 105.\n> # Actual output was \"\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n> \\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\n> mytab\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n> \\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\n> mytab\"\n\nHuh. What readline or libedit version are you using, on what\nplatform? I'm curious also what is your prevailing setting\nof TERM? (I've been wondering if the test doesn't need to\nforce that to something standard. The buildfarm hasn't shown\nany signs of needing that, but manual invocations might be\na different story.)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 21:16:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 6:16 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Huh. What readline or libedit version are you using, on what\n> platform?\n\nUbuntu 18.04. I used ldd to verify that psql links to the system\nlibreadline, which is libreadline7:amd64 -- that's what Debian\npackages as \"7.0-3\".\n\n> I'm curious also what is your prevailing setting\n> of TERM?\n\nI use zsh, with a fairly customized setup. $TERM is \"xterm-256color\"\nin the affected shell. (I have a feeling that this has something to do\nwith my amazing technicolor terminal.)\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 3 Jan 2020 18:37:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n>> I'm curious also what is your prevailing setting\n>> of TERM?\n\n> I use zsh, with a fairly customized setup. $TERM is \"xterm-256color\"\n> in the affected shell. (I have a feeling that this has something to do\n> with my amazing technicolor terminal.)\n\nHmm. If you set it to plain \"xterm\", does the test pass?\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Fri, 03 Jan 2020 21:51:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 6:51 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm. If you set it to plain \"xterm\", does the test pass?\n\nNo. Also tried setting PG_COLOR=\"off\" and CLICOLOR=0 -- that also\ndidn't help. (This was based on possibly-relevant vars that \"env\"\nshowed were set).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 3 Jan 2020 19:06:12 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 7:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n> No. Also tried setting PG_COLOR=\"off\" and CLICOLOR=0 -- that also\n> didn't help. (This was based on possibly-relevant vars that \"env\"\n> showed were set).\n\nRemoving the single check_completion() test from 010_tab_completion.pl\nthat actually fails on my system (\"offer multiple table choices\")\nfixes the problem for me -- everything else passes.\n\nI suppose that this means that the problem is in \"offer multiple table\nchoices\" specifically.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 3 Jan 2020 19:28:55 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Jan 3, 2020 at 7:06 PM Peter Geoghegan <pg@bowt.ie> wrote:\n>> No. Also tried setting PG_COLOR=\"off\" and CLICOLOR=0 -- that also\n>> didn't help. (This was based on possibly-relevant vars that \"env\"\n>> showed were set).\n\nYeah, that's not terribly surprising, because if I'm reading those\nescape sequences correctly they're not about color.  They seem to be\njust cursor movement and line clearing, according to [1].\n\nWhat I'm mystified by is why your copy of libreadline is choosing to\ndo that, rather than just space over to where the word should be printed\nwhich is what every other copy seems to be doing.  I have a fresh new\nDebian installation at hand, with\n\n$ dpkg -l | grep readline\nii  libreadline-dev:amd64        7.0-5        amd64        GNU readline and history libraries, development files\nii  libreadline5:amd64           5.2+dfsg-3+b13 amd64      GNU readline and history libraries, run-time libraries\nii  libreadline7:amd64           7.0-5        amd64        GNU readline and history libraries, run-time libraries\nii  readline-common              7.0-5        all          GNU readline and history libraries, common files\n\nand I'm not seeing the failure on it, either with TERM=xterm\nor with TERM=xterm-256color.  So what's the missing ingredient?\n\n> Removing the single check_completion() test from 010_tab_completion.pl\n> that actually fails on my system (\"offer multiple table choices\")\n> fixes the problem for me -- everything else passes.\n> I suppose that this means that the problem is in \"offer multiple table\n> choices\" specifically.\n\nI'd hate to conclude that we can't test any completion behavior that\ninvolves offering a list.\n\nIf we can't coerce libreadline into being less avant-garde in its\nscreen management, I suppose we could write a regex to recognize\nxterm escape sequences and ignore those.  But I'd be happier about\nthis if I could reproduce the behavior. I don't like the feeling\nthat there's something going on here that I don't understand.\n\nBTW, it seems somewhat likely that this is less about libreadline\nthan about its dependency libtinfo.  On my machine that's from\n\nii  libtinfo6:amd64              6.1+20181013-2+deb10u2 amd64    shared low-level terminfo library for terminal handling\n\nwhat about yours?\n\n\t\t\tregards, tom lane\n\n[1] https://www.xfree86.org/current/ctlseqs.html\n\n\n",
"msg_date": "Sat, 04 Jan 2020 00:30:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 9:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> BTW, it seems somewhat likely that this is less about libreadline\n> than about its dependency libtinfo. On my machine that's from\n>\n> ii libtinfo6:amd64 6.1+20181013-2+deb10u2 amd64 shared low-level terminfo library for terminal handling\n\nThis seems promising. By following the same ldd + dpkg -S workflow as\nbefore, I can see that my libtinfo is \"libtinfo5:amd64\". This libtinfo\nappears to be a Ubuntu-specific package:\n\n$ dpkg -l libtinfo5:amd64\nDesired=Unknown/Install/Remove/Purge/Hold\n| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend\n|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)\n||/ Name Version Architecture\n Description\n+++-============================-===================-===================-==============================================================\nii libtinfo5:amd64 6.1-1ubuntu1.18.04 amd64\n shared low-level terminfo library for terminal handling\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 3 Jan 2020 21:38:25 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Jan 3, 2020 at 9:30 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> BTW, it seems somewhat likely that this is less about libreadline\n>> than about its dependency libtinfo. On my machine that's from\n>> ii libtinfo6:amd64 6.1+20181013-2+deb10u2 amd64 shared low-level terminfo library for terminal handling\n\n> This seems promising. By following the same ldd + dpkg -S workflow as\n> before, I can see that my libtinfo is \"libtinfo5:amd64\".\n\nHmm. Usually this sort of software gets more weird in newer\nversions, not less so ;-). Still, it's a starting point.\n\nAttached is a blind attempt to fix this by allowing escape\nsequence(s) instead of spaces between the words. Does this\nwork for you?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 04 Jan 2020 01:39:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Attached is a blind attempt to fix this by allowing escape\n> sequence(s) instead of spaces between the words. Does this\n> work for you?\n\nI'm afraid not; no apparent change. No change in the \"Actual output\nwas\" line, either.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 3 Jan 2020 22:49:02 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Fri, Jan 3, 2020 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Attached is a blind attempt to fix this by allowing escape\n>> sequence(s) instead of spaces between the words. Does this\n>> work for you?\n\n> I'm afraid not; no apparent change. No change in the \"Actual output\n> was\" line, either.\n\nMeh. I must be too tired to get the regexp syntax right.\nWill try tomorrow.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 02:00:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Fri, Jan 3, 2020 at 10:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hmm. Usually this sort of software gets more weird in newer\n> versions, not less so ;-). Still, it's a starting point.\n\nIn case I was unclear: I meant to suggest that this may have something\nto do with Ubuntu having patched the Debian package for who-knows-what\nreason. This is indicated by the fact that the Version string is\n\"6.1-1ubuntu1.18.04\", as opposed to a Debian style Version without the\n\"ubuntu\" (I believe that that's the convention they follow).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Fri, 3 Jan 2020 23:01:59 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "I wrote:\n> Meh. I must be too tired to get the regexp syntax right.\n\nLooking closer, I see that your actual output included *both*\nspaces and escape sequences between the table names, so it\nneeds to be more like the attached.\n\nAlso, I apparently misread the control sequences. What they\nlook like in the light of morning is\n\n\\e[0m\t\tCharacter Attributes = Normal (no bold, color, etc)\n\\e[K\t\tErase in Line to Right\n\nSo now I'm thinking again that there must be something about\nyour colorized setup that triggers use of at least the first one.\nBut why didn't clearing the relevant environment variables\nchange anything?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 04 Jan 2020 13:19:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 10:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Looking closer, I see that your actual output included *both*\n> spaces and escape sequences between the table names, so it\n> needs to be more like the attached.\n\nThis patch makes the tests pass. Thanks!\n\n> So now I'm thinking again that there must be something about\n> your colorized setup that triggers use of at least the first one.\n> But why didn't clearing the relevant environment variables\n> change anything?\n\nI don't know. It also failed with bash, which doesn't have any of that stuff.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 4 Jan 2020 10:38:49 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sat, Jan 4, 2020 at 10:19 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> Looking closer, I see that your actual output included *both*\n>> spaces and escape sequences between the table names, so it\n>> needs to be more like the attached.\n\n> This patch makes the tests pass. Thanks!\n\nAh, good.  Pushed.\n\n>> So now I'm thinking again that there must be something about\n>> your colorized setup that triggers use of at least the first one.\n>> But why didn't clearing the relevant environment variables\n>> change anything?\n\n> I don't know. It also failed with bash, which doesn't have any of that stuff.\n\nI got another data point just now.  I was experimenting with a proposed\nfix for the current libedit problem [1], so I needed to build libedit\nfrom source, and the path of least resistance was to do so on my RHEL6\nworkstation.  I find that that conglomeration fails the current\n010_tab_completion.pl tests, even with this patch, with symptoms like\n\nnot ok 2 - complete SEL<tab> to SELECT\n\n#   Failed test 'complete SEL<tab> to SELECT'\n#   at t/010_tab_completion.pl line 91.\n# Actual output was \"postgres=# SEL\\a\\r\\e[15GECT \"\n\nand similarly in a couple other tests.  I know where the bell (\\a)\nis coming from: there's a different logic bug in libedit that causes\nit to do el_beep() even when there's a unique completion.  But why the\nunnecessary cursor repositioning? Looking closer, I realize that\nlibedit on this software stack is depending on\n\n        libtinfo.so.5 => /lib64/libtinfo.so.5 (0x00007fa39a5e2000)\n\nwhich is from\n\n$ rpm -qf /lib64/libtinfo.so.5\nncurses-libs-5.7-4.20090207.el6.x86_64\n\nSeeing that you're also having issues with a stack involving\nlibtinfo.so.5, here's my theory: libtinfo version 5 is brain-dead\nabout whether it needs to issue cursor repositioning commands, and\ntends to do so even when the cursor is in the right place already.\nVersion 6 fixed that, which is why we're not seeing these escape\nsequences on any of the libedit-using buildfarm critters.\n\nSo we're going to have to make some decisions about what range of\nlibedit builds we want to cater for in the tab-completion tests.\nGiven that the plan is to considerably increase the coverage of\nthose tests, I'm hesitant to promise that we'll keep passing with\nanything that isn't explicitly tested in the buildfarm, or by active\ndevelopers such as yourself.  Maybe that's fine given that end users\nprobably don't run the TAP tests.\n\nIt would likely help if we could constrain the behavior to avoid\nvariations such as colorization, which is why I'm concerned that\nwe don't seem to have figured out how to turn that off in your\nsetup.  I think that bears more investigation, even though we've\nmanaged to work around it for the moment.\n\n\t\t\tregards, tom lane\n\n[1] http://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=54510\n\n\n",
"msg_date": "Sat, 04 Jan 2020 14:50:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "I wrote:\n> Seeing that you're also having issues with a stack involving\n> libtinfo.so.5, here's my theory: libtinfo version 5 is brain-dead\n> about whether it needs to issue cursor repositioning commands, and\n> tends to do so even when the cursor is in the right place already.\n> Version 6 fixed that, which is why we're not seeing these escape\n> sequences on any of the libedit-using buildfarm critters.\n\nNope, the buildfarm just blew up that theory: Andres' critters are\nfailing in the wake of fac1c04fe, with symptoms exactly like those\nof my franken-libedit build. So newer libtinfo doesn't fix it.\n\nWhat has to have broken those machines was the change to explicitly\nforce TERM to \"xterm\". Now I'm wondering what their prevailing\nsetting was before that. Maybe it was undef, or some absolutely\nvanilla thing that prevents libtinfo from thinking it can use any\nescape sequences at all. I'm going to go find out, because if we\ncan use that behavior globally, it'd be a heck of a lot safer\nsolution than the path of dealing with escape sequences explicitly.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 14:58:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sun, Jan 5, 2020 at 6:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> I wrote:\n> > Seeing that you're also having issues with a stack involving\n> > libtinfo.so.5, here's my theory: libtinfo version 5 is brain-dead\n> > about whether it needs to issue cursor repositioning commands, and\n> > tends to do so even when the cursor is in the right place already.\n> > Version 6 fixed that, which is why we're not seeing these escape\n> > sequences on any of the libedit-using buildfarm critters.\n>\n> Nope, the buildfarm just blew up that theory: Andres' critters are\n> failing in the wake of fac1c04fe, with symptoms exactly like those\n> of my franken-libedit build. So newer libtinfo doesn't fix it.\n>\n> What has to have broken those machines was the change to explicitly\n> force TERM to \"xterm\". Now I'm wondering what their prevailing\n> setting was before that. Maybe it was undef, or some absolutely\n> vanilla thing that prevents libtinfo from thinking it can use any\n> escape sequences at all. I'm going to go find out, because if we\n> can use that behavior globally, it'd be a heck of a lot safer\n> solution than the path of dealing with escape sequences explicitly.\n>\n\n\nYou can see what settings it started with, although only certain\nvalues are whitelisted. See orig_env in the config. e.g. crake, which\nis now failing, has no TERM setting.\n\ncheers\n\nandrew\n\n\n-- \nAndrew Dunstan https://www.2ndQuadrant.com\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Sun, 5 Jan 2020 07:01:40 +1030",
"msg_from": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 11:50 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So we're going to have to make some decisions about what range of\n> libedit builds we want to cater for in the tab-completion tests.\n> Given that the plan is to considerably increase the coverage of\n> those tests, I'm hesitant to promise that we'll keep passing with\n> anything that isn't explicitly tested in the buildfarm, or by active\n> developers such as yourself. Maybe that's fine given that end users\n> probably don't run the TAP tests.\n\nThis sounds similar to \"EXTRA_TESTS=collate.linux.utf8\". I think that\nit's fine to go that way, provided it isn't hard to work around.\n\nFWIW, I find it very surprising that it was possible for the test to\nfail on my workstation/server, without it failing on any buildfarm\nanimals.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 4 Jan 2020 12:46:30 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Andrew Dunstan <andrew.dunstan@2ndquadrant.com> writes:\n> On Sun, Jan 5, 2020 at 6:29 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> What has to have broken those machines was the change to explicitly\n>> force TERM to \"xterm\". Now I'm wondering what their prevailing\n>> setting was before that. Maybe it was undef, or some absolutely\n>> vanilla thing that prevents libtinfo from thinking it can use any\n>> escape sequences at all. I'm going to go find out, because if we\n>> can use that behavior globally, it'd be a heck of a lot safer\n>> solution than the path of dealing with escape sequences explicitly.\n\n> You can see what settings it started with, although only certain\n> values are whitelisted. See orig_env in the config. e.g. crake, which\n> is now failing, has no TERM setting.\n\nAh, right. The one-off patch I just pushed also confirms that the\nTAP tests are seeing TERM-not-set on Andres' machines.\n\nI'm currently investigating the possibility of just unsetting it\nin the tab-completion test and then being able to revert all the\nescape-sequence hocus-pocus. It's looking promising in local\ntesting, although Peter said off-list that it didn't seem to\nwork in his setup.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 15:48:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> FWIW, I find it very surprising that it was possible for the test to\n> fail on my workstation/server, without it failing on any buildfarm\n> animals.\n\nYeah, there is still something unexplained about that. We've so far\nfailed to pin the blame on either readline version or environment\nsettings ... but what else could be causing you to get different\nresults?\n\nFor the record, I'm currently running around and trying the attached\n(on top of latest HEAD, 60ab7c80b) on the various configurations\nI have here. Could you confirm that it works, or doesn't, in your\nenvironment --- and if it doesn't, what's the output?\n\n\t\t\tregards, tom lane",
"msg_date": "Sat, 04 Jan 2020 15:56:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, there is still something unexplained about that.  We've so far\n> failed to pin the blame on either readline version or environment\n> settings ... but what else could be causing you to get different\n> results?\n\nBeats me. Could have something to do with the fact that my libtinfo5\nis in fact a patched version -- \"6.1-1ubuntu1.18.04\". (Whatever that\nmeans.)\n\n> For the record, I'm currently running around and trying the attached\n> (on top of latest HEAD, 60ab7c80b) on the various configurations\n> I have here.  Could you confirm that it works, or doesn't, in your\n> environment --- and if it doesn't, what's the output?\n\nThis patch shows the same old failure as before:\n\ncd /code/postgresql/patch/build/../source/src/bin/psql &&\nTESTDIR='/code/postgresql/patch/build/src/bin/psql'\nPATH=\"/code/postgresql/patch/build/tmp_install/code/postgresql/patch/install/bin:$PATH\"\nLD_LIBRARY_PATH=\"/code/postgresql/patch/build/tmp_install/code/postgresql/patch/install/lib\"\n PGPORT='65432'\nPG_REGRESS='/code/postgresql/patch/build/src/bin/psql/../../../src/test/regress/pg_regress'\nREGRESS_SHLIB='/code/postgresql/patch/build/src/test/regress/regress.so'\n/usr/bin/prove -I\n/code/postgresql/patch/build/../source/src/test/perl/ -I\n/code/postgresql/patch/build/../source/src/bin/psql t/*.pl\nt/010_tab_completion.pl .. 8/?\n# Failed test 'offer multiple table choices'\n# at t/010_tab_completion.pl line 112.\n# Actual output was \"\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n\\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\nmytab\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n\\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\nmytab\"\n#\n# Looks like you failed 1 test of 12.\nt/010_tab_completion.pl .. Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/12 subtests\n\nTest Summary Report\n-------------------\nt/010_tab_completion.pl (Wstat: 256 Tests: 12 Failed: 1)\n  Failed test:  8\n  Non-zero exit status: 1\nFiles=1, Tests=12,  7 wallclock secs ( 0.01 usr  0.00 sys +  0.37 cusr\n 0.09 csys =  0.47 CPU)\nResult: FAIL\nMakefile:87: recipe for target 'check' failed\nmake: *** [check] Error 1\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 4 Jan 2020 13:09:27 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 12:56 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Yeah, there is still something unexplained about that. We've so far\n> failed to pin the blame on either readline version or environment\n> settings ... but what else could be causing you to get different\n> results?\n\nTom just advised me via private e-mail that my termcap database may be\nthe issue here. Looks like he was right. I did a \"sudo aptitude\nreinstall ncurses-base\", and now the tests pass against HEAD. (I\nprobably should have tried to preserve things before going ahead with\nthat, but that didn't happen -- I had no reason to think that the\nsystem was affected by any kind of corruption.)\n\nI am very sorry for having wasted your time on this wild goose chase,\nTom. The only explanation I can think of is that maybe it relates to\nmy upgrading the OS in-place. That is, maybe my system was affected by\na subtle, low probability bug due to a major version change in\nncurses-base.\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 4 Jan 2020 15:29:13 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> Tom just advised me via private e-mail that my termcap database may be\n> the issue here. Looks like he was right. I did a \"sudo aptitude\n> reinstall ncurses-base\", and now the tests pass against HEAD. (I\n> probably should have tried to preserve things before going ahead with\n> that, but that didn't happen -- I had no reason to think that the\n> system was affected by any kind of corruption.)\n\nHm, well, HEAD still has the hackery with explicit accounting for\nescape sequences. Could you try it with the patch I showed to unset\nTERM and remove that stuff? (It won't apply exactly to HEAD, but\nthe diffs are simple enough, or you could revert to 60ab7c80b first.)\n\n> I am very sorry for having wasted your time on this wild goose chase,\n\nDon't worry about it --- even if it was some upgrade glitch, there\nwas no way to know that in advance.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 18:39:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 3:39 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Hm, well, HEAD still has the hackery with explicit accounting for\n> escape sequences. Could you try it with the patch I showed to unset\n> TERM and remove that stuff? (It won't apply exactly to HEAD, but\n> the diffs are simple enough, or you could revert to 60ab7c80b first.)\n\nWith the attached patch against HEAD (which is based on your earlier\nunset-TERM-in-tab-completion-test.patch), I find that the tests fail\nas follows:\n\ncd /code/postgresql/patch/build/../source/src/bin/psql &&\nTESTDIR='/code/postgresql/patch/build/src/bin/psql'\nPATH=\"/code/postgresql/patch/build/tmp_install/code/postgresql/patch/install/bin:$PATH\"\nLD_LIBRARY_PATH=\"/code/postgresql/patch/build/tmp_install/code/postgresql/patch/install/lib\"\n PGPORT='65432'\nPG_REGRESS='/code/postgresql/patch/build/src/bin/psql/../../../src/test/regress/pg_regress'\nREGRESS_SHLIB='/code/postgresql/patch/build/src/test/regress/regress.so'\n/usr/bin/prove -I\n/code/postgresql/patch/build/../source/src/test/perl/ -I\n/code/postgresql/patch/build/../source/src/bin/psql t/*.pl\nt/010_tab_completion.pl .. 8/?\n# Failed test 'offer multiple table choices'\n# at t/010_tab_completion.pl line 112.\n# Actual output was \"\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n\\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\nmytab\\r\\n\\e[01;35mmytab\\e[0m\\e[K123\\e[0m\\e[K\n\\e[01;35mmytab\\e[0m\\e[K246\\e[0m\\e[K \\r\\npostgres=# select * from\nmytab\"\n#\n# Looks like you failed 1 test of 12.\nt/010_tab_completion.pl .. 
Dubious, test returned 1 (wstat 256, 0x100)\nFailed 1/12 subtests\n\nTest Summary Report\n-------------------\nt/010_tab_completion.pl (Wstat: 256 Tests: 12 Failed: 1)\n Failed test: 8\n Non-zero exit status: 1\nFiles=1, Tests=12, 7 wallclock secs ( 0.02 usr 0.00 sys + 0.36 cusr\n 0.12 csys = 0.50 CPU)\nResult: FAIL\nMakefile:87: recipe for target 'check' failed\nmake: *** [check] Error 1\n\n-- \nPeter Geoghegan",
"msg_date": "Sat, 4 Jan 2020 16:09:04 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> With the attached patch against HEAD (which is based on your earlier\n> unset-TERM-in-tab-completion-test.patch), I find that the tests fail\n> as follows:\n\nUm, well, that's the same behavior you were seeing before.\nSo the terminfo reinstall didn't really do anything.\n\nI'm still curious about which terminfo file your psql actually\nreads if TERM is unset, and whether that file is visibly\ndifferent from the xterm-related files.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 19:14:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 4:14 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Um, well, that's the same behavior you were seeing before.\n> So the terminfo reinstall didn't really do anything.\n\nSigh.\n\n> I'm still curious about which terminfo file your psql actually\n> reads if TERM is unset, and whether that file is visibly\n> different from the xterm-related files.\n\nI've found the actual problem -- it's my ~/.inputrc. Which is read in by\nlibreadline at some point (determined this using ltrace).\n\nOnce I comment out the following two lines from ~/.inputrc, everything\nworks fine on\nHEAD + HEAD-unset-TERM-in-tab-completion-test.patch:\n\nset colored-completion-prefix on\nset colored-stats on\n\n--\nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 4 Jan 2020 16:42:56 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> I've found the actual problem -- it's my ~/.inputrc. Which is read in by\n> libreadline at some point (determined this using ltrace).\n\nAh-hah!\n\nSo what we really want here, I guess, is for the test script to suppress\nreading of ~/.inputrc, on the same principle that it suppresses reading\nof ~/.psqlrc. A quick look at the readline docs suggests that the\nbest way to do that would be to set envar INPUTRC to /dev/null --- could\nyou confirm that that works for you?\n\n> Once I comment out the following two lines from ~/.inputrc, everything\n> works fine on\n> HEAD + HEAD-unset-TERM-in-tab-completion-test.patch:\n> set colored-completion-prefix on\n> set colored-stats on\n\nHm. I wonder how it is that that leads to ignoring the TERM environment?\nStill, it's just an academic point if we can suppress reading the file.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 20:57:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "I wrote:\n> Peter Geoghegan <pg@bowt.ie> writes:\n>> Once I comment out the following two lines from ~/.inputrc, everything\n>> works fine on\n>> HEAD + HEAD-unset-TERM-in-tab-completion-test.patch:\n>> set colored-completion-prefix on\n>> set colored-stats on\n\n> Hm. I wonder how it is that that leads to ignoring the TERM environment?\n\nA bit of digging says that readline's color support is just hard-wired\nto use xterm-style escapes -- it doesn't look like there's any connection\nto terminfo at all. See the _rl_color_indicator[] data structure.\nThe \\e[...m and \\e[K escape sequences can both be found there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 21:15:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 5:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So what we really want here, I guess, is for the test script to suppress\n> reading of ~/.inputrc, on the same principle that it suppresses reading\n> of ~/.psqlrc.\n\nThat was what I thought of, too.\n\n> A quick look at the readline docs suggests that the\n> best way to do that would be to set envar INPUTRC to /dev/null --- could\n> you confirm that that works for you?\n\nYes -- \"export INPUTRC=/dev/null\" also makes it work for me (on HEAD +\nHEAD-unset-TERM-in-tab-completion-test.patch, but with my\noriginal/problematic ~/.inputrc).\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 4 Jan 2020 18:16:23 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Geoghegan <pg@bowt.ie> writes:\n> On Sat, Jan 4, 2020 at 5:57 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>> A quick look at the readline docs suggests that the\n>> best way to do that would be to set envar INPUTRC to /dev/null --- could\n>> you confirm that that works for you?\n\n> Yes -- \"export INPUTRC=/dev/null\" also makes it work for me (on HEAD +\n> HEAD-unset-TERM-in-tab-completion-test.patch, but with my\n> original/problematic ~/.inputrc).\n\nCool, I'll go commit a fix along those lines. Thanks for tracing\nthis down!\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sat, 04 Jan 2020 21:17:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Sat, Jan 4, 2020 at 6:17 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Cool, I'll go commit a fix along those lines. Thanks for tracing\n> this down!\n\nGlad to help!\n\n-- \nPeter Geoghegan\n\n\n",
"msg_date": "Sat, 4 Jan 2020 18:18:51 -0800",
"msg_from": "Peter Geoghegan <pg@bowt.ie>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Cool, I'll go commit a fix along those lines. Thanks for tracing\n> this down!\n\nHere's one final style cleanup for the TAP test.\n\n- use like() for the banner test\n- pass the regexes around as qr// objects, so they can be\n syntax-highlighted properly, and don't need regex\n metacharacter-escaping backslashes doubled.\n- include the regex that didn't match in the diagnostic\n\n- ilmari\n-- \n\"The surreality of the universe tends towards a maximum\" -- Skud's Law\n\"Never formulate a law or axiom that you're not prepared to live with\n the consequences of.\" -- Skud's Meta-Law\n\n\n\n",
"msg_date": "Sun, 05 Jan 2020 13:35:03 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari Mannsåker) writes:\n\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>\n>> Cool, I'll go commit a fix along those lines. Thanks for tracing\n>> this down!\n>\n> Here's one final style cleanup for the TAP test.\n>\n> - use like() for the banner test\n> - pass the regexes around as qr// objects, so they can be\n> syntax-highlighted properly, and don't need regex\n> metacharacter-escaping backslashes doubled.\n> - include the regex that didn't match in the diagnostic\n\nThis time with the actual attachment...\n\n- ilmari\n-- \n- Twitter seems more influential [than blogs] in the 'gets reported in\n the mainstream press' sense at least. - Matt McLeod\n- That'd be because the content of a tweet is easier to condense down\n to a mainstream media article. - Calle Dybedahl",
"msg_date": "Sun, 05 Jan 2020 13:39:11 +0000",
"msg_from": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=)",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n> Here's one final style cleanup for the TAP test.\n\nLGTM, pushed.\n\nOne minor note: you wanted to change the \\DRD test to\n\n+check_completion(\"\\\\DRD\\t\", qr/\\\\drds /, \"complete \\\\DRD<tab> to \\\\drds\");\n\nbut that doesn't work everywhere, unfortunately. On my machine\nwhat comes out is\n\n# Actual output was \"\\\\DRD\\b\\b\\bdrds \"\n# Did not match \"(?-xism:\\\\drds )\"\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Sun, 05 Jan 2020 11:38:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "\n\nOn 5 January 2020 16:38:36 GMT, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>ilmari@ilmari.org (Dagfinn Ilmari =?utf-8?Q?Manns=C3=A5ker?=) writes:\n>> Here's one final style cleanup for the TAP test.\n>\n>LGTM, pushed.\n\nThanks!\n\n>One minor note: you wanted to change the \\DRD test to\n>\n>+check_completion(\"\\\\DRD\\t\", qr/\\\\drds /, \"complete \\\\DRD<tab> to\n>\\\\drds\");\n>\n>but that doesn't work everywhere, unfortunately. On my machine\n>what comes out is\n>\n># Actual output was \"\\\\DRD\\b\\b\\bdrds \"\n># Did not match \"(?-xism:\\\\drds )\"\n\nSorry, that was something I left in for testing the diagnostic and forgot to remove before committing.\n\n>\t\t\tregards, tom lane\n\n- ilmari\n\n\n",
"msg_date": "Sun, 05 Jan 2020 16:48:18 +0000",
"msg_from": "=?ISO-8859-1?Q?Dagfinn_Ilmari_Manns=E5ker?= <ilmari@ilmari.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "I wrote:\n> Indeed. It appears that recent libedit breaks tab-completion for\n> words involving a backslash, which is the fault of this upstream\n> commit:\n> http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libedit/filecomplete.c.diff?r1=1.52&r2=1.53\n> Basically what that's doing is applying de-backslashing to EVERY\n> word that completion is attempted on, whether it might be a filename\n> or not. So what psql_complete sees in this test case is just \"DRD\"\n> which of course it does not recognize as a possible psql backslash\n> command.\n\nThe current state of play on this is that I committed a hacky workaround\n[1], but there is now a fix for it in libedit upstream [2][3]. I gather\nfrom looking at Debian's package page that the fix could be expected to\npropagate to Debian unstable within a few weeks, at which point I'd like\nto revert the hack. The libedit bug's only been there a few months\n(it was evidently introduced on 2019-03-31) so we can hope that it hasn't\npropagated into any long-term-support distros.\n\n> I found out while investigating this that the libedit version shipping\n> with buster (3.1-20181209) is differently broken for the same case:\n> instead of inappropriate forced de-escaping of the input of the\n> application-specific completion function, it applies inappropriate\n> forced escaping to the output of said function, so that when we see\n> \"\\DRD\" and return \"\\drds\", what comes out to the user is \"\\\\drds\".\n\nThere's little we can do about that one, but it doesn't affect the\nregression test as currently constituted.\n\nAnother libedit bug that the regression test *is* running into is that\nsome ancient versions fail to emit a trailing space after a successful\ncompletion. snapper is hitting that [4], and I believe locust would\nbe if it were running the TAP tests. 
While we could work around that\nby removing the trailing spaces from the match regexps, I really\ndon't wish to do so, because that destroys the test's ability to\ndistinguish correct outputs from incorrect-but-longer ones. (That's\nnot a killer problem for any of the current test cases, perhaps, but\nI think it will be in future.) So I'd like to define this problem as\nbeing out of scope. This bug was fixed eleven years ago upstream\n(see change in _rl_completion_append_character_function in [5]), so\nit seems reasonable to insist that people get a newer libedit or not\nrun this test.\n\nAnother issue visible in [4] is that ancient libedit fails to sort\nthe offered completion strings as one would expect. I don't see\nmuch point in working around that either. prairiedog's host has\nthat bug but not the space bug (see [6], from before I suppressed\nrunning the test on that machine), so it affects a larger range of\nlibedit versions, but they're probably all way too old for anyone\nto care. If anyone can point to a new-enough-to-matter libedit\nthat still behaves that way, we can reconsider.\n\n\t\t\tregards, tom lane\n\n[1] https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=ddd87d564508bb1c80aac0a4439cfe74a3c203a9\n[2] http://gnats.netbsd.org/cgi-bin/query-pr-single.pl?number=54510\n[3] http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libedit/filecomplete.c.diff?r1=1.63&r2=1.64\n[4] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=snapper&dt=2020-01-05%2013%3A01%3A46\n[5] http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libedit/readline.c.diff?r1=1.75&r2=1.76\n[6] https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=prairiedog&dt=2020-01-02%2022%3A52%3A36\n\n\n",
"msg_date": "Sun, 05 Jan 2020 13:30:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Tom Lane 2020-01-05 <25771.1578249042@sss.pgh.pa.us>\n> The current state of play on this is that I committed a hacky workaround\n> [1], but there is now a fix for it in libedit upstream [2][3]. I gather\n> from looking at Debian's package page that the fix could be expected to\n> propagate to Debian unstable within a few weeks, at which point I'd like\n> to revert the hack. The libedit bug's only been there a few months\n> (it was evidently introduced on 2019-03-31) so we can hope that it hasn't\n> propagated into any long-term-support distros.\n[...]\n\nI lost track of what bug is supposed to be where, so here's a summary\nof the state at apt.postgresql.org:\n\nPG13 head work on Debian unstable, buster, stretch.\nDoes not work on Ubuntu bionic, xenial. (Others not tested.)\n\nUbuntu xenial:\n\n07:24:42 # Failed test 'complete SEL<tab> to SELECT'\n07:24:42 # at t/010_tab_completion.pl line 98.\n07:24:42 # Actual output was \"SEL\\tpostgres=# SEL\\a\"\n07:24:42 # Did not match \"(?^:SELECT )\"\n07:24:48\n07:24:48 # Failed test 'complete sel<tab> to select'\n07:24:48 # at t/010_tab_completion.pl line 103.\n07:24:48 # Actual output was \"sel\\b\\b\\bSELECT \"\n07:24:48 # Did not match \"(?^:select )\"\n07:24:54\n07:24:54 # Failed test 'complete t<tab> to tab1'\n07:24:54 # at t/010_tab_completion.pl line 106.\n07:24:54 # Actual output was \"* from t \"\n07:24:54 # Did not match \"(?^:\\* from tab1 )\"\n07:25:00\n07:25:00 # Failed test 'complete my<tab> to mytab when there are multiple choices'\n07:25:00 # at t/010_tab_completion.pl line 112.\n07:25:00 # Actual output was \"select * from my \"\n07:25:00 # Did not match \"(?^:select \\* from my\\a?tab)\"\n07:25:06\n07:25:06 # Failed test 'offer multiple table choices'\n07:25:06 # at t/010_tab_completion.pl line 118.\n07:25:06 # Actual output was \"\\r\\n\\r\\n\\r\\r\\npostgres=# select * from my \\r\\n\\r\\n\\r\\r\\npostgres=# select * from my \"\n07:25:06 # Did not match \"(?^:mytab123 
+mytab246)\"\n07:25:12\n07:25:12 # Failed test 'finish completion of one of multiple table choices'\n07:25:12 # at t/010_tab_completion.pl line 123.\n07:25:12 # Actual output was \"2 \"\n07:25:12 # Did not match \"(?^:246 )\"\n07:25:18\n07:25:18 # Failed test 'complete \\DRD<tab> to \\drds'\n07:25:18 # at t/010_tab_completion.pl line 131.\n07:25:18 # Actual output was \"\\\\DRD\\b\\b\\b\\bselect \"\n07:25:18 # Did not match \"(?^:drds )\"\n07:25:18 # Looks like you failed 7 tests of 12.\n07:25:18 t/010_tab_completion.pl ..\n07:25:18 Dubious, test returned 7 (wstat 1792, 0x700)\n07:25:18 Failed 7/12 subtests\n\nUbuntu bionic fails elsewhere:\n\n07:19:51 t/001_stream_rep.pl .................. ok\n07:19:53 t/002_archiving.pl ................... ok\n07:19:59 t/003_recovery_targets.pl ............ ok\n07:20:01 t/004_timeline_switch.pl ............. ok\n07:20:08 t/005_replay_delay.pl ................ ok\n07:20:10 Bailout called. Further testing stopped: system pg_ctl failed\n07:20:10 FAILED--Further testing stopped: system pg_ctl failed\n\n07:20:10 2020-01-06 06:19:41.285 UTC [26415] LOG: received fast shutdown request\n07:20:10 2020-01-06 06:19:41.285 UTC [26415] LOG: aborting any active transactions\n07:20:10 2020-01-06 06:19:41.287 UTC [26415] LOG: background worker \"logical replication launcher\" (PID 26424) exited with exit code 1\n07:20:10 2020-01-06 06:19:41.287 UTC [26419] LOG: shutting down\n07:20:10 2020-01-06 06:19:41.287 UTC [26419] LOG: checkpoint starting: shutdown immediate\n\n(It didn't get to the 010_tab_completion.pl test.)\n\nLibedit versions are:\n\nDebian:\nlibedit2 | 3.1-20140620-2 | oldoldstable | amd64, armel, armhf, i386 (jessie)\nlibedit2 | 3.1-20160903-3 | oldstable | amd64, arm64, armel, armhf, i386, mips, mips64el, m (stretch)\nlibedit2 | 3.1-20181209-1 | stable | amd64, arm64, armel, armhf, i386, mips, mips64el, m (buster)\nlibedit2 | 3.1-20191211-1 | testing | amd64, arm64, armel, armhf, i386, mips64el, mipsel, (bullseye)\nlibedit2 | 
3.1-20191231-1 | unstable | amd64, arm64, armel, armhf, i386, mips64el, mipsel,\n\nUbuntu:\n libedit2 | 2.11-20080614-3ubuntu2 | precise | amd64, armel, armhf, i386, powerpc\n libedit2 | 3.1-20130712-2 | trusty | amd64, arm64, armhf, i386, powerpc, ppc64e\n libedit2 | 3.1-20150325-1ubuntu2 | xenial | amd64, arm64, armhf, i386, powerpc, ppc64e\n libedit2 | 3.1-20170329-1 | bionic | amd64, arm64, armhf, i386, ppc64el, s390x\n libedit2 | 3.1-20181209-1 | disco | amd64, arm64, armhf, i386, ppc64el, s390x\n libedit2 | 3.1-20190324-1 | eoan | amd64, arm64, armhf, i386, ppc64el, s390x\n libedit2 | 3.1-20191211-1 | focal | amd64, arm64, armhf, i386, ppc64el, s390x\n libedit2 | 3.1-20191231-1 | focal-proposed | amd64, arm64, armhf, i386, ppc64el, s390x\n\nChristoph\n\n\n",
"msg_date": "Mon, 6 Jan 2020 11:56:08 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> I lost track of what bug is supposed to be where, so here's a summary\n> of the state at apt.postgresql.org:\n\n> PG13 head work on Debian unstable, buster, stretch.\n\nCool.\n\n> Does not work on Ubuntu bionic, xenial. (Others not tested.)\n\nHmm ... do we care? The test output seems to show that xenial's\n3.1-20150325-1ubuntu2 libedit is completely broken. Maybe there's\na way to work around that, but it's not clear to me that that'd\nbe a useful expenditure of time. You're not really going to be\nbuilding PG13 for that release are you?\n\n> Ubuntu bionic fails elsewhere [ apparently in 006_logical_decoding.pl ]\n\nHmm. Not related to this thread then. But if that's reproducible,\nsomebody should have a look. Maybe related to\n\nhttps://www.postgresql.org/message-id/CAA4eK1LMDx6vK8Kdw8WUeW1MjToN2xVffL2kvtHvZg17%3DY6QQg%40mail.gmail.com\n\n??? (cc'ing Amit for that)\n\nMeanwhile, as to the point I was really concerned about, your table of\ncurrent versions looks promising -- libedit's premature-dequote bug is\nevidently only in unstable and the last stable branch, and I presume the\nlast-stables are still getting updated, since those have libedit versions\nthat are less than a month old. I looked at Fedora's git repo and the\nsituation seems similar over there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 06 Jan 2020 10:15:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Tom Lane 2020-01-06 <3764.1578323719@sss.pgh.pa.us>\n> > Does not work on Ubuntu bionic, xenial. (Others not tested.)\n> \n> Hmm ... do we care? The test output seems to show that xenial's\n> 3.1-20150325-1ubuntu2 libedit is completely broken. Maybe there's\n> a way to work around that, but it's not clear to me that that'd\n> be a useful expenditure of time. You're not really going to be\n> building PG13 for that release are you?\n\nxenial (16.04) is an LTS release with support until 2021-04, and the\ncurrent plan was to support it. I now realize that's semi-close to the\n13 release date, but so far we have tried to really support all\nPG-Distro combinations.\n\nI could probably arrange for that test to be disabled when building\nfor xenial, but it'd be nice if there were a configure switch or\nenvironment variable for it so we don't have to invent it.\n\n> > Ubuntu bionic fails elsewhere [ apparently in 006_logical_decoding.pl ]\n> \n> Hmm. Not related to this thread then. But if that's reproducible,\n\nIt has been failing with the same output since Jan 2, 2020, noon.\n\n> somebody should have a look. Maybe related to\n> \n> https://www.postgresql.org/message-id/CAA4eK1LMDx6vK8Kdw8WUeW1MjToN2xVffL2kvtHvZg17%3DY6QQg%40mail.gmail.com\n\nThe only git change in that build was d207038053837 \"Fix running out\nof file descriptors for spill files.\"\n\nChristoph\n\n\n",
"msg_date": "Mon, 6 Jan 2020 20:13:03 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Re: Tom Lane 2020-01-06 <3764.1578323719@sss.pgh.pa.us>\n>>> Does not work on Ubuntu bionic, xenial. (Others not tested.)\n\n>> Hmm ... do we care? The test output seems to show that xenial's\n>> 3.1-20150325-1ubuntu2 libedit is completely broken. Maybe there's\n>> a way to work around that, but it's not clear to me that that'd\n>> be a useful expenditure of time. You're not really going to be\n>> building PG13 for that release are you?\n\n> xenial (16.04) is a LTS release with support until 2021-04, and the\n> current plan was to support it. I now realize that's semi-close to the\n> 13 release date, but so far we have tried to really support all\n> PG-Distro combinations.\n\nI installed libedit_3.1-20150325.orig.tar.gz from source here, and it\npasses our current regression test and seems to behave just fine in\nlight manual testing. (I did not apply any of the Debian-specific\npatches at [1], but they don't look like they'd explain much.)\nSo I'm a bit at a loss as to what's going wrong for you. Is the test\nenvironment for Xenial the same as for the other branches?\n\n\t\t\tregards, tom lane\n\n[1] https://launchpad.net/ubuntu/+source/libedit/3.1-20150325-1ubuntu2\n\n\n",
"msg_date": "Mon, 06 Jan 2020 18:06:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 12:43 AM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Tom Lane 2020-01-06 <3764.1578323719@sss.pgh.pa.us>\n> > > Does not work on Ubuntu bionic, xenial. (Others not tested.)\n> >\n> > Hmm ... do we care? The test output seems to show that xenial's\n> > 3.1-20150325-1ubuntu2 libedit is completely broken. Maybe there's\n> > a way to work around that, but it's not clear to me that that'd\n> > be a useful expenditure of time. You're not really going to be\n> > building PG13 for that release are you?\n>\n> xenial (16.04) is a LTS release with support until 2021-04, and the\n> current plan was to support it. I now realize that's semi-close to the\n> 13 release date, but so far we have tried to really support all\n> PG-Distro combinations.\n>\n> I could probably arrange for that test to be disabled when building\n> for xenial, but it'd be nice if there were a configure switch or\n> environment variable for it so we don't have to invent it.\n>\n> > > Ubuntu bionic fails elsewhere [ apparently in 006_logical_decoding.pl ]\n> >\n> > Hmm. Not related to this thread then. But if that's reproducible,\n>\n> It has been failing with the same output since Jan 2, 2020, noon.\n>\n> > somebody should have a look. Maybe related to\n> >\n> > https://www.postgresql.org/message-id/CAA4eK1LMDx6vK8Kdw8WUeW1MjToN2xVffL2kvtHvZg17%3DY6QQg%40mail.gmail.com\n>\n> The only git change in that build was d207038053837 \"Fix running out\n> of file descriptors for spill files.\"\n>\n\nThanks for reporting. Is it possible to get a call stack as we are\nnot able to reproduce this failure? Also, if you don't mind, let's\ndiscuss this on the thread provided by Amit.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Tue, 7 Jan 2020 11:48:10 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "I wrote:\n> I installed libedit_3.1-20150325.orig.tar.gz from source here, and it\n> passes our current regression test and seems to behave just fine in\n> light manual testing. (I did not apply any of the Debian-specific\n> patches at [1], but they don't look like they'd explain much.)\n> So I'm a bit at a loss as to what's going wrong for you. Is the test\n> environment for Xenial the same as for the other branches?\n\nTo dig deeper, I set up an actual installation of xenial, and on that\nI can replicate the tab-completion misbehavior you reported. The cause\nappears to be that libedit's rl_line_buffer does not contain the current\nline as expected, but the previous line (or, initially, an empty string).\nThus, the hack I put in to make things pass on current libedit actually\nmakes things worse on this version --- although it doesn't fully pass\neven if I revert ddd87d564, since there are other places where we\ndepend on rl_line_buffer to be valid.\n\nSo that raises the question: why does xenial's version of libedit\nnot match either its documentation or the distributed source code?\nBecause it doesn't.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 07 Jan 2020 17:28:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On Mon, Jan 6, 2020 at 4:26 PM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Tom Lane 2020-01-05 <25771.1578249042@sss.pgh.pa.us>\n>\n> Ubuntu bionic fails elsewhere:\n>\n> 07:19:51 t/001_stream_rep.pl .................. ok\n> 07:19:53 t/002_archiving.pl ................... ok\n> 07:19:59 t/003_recovery_targets.pl ............ ok\n> 07:20:01 t/004_timeline_switch.pl ............. ok\n> 07:20:08 t/005_replay_delay.pl ................ ok\n> 07:20:10 Bailout called. Further testing stopped: system pg_ctl failed\n> 07:20:10 FAILED--Further testing stopped: system pg_ctl failed\n>\n> 07:20:10 2020-01-06 06:19:41.285 UTC [26415] LOG: received fast shutdown request\n> 07:20:10 2020-01-06 06:19:41.285 UTC [26415] LOG: aborting any active transactions\n> 07:20:10 2020-01-06 06:19:41.287 UTC [26415] LOG: background worker \"logical replication launcher\" (PID 26424) exited with exit code 1\n> 07:20:10 2020-01-06 06:19:41.287 UTC [26419] LOG: shutting down\n> 07:20:10 2020-01-06 06:19:41.287 UTC [26419] LOG: checkpoint starting: shutdown immediate\n>\n\nIt looks like this failure is more of what we are getting on\n\"sidewinder\" where it failed because of \"insufficient file descriptors\navailable to start server process\". Can you check in the log\n(probably in 006_logical_decoding_master.log) if this is the same you\nare getting or something else.\n\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Wed, 8 Jan 2020 08:29:38 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Amit Kapila 2020-01-08 <CAA4eK1KtQDU=RTAPi6id6Nvv1HkKrpDp_ZZ_uHL1XQ0cjT2_eg@mail.gmail.com>\n> It looks like this failure is more of what we are getting on\n> \"sidewinder\" where it failed because of \"insufficient file descriptors\n> available to start server process\". Can you check in the log\n> (probably in 006_logical_decoding_master.log) if this is the same you\n> are getting or something else.\n\nI can't reproduce it locally now, but it's still failing on the\nbuildd. Is there some setting to make it print the relevant .log\nfiles?\n\n(Fwiw, I can't see your error message in\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-01-07%2022%3A45%3A24)\n\nChristoph\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:38:25 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "src/test/recovery regression failure on bionic"
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> (Fwiw, I can't see your error message in\n> https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-01-07%2022%3A45%3A24)\n\nsidewinder is currently broken due to an unrelated problem.\nThe case Amit is worried about is only manifesting on the\nback branches, eg here:\n\nhttps://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=sidewinder&dt=2020-01-02%2018%3A45%3A25\n\nwhere we're getting postmaster start failures like this one:\n\n2020-01-02 19:51:05.685 CET [24138:1] LOG: starting PostgreSQL 12.1 on x86_64-unknown-netbsd7.0, compiled by gcc (nb2 20150115) 4.8.4, 64-bit\n2020-01-02 19:51:05.686 CET [24138:2] LOG: listening on Unix socket \"/tmp/sxAcn7SAzt/.s.PGSQL.56110\"\n2020-01-02 19:51:05.687 CET [24138:3] FATAL: insufficient file descriptors available to start server process\n2020-01-02 19:51:05.687 CET [24138:4] DETAIL: System allows 19, we need at least 20.\n2020-01-02 19:51:05.687 CET [24138:5] LOG: database system is shut down\n\nThis would happen if anything is causing the postmaster to have\na few more open files than the test added by commit\nd207038053837ae9365df2776371632387f6f655 is allowing for. It's\na test bug and nothing more.\n\nWhy sidewinder is not showing this in HEAD too is an interesting\nquestion, but it isn't. However, it could be that on another\nplatform (ie bionic) the problem does manifest in HEAD.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 10:04:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: src/test/recovery regression failure on bionic"
},
{
"msg_contents": "I wrote:\n> This would happen if anything is causing the postmaster to have\n> a few more open files than the test added by commit\n> d207038053837ae9365df2776371632387f6f655 is allowing for. It's\n> a test bug and nothing more.\n> Why sidewinder is not showing this in HEAD too is an interesting\n> question, but it isn't. However, it could be that on another\n> platform (ie bionic) the problem does manifest in HEAD.\n\nI set up a NetBSD 7 installation locally, and while I have not\ndirectly reproduced the failure, I believe I understand all the\ncomponents of it now.\n\n(1) d20703805's test will clearly fall over if there are more than six\nFDs open in the postmaster when set_max_safe_fds is called, because it\nsets max_files_per_process = 26 while set_max_safe_fds requires at\nleast 20 usable FDs to be available.\n\n(2) The postmaster's stdin/stdout/stderr will surely eat up three of\nthose.\n\n(3) In HEAD, that's actually all the FDs there are normally, but in the\nback branches there is one more (under the conditions of this test),\nbecause in the back branches we open the postmaster's listen sockets\nbefore we run set_max_safe_fds. (9a86f03b4 changed this.)\n\n(4) NetBSD 7.0's cron leaves three extra open FDs in processes that\nit spawns. I have not looked into why, but I have experimentally\nobserved this. For example, lsof on a \"sleep\" launched from cron\nshows\n\nCOMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\nsleep 7824 tgl cwd VDIR 0,0 512 795201 /home/tgl\nsleep 7824 tgl txt VREG 0,0 10431 1613152 /bin/sleep\nsleep 7824 tgl txt VREG 0,0 1616564 22726 /lib/libc.so.12.193.1\nsleep 7824 tgl txt VREG 0,0 55295 22747 /lib/libgcc_s.so.1.0\nsleep 7824 tgl txt VREG 0,0 187183 22762 /lib/libm.so.0.11\nsleep 7824 tgl txt VREG 0,0 92195 1499524 /libexec/ld.elf_so\nsleep 7824 tgl 0r PIPE 0xfffffe803131eb58 16384 \nsleep 7824 tgl 1w PIPE 0xfffffe8007ec4a30 0 ->0xfffffe800cc0d2c0\nsleep 7824 tgl 2w PIPE 0xfffffe8007ec4a30 0 ->0xfffffe800cc0d2c0\nsleep 7824 tgl 7u unknown file system type: 0\nsleep 7824 tgl 8u unknown file system type: 0\nsleep 7824 tgl 9w PIPE 0xfffffe80036c4dc0 0 \n\nwhile of course \"sleep\" launched by hand has only 0/1/2 open.\n\nWe may conclude that when the regression tests are launched from cron,\nas would be typical for a buildfarm animal, HEAD has exactly zero FDs\nto spare in this test, while the back branches are one FD underwater\nand fail. This matches the observed results from sidewinder.\n\nIt's not clear whether any of this info applies to Christoph's trouble\nwith bionic. If the extra FDs are an old cron bug, it could be that\nbionic shares that bug --- but to explain failure on HEAD, you'd have to\nposit four excess FDs not three. I'm not convinced that what Christoph\nis seeing matches this anyway; he hasn't showed the telltale\n\"insufficient file descriptors\" message, at least. Still, maybe\nlaunched-by-cron vs launched-by-hand is a relevant point there.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 08 Jan 2020 17:31:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: src/test/recovery regression failure on bionic"
},
{
"msg_contents": "Hi,\n\nOn 2020-01-08 17:31:06 -0500, Tom Lane wrote:\n> (1) d20703805's test will clearly fall over if there are more than six\n> FDs open in the postmaster when set_max_safe_fds is called, because it\n> sets max_files_per_process = 26 while set_max_safe_fds requires at\n> least 20 usable FDs to be available.\n\n> (4) NetBSD 7.0's cron leaves three extra open FDs in processes that\n> it spawns. I have not looked into why, but I have experimentally\n> observed this. For example, lsof on a \"sleep\" launched from cron\n> shows\n> \n> COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME\n> sleep 7824 tgl cwd VDIR 0,0 512 795201 /home/tgl\n> sleep 7824 tgl txt VREG 0,0 10431 1613152 /bin/sleep\n> sleep 7824 tgl txt VREG 0,0 1616564 22726 /lib/libc.so.12.193.1\n> sleep 7824 tgl txt VREG 0,0 55295 22747 /lib/libgcc_s.so.1.0\n> sleep 7824 tgl txt VREG 0,0 187183 22762 /lib/libm.so.0.11\n> sleep 7824 tgl txt VREG 0,0 92195 1499524 /libexec/ld.elf_so\n> sleep 7824 tgl 0r PIPE 0xfffffe803131eb58 16384 \n> sleep 7824 tgl 1w PIPE 0xfffffe8007ec4a30 0 ->0xfffffe800cc0d2c0\n> sleep 7824 tgl 2w PIPE 0xfffffe8007ec4a30 0 ->0xfffffe800cc0d2c0\n> sleep 7824 tgl 7u unknown file system type: 0\n> sleep 7824 tgl 8u unknown file system type: 0\n> sleep 7824 tgl 9w PIPE 0xfffffe80036c4dc0 0 \n> \n> while of course \"sleep\" launched by hand has only 0/1/2 open.\n\nIs it worth having the test close superflous FDs? It'd not be hard to do\nso via brute force (or even going through /proc/self/fd).\n\nGreetings,\n\nAndres Freund\n\n\n",
"msg_date": "Wed, 8 Jan 2020 15:22:05 -0800",
"msg_from": "Andres Freund <andres@anarazel.de>",
"msg_from_op": false,
"msg_subject": "Re: src/test/recovery regression failure on bionic"
},
{
"msg_contents": "Andres Freund <andres@anarazel.de> writes:\n> Is it worth having the test close superflous FDs? It'd not be hard to do\n> so via brute force (or even going through /proc/self/fd).\n\nNo, it isn't, because d20703805's test is broken by design. There\nare any number of reasons why there might be more than three-or-so\nFDs open during postmaster start. Here are a few:\n\n* It seems pretty likely that at least one of those FDs is\nintentionally being left open by cron so it can detect death of\nall child processes (like our postmaster death pipe). Forcibly\nclosing them will not necessarily have nice results. Other\nexecution environments might do similar tricks.\n\n* On platforms where semaphores eat a FD apiece, we intentionally\nopen those before counting free FDs.\n\n* We run process_shared_preload_libraries before counting free FDs,\ntoo. If a loaded library intentionally leaves a FD open in the\npostmaster, counting that against the limit also seems like a good\nidea.\n\nMy opinion is still that we should just get rid of that test case.\nThe odds of it ever finding anything interesting seem too low to\njustify the work that will be involved in (a) getting it to work\nreliably today and then (b) keeping it working. Running on the\nhairy edge of not having enough FDs doesn't seem like a use-case\nthat we want to spend a lot of time on, but we will be if that\ntest stays as it is. Example: if max_safe_fds is only 10, as\nthis test is trying to make it be, then maxAllocatedDescs won't\nbe allowed to exceed 5. Do you want to bet that no code paths\nwill reasonably exceed that limit? [1] What will we do about it\nwhen we find one that does?\n\nI also note that the test can only catch cases where we used\nOpenTransientFile() in an inappropriate way. I think it's at\nleast as likely that somebody would carelessly use open()\ndirectly, and then we have no chance of catching it till the\nkernel complains about EMFILE.\n\nThinking about that some more, maybe the appropriate thing\nto do is not to mess with max_files_per_process as such,\nbut to test with some artificial limit on maxAllocatedDescs.\nWe still won't catch direct use of open(), but that could test\nfor misuse of OpenTransientFile() with a lot less environment\ndependence.\n\n\t\t\tregards, tom lane\n\n[1] Although I notice that the code coverage report shows we\nnever reach the enlargement step in reserveAllocatedDesc(),\nwhich means that the set of tests we run today don't exercise\nany such path. I'm somewhat surprised to see that we *do*\nseem to exercise overrunning max_safe_fds, though, since\nReleaseLruFile() is reached. Maybe this test case is\nresponsible for that?\n\n\n",
"msg_date": "Wed, 08 Jan 2020 19:18:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: src/test/recovery regression failure on bionic"
},
{
"msg_contents": "On Thu, Jan 9, 2020 at 5:48 AM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>\n> Andres Freund <andres@anarazel.de> writes:\n> > Is it worth having the test close superflous FDs? It'd not be hard to do\n> > so via brute force (or even going through /proc/self/fd).\n>\n> No, it isn't, because d20703805's test is broken by design. There\n> are any number of reasons why there might be more than three-or-so\n> FDs open during postmaster start. Here are a few:\n>\n> * It seems pretty likely that at least one of those FDs is\n> intentionally being left open by cron so it can detect death of\n> all child processes (like our postmaster death pipe). Forcibly\n> closing them will not necessarily have nice results. Other\n> execution environments might do similar tricks.\n>\n> * On platforms where semaphores eat a FD apiece, we intentionally\n> open those before counting free FDs.\n>\n> * We run process_shared_preload_libraries before counting free FDs,\n> too. If a loaded library intentionally leaves a FD open in the\n> postmaster, counting that against the limit also seems like a good\n> idea.\n>\n> My opinion is still that we should just get rid of that test case.\n>\n\nThe point is that we know what is going wrong on sidewinder on back\nbranches. However, we still don't know what is going wrong with tern\nand mandrill on v10 [1][2] where the log is:\n\n2020-01-08 06:38:10.842 UTC [54001846:9] t/006_logical_decoding.pl\nSTATEMENT: SELECT data from pg_logical_slot_get_changes('test_slot',\nNULL, NULL)\n WHERE data LIKE '%INSERT%' ORDER BY lsn LIMIT 1;\n2020-01-08 06:38:15.993 UTC [63898020:3] LOG: server process (PID\n54001846) was terminated by signal 11\n2020-01-08 06:38:15.993 UTC [63898020:4] DETAIL: Failed process was\nrunning: SELECT data from pg_logical_slot_get_changes('test_slot',\nNULL, NULL)\n WHERE data LIKE '%INSERT%' ORDER BY lsn LIMIT 1;\n2020-01-08 06:38:15.993 UTC [63898020:5] LOG: terminating any other\nactive server processes\n\nNoah has tried to reproduce it [3] on that buildfarm machine by\nrunning that test in a loop, but he couldn't reproduce it till now. He\nis running the test now for a longer duration. Another point is that\nthe logic in v11 code is the same, but the same test is passing on\nthose machines, so I have a slight suspicion that there might be some\nother problem in v10 which is uncovered by this test, but I am not\nsure on this point.\n\nNow, if we remove that test as per your suggestion, then we might not\nbe able to find out what is going wrong on those machines in v10?\n\n\n[1] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=tern&dt=2020-01-08%2004%3A36%3A27\n[2] - https://buildfarm.postgresql.org/cgi-bin/show_log.pl?nm=mandrill&dt=2020-01-08%2004%3A36%3A27\n[3] - https://www.postgresql.org/message-id/20200104185148.GA2270238%40rfd.leadboat.com\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 9 Jan 2020 08:01:24 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: src/test/recovery regression failure on bionic"
},
{
"msg_contents": "On Tue, Jan 7, 2020 at 5:29 PM Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> So that raises the question: why does xenial's version of libedit\n> not match either its documentation or the distributed source code?\n> Because it doesn't.\n\nThe level of effort you've put into this is extremely impressive, but\nI can't shake the feeling that you're going to keep finding issues in\nthe test setup, the operating system, or the upstream libraries,\nrather than bugs in PostgreSQL. Maybe this is all just one-time\nstabilization effort, but...\n\n-- \nRobert Haas\nEnterpriseDB: http://www.enterprisedb.com\nThe Enterprise PostgreSQL Company\n\n\n",
"msg_date": "Thu, 9 Jan 2020 09:45:20 -0500",
"msg_from": "Robert Haas <robertmhaas@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Robert Haas 2020-01-09 <CA+TgmoZC_6z3d5+6n1GVfy898uoBHMPPJU4-nAQLG5ZkaEyqdQ@mail.gmail.com>\n> > So that raises the question: why does xenial's version of libedit\n> > not match either its documentation or the distributed source code?\n> > Because it doesn't.\n> \n> The level of effort you've put into this is extremely impressive, but\n> I can't shake the feeling that you're going to keep finding issues in\n> the test setup, the operating system, or the upstream libraries,\n> rather than bugs in PostgreSQL. Maybe this is all just one-time\n> stabilization effort, but...\n\nFwiw if libedit in xenial is Just Broken and fixing the tests would\njust codify the brokenness without adding any value, I'll just disable\nthat test in that particular build. It looks like setting\nwith_readline=no on the make command line should work.\n\nChristoph\n\n\n",
"msg_date": "Thu, 9 Jan 2020 15:59:00 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Robert Haas <robertmhaas@gmail.com> writes:\n> The level of effort you've put into this is extremely impressive, but\n> I can't shake the feeling that you're going to keep finding issues in\n> the test setup, the operating system, or the upstream libraries,\n> rather than bugs in PostgreSQL. Maybe this is all just one-time\n> stabilization effort, but...\n\nYeah, this is pretty nearly what I feared would happen, given libedit's\napparently-well-earned evil reputation. I suspect that the filename\ncompletion tests I posted over in that thread may expose another round\nof issues. If we can get past that, I think that expanding our test\ncoverage for the rest of tab-complete.c should be relatively\nunproblematic, because there aren't very many other readline behaviors\nthat tab-complete.c is depending on.\n\nOne alternative is to give up on the idea of ever having any test\ncoverage for tab-complete.c, which doesn't seem very desirable.\n\nOr we could disable the tests when using libedit, or provide some\nswitch to make it easier for the user/packager to do so.\n\nIn Debian's case, I suspect that they don't care about the behavior\nwhen using libedit, so skipping the test would be just fine for them.\nWhat they really ought to test is what happens after they sub in\nlibreadline at runtime ... but I imagine that their dubious legal\ntheories about all this would prevent them from actually doing that\nduring the package build.\n\n\t\t\tregards, tom lane\n\nPS: Stepping back from the immediate problem, it's obviously better\nfor all concerned if libedit is usable with Postgres. So if they're\nwilling to patch problems, which we've found out they are, then coping\nwith this stuff is a win in the long run.\n\n\n",
"msg_date": "Thu, 09 Jan 2020 10:46:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Christoph Berg <myon@debian.org> writes:\n> Fwiw if libedit in xenial is Just Broken and fixing the tests would\n> just codify the brokenness without adding any value, I'll just disable\n> that test in that particular build. It looks like setting\n> with_readline=no on the make command line should work.\n\nThat would disable psql's tab completion, command editing, and history\naltogether, which I doubt is what you want for production builds.\nIf we conclude we can't work around the testing issues for ancient\nlibedit, probably the right answer is to provide a switch to\ndisable just the test. I've been trying to dance around that\nconclusion, but maybe we should just do it and move on.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 10:50:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "On 2020-01-09 16:50, Tom Lane wrote:\n> Christoph Berg <myon@debian.org> writes:\n>> Fwiw if libedit in xenial is Just Broken and fixing the tests would\n>> just codify the brokenness without adding any value, I'll just disable\n>> that test in that particular build. It looks like setting\n>> with_readline=no on the make command line should work.\n> \n> That would disable psql's tab completion, command editing, and history\n> altogether, which I doubt is what you want for production builds.\n> If we conclude we can't work around the testing issues for ancient\n> libedit, probably the right answer is to provide a switch to\n> disable just the test. I've been trying to dance around that\n> conclusion, but maybe we should just do it and move on.\n\nI think he means something like\n\nmake check with_readline=no\n\nnot for the actual build.\n\n-- \nPeter Eisentraut http://www.2ndQuadrant.com/\nPostgreSQL Development, 24x7 Support, Remote DBA, Training & Services\n\n\n",
"msg_date": "Thu, 9 Jan 2020 17:10:10 +0100",
"msg_from": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n> On 2020-01-09 16:50, Tom Lane wrote:\n>> That would disable psql's tab completion, command editing, and history\n>> altogether, which I doubt is what you want for production builds.\n>> If we conclude we can't work around the testing issues for ancient\n>> libedit, probably the right answer is to provide a switch to\n>> disable just the test. I've been trying to dance around that\n>> conclusion, but maybe we should just do it and move on.\n\n> I think he means something like\n> \tmake check with_readline=no\n> not for the actual build.\n\nOh, I see. I'd rather not codify that though, because it risks\nproblems if that symbol ever gets used any other way. I was\nthinking of making the test script check for some independent\nenvironment variable, say SKIP_READLINE_TESTS.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 11:19:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "I wrote:\n> Peter Eisentraut <peter.eisentraut@2ndquadrant.com> writes:\n>> I think he means something like\n>> \tmake check with_readline=no\n>> not for the actual build.\n\n> Oh, I see. I'd rather not codify that though, because it risks\n> problems if that symbol ever gets used any other way. I was\n> thinking of making the test script check for some independent\n> environment variable, say SKIP_READLINE_TESTS.\n\nI thought of another problem with the with_readline=no method,\nwhich is that it requires the user to be issuing \"make check\"\ndirectly; it wouldn't be convenient for a buildfarm owner, say.\n(*Perhaps* it'd work to set with_readline=no throughout a\nbuildfarm run, but I think that's asking for trouble with the\nbuild part.) I pushed a patch using SKIP_READLINE_TESTS.\nChristoph should be able to set that for the Ubuntu branches\nwhere the test is failing.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Thu, 09 Jan 2020 16:51:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Tom Lane 2020-01-09 <16328.1578606690@sss.pgh.pa.us>\n> build part.) I pushed a patch using SKIP_READLINE_TESTS.\n> Christoph should be able to set that for the Ubuntu branches\n> where the test is failing.\n\nThat \"fixed\" the problem on xenial, thanks.\n\nChristoph\n\n\n",
"msg_date": "Wed, 15 Jan 2020 21:28:29 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql: Add basic TAP tests for psql's tab-completion logic."
},
{
"msg_contents": "Re: Amit Kapila 2020-01-09 <CAA4eK1Kq13BS2z1xOfTDcuSwzga+0NPzSWXgw3ygxX9tRk1rEg@mail.gmail.com>\n> The point is that we know what is going wrong on sidewinder on back\n> branches. However, we still don't know what is going wrong with tern\n> and mandrill on v10 [1][2] where the log is:\n\nFwiw, the problem on bionic disappeared yesterday with the build\ntriggered by \"Revert test added by commit d207038053\".\n\nChristoph\n\n\n",
"msg_date": "Wed, 15 Jan 2020 21:40:39 +0100",
"msg_from": "Christoph Berg <myon@debian.org>",
"msg_from_op": false,
"msg_subject": "Re: src/test/recovery regression failure on bionic"
},
{
"msg_contents": "On Thu, Jan 16, 2020 at 2:10 AM Christoph Berg <myon@debian.org> wrote:\n>\n> Re: Amit Kapila 2020-01-09 <CAA4eK1Kq13BS2z1xOfTDcuSwzga+0NPzSWXgw3ygxX9tRk1rEg@mail.gmail.com>\n> > The point is that we know what is going wrong on sidewinder on back\n> > branches. However, we still don't know what is going wrong with tern\n> > and mandrill on v10 [1][2] where the log is:\n>\n> Fwiw, the problem on bionic disappeared yesterday with the build\n> triggered by \"Revert test added by commit d207038053\".\n>\n\nThanks for the update.\n\n-- \nWith Regards,\nAmit Kapila.\nEnterpriseDB: http://www.enterprisedb.com\n\n\n",
"msg_date": "Thu, 16 Jan 2020 07:45:07 +0530",
"msg_from": "Amit Kapila <amit.kapila16@gmail.com>",
"msg_from_op": false,
"msg_subject": "Re: src/test/recovery regression failure on bionic"
}
]
[
{
"msg_contents": "In May of last year (2019), I set up pg_ssl_init on gitlab:\n\nhttps://gitlab.com/osfda/pg_ssl_init\n\nAnd announced it on this list:\n\nhttps://www.postgresql.org/message-id/1621ef11-f246-8519-018d-57ba36ecc16b%40osfda.org\n\npg_ssl_init is a set of python3 scripts that configures SSL certificates \nto insure SECURE remote pgadmin access. To do so manually is a multistep \nprocess, fraught with many opportunities for a mistake to be made and \nnumerous hours to be wasted troubleshooting it. It has been tested on \npopular Linux distros (debian, Ubuntu...), though it is written to _try_ \nand work on Windows [But I would need a willing tester to get in touch \nto confirm that and fix any compatibility issues for Windows...] It \nworks great on Debian 10.\n\nFor some VERY odd reason, while the stub of the repo was still visible, \nthe files of the repo stopped appearing on gitlab -but the repo was \nmarked public (?)\n\nI marked the repo private, saved the settings; then marked it public.\nBehold: all the files then reappeared.\n\nI guess gitlab is finding out that it's hard to find good technical help!\n\nIf anyone reading these posts referring to pg_ssl_init has any problem \nin future accessing that repo, get in touch at the contact page at \nosfda.org. Should gitlab let us down again, at that juncture I will \nfigure out another amenable open source code hosting solution that's not \ncorporate...\n\n\n\n\n",
"msg_date": "Fri, 3 Jan 2020 01:55:21 -0500",
"msg_from": "\"steve.b@osfda.org\" <steve.b@osfda.org>",
"msg_from_op": true,
"msg_subject": "pg_ssl_init"
}
]